One program tested in Kenya jumped out, and the Rwandan government wanted to know whether it would likely work in Rwanda as well. A randomized controlled trial (RCT) found that showing eighth-grade girls and boys a short video and statistics on the higher rates of HIV among older men dramatically changed behavior: The number of teen girls who became pregnant with an older man within the following 12 months fell by more than 60 percent.
Random assignment determined which girls received the risk awareness program and which girls continued to receive the standard curriculum. Our government partners could thereby have confidence that the reduction in risky behavior was actually caused by the program.
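The logic of random assignment described above can be sketched in a few lines. This is a minimal illustrative sketch, not the study's actual procedure; the school identifiers and the 50/50 split are assumptions for the example:

```python
import random

def randomize(units, seed=42):
    """Shuffle the units and split them into treatment and control arms.

    With enough units, randomization balances observed and unobserved
    characteristics across arms, so a later difference in outcomes can
    be attributed to the program itself.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    cut = len(shuffled) // 2
    return shuffled[:cut], shuffled[cut:]

# Hypothetical example: 100 schools, half assigned to receive the program.
schools = [f"school_{i}" for i in range(100)]
treatment, control = randomize(schools)
print(len(treatment), len(control))  # 50 50
```

Because each unit has the same chance of landing in either arm, the two groups are comparable in expectation, which is what lets evaluators attribute outcome differences to the program rather than to pre-existing differences.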
But if they replicated this approach in a new context, could they expect the impact to be similar? Policy makers repeatedly face this generalizability puzzle—whether the results of a specific program generalize to other contexts—and there has been a long-standing debate among policy makers about the appropriate response.
But the discussion is often framed by confusing and unhelpful questions, such as: Should policy makers rely on less rigorous evidence from a local context or more rigorous evidence from elsewhere? And must a new experiment always be done locally before a program is scaled up?
These questions present false choices. Rigorous impact evaluations are designed not to replace the need for local data but to enhance their value. This complementarity between detailed knowledge of local institutions and global knowledge of common behavioral relationships is fundamental to the philosophy and practice of our work at the Abdul Latif Jameel Poverty Action Lab (J-PAL), a center at the Massachusetts Institute of Technology founded in 2003, with a network of affiliated professors and professional staff around the world.
Four Misguided Approaches

To give a sense of our philosophy, it may help to first examine four common, but misguided, approaches to evidence-based policy making that our work seeks to move beyond.

Can a study inform policy only in the location in which it was undertaken?
Kaushik Basu has argued that an impact evaluation done in Kenya can never tell us anything useful about what to do in Rwanda because we do not know with certainty that the results will generalize to Rwanda. We disagree: describing general behaviors that are found across settings and time is particularly important for informing policy.
The best impact evaluations are designed to test these general propositions about human behavior.

Should we use only whatever evidence we have from our specific location?

In an effort to ensure that a program or policy makes sense locally, researchers such as Lant Pritchett and Justin Sandefur argue that policy makers should mainly rely on whatever evidence is available locally, even if it is not of very good quality.
The challenge is to pair local information with global evidence and use each piece of evidence to help understand, interpret, and complement the other.
Should a new local randomized evaluation always precede scale-up?

One response to the concern for local relevance is to use the global evidence base as a source for policy ideas but always to test a policy with a randomized evaluation locally before scaling it up. But with limited resources and evaluation expertise, we cannot rigorously test every policy in every country in the world.
We need to prioritize. For example, there have been more than 30 analyses of 10 randomized evaluations in nine low- and middle-income countries on the effects of conditional cash transfers. While there is still much that could be learned about the optimal design of these programs, it is unlikely to be the best use of limited funds to do a randomized impact evaluation for every new conditional cash transfer program when there are many other aspects of antipoverty policy that have not yet been rigorously tested.
Must an identical program or policy be replicated a specific number of times before it is scaled up?

One of the most common questions we get asked is how many times a study needs to be replicated in different contexts before a decision maker can rely on evidence from other contexts.
We think this is the wrong way to think about evidence. There are examples of the same program being tested at multiple sites: For example, a coordinated set of seven randomized trials of an intensive graduation program to support the ultra-poor in seven countries found positive impacts in the majority of cases.
This type of evidence should be weighted highly in our decision making. But if we only draw on results from studies that have been replicated many times, we throw away a lot of potentially relevant information.
Focus on Mechanisms

These four misguided approaches would have blocked a useful path forward in deciding whether to introduce the HIV information program in Rwanda. This is because they ignore the key insight from an evaluation: the mechanism through which a program changes behavior. Focusing on mechanisms has two benefits. First, such a focus draws attention to more relevant evidence.
When considering whether to implement a specific policy or program, we may not have much existing evidence about that exact program.
But we may have a deep evidence base to draw from if we ask a more general question about behavior. For example, imagine a public health agency that would like to encourage health-care providers to promote flu vaccinations. A review of the literature may produce few, if any, rigorous evaluations of this specific approach. But there may be ample rigorous evidence on the more general question of how to change health-care providers' behavior.
Second, underlying human behaviors are more likely to generalize than specific programs.