TRANSCRIPT
DIME – FRAGILE STATES, DUBAI, MAY 31 – JUNE 4
Why Use Randomized Evaluation?
Isabel Beltran, World Bank
Fundamental Question
What is the effect of a program or intervention?
How can vulnerable groups take part in the state-building and peace-building process?
What political and social accountability mechanisms are most effective in a fragile state?
What measures secure stability and reduce ethnic conflict at the local level?
Objective
To identify the causal effect of an intervention: separate the impact of the program from other factors.
We need to find out what would have happened without the program.
We cannot observe the same person with and without the program at the same point in time.
We therefore need to create a valid counterfactual.
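For intuition, a minimal simulation sketch of this point (the numbers, the unobserved ability variable, and the +4 effect are all hypothetical, not from the slides): because each firm reveals only one of its two potential outcomes, comparing firms that chose credit with firms that did not mixes the program effect with who selects in.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical potential outcomes for each firm (illustrative numbers only):
# profit_without = profit if the firm does NOT get credit
# profit_with    = profit if the same firm DOES get credit
ability = rng.normal(0, 1, n)                 # unobserved characteristic
profit_without = 10 + 2 * ability + rng.normal(0, 1, n)
profit_with = profit_without + 4              # true causal effect of credit: +4

# The fundamental problem: for any one firm we observe only ONE of the two.
takes_credit = ability > 0                    # firms self-select on ability
observed = np.where(takes_credit, profit_with, profit_without)

# A naive comparison of credit users vs. non-users mixes the true effect (+4)
# with the ability gap between the two groups (selection bias)
naive = observed[takes_credit].mean() - observed[~takes_credit].mean()
print(f"naive difference: {naive:.1f}")       # roughly +7, not +4
```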
Correlation is not causation
Question: Does providing credit increase firm profits?
Suppose we observe that firms with more credit also earn higher profits.
Two explanations are consistent with this correlation:
1) Credit use causes higher profits.
2) An underlying factor, such as business skills, drives both credit use and higher profits.
Illustration: Credit Program (Before-After)
[Chart: gross operating margin for the treatment group in 2007 and 2009]
A credit program was offered in 2008.
Gross operating margin increased by +6 between 2007 and 2009.
Why did operating margin increase?
Motivation
It is hard to distinguish causation from correlation by analyzing existing (retrospective) data. However complex, statistics can only see that X moves with Y.
It is hard to correct for unobserved characteristics, like motivation or ability, which may be very important and also affect the outcomes of interest.
Selection bias is a major issue for impact evaluation. Projects are started at specific times and places for particular reasons, and participants may be selected or may self-select into programs.
People who have access to credit are likely to be very different from the average entrepreneur, so looking at their profits will give a misleading impression of the benefits of credit.
Illustration: Credit Program (Valid Counterfactual)
[Chart: outcomes before and after the program for the treatment group and a control group]
(+4) Impact of the program
(+2) Impact of other (external) factors
The macroeconomic environment affects the control group, so the program impact is easily identified.
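Using the illustrative numbers above (the before/after levels are assumed; only the +6, +2, and +4 changes come from the slides), the arithmetic is:

```python
# Assumed levels consistent with the charts above
treatment_before, treatment_after = 6, 12   # before-after change: +6
control_before, control_after = 6, 8        # change without the program: +2

change_treatment = treatment_after - treatment_before   # +6 (naive before-after estimate)
change_control = control_after - control_before         # +2 (external/macro factors)

program_impact = change_treatment - change_control       # +4
print(program_impact)
```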
Experimental Design
All those in the study have the same chance of being in the treatment or comparison group
By design, treatment and comparison groups have the same characteristics (observed and unobserved), on average; the only difference is the treatment.
With a large sample, all characteristics average out.
Unbiased impact estimates
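A minimal sketch of how such a design might be implemented and analyzed (the firm-level lottery, data, and numbers are hypothetical):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical frame of 500 eligible firms
firms = pd.DataFrame({"firm_id": range(500)})

# Lottery: every firm has the same chance of treatment (exact 50/50 split)
firms["treatment"] = rng.permutation([1] * 250 + [0] * 250)

# ...program runs; endline profits are collected (simulated here)...
firms["profit"] = 10 + 4 * firms["treatment"] + rng.normal(0, 3, len(firms))

# With random assignment, a simple difference in group means is an
# unbiased estimate of the program's impact
impact = (firms.loc[firms["treatment"] == 1, "profit"].mean()
          - firms.loc[firms["treatment"] == 0, "profit"].mean())
print(f"Estimated impact: {impact:.2f}")   # close to the true +4
```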
Options for Randomization
Lottery (only some receive): e.g., a lottery to receive new loans or credit for the community.
Random phase-in (everyone gets it eventually): some groups or individuals get credit each year (an assignment sketch follows after this list).
Variation in treatment: some get a matching grant, others get credit, others get business development services, etc.
Encouragement design: some farmers get a home visit to explain the loan product, others do not.
Lottery among the qualified: some applicants must receive the program and some are not suitable for it; among the rest, randomize who gets the program.
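One way such a random phase-in could be assigned (the village names and three-year rollout are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

villages = [f"village_{i:03d}" for i in range(90)]

# Random phase-in: every village is eventually treated, but the start year
# is assigned by lottery; later cohorts serve as the comparison group for
# earlier ones until they are phased in.
order = rng.permutation(villages)
phase_in_year = {village: 2008 + i // 30 for i, village in enumerate(order)}  # 30 per year

print(phase_in_year["village_000"])   # 2008, 2009, or 2010
```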
Opportunities for Randomization
Budget constraint prevents full coverage: random assignment (lottery) is fair and transparent.
Limited implementation capacity: phase-in gives all the same chance to go first.
No evidence on which alternative is best: random assignment to alternatives with an equal ex ante chance of success.
Opportunities for Randomization
Take-up of an existing program is not complete: provide information or an incentive for some to sign up (randomize the encouragement).
Pilot a new program: a good opportunity to test the design before scaling up.
Operational changes to ongoing programs: a good opportunity to test changes before scaling them up.
Different levels you can randomize at
Individual / owner / firm
Business association
Village level
School level
Women's association
Youth groups
Regulatory jurisdiction / administrative district

Group or individual randomization?
If a program impacts a whole group, usually randomize whole communities to treatment or comparison.
It is easier to get a big enough sample if you randomize individuals.
Unit of Randomization
Randomizing at a higher level is sometimes necessary:
Political constraints on differential treatment within a community
Practical constraints: confusing to implement different versions
Spillover effects may require higher-level randomization
Randomizing at the group level requires many groups because of within-community correlation.
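A minimal sketch of group-level (cluster) randomization, assuming a hypothetical household listing nested within villages:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical household listing: households nested within 40 villages
households = pd.DataFrame({
    "household_id": range(2000),
    "village": rng.integers(0, 40, size=2000),
})

# Randomize at the village level: every household in a village shares its
# village's assignment (useful when spillovers or implementation constraints
# rule out individual-level randomization)
villages = households["village"].unique()
treated_villages = rng.choice(villages, size=len(villages) // 2, replace=False)
households["treatment"] = households["village"].isin(treated_villages).astype(int)

print(households.groupby("treatment")["village"].nunique())   # 20 villages per arm
```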
Elements of an experimental design
Target population: SMEs
Potential participants: e.g., tailors and furniture manufacturers
Evaluation sample: drawn from the potential participants
Random assignment: splits the evaluation sample into a treatment group (participants) and a control group (non-participants)
External and Internal Validity (1)
External validity: the evaluation sample is representative of the total population; the results in the sample represent the results in the population, so we can apply the lessons to the whole population.
Internal validity: the intervention and comparison groups are truly comparable, so the estimated effect of the intervention/program on the evaluated population reflects the real impact on that population.
External and Internal Validity (2)
An evaluation can have internal validity without external validity
Example: A randomized evaluation of encouraging informal firms to register in urban areas may not tell us much about the impact of a similar program in rural areas.
An evaluation without internal validity cannot have external validity.
If you don’t know whether a program works in one place, then you have learnt nothing about whether it works elsewhere.
Internal & external validity
[Diagram: a random, representative sample is drawn from the national population, and that sample is then randomized into treatment and comparison groups]
Internal validity
Stratification
[Diagram: the population is divided into strata, samples are drawn from each population stratum, and randomization is carried out within each stratum]
Example: Evaluating a program that targets women (a stratified-assignment sketch follows below)
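A minimal sketch of stratified random assignment for the women-targeting example (the sample, the owner_female column, and the 50/50 split within each stratum are hypothetical):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical evaluation sample of firm owners
sample = pd.DataFrame({
    "firm_id": range(400),
    "owner_female": rng.integers(0, 2, 400),   # stratifying variable
})
sample["treatment"] = 0

# Randomize separately within each stratum so treatment and control are
# balanced on the stratifying variable (here, owner gender)
for _, idx in sample.groupby("owner_female").groups.items():
    n = len(idx)
    status = np.array([1] * (n // 2) + [0] * (n - n // 2))
    sample.loc[idx, "treatment"] = rng.permutation(status)

# Check: the share treated is ~50% within each stratum
print(sample.groupby("owner_female")["treatment"].mean())
```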
Representative but biased: useless
[Diagram: a representative sample of the national population with non-random assignment to treatment. USELESS!]
Efficacy & Effectiveness
Efficacy: proof of concept, smaller scale, pilot in ideal conditions.
Effectiveness: at scale, under prevailing implementation arrangements ("real life").
Higher or lower impact? Higher or lower costs?
Advantages of “experiments”
Clear and precise causal impact, relative to other methods:
Provide correct estimates
Much easier to analyze: a difference in averages
Easier to explain
More convincing to policymakers
Methodologically uncontroversial

Example: randomly assigning machines within a plant to receive regular maintenance.
Machines do NOT:
Raise ethical or practical concerns about randomization
Fail to comply with treatment
Find a better treatment
Move away, and so get lost to measurement
Refuse to answer questionnaires
Human beings can be a little more challenging!
What if there are constraints on randomization?
Some interventions can’t be assigned randomly
Partial take-up or demand-driven interventions: randomly promote the program to some; participants make their own choices about adoption.
Perhaps there is contamination, for instance if some in the control group take up treatment.
Randomly Assigned Marketing (Encouragement Design)
Those who receive the marketing treatment are more likely to enroll.
But who got marketing was determined randomly, so it is not correlated with other observables or non-observables.
Compare average outcomes of the two groups (promoted / not promoted):
Effect of offering the encouragement (Intent-To-Treat)
Effect of the intervention on the complier population (Local Average Treatment Effect)
LATE = ITT / proportion who took it up because of the encouragement (the difference in take-up between the two groups)
Randomization with full take-up: assigned to treatment vs. assigned to control
Proportion treated: 100% (treatment) vs. 0% (control); difference = 100% (impact of assignment)
Mean outcome: 103 (treatment) vs. 80 (control); difference = 23 (intent-to-treat estimate)
Average treatment effect on the treated: 23 / 100% = 23
Random encouragement: encouraged vs. not encouraged
Proportion treated: 70% (encouraged) vs. 30% (not encouraged); difference = 40% (impact of encouragement on take-up)
Outcome: 100 (encouraged) vs. 92 (not encouraged); difference = 8 (intent-to-treat estimate)
Local average treatment effect (on compliers): 8 / 40% = 20
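The arithmetic behind the two tables above, as a minimal sketch (numbers copied from the tables; the variable names are mine):

```python
# Full-compliance randomization (first table)
mean_assigned_treatment = 103
mean_assigned_control = 80
take_up_treatment, take_up_control = 1.00, 0.00

itt = mean_assigned_treatment - mean_assigned_control           # 23
att = itt / (take_up_treatment - take_up_control)                # 23 / 100% = 23

# Random encouragement (second table)
mean_encouraged = 100
mean_not_encouraged = 92
take_up_encouraged, take_up_not_encouraged = 0.70, 0.30

itt_enc = mean_encouraged - mean_not_encouraged                  # 8
late = itt_enc / (take_up_encouraged - take_up_not_encouraged)   # 8 / 40% = 20

print(att, late)   # 23.0 20.0
```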
Common pitfalls to avoid
Calculating sample size incorrectly, e.g., randomizing one district to treatment and one district to control but calculating sample size from the number of people you interview (see the sketch below).
Collecting data in treatment and control groups differently.
Counting those assigned to treatment who do not take up the program as control: don't undo your randomization!
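One way to see why the two-district example fails: with group-level randomization, within-group correlation shrinks the effective sample. A minimal sketch of the standard design-effect adjustment (the ICC value and interview counts are assumptions for illustration):

```python
def design_effect(cluster_size: int, icc: float) -> float:
    # Standard design effect for cluster randomization:
    # DEFF = 1 + (m - 1) * ICC, where m is the number of respondents per cluster
    return 1 + (cluster_size - 1) * icc

# Hypothetical example: 500 interviews per district, modest within-district correlation
deff = design_effect(cluster_size=500, icc=0.05)
effective_n = (2 * 500) / deff   # two districts, 1,000 interviews in total

print(f"Design effect: {deff:.1f}")                             # ~26
print(f"Effective sample size: {effective_n:.0f} respondents")  # ~39, nowhere near 1,000
# And with only two clusters, the district effect cannot be separated
# from the program effect at all.
```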
When is it really not possible?
The treatment is already assigned and announced, and there is no possibility for expansion of treatment.
The program is over (retrospective).
Take-up is already universal.
The program is national and non-excludable, e.g., freedom of the press or exchange rate policy (sometimes some components can still be randomized).
The sample size is too small to make it worth it.
Thank You