
Randomized Evaluation: Dos and Don’ts

An example from Peru

Tania Alfonso, Training Director, IPA

Outline

• Design
• Implementation
• Analysis

Outline

• Design
– Research question
– Power
– Randomization
– Sampling
• Implementation
• Analysis

Research question

• Do make sure the research question is policy relevant

• Do make sure your indicators are answering your research question

Power

• Don’t conduct an under-powered evaluation
– What does it mean to be under-powered?
– Sample size and power

Power

• Do power calculations first
– Effect size
– Sample size
– Getting data
– (What will take-up be?)
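As a concrete illustration, here is a minimal power-calculation sketch in Python, assuming a two-arm, individually randomized design. The 0.2 SD effect size and 60% take-up rate are illustrative assumptions, not values from the lecture.

```python
# A first-pass power calculation for a two-arm, individually randomized
# design. Values below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

mde = 0.20      # assumed minimum detectable effect, in standard deviations
take_up = 0.60  # assumed take-up rate in the treatment group

# Imperfect take-up dilutes the intention-to-treat effect, so power the
# study for the diluted effect (mde * take_up), not the full effect.
n_per_arm = TTestIndPower().solve_power(
    effect_size=mde * take_up, alpha=0.05, power=0.80
)
print(f"Required sample size per arm: {n_per_arm:.0f}")
```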

Power

• Do cluster your standard errors when doing power calculations
– Bad examples (two districts, 10,000 people)
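A minimal sketch of the same calculation adjusted for clustered randomization, using the standard design effect DEFF = 1 + (m − 1) × ICC. The ICC and cluster size below are illustrative assumptions; the closing comment ties back to the two-districts example.

```python
# Inflating a sample size for clustered randomization via the design
# effect. ICC and cluster size are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

icc = 0.05  # assumed intracluster correlation of the outcome
m = 30      # assumed respondents surveyed per cluster

# Per-arm n if we could randomize individuals (0.2 SD effect assumed).
n_ind = TTestIndPower().solve_power(effect_size=0.20, alpha=0.05, power=0.80)

# Randomizing clusters inflates the required n by the design effect.
deff = 1 + (m - 1) * icc
n_clustered = n_ind * deff
print(f"DEFF = {deff:.2f}; per-arm n rises from {n_ind:.0f} to "
      f"{n_clustered:.0f} (~{n_clustered / m:.0f} clusters per arm)")

# The slide's bad example: with only two districts there are two clusters
# in total, and 10,000 respondents cannot substitute for more clusters.
```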

Randomization

• Do ensure balance
– Stratification
– Re-randomizing
– Costs and benefits
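A minimal sketch of stratified random assignment, assuming a pandas DataFrame holding the sampling frame. The strata variables (district, female) and stratum sizes are illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(12345)  # fixed seed makes assignment reproducible

# Illustrative frame: 200 units across assumed strata (district, female).
frame = pd.DataFrame({
    "district": rng.choice(["Lima", "Cusco"], 200),
    "female": rng.integers(0, 2, 200),
})

def stratified_assign(df, strata_cols, treat_share=0.5):
    """Randomize within each stratum so arms are balanced on the strata."""
    df = df.copy()
    df["treatment"] = 0
    for _, idx in df.groupby(strata_cols).groups.items():
        shuffled = rng.permutation(np.asarray(idx))        # shuffle within stratum
        n_treat = int(round(len(shuffled) * treat_share))  # e.g. half treated
        df.loc[shuffled[:n_treat], "treatment"] = 1
    return df

frame = stratified_assign(frame, ["district", "female"])
# Balance check: treatment share should be ~50% within every stratum.
print(frame.groupby(["district", "female"])["treatment"].mean())
```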

Sampling

• Do make sure your sampling frame is as close to your target population as possible
– Effect size

Outline

• Design
• Implementation
– Measurement
– Monitoring
– Attrition

• Analysis

Measurement

• Don’t collect data differently for treatment and control groups
– Introducing bias

Measurement

• Don’t use as your primary indicator a measure that may change with the intervention even when the underlying outcome does not

Monitoring

• Do monitor your intervention to ensure the treatment groups are receiving the treatment, and control groups are not
– Contamination

Monitoring

• Do collect process indicators to unpack the black box

Attrition

• Do whatever it takes to minimize attrition
– Attrition bias

Outline

• Design
• Implementation
• Analysis
– Treatment integrity
– Attrition
– Final outcomes
– Subgroup analyses
– Covariates

Integrity of design
“Once in treatment, always in treatment”

• Don’t switch treatment or control status based on compliance
– Intention to Treat
– Treatment on Treated
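A minimal sketch contrasting the two estimators on simulated data: ITT compares groups as randomized, while TOT rescales the ITT effect by the take-up difference (the Wald/2SLS logic). All variable names and values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative simulated data: `assigned` is the random assignment,
# `took_up` is actual compliance (imperfect take-up, no crossovers).
rng = np.random.default_rng(0)
n = 1000
assigned = rng.integers(0, 2, n)
took_up = assigned * rng.binomial(1, 0.6, n)
outcome = 0.5 * took_up + rng.normal(size=n)
df = pd.DataFrame({"assigned": assigned, "took_up": took_up,
                   "outcome": outcome})

# ITT: compare groups as randomized, regardless of compliance.
itt = smf.ols("outcome ~ assigned", data=df).fit()

# First stage: how much did assignment move actual take-up?
first_stage = smf.ols("took_up ~ assigned", data=df).fit()

# TOT via the Wald estimator: ITT scaled by the take-up difference
# (equivalent to 2SLS with assignment as the instrument).
tot = itt.params["assigned"] / first_stage.params["assigned"]
print(f"ITT: {itt.params['assigned']:.3f}, TOT: {tot:.3f}")
```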

Attrition
“Once in sample, always in sample”

• Don’t ignore “attritors”
– Attrition bias

Attrition

• Don’t relax just because rates of attrition are the same in treatment and control groups
– How do we test?
– How do we know?
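A minimal sketch of two attrition checks on simulated data: first compare attrition rates across arms, then test whether attritors differ on baseline characteristics by arm. Column names and values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative simulated data; column names are assumptions.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "baseline_score": rng.normal(size=n),
    "attrited": rng.binomial(1, 0.15, n),  # 1 = lost to follow-up
})

# Check 1: do attrition *rates* differ by arm?
rates = smf.ols("attrited ~ treatment", data=df).fit(cov_type="HC1")

# Check 2: even with equal rates, are different *kinds* of people
# attriting in each arm? Compare attritors' baseline traits across arms.
selective = smf.ols(
    "baseline_score ~ treatment * attrited", data=df
).fit(cov_type="HC1")

print(f"Rate gap: {rates.params['treatment']:.3f}")
print(f"Selective-attrition term: {selective.params['treatment:attrited']:.3f}")
```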

Final outcomes

• Don’t run regressions on 20 different outcomes and only report on 1 or 2 “significant impacts”

• Do report on all outcomes
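A minimal sketch of how a family of outcomes might be reported with a multiple-testing adjustment, using statsmodels’ multipletests with the Holm correction; the p-values are illustrative.

```python
# Report every pre-specified outcome with familywise-error-adjusted
# p-values rather than cherry-picking the smallest raw ones.
from statsmodels.stats.multitest import multipletests

raw_pvals = [0.003, 0.04, 0.12, 0.45, 0.61]  # illustrative p-values

reject, adj_pvals, _, _ = multipletests(raw_pvals, alpha=0.05, method="holm")
for raw, adj, rej in zip(raw_pvals, adj_pvals, reject):
    print(f"raw p={raw:.3f}  adjusted p={adj:.3f}  significant={rej}")
```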

Sub-group analysis

• Don’t run regressions on 20 different subgroups and only report on 1 or 2 “significant impacts”
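One alternative is to pre-specify a single interaction model rather than running many separate subgroup regressions. A minimal sketch on simulated data, where the `female` subgroup is an illustrative assumption:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative simulated data; `female` is an assumed pre-specified subgroup.
rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({"treatment": rng.integers(0, 2, n),
                   "female": rng.integers(0, 2, n)})
df["outcome"] = 0.3 * df["treatment"] + rng.normal(size=n)

# Test heterogeneity with one pre-specified interaction rather than
# keeping only the "significant" results from many subgroup regressions.
model = smf.ols("outcome ~ treatment * female", data=df).fit(cov_type="HC1")
print(model.params[["treatment", "treatment:female"]])
```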

Covariates

• Do specify the regression(s) you plan to run beforehand

• Do include covariates that you stratified on and those helpful for absorbing variance.
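A minimal sketch of such a pre-specified regression on simulated data, with strata (district) fixed effects and a baseline outcome as covariates; all names and values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative simulated data; district is the assumed stratification
# variable and baseline_outcome an assumed baseline covariate.
rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "district": rng.choice(["Lima", "Cusco", "Piura"], n),
    "baseline_outcome": rng.normal(size=n),
})
df["outcome"] = (0.3 * df["treatment"] + 0.5 * df["baseline_outcome"]
                 + rng.normal(size=n))

# Pre-specified model: treatment plus strata fixed effects and the
# baseline outcome, which absorbs variance and tightens standard errors.
model = smf.ols(
    "outcome ~ treatment + C(district) + baseline_outcome", data=df
).fit()
print(model.params["treatment"], model.bse["treatment"])
```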

External Validity

• Do be modest about the external validity of your results
– Consider the context (needs assessment)
– Consider the process (process evaluation)

Cost effectiveness

• Do make sure you listened to Iqbal’s lecture yesterday
– Not sure if he is presenting or covered this… just a guess…