
Choice of study design: randomized and non-randomized approaches

Iná S. Santos
Federal University of Pelotas
Brazil

PAHO/PAHEF WORKSHOP
EDUCATION FOR CHILDHOOD OBESITY PREVENTION: A LIFE-COURSE APPROACH
Aruba, June 2012

Outline of the presentation

• Introduction
  – Types of evidence
  – Internal and external validity
• Randomized controlled trials
• Non-randomized designs

Key references:
• Victora CG et al. Evidence-based public health: moving beyond randomized trials. Am J Public Health 2004;94(3):400-405.
• Habicht JP et al. Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. Int J Epidemiol 1999;28(1):10-18.

Part I

• Introduction
  – Types of evidence
  – Internal and external validity

Types of epidemiological evidence for Public Health

Type of evidence                 Type of epidemiological study
Frequency of disease             Descriptive
Frequency of exposure            Descriptive
Exposure/disease relationship    Experimental (or observational)
Coverage of intervention         Descriptive
Efficacy of intervention         Experimental (or observational)
Programme effectiveness          Observational

Validity: internal and external

(nested populations, from broadest to narrowest)
External population
  Target population
    Actual population
      Sample

Validity

• Internal validity
  – Are the study results true for the target population?
  – Are there errors that affect the study findings?
    • Systematic error (bias, confounding)
    • Random error (precision)
• External validity
  – Generalizability
  – Are the study results applicable to other settings?

Validity

• Internal validity
  – May be judged on the basis of the study methods
• External validity
  – Requires a “value judgment”

Part II

Randomized controlled trials (RCTs)

Internal validity in probability studies

Issue: comparability of...   Probability study (RCT)   Bias avoided
Populations                  Randomization             Selection bias
Observations                 Blinding                  Information bias
Effects                      Use of placebo            Hawthorne effect, placebo effect

RCTs are the gold standard for internal validity.

RCT (from the Cochrane Collaboration)

• In an RCT, participants are assigned by chance to receive either an experimental or a control treatment.
• When an RCT is done properly, the effect of a treatment can be studied in groups of people who are the same at the outset and treated in the same way, except for the intervention being studied.
• Any differences then seen between the groups at the end of the trial can be attributed to the difference in treatment alone, and not to bias or chance.

Randomised controlled trials

• Prioritise internal validity
  – random allocation reduces selection bias and confounding
  – blinding reduces information bias
• Gained popularity through clinical trials of new drugs
• Essential for determining the efficacy of new biological agents
• Adequate for short causal chains
  – biological effects of drugs, vaccines, nutritional supplements, etc.

drug → pharmacological reaction → disease cure or alleviation

Pooling data from RCTs

• Systematic review
  – Comprehensive search for all high-quality scientific studies on a specific subject
    • e.g. on the effects of a drug, vaccine, surgical technique, behavioural intervention, etc.
• Meta-analysis
  – Groups data from different studies to determine an average effect (see the sketch below)
  – Improves the precision of the available estimates by including a greater number of people
  – But: data from different studies cannot always be combined
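
As a rough illustration of the pooling step, the sketch below applies fixed-effect, inverse-variance weighting to three hypothetical studies; the effect estimates and standard errors are invented, and inverse-variance weighting is only one of several pooling methods.

```python
# Minimal sketch of fixed-effect, inverse-variance meta-analysis.
# The (estimate, standard error) pairs are hypothetical, on the log
# risk-ratio scale, and serve only to show how pooling improves precision.
import math

studies = [(-0.25, 0.12), (-0.10, 0.08), (-0.30, 0.15)]

weights = [1 / se ** 2 for _, se in studies]                 # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))                      # smaller than any single SE

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled RR: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

The pooled standard error is smaller than that of any single study, which is the sense in which precision improves; heterogeneous studies, however, may call for a random-effects model or for not pooling at all.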

What does an RCT show?

• The probability that the observed result is due to the intervention
• But additional evidence is required to make this result conceptually plausible
  – Biological plausibility
  – Operational plausibility

Special issues in RCTs

• “Intent-to-treat” analyses
  – Individuals/groups should remain in the group to which they were originally assigned
• Units of analysis
  – When allocation is by group (e.g. health centres, communities), it is incorrect to analyse the data at the individual level as if individuals had been randomized
  – This has implications for sample size calculation and for analysis methods (see the design-effect sketch below)
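
A minimal sketch of the sample size implication, using the standard design-effect formula DEFF = 1 + (m − 1) × ICC; the base sample size, cluster size and intra-cluster correlation below are purely illustrative assumptions.

```python
# Minimal sketch: inflating an individually-randomized sample size for a
# cluster-randomized trial via the design effect DEFF = 1 + (m - 1) * ICC.
# All three input values are illustrative assumptions.
import math

n_individual = 400     # per arm, from an individual-level calculation
cluster_size = 20      # average individuals per cluster (m), e.g. per health centre
icc = 0.05             # assumed intra-cluster correlation coefficient

deff = 1 + (cluster_size - 1) * icc
n_per_arm = math.ceil(n_individual * deff)
clusters_per_arm = math.ceil(n_per_arm / cluster_size)

print(f"Design effect: {deff:.2f}")                    # 1.95
print(f"Individuals per arm: {n_per_arm}")             # 780
print(f"Clusters per arm:    {clusters_per_arm}")      # 39
```

The same clustering must then be respected in the analysis, for example with cluster-level summaries or random-effects/robust-variance methods.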

CONSORT Statement

• Allocation
• Rationale
• Eligibility
• Interventions
• Objectives
• Outcomes
• Sample size
• Randomization
  – Sequence generation (see the sketch below)
  – Concealment
  – Implementation
  – Blinding (masking)
• Statistical methods
• Participant flow
• Recruitment
• Baseline data
• Numbers analyzed
• Outcomes and estimation
• Ancillary analyses
• Adverse events
• Interpretation
• Generalizability
• Overall evidence
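
To make the “sequence generation” item concrete, here is a minimal sketch of permuted-block randomization; the block size, arm labels and seed are arbitrary illustrative choices, not part of the CONSORT statement itself.

```python
# Minimal sketch of permuted-block sequence generation for a two-arm trial.
# Block size, arm labels and seed are arbitrary illustrative choices.
import random

def blocked_sequence(n_participants, block_size=4, arms=("A", "B"), seed=2012):
    rng = random.Random(seed)              # fixed seed makes the list reproducible
    per_arm = block_size // len(arms)
    sequence = []
    while len(sequence) < n_participants:
        block = list(arms) * per_arm       # balanced block, e.g. A A B B
        rng.shuffle(block)                 # random order within the block
        sequence.extend(block)
    return sequence[:n_participants]

print(blocked_sequence(12))   # allocation list handed to whoever conceals it
```

In practice the sequence is generated by someone not involved in enrolment and kept concealed until each participant has been assigned.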

Major steps in Public Health trials

• Central-level provision of intervention to local outlets (e.g. health facilities)

• Local providers’ compliance with delivery of intervention

• Recipient compliance with intervention

• Biological effect of intervention

Source: Victora, Habicht, Bryce, AJPH 2004

Example of Public Health Intervention: Nutrition Counselling Trial

Causal chain:
National programme is implemented → Health workers are trained → HW knowledge increases → HW performance improves → Maternal knowledge increases → Child diets change → Energy intake increases → Nutritional status improves

Assumptions underlying the steps of the chain:
• Central team is competent
• HWs are trainable
• Equipment is available
• Utilization is adequate
• Food is available
• Lack of food is a cause of malnutrition

Source: Santos, Victora et al. J Nutr 2001

Example of Public Health Intervention: Nutrition Counselling Trial (continued)

The same causal chain, annotated with the probability of overall success: if each of the seven steps from “health workers are trained” to “nutritional status improves” succeeds with probability 0.80, the probability that the programme improves nutritional status is

0.80^7 = 0.21

Source: Santos, Victora et al. J Nutr 2001
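
A minimal sketch of the arithmetic behind the 0.80^7 figure, under the simplifying assumption that each of the seven steps succeeds independently with probability 0.80:

```python
# Minimal sketch: overall probability that a long causal chain delivers its
# impact, assuming each step succeeds independently with probability 0.80.
from math import prod

steps = [
    "health workers are trained",
    "HW knowledge increases",
    "HW performance improves",
    "maternal knowledge increases",
    "child diets change",
    "energy intake increases",
    "nutritional status improves",
]
p_step = 0.80

p_overall = prod(p_step for _ in steps)   # 0.80 ** 7
print(f"{len(steps)} steps at p = {p_step:.2f} each -> overall p ≈ {p_overall:.2f}")
```

Even fairly reliable individual steps multiply down to a modest overall probability, which is why long causal chains make Public Health interventions hard to evaluate through a single end-point.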

Are RCT findings generalizable to routine programmes?

• The dose of the intervention may be smaller
  – behavioural effect modification
    • provider behaviour
    • recipient behaviour
• The dose-response relationship may be different
  – biological effect modification

The longer the causal chain, the more likely effect modification becomes.

Source: Victora, Habicht, Bryce, AJPH 2004

Curvilinear associations

[Figure: curvilinear dose-response curve of response (y-axis) against need for intervention (x-axis); trials are often done at one end of the curve, while results are often applied at the other.]

Source: Victora, Habicht, Bryce, AJPH 2004

Why do RCTs have a limited role in large-scale effectiveness evaluations?

• Often impossible to randomize
  – unethical, politically unacceptable, rapid scaling up
• The evaluation team affects service delivery
  – service delivery becomes at least “best-practice”
• Effect modification is the rule
  – are meta-analyses of complex programmes meaningful?
  – need for local data
• Need for supplementary approaches to evaluation in Public Health

Part III

Non-randomized designs

(Quasi-experiments)

Types of inference in impact evaluations

• Adequacy (descriptive studies)
  – the expected changes are taking place
• Plausibility (observational studies)
  – observed changes seem to be due to the programme
• Probability (RCTs)
  – a randomised trial shows that the programme has a statistically significant impact

Source: Habicht, Victora, Vaughan, IJE 1999

Ensuring internal validity in probability and plausibility studies

Issue: comparability of...   Probability (RCT)   Plausibility (quasi-experiment)
Populations                  Randomization       Matching; understanding determinants of implementation; handling contextual factors
Observations                 Blinding            Avoiding information bias
Effects                      Use of placebo      Being aware of the Hawthorne effect and of the placebo effect

Adequacy evaluations

• Questions:
  – Were the initial goals achieved?
    • e.g. reduce under-five mortality by 20% (see the sketch below)
  – Were the observed trends in impact indicators
    • in the expected direction?
    • of adequate magnitude?
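
A minimal sketch of such an adequacy check, with hypothetical baseline and endline under-five mortality rates:

```python
# Minimal sketch of an adequacy check against a stated goal
# (reduce under-five mortality by 20%). Both rates are hypothetical.
goal_reduction = 0.20

u5mr_baseline = 55.0   # deaths per 1,000 live births, before the programme
u5mr_endline = 46.0    # deaths per 1,000 live births, after the programme

observed = (u5mr_baseline - u5mr_endline) / u5mr_baseline
print(f"Observed reduction: {observed:.0%} (goal: {goal_reduction:.0%}) "
      f"-> adequate: {observed >= goal_reduction}")
```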

Plausibility evaluations

• Question:
  – Is the observed impact likely to be due to the intervention?
• Requires ruling out the influence of external factors:
  – need for a comparison group
  – adjustment for confounders
• Also known as quasi-experiments

Adequacy/plausibility designs (1)

• Design: cross-sectional
• Measurement points: once
• Outcome: difference or ratio
• Control group:
  – Individuals who did not receive the intervention
  – Groups/areas without the intervention
  – Dose-response analyses, if possible

ORT and diarrhea deaths in Brazil

[Figure: scatterplot of O.R.T. coverage (30-45%, x-axis) against infant diarrhea deaths (10-40%, y-axis); each dot = 1 state; Spearman r = -0.61 (p = 0.04).]
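
A minimal sketch of this kind of ecological dose-response analysis, using scipy's Spearman rank correlation; the coverage and mortality values below are invented, not the Brazilian data.

```python
# Minimal sketch of an ecological dose-response analysis: Spearman rank
# correlation between intervention coverage and an impact indicator across
# areas (each observation = one state). All values are invented.
from scipy.stats import spearmanr

ort_coverage = [0.31, 0.33, 0.36, 0.38, 0.40, 0.42, 0.45]      # ORT coverage by state
diarrhea_deaths = [0.38, 0.35, 0.31, 0.33, 0.25, 0.20, 0.17]   # infant diarrhea deaths (%)

rho, p_value = spearmanr(ort_coverage, diarrhea_deaths)
print(f"Spearman r = {rho:.2f}, p = {p_value:.3f}")
```

Being at area level, such a correlation supports plausibility but cannot by itself establish individual-level causation (the ecological fallacy).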

Adequacy/plausibility designs (2)

• Design: longitudinal (before-and-after)
• Measurement points: twice or more
• Outcome: change
• Control group:
  – The same or similar individuals, before the intervention
  – The same groups/areas, before the intervention
  – Time-trend analyses, if possible

Hib vaccine in Uruguay

In Uruguay, reported Hib cases declined by over 95 percent after the introduction of routine infant Hib immunisation in 1994.

Source: PAHO, 2004

Adequacy/plausibility designs (3)

• Design: longitudinal with control group
• Measurement points: twice or more
• Outcome: relative change (change in the intervention group compared with change in the control group; see the sketch below)
• Control group:
  – Individuals or groups/areas not receiving the intervention, measured at the same points in time as the intervention group
  – Time-trend analyses, if possible
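
A minimal sketch of the “relative change” comparison (a simple two-point difference-in-differences); the four prevalence values are hypothetical.

```python
# Minimal sketch of relative change in a longitudinal design with a control
# area: change in the intervention area minus change in the control area.
# All four prevalence values are hypothetical.
intervention_before, intervention_after = 0.43, 0.31   # e.g. stunting prevalence
control_before, control_after = 0.41, 0.38

change_intervention = intervention_after - intervention_before
change_control = control_after - control_before

relative_change = change_intervention - change_control
print(f"Intervention change: {change_intervention:+.2f}, "
      f"control change: {change_control:+.2f}, "
      f"relative change: {relative_change:+.2f}")
```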

Adequacy/plausibility designs (4)

• Design: case-control
• Measurement points: once
• Comparison: exposure to the intervention
• Groups:
  – Cases: individuals with the disease of interest
  – Controls: sample of the population from which the cases originated

Stunting in Tanzania

[Figure: stunting prevalence (%) among children aged 24-59 months in 1999 and 2002, by district: Morogoro (IMCI), Rufiji (IMCI), Ulanga and Kilombero; p (mean HAZ) = 0.05.]

Source: Schellenberg J et al.

TREND Statement

• Transparent Reporting of Evaluations with Nonrandomized Designs (TREND)
• Similar to the CONSORT guidelines
• Includes
  – the conceptual framework used
  – intervention and comparison conditions
  – research design
  – methods of adjusting for possible biases
• AJPH, March 2004

Source: Des Jarlais, Lyles, Crepaz and the TREND Group, AJPH 2004

Conclusions (1)

• RCTs are essential for
  – clinical studies
  – community studies establishing the efficacy of relatively simple interventions
• RCTs require additional evidence from non-randomised studies to increase their external validity

Conclusions (2)

• Given the complexity of many Public Health interventions, adequacy and plausibility studies are essential in different populations
  – even for interventions already proven in RCTs
• Adequacy evaluations should become part of the routine of decision-makers
  – and plausibility evaluations too, when possible

THANK YOU