Fundamental Principles of Epidemiologic Study Design

F. Bruce Coles, D.O., Assistant Professor

Louise-Anne McNutt, PhD, Associate Professor

University at Albany, School of Public Health

Department of Epidemiology and Biostatistics

What are the primary goals of the Epidemiologist?

To assess the determinants and possible causes of disease

To identify/develop effective interventions to prevent or control disease

To describe the frequency of disease and its distribution

Are exposure and disease linked?

(Does the exposure cause the disease?)

The Basic Question…

Based upon: Maldonado G and Greenland S. Estimating Causal Effects. Int J Epidemiol 2002;31:422-29.

Please! Show me! What is the true effect of the exposure on the occurrence of the disease?

My son, what do you mean by the true effect?

The true risk if a person is exposed versus if they are not exposed… A true relative risk…

So, you want a descriptive incidence proportion ratio?

No…causal! I want to know the real effect of the exposure isolated from all other possible causes…

For whom do you want it? An individual? A population? If the latter, which one? And for what time period?

What? I…

The value of a causal incidence proportion ratio can be different for different groups of people and for different time periods. It is not necessarily a biological constant, you know…

Uh… Everyone in the entire population? And calculate it for one year…

Okay. Comparing what two exposure levels?

Exposed versus unexposed…

What do you mean by exposed and unexposed? Exposed how much, for how long, and in what time period? There are a lot of different ways you could define exposed and unexposed…and each of the corresponding possible ratios can have a different true value, you know.

Oh please, Great God of Epidemiology… Why does it have to be so hard? I’ll take any exposure, for any amount of time, versus no exposure at all. How’s that?

My son, you want an absolute counterfactual… I’m only the God of Epi…not a miracle worker.

“We may define a cause to be an object followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. Or, in other words, where, if the first object had not been, the second never had existed.”

David Hume, Philosopher (1748)

Counterfactual Analysis

E is a cause of D if…

under actual (factual) conditions, when E occurs, D follows…

and…

if, under conditions contrary to the actual conditions (counterfactual), when E does not occur, D does not occur

Counterfactual Model of Causation

If we can turn back time…

We can answer the question!

But, Oh Great God of Epidemiology… If I can’t go back in time and observe the unobservable…what CAN I do to determine the cause of a disease?

You must, my son, move from the dream of theoretical perfection to the next best thing…

Substitution!!!

Once you clearly define your study question, choose a target population that corresponds to that question… then choose a study design and sample subjects from that target population to balance the tradeoffs in bias, variance, and loss to follow-up…
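A minimal Python sketch of that substitution, using invented one-year counts: the observed risk in an unexposed group stands in for the unobservable counterfactual risk of the exposed group.

```python
# Hypothetical counts, invented for illustration: the unexposed group's
# observed risk substitutes for the exposed group's unobservable
# counterfactual risk.

def incidence_proportion(cases, persons_at_risk):
    """Incidence proportion (risk) of disease over the follow-up period."""
    return cases / persons_at_risk

risk_exposed = incidence_proportion(cases=30, persons_at_risk=1000)
risk_unexposed = incidence_proportion(cases=10, persons_at_risk=1000)

# Incidence proportion ratio (relative risk) comparing exposed vs. unexposed
risk_ratio = risk_exposed / risk_unexposed
print(f"Incidence proportion ratio: {risk_ratio:.1f}")  # 3.0
```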

• Experimental – (Randomized Controlled Trials)

• Observational
– Descriptive
– Analytical: Case-Control, Cohort (+ cross-sectional & ecologic)

Epidemiologic Study Designs

Descriptive Studies

Examine patterns of disease

Analytical Studies

Studies of suspected causes of disease

Experimental Studies

Compare treatment or intervention modalities

Epidemiologic Study Designs

[Figure: Epidemiologic Study Designs – Grimes & Schulz, 2002]

[Figure: Hierarchy of Epidemiologic Study Design – Tower & Spector, 2007]

We will descend the ladder of perfection. So we begin with…

When considering any etiologic study, keep in mind two issues related to participant (patient) heterogeneity: the effect of chance and the effect of bias…

• Recommended to achieve a valid determination of the comparative benefit of competing intervention strategies:

- Prevention

- Screening

- Treatment

- Management

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

Randomized Controlled Trials

• Continuum: healthy > elevated risk > precursor abnormality (preclinical) > disease

• Prevention or Screening trial:
– drawn from “normal” healthy population
– may be selected due to elevated (“high”) risk

• Treatment or Management trial:
– clinical trials
– diseased patients

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

RCT: Participant Characteristics

• Phase I
– Safety test: investigate dosage, route of administration, and toxicity
– Usually not randomized

• Phase II
– Look for evidence of “activity” of an intervention, e.g., evidence of tumor shrinkage, change in a biomarker
– Tolerability
– May be small randomized, blinded, or non-randomized

• Phase III
– Randomized design to investigate “efficacy,” i.e., under the most ideal conditions

• Phase IV
– Designed to assess “effectiveness” of a proven intervention under wide-scale (“real world”) conditions

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

RCT: Phase Objectives

When we talk about a “clinical trial” of medications, we almost always mean a Phase III clinical trial.

RCT: Phases

• Efficacy – does the intervention work in tightly controlled conditions?

– Strict inclusion/exclusion criteria
– Highly standardized treatments
– Explicit procedures for ensuring compliance
– Focus on direct outcomes

Efficacy vs. Effectiveness

• Effectiveness – does the intervention work in ‘real world’ conditions?

– Looser inclusion/exclusion criteria
– Treatments carried out by typical clinical personnel
– Little or no provision for ensuring compliance
– Focus on less direct outcomes (e.g., quality of life)

Efficacy vs. Effectiveness

• investigator controls the predictor variable (intervention or treatment)

• randomization controls unmeasured confounding

• ability to assess causality much greater than in observational studies

RCT: Advantages

[Schematic: RCT design over time, from baseline to future]

Study population → RANDOMIZATION (study begins here, at the baseline point)
→ Intervention arm → outcome / no outcome
→ Control arm → outcome / no outcome

RCT: Design

1. Select participants
– High-risk for outcome (high incidence)
– Likely to benefit and not be harmed
– Likely to adhere

• Pre-trial run-in period?

RCT: Steps in study procedures

• Pro
– Provides stabilization and baseline
– Tests endurance/reliability of subjects

• Con
– Can be perceived as too demanding

RCT: Pre-trial “Run-in” Period

2. Measure baseline variables

3. Randomize
– Eliminates baseline confounding
– Types (a block-randomization sketch follows this slide)
• Simple – two arm
• Stratified – multi-arm; factorial
• Block – group

RCT: Steps in study procedures
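A minimal sketch of permuted-block randomization for a two-arm trial; the arm labels, block size, and seed are illustrative assumptions, not from the lecture. Blocks keep the arms balanced as enrollment proceeds.

```python
import random

def block_randomize(n_participants, block_size=4,
                    arms=("intervention", "control"), seed=2024):
    """Permuted-block randomization: each block holds equal numbers of each arm."""
    rng = random.Random(seed)          # seed shown only for reproducibility
    assignments = []
    while len(assignments) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)             # order within each block is unpredictable
        assignments.extend(block)
    return assignments[:n_participants]

print(block_randomize(10))
```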

4. Assess need for blinding the intervention
– Can be as important as randomization
– Eliminates
• co-intervention
• biased outcome ascertainment
• biased measurement of outcome

5. Follow subjects
– Adherence to protocol
– Loss to follow-up

6. Measure outcome
– Clinically important measures
– Adverse events

RCT: Steps in study procedures

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

• Why is the study being done?
- the objectives should be clearly defined
- will help determine the outcome measures
- single primary outcome with limited secondary outcome measures

• What is being compared to what?
- two-arm trial: experimental intervention vs. nothing, placebo, standard intervention, or a different dose or duration
- multi-arm
- factorial
- groups

RCT: Design Concepts (5 questions)

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

• Which type of intervention is being assessed?
- well-defined
- tightly controlled: a new intervention
- flexible: assessing one already in use
- multifaceted?

• Who is the target population?
- eligibility
= restriction: enhance statistical power by having a more homogeneous group, a higher rate of outcome events, a higher rate of benefit
= practical consideration: accessible
- include:
= potential to benefit
= effect can be detected
= those most likely to adhere
- exclude:
= unacceptable risk
= competing risk (condition)

RCT: Design Concepts (5 questions)

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

• How many should be enrolled? (a sample-size sketch follows this slide)
- ensure power to detect the intervention effect
- increase sample size to increase precision of the estimate of the intervention effect (decreases variability (standard error))
- subgroups
= require increased sample size
= risk of spurious results increases with a greater number of subgroup analyses

RCT: Design Concepts (5 questions)
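A minimal sketch using the standard normal-approximation formula for comparing two outcome proportions; the formula and the example proportions are generic illustrations, not taken from Green 2002.

```python
from math import ceil
from statistics import NormalDist  # Python 3.8+

def n_per_arm(p_control, p_intervention, alpha=0.05, power=0.80):
    """Approximate participants per arm to detect the difference between
    two outcome proportions with a two-sided alpha and given power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_intervention) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_control * (1 - p_control)
                    + p_intervention * (1 - p_intervention)) ** 0.5) ** 2
    return ceil(num / (p_control - p_intervention) ** 2)

# Hypothetical: 20% outcome risk in controls, 10% expected with intervention
print(n_per_arm(0.20, 0.10))  # ~199 per arm, before any loss to follow-up
```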

Green SB. Design of Randomized Trials.Epidemiol Reviews 2002;24:4-11

• answer more than one question by addressing more than one comparison of interventions

                      Intervention A      Intervention not-A
 Intervention B       A and B             not-A and B
 Intervention not-B   A and not-B         not-A and not-B

• important that the two interventions can be given together (mechanisms of action differ)
- no serious interactions expected
- interaction effect is of interest
(a factorial-assignment sketch follows this slide)

RCT: Factorial Design
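A minimal sketch of how a 2 x 2 factorial trial randomizes each factor independently, producing the four cells above; the arm labels and seed are illustrative assumptions.

```python
import random
from itertools import product

FACTOR_1 = ("A", "not-A")
FACTOR_2 = ("B", "not-B")
CELLS = list(product(FACTOR_1, FACTOR_2))  # the four intervention combinations

def factorial_assign(n_participants, seed=7):
    """Randomize each factor independently; every participant lands in one cell."""
    rng = random.Random(seed)
    return [(rng.choice(FACTOR_1), rng.choice(FACTOR_2))
            for _ in range(n_participants)]

print(CELLS)
print(factorial_assign(6))
```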

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

• settings
- communities, villages
- workplaces
- schools or classrooms
- religious institutions
- social organizations
- families
- clinics

• concerns (a design-effect sketch follows this slide)
- less efficient statistically than individual randomization
- must account for correlation of individuals within a cluster
- must assure adequate sample size

RCT: Group Randomization
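A minimal sketch of the usual design-effect adjustment for cluster (group) randomization; the cluster size and intracluster correlation coefficient (ICC) below are invented for illustration.

```python
def cluster_adjusted_n(n_individual, cluster_size, icc):
    """Inflate an individually-randomized sample size for within-cluster correlation."""
    design_effect = 1 + (cluster_size - 1) * icc
    return n_individual * design_effect

# Hypothetical: 400 participants needed under individual randomization,
# clinics of about 20 participants each, ICC = 0.02
print(cluster_adjusted_n(400, cluster_size=20, icc=0.02))  # 552.0
```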

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

• advantages
- feasibility of delivering the intervention
- avoids contamination between those assigned to different interventions
- decreased cost
- possibly greater generalizability

• intervention applications
- behavioral and lifestyle interventions
- infectious disease interventions (vaccines)
- studies of screening approaches
- health services research
- studies of new drugs (or other agents) in short supply

RCT: Group Randomization

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

• The procedure for randomization should be:
- unbiased
- unpredictable (for participants and study personnel recruiting and enrolling them)

• Timing
- randomize after determining eligibility
- avoid delays in implementation to minimize the possibility of participants becoming non-candidates

• Run-in period
- brief
- all participants started on the same intervention
- those who comply are enrolled

RCT: Maintaining the Integrity of Randomization

Single blind – participants are unaware of treatment group

Double blind – both participants and investigators are unaware

Triple blind – various meanings
• person performing tests
• outcome auditors
• safety monitoring groups*

(* some clinical trials experts oppose this practice – it inhibits the ability to weigh benefits and adverse effects and to assure ethical standards are maintained)

RCT: Blinding (Masking)

Why blind? … To avoid biased outcome ascertainment or adjudication

• If group assignment is known
- participants may report symptoms or outcomes differently
- physicians or investigators may elicit symptoms or outcomes differently
- study staff or adjudicators may classify similar events differently in treatment groups

• Problematic with “soft” outcomes
- investigator judgment
- participant-reported symptoms, scales

RCT: Blinding (Masking)

• Unintended effective interventions
– participants use other therapy or change behavior
– study staff, medical providers, family or friends treat participants differently

• Nondifferential - decreases power

• Differential - causes bias

RCT: Why Blind? … Co-interventions

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

• Feasibility depends upon study design
- yes: drug–placebo trial
- no: surgical vs. medical intervention
- no: drug with obvious side effects
- trials with survival as an outcome are little affected by inability to mask the observer
- an independent, masked observer may be used for:
= studies with subjective outcome measures
= studies with objective endpoints (scans, photographs, histopathology slides, cardiograms)

RCT: Blinding (Masking)

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

• includes all participants regardless of what occurs after randomization

• maintains comparability in expectation across intervention groups

• excluding participants after randomization introduces bias that randomization was designed to avoid

• may not be necessary with trials using a pre-randomization screening test with results available after the intervention begins, provided eligibility is not influenced by the randomized assignment

• must consider impact of noncompliance (an intention-to-treat sketch follows this slide)

RCT: Intention-to-treat Analysis
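A minimal sketch, with invented records, of the intention-to-treat principle: every randomized participant is analyzed in the arm assigned, whether or not they adhered or dropped out.

```python
# Each record: (assigned arm, adhered to protocol, had outcome) -- invented data
participants = [
    ("intervention", True,  False),
    ("intervention", False, True),   # non-adherent, still analyzed as intervention
    ("intervention", True,  False),
    ("control",      True,  True),
    ("control",      False, True),   # dropped out, still analyzed as control
    ("control",      True,  False),
]

def itt_risk(arm):
    """Outcome risk by assigned arm, ignoring adherence (intention-to-treat)."""
    assigned = [p for p in participants if p[0] == arm]
    return sum(1 for _, _, outcome in assigned if outcome) / len(assigned)

print("Intervention arm risk:", round(itt_risk("intervention"), 2))  # 0.33
print("Control arm risk:     ", round(itt_risk("control"), 2))       # 0.67
```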

Green SB. Design of Randomized Trials. Epidemiol Reviews 2002;24:4-11

• decreases power
- remedy: inflate sample size to account for expected LTF (a sketch follows this slide)

• increases bias
- remedy (difficult): design the study (and consent process) to follow participants who drop out

RCT: Accounting for loss to follow-up (LTF)
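A minimal sketch of the standard sample-size inflation for expected loss to follow-up; the 15% dropout figure is illustrative.

```python
from math import ceil

def inflate_for_ltf(n_required, expected_ltf):
    """Enroll enough extra participants to offset the expected loss to follow-up."""
    return ceil(n_required / (1 - expected_ltf))

print(inflate_for_ltf(200, expected_ltf=0.15))  # enroll 236 to retain ~200
```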

• Intention-to-treat analysis
– Most conservative interpretation
– Include all persons assigned to the intervention group (including those who did not get treatment or dropped out)

• Subgroup analysis
– Groups identified pre-randomization

RCT: Analysis

• Tamper-proof randomization

• Blinding of participants, study staff, lab staff, outcome ascertainment and adjudication

• Adherence to study intervention and protocol

• Complete follow-up

The Ideal Randomized Trial