Conceptualizing Intervention Fidelity: Implications for Measurement, Design, and Analysis

Implementation: What to Consider At Different Stages in the Research Process
Panel presentation for the Institute of Education Sciences Annual Grantee Meeting, September 7, 2011
Chris S. Hulleman, Ph.D.


TRANSCRIPT

Page 1: Conceptualizing Intervention Fidelity:  Implications for Measurement, Design, and Analysis

Conceptualizing Intervention Fidelity: Implications for Measurement, Design, and Analysis

Implementation: What to Consider At Different Stages in the Research Process
Panel presentation for the Institute of Education Sciences Annual Grantee Meeting

September 7, 2011

Chris S. Hulleman, Ph.D.

Page 2

Implementation vs. Implementation Fidelity

Descriptive: What happened as the intervention was implemented?

A priori model: How much, and with what quality, were the core intervention components implemented?

Implementation Assessment Continuum

Fidelity: How faithful was the implemented intervention (tTx) to the intended intervention (TTx)?

Infidelity: TTx – tTx

Most assessments include both

Page 3

Linking Fidelity to Causal Models

Rubin’s Causal Model:
– True causal effect of X for unit i is (YiTx – YiC)
– An RCT is the best approximation
– Tx – C = average causal effect

Fidelity Assessment:
– Examines the difference between the implemented causal components in Tx and C
– This difference is the achieved relative strength (ARS) of the intervention
– Theoretical relative strength = TTx – TC
– Achieved relative strength = tTx – tC

The ARS serves as an index of fidelity.
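The quantities on this slide can be sketched in a few lines of Python. The numeric values are illustrative (0.40 and 0.15 echo the worked example on the final slide), and the ratio-style fidelity index at the end is just one possible choice, not necessarily the index the talk intends:

```python
# All values are hypothetical, for illustration only.
T_tx, T_c = 0.40, 0.15   # theoretical (intended) strength of Tx and C
t_tx, t_c = 0.30, 0.15   # achieved (implemented) strength of Tx and C

theoretical_rs = T_tx - T_c   # theoretical relative strength
achieved_rs = t_tx - t_c      # achieved relative strength (ARS)
infidelity = T_tx - t_tx      # degradation of the treatment condition

# One possible fidelity index: achieved strength as a share of theoretical
fidelity_index = achieved_rs / theoretical_rs
print(f"ARS = {achieved_rs:.2f}, infidelity = {infidelity:.2f}, "
      f"index = {fidelity_index:.2f}")
```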

Page 4

Implementation assessment typically captures…

(1) Essential or core components (activities, processes, structures)
(2) Necessary, but not unique, activities, processes, and structures (supporting the essential components of Tx)
(3) Best practices
(4) Ordinary features of the setting (shared with the control group)

Intervention fidelity assessment

Page 5

Why is this Important?

Construct Validity
– Which is the cause: (TTx – TC) or (tTx – tC)?
– Degradation due to poor implementation, contamination, or similarity between Tx and C

External Validity
– Generalization is about tTx – tC
– Implications for future specification of Tx
– Program failure vs. implementation failure

Statistical Conclusion Validity
– Variability in implementation increases error, and reduces effect size and power
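The last point can be illustrated with a small simulation (every number here is hypothetical): holding the average dose constant while letting implementation vary across treated units inflates outcome variance and shrinks the standardized effect size.

```python
# Illustration only: variable implementation attenuates the observed
# standardized effect even when the average dose is unchanged.
import random
import statistics

random.seed(42)
n = 50_000
per_unit_effect = 1.0  # hypothetical effect of one "unit" of the intervention

control = [random.gauss(0, 1) for _ in range(n)]

# Perfect fidelity: every treated unit receives dose = 1.0
tx_full = [per_unit_effect * 1.0 + random.gauss(0, 1) for _ in range(n)]

# Variable fidelity: same average dose, but doses vary across units
tx_var = [per_unit_effect * random.gauss(1.0, 0.6) + random.gauss(0, 1)
          for _ in range(n)]

def cohens_d(tx, c):
    sd_pooled = ((statistics.variance(tx) + statistics.variance(c)) / 2) ** 0.5
    return (statistics.mean(tx) - statistics.mean(c)) / sd_pooled

d_full = cohens_d(tx_full, control)
d_var = cohens_d(tx_var, control)
print(f"d with full fidelity:     {d_full:.2f}")
print(f"d with variable fidelity: {d_var:.2f}")
```

The variable-fidelity group has the same mean dose, so the numerator is (on average) unchanged; the extra dose variability shows up entirely in the pooled SD, which is why d shrinks.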

Page 6

Why is this important? Reading First implementation results

Component: Reading Instruction

Sub-component                          Performance levels     ARS
                                       RF       Non-RF
Daily (min.)                           105.0    87.0          0.63
Daily in 5 components (min.)           59.0     50.8          0.35
Daily with high-quality practice       18.1     16.2          0.11
Overall average                                               0.35

Adapted from Gamse et al. (2008) and Moss et al. (2008)

Effect Size Impact of Reading First on Reading Outcomes = .05
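The overall average in the table is consistent with a simple aggregation of the component-level ARS values; an unweighted mean (one choice among several, since weighting of components is itself an open question) gives roughly the reported figure:

```python
# Unweighted mean of the component-level ARS values from the table above.
from statistics import mean

component_ars = {
    "daily minutes of reading instruction": 0.63,
    "daily minutes in the 5 components": 0.35,
    "daily minutes with high-quality practice": 0.11,
}

overall_ars = mean(component_ars.values())
print(f"overall ARS (unweighted mean): {overall_ars:.2f}")  # ~0.36
```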

Page 7

5-Step Process (Cordray, 2007)

1. Specify the intervention model
2. Develop fidelity indices
3. Determine reliability and validity
4. Combine indices
5. Link fidelity to outcomes

(Phases: Conceptual → Measurement → Analytical)

Page 8

Some Challenges

Intervention Models

• Unclear interventions
• Scripted vs. unscripted interventions
• Intervention components vs. best practices

Measurement

• Novel constructs: standardize methods and reporting (i.e., ARS) but not measures (which are Tx-specific)
• Measure in both Tx & C
• Aggregation (or not) within and across levels

Analyses

• Weighting of components
• Psychometric properties?
• Functional form?
• Analytic frameworks: descriptive vs. causal (e.g., ITT) vs. explanatory (e.g., LATE)
• See Howard’s talk next!

Future Implementation

• Zone of tolerable adaptation
• Systematically test the impact of fidelity to core components
• Tx strength (e.g., ARS): How big is big enough?

Page 9

Treatment Strength (ARS): How Big is Big Enough?

Study                 Fidelity (ARS)   Outcome effect size
Motivation – Lab      1.88             0.83
Motivation – Field    0.80             0.33
Reading First*        0.35             0.05

*Averaged over 1st, 2nd, and 3rd grades (Gamse et al., 2008).

Page 10

Thank You!

And special thanks to my collaborators:

Catherine Darrow, Ph.D.

Amy Cassata-Widera, Ph.D.

David S. Cordray
Michael Nelson
Evan Sommer
Anne Garrison

Charles Munter

Page 11

Chris Hulleman is an assistant professor at James Madison University with joint appointments in Graduate Psychology and the Center for Assessment and Research Studies. Chris also co-directs the Motivation Research Institute at James Madison. He received his PhD in social/personality psychology from the University of Wisconsin-Madison in 2007, and then spent two years as an Institute of Education Sciences Research Fellow in Vanderbilt University’s Peabody College of Education. In 2009, he won the Pintrich Outstanding Dissertation Award from Division 15 (Educational Psychology) of the American Psychological Association. He teaches courses in graduate statistics and research methods, and serves as the assessment liaison for the Division of Student Affairs. His research focuses on motivation in academic, sport, work, and family settings. His methodological interests include developing guidelines for translating laboratory research into the field, and developing indices of intervention fidelity. As a Research Affiliate for the National Center on Performance Incentives, Chris is involved in several randomized field experiments of teacher pay-for-performance programs in K-12 settings. His scholarship has been published in journals such as Science, Psychological Bulletin, Journal of Research on Educational Effectiveness, Journal of Educational Psychology, and Phi Delta Kappan.

Department of Graduate Psychology
James Madison University
[email protected]

Page 12

[Figure: treatment-strength and outcome scales contrasting the intended intervention contrast (TTx vs. TC) with the achieved contrast (tTx vs. tC).]

Expected relative strength = TTx – TC = (0.40 – 0.15) = 0.25
Achieved relative strength = tTx – tC = 0.15; “infidelity” is the gap between TTx and tTx

On the outcome scale, d = (ȲTx – ȲC) / sdpooled:
– With fidelity: d = (90 – 65) / 30 = 0.83
– As implemented (“with infidelity”): d = (85 – 70) / 30 = 0.50
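The arithmetic embedded in the figure on this slide can be checked directly; the helper function name below is mine:

```python
# Standardized mean difference d = (Y_tx - Y_c) / sd_pooled, with the
# figure's pooled SD of 30 in both scenarios.
def smd(y_tx, y_c, sd_pooled):
    return (y_tx - y_c) / sd_pooled

d_as_implemented = smd(85, 70, 30)  # "with infidelity": 15/30 = 0.50
d_with_fidelity = smd(90, 65, 30)   # as intended: 25/30 ≈ 0.83

print(f"as implemented: d = {d_as_implemented:.2f}")
print(f"with fidelity:  d = {d_with_fidelity:.2f}")
```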