Local vs Central Image Review - ICON hosted Webinar 2014


Page 1: Local vs Central Image Review - ICON hosted Webinar 2014

16 January 2014

The Truth About Local vs. Central Image Review

Page 2: Local vs Central Image Review - ICON hosted Webinar 2014

• ICON plc is a global provider of outsourced development services to the pharmaceutical, biotechnology and medical device industries.

• The company specialises in the strategic development, management and analysis of programs that support clinical development - from compound selection to Phase I-IV clinical studies.

• ICON currently operates from 77 locations in 38 countries and has approximately 10,300 employees.

• Further information is available at www.iconplc.com

About ICON

Page 3: Local vs Central Image Review - ICON hosted Webinar 2014

• ICON Signature Series is our thought leadership program that offers expert insights into value-driven strategies for clinical development.

• The program features ICON and external experts in all aspects of clinical development and post-approval product value strategies.

• For a list of featured topics and upcoming events go to: http://www.iconplc.com/icon-views/


ICON Signature Series

Page 4: Local vs Central Image Review - ICON hosted Webinar 2014

Agenda

• Image Review History

• Proposed New Model

• Audit Methods

• Financial Impact

Page 5: Local vs Central Image Review - ICON hosted Webinar 2014

A recording of this webinar is also available.

To view it, click the link below: http://www.iconplc.com/webinar/14/TheTruthaboutLocalvsCentralImageReview.wmv


Page 6: Local vs Central Image Review - ICON hosted Webinar 2014

Introductions

David Raunig, Ph.D., Senior Vice President, Medical and Scientific Affairs

David Raunig has worked extensively in the statistical analysis and design of nonclinical and clinical biomarker studies. He has 15 years of experience as a research statistician in the pharmaceutical industry and directed statistical support for both preclinical and clinical imaging at Pfizer Global Research and Development, working closely with molecular imaging and pharmacometrics groups to develop novel biomarkers and to design and analyze early- to late-phase clinical trials. He was one of the first statisticians involved in FDA biomarker qualification and is a co-inventor of random sample pixel superresolution. He is presently Chair of the QIBA Technical Performance Metrology Working Group and is working on research into real-time reader performance monitoring algorithms, imaging biomarker qualification for hemarthropathy, and performance characteristics for AD biomarkers.

Page 7: Local vs Central Image Review - ICON hosted Webinar 2014

Introductions

Gregory Goldmacher, M.D., Ph.D., Senior Director, Medical & Scientific Affairs, Head of Oncology Imaging, ICON Clinical Research

Dr. Goldmacher is a radiologist by training. He leads oncology imaging for ICON, and also oversees projects in rheumatology, cardiovascular, pulmonary, CNS, and infectious disease, as well as diagnostic agent trials.

He has given numerous lectures, and published papers in the academic literature on radiology and its application in clinical trials. He has developed methods for standardized imaging response assessment, and trained radiologists, oncologists, and study staff in the United States and worldwide.

He has a leading role in the development and validation of novel imaging biomarkers as a member of the Steering Committee of the Quantitative Imaging Biomarkers Alliance (QIBA), and co-chairs the QIBA Committee on Volumetric CT.

Page 8: Local vs Central Image Review - ICON hosted Webinar 2014

Background

• Clinton-Kessler Oncology Initiative (1996)
  – Tumor size evidence of benefit
  – Imaging-based endpoints
  – No defined process
  – RECIST in development

Page 9: Local vs Central Image Review - ICON hosted Webinar 2014

Early Discussions

• Central vs. Local
  – Bias, errors, fraud
  – Blinding investigators is hard

• Reader variability
  – Two heads are better than one… but need ONE answer
  – What design?
    • “Consensus”: loudest voice wins
    • “2+1” model wins out (Obuchowski 2004)

• Need performance monitoring
  – FDA request / statistician recommendations
  – Monitor for independence and lack of bias

Page 10: Local vs Central Image Review - ICON hosted Webinar 2014

• Cost
  – Central 2+1 teams vs. local reader
  – CRO costs: who manages sites?

• Claim: local reads identical or better
  – Meta-analysis: apparently equivalent results
    • Only 11 of 27 studies were independent
    • Internal review of 6 studies: ~10% increased variance
  – Assumption of no local bias
    • Notable exceptions

Recent Interest in Local Evaluations

Society for Clinical Trials 2013, 21 May 2013

Page 11: Local vs Central Image Review - ICON hosted Webinar 2014

Audit Methods

• ODAC Meeting July 2012
  – Audit methods: NCI and Industry
  – Cost savings opinions offered
  – Evidence of local-to-central equivalence
  – Limits
    • Phase IIb/III trials
    • Solid tumors
    • PFS / TTE

Page 12: Local vs Central Image Review - ICON hosted Webinar 2014

Proposed Audit Approach

[Flow diagram] Collect all scans centrally → Collect local reads → Central read of a sample of scans → Compare the sample to the local read results → No bias: Local Results Confirmed; Possible bias: Full Independent Review

Page 13: Local vs Central Image Review - ICON hosted Webinar 2014

Implementation

• Details of method undetermined
  – Blinded vs. unblinded
  – NCI vs. industry vs. study-specific
  – Sample size
  – Sampling: random vs. block vs. site vs. region

• No ODAC or FDA recommendation

Page 14: Local vs Central Image Review - ICON hosted Webinar 2014

Audit Methods

Page 15: Local vs Central Image Review - ICON hosted Webinar 2014

Audit Methodologies – Stated Objectives

• Primary
  – Guard against falsely declaring a therapeutic intervention as better than the comparator
  – Allow local evaluation to enhance patient information

• Secondary
  – LE = variability seen in practice
  – Elimination of Central Reviewer disagreement

• False Objective Warning!
  – Audit of local evaluations does not protect against informative censoring by the site

Page 16: Local vs Central Image Review - ICON hosted Webinar 2014

Local Evaluator Statistical Assumptions

• Local Evaluators have equivalent results to BICR
  – 11 verified publications of successful results
  – Simulations done on studies where LE and BICR agree

• Local Evaluators have more patient information than displaced central radiologists
  – Published successful trials → equal information
  – Published unsuccessful trials → not equal information

• BICR is not biased
  – LE and BICR discordance → LE is biased, not BICR

• Audit conducted by 2+1 central read paradigm
  – Single reader → more variability → less power

Page 17: Local vs Central Image Review - ICON hosted Webinar 2014

Statistical Design: Hypotheses Tests

• Null Hypothesis
  – LE PFS NOT BETTER THAN BICR PFS
    • HR(local) = HR(central)
    • PFS(local) = PFS(central)
    • Other endpoints?
  – Accept H0 → Accept LE results

• Alternative Hypothesis
  – LE PFS BETTER THAN BICR PFS
    • HR(local) < HR(central)
    • PFS(local) > PFS(central)
    • Other endpoints?
  – Accept HA → 100% central review (a minimal sketch of this decision follows below)
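To make the branching above concrete, here is a minimal Python sketch of one way the one-sided comparison could be carried out, assuming log hazard-ratio estimates from the local and central reads and a supplied standard error for their difference. All names and numbers are illustrative, and a real analysis would have to account for the correlation between reads of the same patients.

```python
import math

def choose_review_path(log_hr_local, log_hr_central, se_diff, alpha=0.05):
    # One-sided z-test of H0: HR(local) = HR(central) vs HA: HR(local) < HR(central).
    # se_diff is an assumed standard error of the log-HR difference; in practice it
    # must reflect the correlation between local and central reads of the same patients.
    z = (log_hr_local - log_hr_central) / se_diff
    p = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # lower-tail p-value, Phi(z)
    return "100% central review" if p < alpha else "accept LE results"

# Illustrative values only: local HR 0.60, central HR 0.75, SE of the difference 0.10
print(choose_review_path(math.log(0.60), math.log(0.75), 0.10))  # -> "100% central review"
```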

Page 18: Local vs Central Image Review - ICON hosted Webinar 2014

Audit Methodology Options

• NCI Method
  – Evaluates central review HR < threshold
  – Follows a successful LE result (HR < 1.0) at end of study

• Industry Method
  – Evaluates non-directional bias across treatment arms
  – May be done at interim or end of study

• Mixed Reviewer
  – Local + Central with central adjudication
  – Immediate blinded “audit” of all reads

• Study Specific
  – Requires FDA approval
  – Difficult?

Page 19: Local vs Central Image Review - ICON hosted Webinar 2014

Audit Methodology – NCI Method

• Sample Size Factors
  – HR dependent
    • 100% audit at HR ≈ 0.6 - 0.7
  – Minimum Important Difference (MID)
    • HR ≤ 1.0
    • MID ≈ 0.9 → 100% audit highly likely

• Sensitivity/Specificity
  – Set by study / 95%

• Timing
  – End of trial
  – HR(local) significant (<1.0) → Audit (see the decision-rule sketch after the figure note below)

[Figure: the audit-sample hazard ratio and its upper 95% confidence limit plotted against the MID, on a scale from 0.0 to 1.0]
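The pass/fail logic of this slide can be written out concretely. Below is a minimal Python sketch, assuming the audit-sample hazard ratio and the standard error of its log are already estimated; the function name, inputs, and example numbers are illustrative and are not part of any published NCI specification.

```python
import math

def nci_audit_decision(log_hr_audit, se_log_hr, mid=0.9):
    """Confirm the local result only if the upper 95% confidence limit of the
    audit-sample hazard ratio falls below the minimum important difference (MID)."""
    upper_95 = math.exp(log_hr_audit + 1.96 * se_log_hr)
    return "local results confirmed" if upper_95 < mid else "full independent review"

# Illustrative numbers: audit-sample HR = 0.70 with SE(log HR) = 0.12
print(nci_audit_decision(math.log(0.70), 0.12))  # upper 95% CI ≈ 0.89 -> confirmed
```

With a MID of 0.9 the upper confidence limit must sit well below 1.0, which is why the slide notes that a MID of about 0.9 makes a 100% audit highly likely.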

Page 20: Local vs Central Image Review - ICON hosted Webinar 2014

NCI Method Audit Size Simulation

Median Hazard Ratio | MID = 1.0 | MID = 0.9 | BICR Events
0.48 | 66% | 100% | 121
0.51 | 63% | 100% | 189
0.54 | 28% | 37% | 357
0.73 | 57% | 100% | 630
0.73 | 100% | 100% | 165

From: Dodd LE, Korn EL, Freidlin B, et al. An audit strategy for time-to-event outcomes measured with error: Application to five randomized controlled trials in oncology. Clinical Trials. 2013; 10: 754-60.

Page 21: Local vs Central Image Review - ICON hosted Webinar 2014

Audit Methodology – Industry Method

• Sample Size
  – Number of events
  – Criteria for discordance

• Timing
  – Interim or end of trial
  – Clinical cutoff
  – Caution: multiple evaluations increase the chance of a spurious finding

• Sensitivity / Specificity
  – 80+ / 80+ with 80 events and a discordance cutoff of 0.1 (a simplified sketch follows this list)

• Conditional
  – HR extremely low → no audit
  – Caution (Everolimus study)
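As a companion to the discordance-cutoff bullet, here is a simplified Python sketch of a non-directional differential-discordance check. It assumes each patient carries a flag for whether the local and central reads disagree on the progression call; the working-group method cited on the next slide defines early and late discordance by arm more precisely, so treat this only as an illustration of comparing the between-arm gap to a 0.1 cutoff.

```python
def discordance_rate(discordant_flags):
    """Proportion of patients whose local and central progression calls disagree."""
    return sum(discordant_flags) / len(discordant_flags)

def differential_discordance_flag(arm_a_flags, arm_b_flags, cutoff=0.1):
    """Non-directional check: flag potential bias when the discordance rates of the
    two treatment arms differ by more than the pre-specified cutoff."""
    gap = abs(discordance_rate(arm_a_flags) - discordance_rate(arm_b_flags))
    return gap > cutoff

# Illustrative data: 40 patients per arm, 10 vs. 3 discordant local/central reads
arm_a = [True] * 10 + [False] * 30   # discordance rate 0.250
arm_b = [True] * 3 + [False] * 37    # discordance rate 0.075
print(differential_discordance_flag(arm_a, arm_b))  # gap 0.175 > 0.1 -> True (possible bias)
```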

Page 22: Local vs Central Image Review - ICON hosted Webinar 2014

Industry Method Audit Sample Size - Simulation

Amit O. A sample-based approach for independent review of PFS using differential discordance. PFS Independent Review Working Group, Oct 2009.

Page 23: Local vs Central Image Review - ICON hosted Webinar 2014

Mixed Reviewer

• Identical to central review
  – Blinded

• Sample size
  – 100% of patients
  – No sampling

• Timing
  – Continuous

• Sensitivity/Specificity
  – Assisted by adjudicator selection

• Additional: No delay in results

Page 24: Local vs Central Image Review - ICON hosted Webinar 2014

Audit Options

• Central audit
  – Single central auditor
  – 2+1 central auditor
  – Considerations
    • Timing: rolling audit or end of study
    • 2+1 more precise for comparison (smaller confidence limits)

• Confirmation of progression
  – Protects against informative censoring
  – Protects against event loss
  – Not a part of the audit

Page 25: Local vs Central Image Review - ICON hosted Webinar 2014

Financial Impact

Page 26: Local vs Central Image Review - ICON hosted Webinar 2014

Proposed New Approach

Audit

[Flow diagram] Collect all scans centrally → Collect local reads → Central read of a sample of scans → Compare the sample to the local read results → No bias: Local Results Confirmed; Possible bias: Full Independent Review

Page 27: Local vs Central Image Review - ICON hosted Webinar 2014

Financial Implications of Audits

• Central imaging costs

• Local read costs

• Trial size impact

• Market delay risk

Page 28: Local vs Central Image Review - ICON hosted Webinar 2014

Central Imaging Costs

• Fixed costs
  – Startup documentation
  – System programming
  – Reader training

• Variable costs (cost driver)
  – Project management/tech ops (duration)
  – Site initiation and training (sites)
  – Image collection and QC (timepoints)
  – Central reads (timepoints)

[Pie chart: breakdown of central imaging costs across Fixed, Site, Monthly, Image, Read, and Ops/Mgmt categories (segments of 32%, 24%, 17%, 15%, 6%, and 5%), with costs driven “per scan” and “per site/month/study”]

Page 29: Local vs Central Image Review - ICON hosted Webinar 2014

Historical Average Trial

• 8 recent solid tumor trials

Variable Average

Subjects 700

Timepoints 4200

Sites 119

Duration 46 months

Page 30: Local vs Central Image Review - ICON hosted Webinar 2014

Central Read Costs

• Collect all scans, 30% audit (2+1 design)

• If 15% go to full read → savings of 18%
  – Optimistic!
  – 18% of $3M = $540K savings

• On a $100M trial, that is a 0.54% saving

Variable | Average
Subjects | 700
Timepoints | 4200
Sites | 119
Duration | 46 months
Total central imaging costs | $3.0 M
Projected savings with audit | $660 K (22%)

Page 31: Local vs Central Image Review - ICON hosted Webinar 2014

Local Read Costs

• Currently free

• If FDA wants auditable results
  – Need local read system with audit trail
  – Estimate: $2,000 per site
  – 119 sites → additional cost of $238,000

• Local readers might want to be paid
  – Note: imaging may not be performed at PI’s facility
  – Note: some hospitals do not allow direct contracts with radiology
  – Estimate: $50 per timepoint
  – 4,200 timepoints → additional cost of $210,000

• Additional monitoring visits

Page 32: Local vs Central Image Review - ICON hosted Webinar 2014

Trial Size Costs

• Local readers
  – Large, unevenly trained group
  – IMI site survey: dedicated radiologist = 40%

• Higher variability → more subjects needed to reach the endpoint

• 700-subject trial
  – Average cost per subject: ~$50K
  – Marginal cost per subject: ~$20K
  – ~10% increased variance → ~70 more subjects × $20K = $1.4M additional cost

• Additional duration
  – Recruit 10% more subjects

Page 33: Local vs Central Image Review - ICON hosted Webinar 2014

Marketing Delay Risk

• If the audit fails → full central read
  – i.e., if the upper bound on the HR > MID

• Typical central read happens throughout the trial
  – Can be essentially real time

• Post-audit read begins after trial done

• Estimated market delay: 3 months
  – If HR = 0.3, risk = 15%
  – Loss depends on monthly revenue and margins

Page 34: Local vs Central Image Review - ICON hosted Webinar 2014

Overall Financial Implications

Item | Savings/cost
Central read | $540 K savings
Local read | Free / $200 K / $400 K
Trial size | $1,400 K cost
Market delay | Unknown cost, but real risk
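The summary figures above come directly from the preceding slides; the short Python sketch below simply reproduces that arithmetic so the trade-off is explicit. The net figure is illustrative and deliberately ignores the unquantified market-delay risk.

```python
# Figures taken from the preceding slides (historical average trial:
# 700 subjects, 4,200 timepoints, 119 sites, 46 months).
central_imaging_cost = 3_000_000
central_read_savings = 0.18 * central_imaging_cost   # 18% of $3M = $540K (optimistic)

local_read_system = 2_000 * 119                      # auditable local read system: $238K
local_reader_fees = 50 * 4_200                       # possible $50 per timepoint: $210K

extra_subjects = 70                                  # ~10% more subjects to offset local-read variability
trial_size_cost = extra_subjects * 20_000            # marginal cost per subject ~$20K: $1.4M

net = central_read_savings - (local_read_system + local_reader_fees + trial_size_cost)
print(f"Net impact before market-delay risk: {net:+,.0f} USD")  # about -1,308,000 USD, i.e. a net cost
```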

Page 35: Local vs Central Image Review - ICON hosted Webinar 2014

Conclusions

• Audits are statistically viable

• The statistical assumptions underlying the need for audits are not validated

• Cost savings unlikely

• Cost increase possible

• Audit design requires careful consideration of statistical, imaging and clinical trial needs

Page 36: Local vs Central Image Review - ICON hosted Webinar 2014

Like to know more?

[email protected]

Twitter handle: @ICONplc

Join us on social: