
Limitations of Adaptive Clinical Trials

By Marc Buyse, ScD

Overview: Adaptive designs are aimed at introducing flexibility in clinical research by allowing important characteristics of a trial to be adapted during the course of the trial based on data coming from the trial itself. Adaptive designs can be used in all phases of clinical research, from phase I to phase III. They tend to be especially useful in early development, when the paucity of prior data makes their flexibility a key benefit. The need for adaptive designs lessens as new treatments progress to later phases of development, when emphasis shifts to confirmation of hypotheses using fully prespecified, well-controlled designs.

With the large number of promising new molecules that are currently available for clinical testing, clinical trials must detect a drug's benefit (or harm) as quickly as possible. In parallel to the explosion in the number of drugs awaiting clinical testing, the costs of clinical trials have skyrocketed, which adds to the pressure to optimize trials, to the extent possible, in terms of sample sizes, timelines, and risk of failure. A new class of designs has emerged to address these challenges, collectively known as adaptive designs. In this chapter, we review different types of adaptive designs and briefly mention some situations in which such designs can be useful. Much of this chapter, however, is devoted to a discussion of the limitations and drawbacks of adaptive designs, which might partly explain why these designs have not been commonly used and might in the future have less of an effect on clinical research than claimed by their advocates.

Types of Adaptive Designs

One of the difficulties surrounding adaptive designs is that the term is used to encompass different situations. For clarity, we divide adaptive designs into three broad categories.

The first category is treatment effect–independent adaptive designs. In these designs, some of the design features can be adapted on observation of predefined patient characteristics (such as baseline prognostic factors) or outcomes (such as response rate or hazard rate, overall or in the control group) but in ignorance of the treatment effect.

The second category is treatment effect–dependent adaptive designs. In these designs, one or more of the design features (such as the sample size, the patient inclusion criteria, the treatment groups being compared, the treatment allocation ratio, or even the primary endpoint) can be adapted, depending on the observed treatment effect.

The third category is other types of adaptive designs, which include the continual reassessment method (CRM) for phase I trials and seamless phase II/III designs.

Treatment Effect–Independent Adaptive Designs

In these designs, adaptations do not depend on the treatment effect. As such, these adaptations raise few issues and have little effect on the statistical inference; in particular, they do not inflate the type I error. In fact, such adaptations are so mild that trials using them are not referred to as "adaptive." We provide two examples but do not discuss these designs in detail.

Covariate-Adaptive Randomization

One instance of treatment effect–independent adaptation is covariate-adaptive randomization, for which the probability of allocating the next patient to one of the trial's treatment groups is computed dynamically to ensure good balance among the treatment groups with respect to important prognostic factors (center or country, clinicopathologic features, and, increasingly, biomarkers measured at baseline). A common implementation of this approach is minimization, for which a predefined algorithm is used to minimize the imbalance between the distributions of important prognostic factors at baseline among treatment groups. When minimization is used, the treatment group the next patient is allocated to can depend on the baseline characteristics of previously accrued patients but not on their outcome.1
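As an illustration, the minimization step can be sketched in a few lines. The sketch below follows the common biased-coin variant: for each candidate group, compute the marginal imbalance that a hypothetical assignment of the next patient would create across the stratification factors, then assign to the least imbalanced group with high probability. The function name, factor names, and probability parameter are illustrative, not taken from any specific trial system.

```python
import random

def minimization_assign(patient, previous, factors, groups=("A", "B"), p=0.8):
    """Return the treatment group for the next patient by minimization.

    previous: list of (covariates, group) pairs for already-accrued patients.
    For each candidate group, sum the marginal imbalance over the
    stratification factors that a hypothetical assignment would create;
    assign to the minimizing group with probability p (biased coin).
    """
    imbalance = {}
    for g in groups:
        total = 0
        for f in factors:
            level = patient[f]
            # count previously accrued patients sharing this factor level
            counts = {h: sum(1 for cov, grp in previous
                             if cov[f] == level and grp == h) for h in groups}
            counts[g] += 1  # hypothetically assign the new patient to g
            total += max(counts.values()) - min(counts.values())
        imbalance[g] = total
    best = min(imbalance, key=imbalance.get)
    return best if random.random() < p else random.choice(groups)

# One male stage II patient is already on arm A; the next identical patient
# is steered to arm B to restore balance on both factors.
previous = [({"sex": "M", "stage": "II"}, "A")]
print(minimization_assign({"sex": "M", "stage": "II"}, previous,
                          ["sex", "stage"], p=1.0))  # "B"
```

Note that, as the text emphasizes, the patients' outcomes are never consulted: only baseline covariates enter the allocation.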

Sample Size Increases

Another type of treatment effect–independent adaptation consists of a sample size increase if the incidence of the event of interest is much lower than expected in the control group (to preserve the power of the trial) or if the event rate is much lower than expected overall (to preserve the timelines of event-driven analyses when the outcome of interest is a time-to-event, such as disease-free survival or overall survival). Again, these sample size increases are implemented in ignorance of the treatment effect; hence, they generally have no effect on type I error and, if implemented appropriately, raise no special statistical concerns.2
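As a sketch of such an adjustment, suppose the design keeps the assumed relative treatment effect fixed and recomputes the sample size from the observed control event rate. The formula below is the standard normal-approximation sample size for comparing two proportions; the specific rates are invented for illustration.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control, risk_ratio, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sided comparison of two
    proportions (normal approximation, unpooled variance)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    p_exp = risk_ratio * p_control
    delta = p_exp - p_control
    variance = p_control * (1 - p_control) + p_exp * (1 - p_exp)
    return ceil((z_a + z_b) ** 2 * variance / delta ** 2)

# Design assumption: 40% response in the control arm, a 25% relative increase
print(n_per_arm(0.40, 1.25))  # 385 per arm as planned

# Blinded pooled data suggest the control rate is closer to 25%; keeping the
# same relative effect, the required sample size roughly doubles
print(n_per_arm(0.25, 1.25))  # 809 per arm
```

Because the recalculation uses only the overall (or control-group) event rate and the protocol-specified relative effect, it does not peek at the treatment effect.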

Treatment Effect–Dependent Adaptive Designs

These designs are truly adaptive insofar as adaptations depend on the observed treatment effect, which requires caution to be exercised and a proper statistical approach to be used. If, for instance, the sample size of a trial was increased (or decreased) simply because the observed treatment effect was smaller (or larger) than anticipated, the final results of the trial could be biased.3 For example, a randomly large treatment effect could lead to a reduction in sample size even though the true effect was as expected. Note that a group sequential design is not subject to this problem because its sample size is fixed and can only be decreased if the trial is stopped for efficacy or futility at an interim analysis.4
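The danger can be made concrete with a small simulation. Under the null hypothesis, a naive rule that inflates the second-stage sample size whenever the interim effect looks small, and then analyzes the pooled data as if the design had been fixed, no longer has a 5% type I error rate. The rule and stage sizes below are arbitrary, chosen only to make the inflation visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 200_000
n1 = 100  # first-stage patients (arbitrary)

z1 = rng.standard_normal(n_sim)  # first-stage z-statistic under H0
z2 = rng.standard_normal(n_sim)  # second-stage z-statistic under H0

# Naive adaptation: a small interim effect (|z1| < 1) triggers a 9-fold
# larger second stage; otherwise the second stage stays at n1.
n2 = np.where(np.abs(z1) < 1.0, 9 * n1, n1)

# Final z-statistic on the pooled data, ignoring the adaptation entirely
z_final = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
adaptive_error = np.mean(np.abs(z_final) > 1.96)

# The same statistic with the prespecified second stage keeps its level
z_fixed = (z1 + z2) / np.sqrt(2)
fixed_error = np.mean(np.abs(z_fixed) > 1.96)

print(f"fixed design:    {fixed_error:.3f}")    # close to 0.05
print(f"naive adaptive:  {adaptive_error:.3f}")  # noticeably above 0.05
```

Properly weighted combination tests remove this inflation, which is precisely why treatment effect–dependent adaptations demand an appropriate statistical approach.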

From the International Drug Development Institute, Houston, TX; Interuniversity Institute for Biostatistics and Statistical Bioinformatics, Hasselt University, Diepenbeek, Belgium.

Author's disclosures of potential conflicts of interest are found at the end of this article.

Address reprint requests to Marc Buyse, ScD, IDDI Inc, 363 N. Sam Houston Pkwy. E., Suite 1100, Houston, TX 77060; email: [email protected].

© 2012 by American Society of Clinical Oncology. 1092-9118/10/1-10


Sample Size Recalculation

The most obvious adaptation to consider for an ongoing comparative (phase III) trial is a sample size recalculation if the treatment effect turns out to be vastly different from that assumed when designing the trial. It might make sense, for example, to consider an increase in sample size if the assumed treatment effect was grossly overestimated (e.g., as a result of highly promising phase II results), but there are serious theoretical and practical objections to doing so (Table 1).

Theoretically, the treatment effect assumed in the protocol was the smallest effect considered to be clinically worthwhile; hence, there is no reason to lower this effect merely to reach statistical significance. In practice, however, the sample size is often based on a larger treatment effect than the minimum considered worthwhile, especially when the findings of phase II trials suggest that a larger treatment effect can potentially be achieved. Commonly, other considerations come into play, including budgetary constraints that tend to drive sample sizes down. It is tempting, especially for small companies with limited financial resources, to shoot for large treatment effects, hope for the best, and increase the sample size only if needed. Great caution must, however, be exercised when a sample size increase is triggered by the observation of a smaller than expected treatment benefit, insofar as such an adaptation can lead to changes in the types of patients accrued into the trial, so that the trial after the adaptation is no longer the same as before the adaptation.

It has also been proven that for any adaptive design one might consider, there exists a group sequential design that uniformly outperforms it (i.e., has better power for any given treatment effect).5 In other words, adaptive designs are inefficient compared with group sequential designs. This mathematical result has profound consequences because it implies that even the most astute adaptive design can, in theory, be replaced by a group sequential counterpart that has better statistical power and none of the dangers created by adaptations made to an ongoing trial. In fairness to adaptive designs, the superior efficiency of group sequential designs assumes that the appropriate spending function can be prespecified and that interim analyses come at no cost, which is not the case in practice.6

Figure 1 compares a group sequential design with an adaptive design in the same situation. This figure illustrates the putative advantage of the adaptive design, which starts with a small sample size based on a large treatment effect and increases the sample size if the observed effect is smaller than anticipated, whereas the group sequential design starts with a large sample size based on a small treatment effect and stops early if the observed effect is larger than anticipated. Stated differently, adaptive designs take an optimistic view and adapt if required, whereas group sequential designs take a pessimistic view and stop early if indicated. In the example of Fig. 1, the group sequential design is based on a difference in proportions of 0.05 and requires 1,400 patients but can stop after 350, 700, or 1,050 patients if the difference is equal to approximately 0.15, 0.075, or 0.05, respectively. In contrast, the adaptive design is based on a difference in proportions of 0.10 and requires only 400 patients, but the sample size can be increased if the conditional power at 200 patients or 400 patients is too low.

The conditional power is the power that the trial is expected to have at its final analysis, given the data from the already accrued patients and assuming that the protocol-specified difference will apply to all patients still to be accrued. Note that at the beginning of the trial, the conditional power is simply equal to the power because no data are available yet. As stated above, it is essential, when recalculating the sample size in an adaptive manner, to use appropriate statistical methods. Several such methods exist; Tsiatis and Mehta provide a review with examples.5
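For a normally distributed test statistic, this calculation reduces to a one-line formula. The sketch below writes the planned drift theta (the expected final z-statistic under the design alternative) and the information fraction t; the symbols and defaults are illustrative.

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z_interim, t, theta, alpha=0.05):
    """Conditional power at information fraction t (0 <= t < 1), assuming the
    protocol-specified drift theta (the expected final z-statistic under the
    design alternative) applies to the patients still to be accrued."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    numerator = z_crit - sqrt(t) * z_interim - theta * (1 - t)
    return 1 - nd.cdf(numerator / sqrt(1 - t))

# A design with 80% power corresponds to theta = z_{0.975} + z_{0.80}
theta = NormalDist().inv_cdf(0.975) + NormalDist().inv_cdf(0.80)

print(round(conditional_power(0.0, 0.0, theta), 2))  # 0.8: the planned power
print(round(conditional_power(0.5, 0.5, theta), 2))  # weak trend at halfway
```

At t = 0 the interim statistic carries no weight and the conditional power equals the design power, exactly as the text states; a weak interim trend drags it down, which is what triggers a sample size increase in the adaptive design of Fig. 1.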

Outcome-Adaptive Randomization

In outcome-adaptive randomization, the probability of allocating the next patient to one of the trial's treatment groups is computed dynamically to favor the treatment group with the best outcome so far.7 For instance, if response (tumor shrinkage) was the outcome of interest, patients would be allocated preferentially to the treatment group with the highest response rate. This approach has been called a "play-the-winner" strategy.8 A crucial distinction needs to be made between covariate-adaptive randomization and outcome-adaptive randomization. Indeed, although the former raises no particular issue, the latter is fraught with problems.

KEY POINTS

● Adaptive designs can be useful in specific situations (e.g., in phase I trials aimed at determining a maximum tolerated dose or in dose-finding trials when a wide range of doses needs to be investigated).

● Despite the hype surrounding adaptive designs, they have been used infrequently so far; the flexibility afforded by adaptive designs might not be compensated for by the potential loss in credibility associated with their use.

● Although adaptive designs can be useful to rescue trials that are based on incorrect assumptions, they should not generally replace well-designed trials that use conventional approaches (e.g., group sequential designs, which are more statistically efficient).

● Generally speaking, treatment effect–independent adaptations raise few difficulties, whereas treatment effect–dependent adaptations are to be implemented with caution and use of appropriate statistical methods.

● Covariate-adaptive randomization is useful to minimize imbalances in prognostic factors among treatment groups; in contrast, outcome-adaptive randomization is unhelpful statistically, difficult logistically, and unnecessary ethically.

Table 1. Pros and Cons of Adaptive Sample Size Increases

Pros:
• Potential to rescue a trial that would miss statistical significance
• Flexibility if design considerations were inadequate

Cons:
• Statistically inefficient
• Emphasis on statistical significance rather than clinical relevance
• Changes in patient population or other temporal trends
• Can often be substituted by nonadaptive sample size increases that do not affect the type I error

First, the outcome that is used to adapt the randomization has to be observed early and reliably, and it must be reasonably predictive of important clinical endpoints for the adaptation to succeed at placing more patients in the better treatment group.

Second, adaptive randomization can result in major imbalances among treatment arms, which in turn negatively affects the statistical power of the trial.

Third, the statistical inference is complicated because the treatment assignments and the responses are correlated; as a consequence, rerandomization tests must be used instead of traditional likelihood-based tests.

Fourth, adaptive randomization can cause accrual bias (if patients wait for the probability of receiving the better treatment to increase) and/or selection bias (if patients are aware of the emerging difference among the treatment groups).

Last but not least, it is incorrect to claim that adaptive randomization is ethically superior to fixed randomization, because equipoise mandates that allocation to any of the treatment groups be considered equally desirable. It might make sense to allocate more patients to the experimental group than to the control group, but the justification for doing so is that more information is needed about a new treatment than about a well-established standard treatment. When such is the case, a fixed unequal allocation ratio (such as a 2:1 ratio in favor of the experimental group) will do just as well as adaptive randomization, without being subject to the problems listed above.9
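The imbalance risk (the second problem above) is easy to see in a toy simulation of the randomized play-the-winner urn of Wei and Durham. This is a sketch of the RPW(1, 1) rule; the arm labels and response rates are illustrative.

```python
import random

def rpw_trial(n_patients, p_response, rng):
    """Simulate one trial under the randomized play-the-winner rule RPW(1, 1):
    draw a ball to assign each patient; on response, add a ball for the
    assigned arm, otherwise add a ball for the other arm."""
    urn = {"A": 1, "B": 1}
    assigned = {"A": 0, "B": 0}
    for _ in range(n_patients):
        arm = "A" if rng.random() < urn["A"] / (urn["A"] + urn["B"]) else "B"
        assigned[arm] += 1
        responded = rng.random() < p_response[arm]
        # reward the assigned arm on response, the other arm on failure
        urn[arm if responded else ("B" if arm == "A" else "A")] += 1
    return assigned

rng = random.Random(1)
# Even with identical 50% response rates, single-trial allocation can drift
# well away from 1:1, eroding power relative to equal randomization.
print(rpw_trial(100, {"A": 0.5, "B": 0.5}, rng))
```

The allocation is a random process driven by the accumulating outcomes, so two runs of the same trial can end with very different arm sizes, which is exactly the power problem described above.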

Other Types of Adaptive Designs

There are many other types of adaptive designs, some of which do not fall into the two clear-cut categories discussed above. We briefly mention two types of designs that have attractive properties but have rarely been used in practice.

CRM in Phase I Trials

Classic phase I cancer trials are aimed at determining the maximum tolerated dose (MTD) of a new drug or combination of drugs.10 They are usually performed according to a fixed design called the "3 + 3" design. The design proceeds in cohorts of three patients, with the first cohort being treated at the minimum dose of interest and the next cohorts being treated at increasing dose levels according to a predetermined dose escalation scheme. The dose escalation proceeds until at least one dose-limiting toxic effect is observed in a cohort of three patients, in which case a second cohort of three patients is treated at the same dose level. The dose escalation stops as soon as at least two patients experience a dose-limiting toxic effect, either in the first cohort of three patients treated at that dose level or in the two cohorts of three patients treated at that dose level.

Although this design is used in almost every phase I cancer trial today, it has several limitations.11 First, too many patients may be treated at low doses, with virtually no chance of efficacy. Second, dose escalation may be too slow because of an excessive number of escalation steps, resulting in trials that take longer than needed to get to the MTD. Third, too few patients may be treated near the MTD, resulting in substantial residual uncertainty about the dose recommended for further trials, which raises ethical concerns. Indeed, if the recommended dose is chosen too low, it can fail to have efficacy in phase II trials, whereas if it is chosen too high, it can put patients at unacceptable risk in phase II trials. Last but not least, the "3 + 3" design makes no allowance for patient variability.

An adaptive design known as the CRM was originally proposed by O'Quigley et al12 in the early 1990s. This design involves a statistical approach based on an assumed dose-response relationship, which is described through a mathematical function that links the probability of a dose-limiting toxic effect and the dose level. The CRM design is adaptive insofar as the dose to use for the next patient is determined from the toxic effects experienced by all the patients already treated so far. Many modifications to the CRM have been proposed, and simulation studies have shown that it generally outperforms the "3 + 3" design.13,14 Ironically, however, the CRM is not as popular as it should be given its attractive properties, probably because it requires calculations to be performed for the next dose to be determined, whereas the "3 + 3" design does not.

[Fig. 1. Comparison between a group sequential design and an adaptive design.]
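The calculation the CRM requires can be sketched with a one-parameter power model and a grid-based posterior. This is a common textbook formulation, not O'Quigley et al's exact implementation; the skeleton, prior, and target rate below are invented for illustration.

```python
import math

# Hypothetical skeleton: the prior guess of the dose-limiting toxicity (DLT)
# probability at each of five dose levels, and the target DLT rate.
skeleton = [0.05, 0.10, 0.20, 0.35, 0.50]
target = 0.25

def crm_next_dose(doses_given, dlt_seen, skeleton, target):
    """One-parameter power-model CRM: p_i(a) = skeleton[i] ** exp(a).

    The posterior for a is computed on a grid under a N(0, 1.34^2) prior;
    the next dose is the level whose posterior mean DLT probability is
    closest to the target."""
    grid = [i / 100 - 3 for i in range(601)]  # a in [-3, 3], step 0.01
    weights = []
    for a in grid:
        w = math.exp(-a * a / (2 * 1.34 ** 2))  # unnormalized normal prior
        for d, y in zip(doses_given, dlt_seen):
            p = skeleton[d] ** math.exp(a)
            w *= p if y else (1 - p)  # binomial likelihood contribution
        weights.append(w)
    z = sum(weights)
    post_mean = [sum(w * skeleton[i] ** math.exp(a)
                     for a, w in zip(grid, weights)) / z
                 for i in range(len(skeleton))]
    return min(range(len(skeleton)), key=lambda i: abs(post_mean[i] - target))

# Three patients treated at level 2 (0-indexed), one DLT observed:
print(crm_next_dose([2, 2, 2], [0, 0, 1], skeleton, target))
```

Each new patient's toxicity outcome updates the posterior, so the recommended dose moves down after observed toxicities and up when none are seen, unlike the rigid "3 + 3" escalation scheme.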

Seamless Phase II/III Designs

It is possible to embed a phase II trial into a phase III trial so that the transition between the two phases is operationally seamless, as opposed to performing a randomized phase II trial followed by a separate phase III trial. The simplest version of this approach consists of using a classic phase II design to screen for activity based on response and to calculate the sample size required for the phase III trial based on the final outcome of interest, such as time to progression or survival. Because the purpose of the phase II trial is only to stop for futility on the basis of lack of activity, there is no inflation in type I error. One-stage or two-stage phase II designs can be used, as well as a selection design if several experimental arms are simultaneously screened. In all cases, the phase II and phase III portions of the trial are designed independently of each other. Table 2 contrasts a seamless transition from phase II to phase III with two other approaches. Some authors have proposed to use a Bayesian approach to expand the phase II trial to a phase III trial.15
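The no-inflation claim is easy to verify by simulation: a screen that can only stop for futility multiplies the phase III error rate by the probability of proceeding, so the overall rate can only go down. The cutoff and response rates below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sim = 100_000

# Phase II futility gate under H0: 40 patients on the experimental arm with
# a true response rate of 20%; proceed to phase III only if >= 10 respond.
responses = rng.binomial(40, 0.20, n_sim)
proceed = responses >= 10

# Phase III on independent data: under H0 it rejects with probability 0.05,
# so its result can be simulated directly as a Bernoulli(0.05) draw.
phase3_reject = rng.random(n_sim) < 0.05

overall = np.mean(proceed & phase3_reject)
print(f"P(proceed) = {proceed.mean():.2f}, "
      f"overall type I error = {overall:.3f}")  # at most 0.05
```

Inflation arises only when the phase II data also feed into the phase III test statistic, which is the inferentially seamless case discussed next; that case requires explicit control of the overall significance level.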

A different approach that is particularly useful in selecting one or more doses of a new investigational agent is to use an inferentially seamless design, in which several doses are tested in the phase II portion of the trial and only the most promising one is selected to continue in the phase III portion.16 Various designs have been proposed that control the overall significance level of the trial, whether or not there are adaptations of some design aspects at the end of the phase II portion. Jennison and Turnbull offer a review and useful guidance.17,18

Conclusion

There is little doubt that clinical research is in need of reengineering to provide efficient readouts of efficacy and safety on the large number of treatments currently developed by pharmaceutical, biotechnology, and medical device companies. Running large-scale trials that have a high probability of failure is clearly undesirable. Many innovative methods have been proposed, including adaptive designs, Bayesian designs, and biomarker-based designs. Some recent trial designs combine all of these ideas and constitute, as such, exciting models for further developments.19,20 The future will tell which of these innovations are useful and when they constitute a definite improvement over classic approaches. In the meantime, oncologists involved in clinical research must be aware of the limitations of each approach and adjust their expectations accordingly. Clinical trial sponsors might wish to consult regulatory guidances already available on adaptive designs.21,22

Author's Disclosures of Potential Conflicts of Interest

Marc Buyse: Employment or Leadership Position, IDDI; Stock Ownership, IDDI. No consultant or advisory role, honoraria, research funding, expert testimony, or other remuneration reported.

REFERENCES

1. Rosenberger WF, Lachin JM. Randomization in Clinical Trials: Theory and Practice. New York, NY: Wiley; 2002.

2. Gould AL. Planning and revising the sample size for a trial. Stat Med. 1995;14:1039-1051.

3. Mehta CR. Sample size reestimation for confirmatory clinical trials. In: Harrington D (ed). Designs for Clinical Trials: Perspectives on Current Issues. New York, NY: Springer; 2011.

4. Kim KM. Sequential designs for clinical trials. In: Harrington D (ed). Designs for Clinical Trials: Perspectives on Current Issues. New York, NY: Springer; 2011.

5. Tsiatis AA, Mehta C. On the inefficiency of the adaptive design for monitoring clinical trials. Biometrika. 2003;90:367-378.

6. Brannath W, Bauer P, Posch M. On the efficiency of adaptive designs for flexible interim decisions in clinical trials. J Stat Plann Inference. 2006;136:1956-1961.

7. Hu F, Rosenberger WF. The Theory of Response-Adaptive Randomization in Clinical Trials. New York, NY: Wiley; 2006.

8. Wei LJ, Durham S. The randomized play-the-winner rule in medical trials. J Am Stat Assoc. 1978;73:840-843.

9. Korn EL, Freidlin B. Outcome-adaptive randomization: is it useful? J Clin Oncol. 2011;29:771-776.

10. Eisenhauer E, Twelves C, Buyse M. Phase I Clinical Trials in Cancer. Oxford, England: Oxford University Press; 2006.

11. Cheung YK. Designs for phase I trials. In: Harrington D (ed). Designs for Clinical Trials: Perspectives on Current Issues. New York, NY: Springer; 2011.

12. O'Quigley J, Pepe M, Fisher L. Continual reassessment method: a practical design for phase 1 clinical trials in cancer. Biometrics. 1990;46:33-48.

13. Chevret S. The continual reassessment method in cancer phase I clinical trials: a simulation study. Stat Med. 1993;12:1093-1108.

Table 2. Comparisons of Three Approaches for Late Clinical Development

Phase II trial followed by phase III trial
• Benefits: Standard approach; no regulatory risk; phase III trial provides independent confirmation of phase II results
• Costs and Risks: Longest development time; largest total sample size

Seamless phase II/III trial
• Benefits: Can adapt the phase III based on the phase II results; statistical method well established; upfront commitment for phase II only
• Costs and Risks: Negotiation with regulatory agencies essential; less experience with adaptive designs

Phase III trial with interim analyses
• Benefits: Much experience with group sequential designs; no regulatory risk; optimal statistical approach if several interim looks are planned
• Costs and Risks: Requires large upfront commitment; difficult to design or start phase III based on scanty early data


14. Goodman SN, Zahurak ML, Piantadosi S. Some practical improvements in the continual reassessment method for phase I studies. Stat Med. 1995;14:1149-1161.

15. Inoue LY, Thall PF, Berry DA. Seamlessly expanding a randomized phase II trial to phase III. Biometrics. 2002;58:823-831.

16. Bretz F, Koenig F, Brannath W, et al. Adaptive designs for confirmatory clinical trials. Stat Med. 2009;28:1181-1217.

17. Jennison C, Turnbull BW. Confirmatory seamless phase II/III clinical trials with hypotheses selection at interim: opportunities and limitations. Biom J. 2006;48:650-655.

18. Jennison C, Turnbull BW. Adaptive seamless designs: selection and prospective testing of hypotheses. J Biopharm Stat. 2007;17:1135-1161.

19. Zhou X, Liu S, Kim ES, Herbst RS, Lee JJ. Bayesian adaptive design for targeted therapy development in lung cancer—a step toward personalized medicine. Clin Trials. 2008;5:181-193.

20. Barker AD, Sigman CC, Kelloff GJ, Hylton NM, Berry DA, Esserman LJ. I-SPY 2: an adaptive breast cancer trial design in the setting of neoadjuvant chemotherapy. Clin Pharmacol Ther. 2009;86:97-100.

21. European Medicines Agency (EMA), Committee for Medicinal Products for Human Use (CHMP). Reflection Paper on Methodological Issues in Confirmatory Clinical Trials Planned With an Adaptive Design. http://home.att.ne.jp/red/akihiro/emea/245902enadopted.pdf. Accessed February 10, 2012.

22. U.S. Department of Health and Human Services, Food and Drug Administration (FDA). Draft Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM201790.pdf. Accessed February 10, 2012.
