HEALTH ECONOMICS
Health Econ. 19: 350–364 (2010)
Published online 20 April 2009 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/hec.1486

SYSTEMATIC REVIEWS OF ECONOMIC EVALUATIONS: UTILITY OR FUTILITY?

ROB ANDERSON*

Peninsula Medical School, University of Exeter, Exeter, UK†

SUMMARY

Systematic reviews of studies of effectiveness are the centrepiece of evidence-based medicine and policy making. Increasingly, systematic reviews of economic evaluations are also an expected input into much evidence-based policy making, with some health economists even calling for ‘an economics approach to systematic review’.

This paper questions the value of conducting systematic reviews of economic evaluations to inform decision making in health care. It argues that the value of systematic reviews of economic evaluations is usually undermined by three things. Firstly, compared with effectiveness studies, there is a much wider range of factors that limit the generalisability of cost–effectiveness results, over time and between health systems and service settings, including the context-dependency of resource use and opportunity costs, and different decision contexts and budget constraints. Secondly, because economic evaluations are more explicitly intended to be decision-informing, the requirements for generalisability take primacy, and considerations of internal validity become more secondary. Thirdly, since one of the two main forms of economic evaluation – decision analytic modelling – is itself a well-developed method of evidence synthesis, in most cases the need for a comprehensive systematic review of previous economic evaluations of a particular health technology or policy choice is unwarranted.

I conclude that apparent ‘meta-analytic expectations’ for clear and widely applicable cost–effectiveness conclusions from systematic reviews of economic evaluations are optimistic and generally futile. For more useful insights and knowledge from previous economic studies in evidence-based policy making, a more limited range of reasons for conducting systematic reviews of health economic studies is proposed. Copyright © 2009 John Wiley & Sons, Ltd.

Received 21 July 2008; Revised 28 November 2008; Accepted 19 February 2009

KEY WORDS: systematic review; economic evaluation; economic studies; evidence-based policy

1. INTRODUCTION

Evidence-based policy making can rarely rely on single studies, so policy makers and the researchers that support them try to make best use of the various partially relevant studies already available. In the health-care field there is vibrant ongoing debate about the appropriate methods of research synthesis for informing policy and practice (Dixon-Woods et al., 2005; Lavis et al., 2005; Popay, 2006).

Within health economics, however, while there have been significant advances in evidence synthesis methods based on decision analytic modelling (Ades et al., 2006; Cooper et al., 2007; Weinstein, 2006), and a key book on the role of systematic reviews and health economics was published in 2002 (Donaldson et al., 2002a), this debate has been comparatively limited. Nevertheless, systematic reviews of economic studies have become a key feature of many policy making and technology assessment processes, and also a common form of published study in certain health economics journals.

*Correspondence to: PenTAG, Peninsula College of Medicine & Dentistry, Veysey Building, Salmon Pool Lane, Exeter, EX2 4SG, UK. E-mail: [email protected]
†This article was published online on 20 April 2009. An error in the affiliation and correspondence address was subsequently identified. This notice is included in the online and print versions to indicate that both have been corrected on 3 December 2009.


Just over a decade ago it was observed that health economists ‘have not yet developed a formal methodology for reviewing and summing up evidence from individual economic evaluations … or indeed for assessing whether systematic reviews are possible in this context’ (Jefferson et al., 1996, p. 155). Nowadays, however, those who conduct systematic reviews of economic evaluations have available to them: guidelines on how to conduct such reviews (Carande-Kulis et al., 2000; Shemilt et al., 2008); checklists to enable more consistent appraisal of the quality of included studies (Drummond et al., 2005; Evers et al., 2005); graphical tools to summarise the results of numbers of economic evaluations (Nixon et al., 2001); and dedicated bibliographic databases (such as NHS EED) and standardised search filters (also from CRD at York) to ease the task of identifying economic evaluations and other economic studies. A recent paper by Pignone and others has also examined the challenges of conducting systematic reviews of economic evaluations, but deals mainly with the practical implications of conducting such reviews rather than their ultimate value to decision makers (Pignone et al., 2005). There are now even rare examples of published meta-analyses of costing studies and cost-utility studies (e.g. Bower et al., 2003; Cheng and Niparko, 1999). The question is no longer ‘are they possible?’ but ‘are they worth the effort?’

Others have voiced similar concerns (Drummond, 2002; Mugford, 2002). For example, in relation to systematic reviews of economic evaluations, Drummond has observed that ‘it is not clear that the motivation to produce an authoritative statement of cost–effectiveness is as strong as it appears to be for clinical effect sizes’ and further that ‘there is widespread recognition among economists, and possibly among decision makers, that whether or not a particular intervention is cost-effective depends on the local situation’ (Drummond, 2002, p. 150). In addition, in a review of studies concerning the generalisability of economic evaluations, 26 different factors were identified that make cost–effectiveness results vary from location to location or over time (Sculpher et al., 2004, p. 10, Table 4), and another review identified 14 ‘transferability factors’ that should be assessed when using economic evaluation results from one country for decisions in another (Welte et al., 2004). Studies reporting substantial between-country differences in public preferences for the same health states have also raised the question of whether any cost–effectiveness findings can be confidently transferred from one place to another (Russell, 2007). Despite these observations and findings, and other empirical evidence that local context does indeed impact on cost–effectiveness estimates, health economists continue to expend much effort on conducting systematic and exhaustive reviews of economic evaluations.

This paper develops some of these earlier assertions about why systematic reviews of economic evaluations might rarely allow widely applicable statements concerning the cost–effectiveness of policies or treatments. It grounds these arguments in the specific factors that limit the generalisability of the results of economic studies, in the explicitly decision-informing purpose of most economic evaluations, and in recognition of the increased role of model-based economic evaluation in decision making – an increasingly advanced method of research synthesis in its own right. I conclude by proposing a limited range of reasons for conducting useful systematic reviews of health economic studies.

2. SYSTEMATIC REVIEWS OF OR FOR ECONOMIC EVALUATIONS?

There are various suggested roles for economic evidence and systematic reviews in policy making.

a. Systematic reviews of economic evaluations.
b. Systematic reviews for (model-based) economic evaluations.
c. Economic evaluation alongside systematic reviews (of effectiveness).
d. ‘Economic methods in systematic reviews’.
e. ‘Economics components’ of effectiveness (e.g. Cochrane) reviews.

In this paper, I am not directly concerned with systematic reviews for economic evaluations; that is, primarily, the role of systematic reviews in informing the parameters of (model-based) economic evaluations. The arguments that justify the need for model-based analyses to inform decision making, together with the increasingly rigorous standards for developing and reporting them, are well documented elsewhere (Buxton et al., 1997; Cooper et al., 2007; Sculpher et al., 2006; Weinstein et al., 2003). In addition – largely on the grounds that its exact meaning is rather unclear – I am omitting any consideration of conducting economic evaluations alongside systematic reviews, and the use of economic methods in systematic reviews (Donaldson et al., 2002b). Finally, the notion that reviews of effectiveness could have an ‘economics component’ has recently been outlined as part of the Cochrane Handbook’s suggested approach to capturing and summarising economic evidence while conducting a Cochrane review of effectiveness studies (Shemilt et al., 2006, 2008). However, this approach is actually a particular form of review of economic studies, or of economic evidence within effectiveness studies, but where the inclusion criteria are restricted by the needs of the main effectiveness-focused review.

As already noted, systematic reviews of numbers of economic studies (e.g. of a particular technology, or in a particular patient group) have become a more common type of journal paper, and also a more common requirement within formal processes for evidence-based policy making. Some journals, notably Pharmacoeconomics and Expert Reviews in Pharmacoeconomics and Outcomes Research, specifically invite systematic reviews of economic studies. Analysis of the NHS EED database suggests that between 100 and 200 reviews or systematic reviews of economic studies are published each year (Figure 1). Meanwhile, many national agencies for the appraisal of health technologies or public health policies currently require systematic reviews of the relevant economic literature (see Table I).

Although some systematic reviews of economic evaluations have been conducted specifically to assess the methodological quality of economic evaluations in a given area (Jefferson et al., 2002), or simply to map what economic studies have been conducted, many more appear to be based around some version of the following review question: ‘What is the cost–effectiveness of intervention X (compared with Y or Z)?’ Can such questions be credibly answered by a systematic review of economic evaluations? Furthermore, if we believe that economic evaluations are primarily meant to be a decision-informing method of analysis (and therefore, in a sense, should be inherently jurisdiction- and time-specific), does it even make sense to seek a generalisable or average answer to such questions?

[Figure 1: bar chart omitted. Caption: Published reviews of economic evaluations* in NHS EED, 1995–2006 (x-axis: year of publication; y-axis: number of published reviews). Source: NHS Economic Evaluation Database (at CRD, York); searched 24 August 2007, using the search term ‘review’, limited to 1985–2007. Papers were initially included on the basis of a CRD reviewer having designated them as record type ‘Review of economic evaluations’; a second sub-selection was then made on the basis of paper titles (in order to exclude many which were clearly primarily reviews of effectiveness). *May actually include reviews of a broader range of economic studies, such as costing studies or cost-of-illness studies.]

Table I. Different health agencies’ requirements for reviews of economic studies

For health technology appraisal:

National Institute for Health and Clinical Excellence (NICE), UK [a]
Requirement: Evidence on cost–effectiveness ‘also includes the findings of existing published economics literature’. (That this is expected to be in the form of a systematic review is indicated in the required report sections for Technology Assessment Reports for NICE.)
Stated purpose or review question: None stated.

Pharmaceutical Benefits Advisory Committee (PBAC), Australia [b]
Requirement: ‘Present the results of a search of the literature for reports of economic evaluations of similar decision analyses (in terms of similarity to the treatment algorithm and/or the proposed and similar drugs). Where the submission’s model is different from the literature-sourced models, explain the basis for the selection of the submission’s approach’ (under Section D3, ‘Structure and rationale for economic evaluation’).
Stated purpose or review question: Not stated, but implicitly to justify and contextualise whatever economic evaluation model and analytical approach (CUA, CEA, or CBA) are adopted by the manufacturer’s submission.

Canadian Agency for Drugs and Technologies in Health (CADTH) [c]
Requirement: ‘Discuss existing economic studies that address the same technology, and similar study question(s). Include a summary of methods and results of reviewed studies’ and ‘Comment on the relevance and generalisability of the results of the reviewed studies to the target audience’. Use NHS CRD guidance for review methods.
Stated purpose or review question: ‘To summarise the available knowledge in a systematic way that will be useful for decision makers and researchers’.

Other health policy agencies:

National Institute for Health and Clinical Excellence (Centre for Public Health Excellence), UK [d]
Requirement: ‘A systematic review of the published economic literature should be carried out …’; ‘A thorough systematic review should be attempted, but if there is a large amount of economic evidence, it may be necessary to limit the search’; ‘Papers identified for inclusion should be critically appraised using a validated checklist [and] a commentary should be presented on the quality of each paper’.
Stated purpose or review question: ‘… to ensure that no economic evaluations are missed during searches undertaken in the review of effectiveness’ (sic), and to determine ‘if the published evidence is so reliable that further analysis would be superfluous’.

US Preventive Services Task Force (US PSTF) [e]
Requirement: ‘Systematic reviews of economic evaluations are completed for those interventions that are either strongly recommended or recommended by the Task Force on Community Preventive Services’.
Stated purpose or review question: ‘… so that studies using disparate analytical methods can be consistently compared’. Also, ‘they can improve the usefulness of existing studies just as systematic reviews bring together and interpret a body of evidence of the effectiveness of interventions’, ‘with the purpose of reducing error and bias in the abstraction and adjustment of results and making [systematic reviews] comparable across interventions’.

Notes:
[a] NICE: Guide to the Methods of Technology Appraisal, April 2004 (para 3.3.2, p. 12).
[b] PBAC: Guidelines for Preparing Submissions to the PBAC, Section D on economic evaluation for the main indication.
[c] CADTH: Guidelines for the Economic Evaluation of Health Technologies: Canada (3rd edn), 2006.
[d] NICE: Methods for Development of NICE Public Health Guidance, March 2006.
[e] US PSTF: Guide to Community Preventive Services (methods for systematic reviews of economic evaluations published in the American Journal of Preventive Medicine in 2000; Carande-Kulis et al., 2000).


2.1. Economic evaluations and other ‘economic evidence’

In this paper I refer to economic evaluations as that subset of health economic studies in which comparative data are collected or reported on both costs and effects, and this permits an incremental analysis (Drummond et al., 2005). I also refer to a broader class of research as ‘economic evidence’ or ‘economic studies’, which may include comparative cost comparisons, or non-comparative studies such as cost analyses, cost-of-illness studies, or even studies presenting only resource/service use data (without any valuation of the resources/services consumed). Such studies usually still shed some light on the resource (and cost) consequences of different treatments, or the costs associated with the main health states of specific patient groups. While this paper is mainly a critique of systematic reviews of economic evaluations, in describing what might be more useful ‘economic questions’ that are worthy of systematic review, it will become clear that the range of economic study types that may need to be reviewed is somewhat wider.
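For reference, the summary statistic that most full economic evaluations report – and that systematic reviews of them implicitly hope to generalise – is the incremental cost–effectiveness ratio (ICER). A standard textbook formulation (not specific to any study discussed here) is:

\[ \mathrm{ICER} = \frac{\Delta C}{\Delta E} = \frac{C_{\mathrm{new}} - C_{\mathrm{comparator}}}{E_{\mathrm{new}} - E_{\mathrm{comparator}}} \]

where C denotes the expected costs and E the expected effects (e.g. quality-adjusted life-years) of each alternative. The arguments that follow are, in essence, that both the numerator and the denominator of this ratio are context-dependent quantities.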

3. LIMITS TO GENERALISING FROM NUMBERS OF ECONOMIC STUDIES

There are a large number of documented reasons why the results of single economic studies may not be transferable between different places and times (Drummond and Pang, 2001; Sculpher et al., 2004; Welte et al., 2004). For much the same reasons, it may not be realistic to expect economic evidence synthesised from many different places and times (i.e. numbers of economic evaluations) to usefully inform the likely cost–effectiveness of an intervention in a particular decision context.

3.1. Variation in methods

Clearly, a major reason that the findings of economic evaluations (of the same intervention comparison) vary is that they have used different methods. This is both due to a continuing lack of standardisation of methods, and a typical lack of compliance with established standards (Drummond, 2002; Jefferson et al., 2002). However, it is also acknowledged that there are a number of methodological considerations that feed into economic evaluations for which inter-country or regional variations can be expected and justified (Sculpher and Drummond, 2006). Therefore, even with complete compliance with jurisdiction-specific methods standards, there is considerable scope for methodological variation in economic evaluations within a review.

3.2. Intervention context and intervention costs

Perhaps the most compelling reason for questioning the value of systematic reviews of economic evaluations is that the costs and resource use associated with interventions are highly likely to vary from country to country, between regional or service settings, and over time. Such variations are most commonly attributed to differences in unit costs (e.g. between countries, and over time due to inflation) (Sculpher et al., 2004). However, intervention context may also substantially impact on the levels and particular combinations of resources needed for an intervention to be provided in different health systems (e.g. with different clinical grades, or different average lengths of hospital stay for a key procedure) or in different service settings (e.g. a different balance between primary and secondary care). Of the many factors identified as impacting on the variability of cost–effectiveness estimates, several are explicitly associated with cost (absolute/relative costs, economies of scale, exchange rates, (different combinations of) health-care resources, financial incentives, and opportunity cost) (Sculpher et al., 2004).
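One conventional way to make this decomposition explicit – a generic costing identity rather than anything taken from the studies cited above – is to write the cost of providing an intervention in setting s as:

\[ C_s = \sum_{i} q_{i,s} \, p_{i,s} \]

where q_{i,s} is the quantity of resource i consumed (staff time, bed-days, drugs, equipment) and p_{i,s} is its unit cost in that setting. Transferring a cost estimate between settings therefore involves more than re-pricing: both the unit costs p and the quantities and combinations of resources q can legitimately differ.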


3.3. Intervention context and intervention effects

The impact of context on the cost component of the cost–benefit equation is further compounded by all the other reasons that the effectiveness of health interventions is likely to vary from place to place and over time. These have been well documented elsewhere, so I do not expand on them at length here (Jackson et al., 2005; Kraemer et al., 2006; Kravitz et al., 2004). However, it is worth noting that health economists have contributed to this debate, pointing out that ‘an intervention’s effectiveness’ actually results from the interplay of the changes introduced by an intervention and the existing mix of the underlying determinants of health or disease in a population (Birch, 1997; Birch and Gafni, 2003). Birch and Gafni, for example, show that context may impact both on the ‘technical component’ of economic evaluations (how an intervention alters the causal relationships between the determinants of health and health outcomes) and on the ‘subjective component’ (how different states of health are valued and contribute to overall well-being, compared with the mix of other commodities consumed) (Birch and Gafni, 2003). The health system or service context is also believed to be a major factor in determining the success or failure of using different financial incentives or ‘economic interventions’ (Kristiansen and Gosden, 2002). More recently, health economists have also described the challenges of evaluating public health interventions, which intervene in complex systems, and called for greater awareness of the non-linear relationships between resource inputs and the level, types, and timing of outcomes (Shiell et al., 2008).

3.4. Context of the decision

As well as the context of the intervention, most economic evaluations will have a particular decision context (either explicitly stated, or implicit in the chosen perspective of the analysis). At one level, this will determine the current treatment or service comparator, which may not be the same as that included in other published economic evaluations. However, the decision-making context will also determine what resource use does or does not have an opportunity cost (Craig and Sutton, 1998). In some situations, such as where an operating theatre or another physical resource is being used to full capacity, there will be an alternative use for any capacity freed up; but in other hospitals, working under capacity, there may be no benefit from theatre slots or beds being freed up. Different budgetary constraints would similarly alter the opportunity costs of the consumption of the same resources (see Birch and Gafni, 2003). Thus, the cost–effectiveness of the same compared interventions in two places may differ even where the incremental resource use and health effects are identical, due to factors relating to the decision context.
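A stylised numerical illustration – with purely hypothetical figures – may make the point concrete. Suppose an intervention frees up two operating theatre sessions per week in hospital A and in hospital B, with identical incremental health effects in both. In hospital A, running at full capacity, each freed session is redeployed to other patients and so represents a genuine resource saving of, say, £2,000; in hospital B, working under capacity, the freed sessions stand idle and the ‘saving’ is zero:

\[ \Delta C_A = \Delta C_{\mathrm{direct}} - 2 \times \pounds 2000, \qquad \Delta C_B = \Delta C_{\mathrm{direct}} \]

With identical incremental effects, the two hospitals face different incremental cost–effectiveness ratios, and the same intervention might fall below a given cost-effectiveness threshold in hospital A but not in hospital B, purely because of the decision context.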

The scope and scale of service changes and related costs are also often linked to whether new services were primarily intended to expand (i.e. supplement) existing service capacity or to re-locate (i.e. substitute) capacity, and this too can impact on opportunity costs. The importance of these contextual factors was well illustrated in Coast and colleagues’ insightful review of four economic evaluations of hospital at home programmes (Coast et al., 2000).

3.5. Two main types of economic evaluation

It is now more fully recognised that there is a key distinction between decision model (simulation)-based analyses and empirical economic evaluations, which collect patient-level data on costs and outcomes. Individual patient data-based and model-based economic evaluations of an intervention will invariably not be comparable, for all the same reasons that people advocate conducting decision model-based analyses (e.g. including the full range of relevant comparators, a representative case-mix of patients, and following patients for a sufficiently long time for all significant cost and outcome differences to be captured) (Buxton et al., 1997; Sculpher et al., 2006). At the very least, therefore, in order to retain comparability among reviewed studies, any systematic review of economic evaluations should sensibly become two systematic reviews: one of empirical (including trial-based) economic evaluations, and one of decision model-based economic evaluations.


3.6. Cost–effectiveness not cost-efficacy

Since most economic evaluations are primarily intended to inform decisions, they are more explicitly concerned with effectiveness rather than efficacy; they seek to assess the incremental benefits and incremental costs implied by a ‘real-world’ choice between a number of interventions as they would be resourced and implemented in routine service delivery. The distinction between effectiveness and efficacy studies is in fact further recognition that context matters. Essentially, whereas explanatory trials to establish a treatment’s efficacy serve a valid purpose of defining maximum possible effectiveness, there seems little point in conducting cost-efficacy studies (e.g. in idealised clinical settings, with highly protocol-driven care and specially selected patients) because the costs and effectiveness of the ‘same treatment’ would be different if delivered within routine care settings and across the whole case-mix of eligible patients. This aspect of generalisability is a key element of the critique of trial-based economic evaluations, and therefore clearly also has implications for the value of conducting systematic reviews of such studies (Sculpher et al., 2006).

4. GOOD REASONS FOR REVIEWING ECONOMIC STUDIES

Given these wide-ranging and commonly present limitations to the generalisability of evidence from economic evaluations, I suggest that there are probably only three good reasons for conducting systematic reviews of economic studies: (1) to inform the development of a new decision model; (2) to identify the most relevant one or two studies to inform a particular decision in a jurisdiction; or (3) to identify the key economic (causal) trade-offs implicit in a given treatment choice or disease area. I discuss each of these in turn below, including the likely review questions and the types of health economic studies which might usefully be included.

4.1. Reviews to justify and inform decision model development

When there is a plan to develop a model for estimating the costs and effectiveness of some health policy alternatives, some kind of systematic review of previous economic evaluations is necessary, at the least, to confirm that there is not already in existence a recent, highly relevant and rigorously conducted analysis of essentially the same decision problem. This is simply the good academic practice of making sure that a piece of research will not be answering a question which has, effectively, already been answered. It may therefore not go much further than a systematic search of the published literature and recent unpublished sources, in order to confirm that there is no recent economic evaluation of the same comparators in similar populations and care settings.

If a new model-based analysis is justified, there are various ways in which reviewing previous economic studies might usefully inform the development of a new decision model.

• Previous decision model-based analyses might provide insights into some of the key trade-offs, clinical events, and changes in health states which are thought to determine how the types and levels of resource use implied by alternative interventions are associated with different outcomes. Previous models might not reflect all the important resource–outcome relationships implicit in a given decision problem, but they should provide an initial list of the key ones.

• Previous model-based analyses examining similar decision problems or treatments might also indicate the strengths and weaknesses of different modelling approaches (e.g. simple decision trees versus Markov models versus discrete event simulations). For example, a recent systematic review of models for evaluating the cost–effectiveness of smoking cessation yielded insights into the potential advantages and disadvantages of different model structures and simulation approaches – although problems remain in judging whether more sophisticated models are necessarily better (Boyd and Briggs, 2008). Furthermore, there are occasional studies where the same decision problem has been evaluated using two different modelling approaches within the same report or paper, providing additional insights for developing a new model (Thompson Coon et al., 2007).

• Previous empirical economic studies which have collected and reported resource use and/or effectiveness data for the same individual patients may also usefully inform a new decision model. However, this will largely depend on whether the study is purely descriptive (and exclusively aggregate outcome-focused), or has attempted to explain how and why different types of patient are associated with different levels or mixes of resource use, or different levels and patterns of outcomes. The meta-analysis of individual patient data from a number of economic evaluations may provide particular insights, although examples are rare (Bower et al., 2003; Richardson, 2007). In contrast, studies that merely report which types of resource use were measured and valued (but do not provide a breakdown of the cost estimate for each intervention, by type of resource use or by patient sub-group) would be less useful in helping decision modellers decide what resource use or patient pathways are important.

This use of reviews of model-based economic evaluations has been encouraged by Pignone and colleagues, on the basis of their experience of conducting reviews of economic evaluations for the US Preventive Services Task Force (Pignone et al., 2005). However, because model-based economic evaluations are themselves syntheses, it makes no sense to consider traditional meta-analysis or the pooling of aggregate results from such studies. Instead, they suggest, reviews of model-based economic evaluations are ‘most useful for comparing and contrasting how different investigators have chosen to structure their models and estimate key variables’ and can also ‘clarify how results differ between studies based on these different assumptions’ (Pignone et al., 2005, p. 1073). A recent example of such a review, in relation to the impact of structural assumptions in decision models, is that by Drummond and colleagues on models for rheumatoid arthritis (Drummond and Barbieri, 2005).

4.2. Reviews to identify the most relevant study

In some decision-making situations there may be insufficient resources to conduct an original model-based economic evaluation of the specific decision problem being faced. In such situations, rather than not consider any economic evidence at all, it may be better to identify the best quality study (or few studies) most relevant to the decision being faced, and transfer or adapt those results to the new decision problem. Judging the ‘best quality study’ would have to include considerations of both internal validity (study design and methodological quality) and external validity (e.g. how long ago? similarity of comparators? similarity of health service/system settings?). Only if there happened to be several studies of similar quality and relevance to the current decision context might it be worth examining to what extent, and why, their cost–effectiveness estimates vary. However, this would be with a view to contextualising the results of the most relevant high quality study, rather than with the expectation that some average result might emerge. There may also be the possibility of updating and re-estimating the cost–effectiveness using local resource use prices, or perhaps through inflating and converting costs from past studies in other countries (Carande-Kulis et al., 2000; Welte et al., 2004). This strategy has parallels with the broad approach of ‘best evidence synthesis’, in which the threshold for the inclusion of studies is cumulatively judged according to what the best (most internally valid and most relevant) studies show (Slavin, 1995).
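Where results are adapted in this way, the adjustments applied are typically simple price corrections. As a generic sketch of common practice (not a formula prescribed by the cited guidance), a cost estimate from a source study can be inflated to the decision year using a (preferably health-sector-specific) price index I, and converted between currencies using a purchasing power parity (PPP) rate:

\[ C_{\mathrm{target}} = C_{\mathrm{source}} \times \frac{I_{\mathrm{target\ year}}}{I_{\mathrm{source\ year}}} \times \mathrm{PPP}_{\mathrm{source} \rightarrow \mathrm{target}} \]

Such arithmetic adjusts prices only; it cannot correct for differences in resource quantities, practice patterns or comparators between settings, which is why it complements, rather than substitutes for, the relevance judgements described above.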


4.3. Reviews to understand the key economic trade-offs and causal relationships in a decision problem or treatment area

This reason for conducting systematic reviews of economic studies is almost the antithesis of why many such published reviews currently seem to be undertaken; instead of seeking a generalisable empirical regularity that an intervention is or is not cost-effective, the aim of such a review is explanatory. Of course, the underlying ‘explanation’ of why some new interventions or treatments have a particular incremental cost may simply be that the new technology itself has a much higher per patient price. However, explaining why some interventions are more cost-effective than others will often also involve a whole range of other factors to do with downstream health effects, different rates and timing of adverse events, or different levels of patient compliance. With more complex interventions, it becomes even harder to explain how a specific bundle of intervention components (and their associated resource use), provided in a given context, has generated the levels and types of outcomes measured (Byford and Sefton, 2003; Coast et al., 2000; Godber et al., 1997; Kelly et al., 2005).

Therefore, either to inform the structure of a decision model, or as an exercise in developing theory, it would often be useful to conduct systematic reviews of economic studies that seek to answer review questions such as: ‘How do the level and configuration of resources involved in treatment/service design strategies P, Q and R appear to be related to the levels and types of outcomes observed, and what contextual factors affect these relationships?’ Decision models, after all, are essentially a simplified expression of what these key trade-offs are presumed to be, and we often do not describe clearly where these structural assumptions come from (Cooper et al., 2005). Such theory-building or explanatory reviews may need to abandon the more intervention-focussed and research design (i.e. internal validity) focussed processes typical of conventional systematic reviews of effectiveness evidence; instead they could use approaches such as ‘realist review’, which use more iterative analysis and purposive sampling, and may focus more on intervention theory in order to build up a reliable picture of the key causal relationships at work in a given area of programme design or treatment (Pawson, 2002, 2006; Pawson et al., 2005). They would therefore need to make best use of all types of economic study, not just economic evaluations.

5. DISCUSSION

Conducting high quality systematic reviews of any research question, in any topic area, requires a great deal of time and effort, and increasingly specialist research skills. In some circumstances, the ‘pay-back’ from such efforts can be generalisable and credible knowledge with wide probable application in a variety of policy and practice settings. It should not be forgotten that systematic reviews also serve the more basic and useful purpose of ‘mapping’ what research has been conducted relating to a particular question, or describing the overall quality of research, or the major ‘knowledge gaps’ in a particular field (Petticrew and Roberts, 2006).

Whether systematic reviews of a particular type of research evidence (e.g. economic evaluations) are useful should ultimately rest on whether the likely benefits will outweigh the research effort (and costs) involved. It might seem obvious but, on the benefit side, this critically depends on the number, quality and heterogeneity of studies that have addressed the same or similar research questions to the review’s research question. For example, a systematic review examining the research question ‘Is drug X more effective than drug Y (at improving outcomes P and Q) for patients with disease A?’ will be ultimately fruitless if there are not a number of clinical trials that have addressed the same or a very similar question. It follows that systematic reviews of economic evaluations usually shed light on little more than whether the ratio of incremental costs and incremental effects for two or more policy or treatment comparators is consistent between times and places, since this is the only empirical question that most economic evaluations explicitly aim to answer. However, as this paper has argued, there are so many reasons why incremental cost–effectiveness will vary (and legitimately vary) between different economic evaluations that in most cases no consistent or easily explainable pattern of cost–effectiveness results will arise.


This heterogeneity, together with variation in the effectiveness outcomes used (e.g. often spanning disease-specific measures, life-years, and quality-adjusted life-years), will almost always mean that statistical pooling of cost–effectiveness estimates (meta-analysis) is neither feasible nor meaningful (for a similar argument against the meta-analysis of economic evaluations, see Neymark, 1998). Indeed, though the action of specific drug compounds on the biology of well-defined disease processes may reasonably be expected to reveal a repeatable relationship in different trials (i.e. in humans with the same clinical diagnosis), patterns of resource use and their associated opportunity costs will be driven by a myriad of different professional/behavioural, socio-economic, and organisational factors, which vary considerably between health systems and service settings.

Therefore, at the centre of any discussion of the value of systematic reviews of economic evidence is a tension between economic evaluations as a primarily decision-informing or a knowledge-generating (e.g. hypothesis-testing) method, and whether they can successfully be both. Many health economists view economic evaluations as explicitly decision-informing, and therefore view their findings as inherently jurisdiction-specific and time-limited (Sculpher et al., 2005). To this end they have increasingly advocated the decision model as a vehicle for such analysis (Sculpher et al., 2006). The contrasting assumption underlying many reviews of clinical effectiveness is that the knowledge generated will have wide applicability to similarly diagnosed and managed patients elsewhere and in the future (often based on well-supported theory of the underlying biological mechanisms of the treatment and the targeted disease process). The overtly knowledge-generating and even hypothesis-testing ethos of the Cochrane Collaboration illustrates this tension. The applicability of evidence to real decision-making situations is inherently limited when the goal of research synthesis is primarily to uncover some generalisable empirical regularity in the supposed underlying effectiveness – or cost–effectiveness – of a treatment.

In contrast, in areas of health care like public health, where effectiveness is recognised as inherently more complex and contingent, it has become increasingly recognised that asking whether an intervention ‘is effective’ is of limited value; there is very rarely a clear answer to this review question. It can so confidently be predicted that complex interventions will work in some instances and not others that it makes much more sense to ask from the beginning ‘how and why’ an intervention is or is not effective in different contexts, or when its components are configured or implemented in different ways (Jackson et al., 2005; Petticrew and Roberts, 2006). These same insights need to be extended to the consideration of economic evidence.

I have suggested three useful purposes for reviews of economic studies (summarised in Table II). The first is just good research practice: to avoid duplication, and to inform a new model-based analysis by seeing what methods others have used to answer similar research questions. The second purpose, finding the best one or two studies to inform a particular decision, reflects the fact that there are so many factors altering both costs and effectiveness from place to place and time to time that overriding considerations of generalisability will often rule out the value of appraising many high quality economic studies. This approach will sometimes identify a recent, well-conducted and relevant analysis in the jurisdiction of interest, which may preclude the need for a new empirical economic evaluation (perhaps using established criteria for judging transferability; Welte et al., 2004). However, weighing up the need for collecting new primary data on costs and effectiveness, against the risks of adapting evidence from past studies of varying relevance and certainty, will itself be context-dependent and rarely easy. Finally, the third suggested purpose of reviews eschews the traditional focus on comprehensively describing and quality-assessing only the methods and quantitative results of each included study, and instead pursues a more explanatory and theory-building objective. This approach explicitly acknowledges the pervasive heterogeneity in economic evidence, to produce a more detailed understanding of how different combinations and levels of resources, and their associated opportunity costs, cause different patterns of outcomes in different patient populations or care contexts.


However, a note of caution about the value of this type of explanatory review is warranted. Although it would be useful to be able to explain how different configurations of resources appear to cause different levels and types of outcomes, the fact remains that few empirical economic evaluations have this explanatory intent. If economic evaluations remain mostly intervention-focussed (with little detailed description of context and patient characteristics), and exclusively descriptive in aim (that is, to measure and report the total costs and total effects for a particular comparison), then they will probably be rather uninformative material for such a review.

Table II. Suggested purpose, questions and scope of systematic reviews of economic studies

Purpose 1: To inform the development of an economic decision model.
Suggested review question(s):
- What are the key theoretical trade-offs (between levels and types of resources, and levels and types of outcome) implicit in a given treatment/policy choice? What do previously published empirical economic studies (with patient-level cost and outcome data) reveal or refute about such trade-offs?
- What are the strengths and weaknesses of previously used decision model structures and modelling approaches for evaluating similar decision problems?
- Are any of the previously developed models fit-for-purpose for analysing the current decision problem?
Scope and types of studies to include:
- Review all previous decision model-based analyses, for justification of their model structural assumptions and presumed drivers of costs, key outcome events, quality of life and/or survival.
- May focus on review and editorial papers as much as original economic analyses.
- Useful insights may derive from all types of economic study, as well as economic evaluations (e.g. non-comparative cost analyses, or cost-of-illness studies).

Purpose 2: To identify the one or two most relevant existing studies to inform a particular decision.
Suggested review question(s):
- Are there any currently published economic evaluations of the decision problem which might be transferred, or adapted and updated, to reliably inform our present policy choice?
- In what ways are the cost–effectiveness results from this/these studies likely to differ for this jurisdiction at this point in time?
Scope and types of studies to include:
- Initially identify all previously published full economic evaluations, but review in detail only those whose characteristics are sufficiently similar and relevant to the current decision problem and context (judged in terms of both internal validity and external generalisability). [a]
- Choose the most high quality and relevant study, and interpret its results in the light of the variability in results across other similar and high quality studies (where they exist).

Purpose 3: To identify the key economic (causal) trade-offs implicit in a given treatment/policy choice or patient group.
Suggested review question(s):
- What are the key theoretical trade-offs (between levels and types of resources, and levels and types of outcome) implicit in a given treatment/policy choice?
- What do previously published empirical economic studies (with patient-level cost and outcome data) reveal or refute about such trade-offs?
Scope and types of studies to include:
- Potentially very wide scope, including economic evaluations (both empirical and model-based), cost comparisons, regression-based cost analyses, and cost-of-illness studies.
- Use an iterative and more purposive sampling procedure among identified studies (as with reviews of qualitative research), and a mixture of data extraction forms relevant to different study designs. Organise evidence synthesis around key theorised mechanisms. If the studies are grouped by intervention context, are different causal trade-offs observed or implied?

Note:
[a] For example, selecting the most rigorous, relevant and adaptable economic evaluation for a given decision problem could use the Decision Chart developed by Welte et al. (2004).


6. CONCLUSION

Evidence-based policy making cannot be driven by the results of individual empirical studies. Instead, some kind of synthesis of evidence from numbers of relevant research studies should be an integral part of the policy-making process. At the same time, given the inescapable reality of opportunity costs in policy choices, it is essential that ‘economic evidence’ plays some part in the decision-making process. In health policy making, these two principles appear to have resulted in the widespread presumption that conducting systematic reviews of economic evaluations provides a valuable basis either for decision making or, somehow, for knowledge generation.

This paper has made the case that, while increasingly popular, systematic reviews of economic evaluations are usually futile as an input to evidence-based policy making. This is mainly because of the very wide range of factors that can introduce heterogeneity into the cost and/or the effectiveness side of economic evaluations, but also because the decision-informing purpose of economic evaluation imposes much stronger requirements for the generalisability of evidence.

The conclusion also rests on the fact that empirical economic evaluations have largely become a purely descriptive and largely acontextual method (define comparators; measure and value inputs; measure and value outcomes; compare) (Birch and Gafni, 2002). If economic evaluations were to become more explanatory in their intent and methods, and sought more explicitly to explain how different levels and configurations of resources cause different levels and combinations of outcomes, then systematic reviews of economic evaluations might generate more useful knowledge for policy makers. This call in fact closely echoes a conclusion of one of the first papers to discuss the feasibility of ‘secondary economic analyses’, which suggested that ‘Progress may require a more coherent theoretical framework linked to cost and production function theory’ (emphasis added; Jefferson et al., 1996, p. 163). Similarly, a core premise of the new guidance within the revised Cochrane Handbook is that ‘the overall aim of economics components of reviews should be to summarise what is known from different settings about economics aspects of interventions, to help end-users understand key economic trade-offs between alternative health-care treatments or tests’ (Shemilt et al., 2008, p. 4). This explanatory emphasis is also consistent with increasing confidence in more theory-driven methods of systematic review for understanding the variable effectiveness of complex interventions in different places and populations (Dixon-Woods et al., 2005; Greenhalgh et al., 2007; Pawson et al., 2005; Petticrew and Roberts, 2006).

At the very least, commissioners of systematic reviews of ‘economic evidence’ need to be more explicit about a review’s overall purpose (e.g. to inform a particular decision, to inform a new decision model, or to summarise ‘what is known’?) and its specific review questions, and to link these to the scope and methods of the review. They can then more clearly consider whether resources might be better invested directly in a decision model-based synthesis of evidence, or perhaps in more primary research into how and why particular combinations of resources are associated with different patterns of outcomes.

ACKNOWLEDGEMENTS

I thank current and past colleagues for comments on an earlier paper on the appropriate role of economic evidence in evidence-based policy making more generally, from which the arguments in the current paper have emerged. I am also grateful to two anonymous referees, and to Ian Shemilt and other members of The Campbell & Cochrane Economic Methods Group for sharing pre-publication versions of the new chapter for the Cochrane Handbook on ‘Incorporating economic evidence’ in Cochrane reviews, and for useful comments on this paper as presented at HESG January 2008 (at UEA, Norwich).


REFERENCES

Ades A, Sculpher M, Sutton A, Abrams K, Cooper N, Welton N, Lu G. 2006. Bayesian methods for evidencesynthesis in cost-effectiveness analysis. Pharmacoeconomics 24: 1–19.

Birch S. 1997. As a matter of fact: evidence-based decision-making unplugged. Health Economics 6: 547–559.Birch S, Gafni A. 2002. Evidence-based health economics. Answers in search of questions? In Evidence-Based

Medicine in its Place, Kristiansen I, Mooney G (eds). Routledge: London, New York; 50–61.Birch S, Gafni A. 2003. Economics and the evaluation of health care programmes: generalisability of methods and

implications for generalisability of results. Health Policy 64: 207–219.Bower P, Byford S, Barber J, Beecham J, Simpson S, Friedli K, Corney R, King M, Harvey I. 2003. Meta-analysis

of data on costs from trials of counselling in primary care: using individual patient data to overcome sample sizelimitations in economic analyses. British Medical Journal 326: 1247–1252.

Boyd K, Briggs A. 2008. A critique of decision-analytic modelling in cost-effectiveness analyses of smokingcessation services: what makes a good model? Paper Presented at HESG Winter 2008 Meeting, UEA, Norwich(cited with permission).

Buxton M, Drummond MF, van Hout BA, Prince RL, Sheldon TA, Szucs T, Vray M. 1997. Modelling in economicevaluation: an unavoidable fact of life. Health Economics 6: 217–227.

Byford S, Sefton T. 2003. Economic evaluation of complex health and social care interventions. National InstituteEconomic Review 186: 98–108.

Carande-Kulis V, Maciosek M, Briss P, Teutsch S, Zaza S, Truman B, Messonnier M, Papaioanou M, Harris J,Fielding J. 2000. Methods for systematic reviews of economic evaluations for the guide to community preventiveservices. American Journal of Preventive Medicine 18(1S): 75–91.

Cheng AK, Niparko JK. 1999. Cost-utility of the cochlear implant in adults: a meta-analysis. Archives ofOtolaryngology – Head & Neck Surgery 125(11): 1214–1218.

Coast J, Hensher M, Mulligan J, Sheppard S, Jones J. 2000. Conceptual and practical difficulties with the economicevaluation of health services developments. Journal of Health Services Research and Policy 5: 42–48.

Cooper N, Coyle D, Abrams K, Mugford M, Sutton A. 2005. Use of evidence in decision models: an appraisal ofhealth technology assessments in the UK since 1997. Journal of Health Services Research and Policy 10: 245–250.

Cooper N, Sutton A, Ades A, Paisley S, Jones D, for Working Group on the ‘Use of evidence in economic decisionmodels. 2007. Use of evidence in economic decision models: practical issues and methodological challenges.Health Economics 16: 1277–1286.

Craig N, Sutton M. 1998. Opportunity costs on trial: new options for encouraging implementation of results fromeconomic evaluations. In Getting Research Findings into Practice, Donald A, Haines A (eds). BMJ Books:London; 124–142.

Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A. 2005. Synthesising qualitative and quantitativeevidence: a review of possible methods. Journal of Health Services Research and Policy 10: 45–53.

Donaldson C, Mugford M, Vale L. 2002a. Evidence-based Health Economics: from Effectiveness to Efficiency inSystematic Review. BMJ Books: London.

Donaldson C, Mugford M, Vale L. 2002b. Using systematic reviews in economic evaluation: the basic principles. InEvidence-Based Health Economics: from Effectiveness to Efficiency in Systematic Review, Donaldson C,Mugford M, Vale L (eds). BMJ Books: London; 10–24.

Drummond M. 2002. Evidence-based medicine meets economic evaluation – an agenda for research. In Evidence-Based Health Economics: from Effectiveness to Efficiency in Systematic Review, Donaldson C, Mugford M,Vale L (eds). BMJ Books: London; 148–154.

Drummond M, Barbieri MWJ. 2005. Analytic choices in economic models of treatments for rheumatoid arthritis. What makes a difference? Medical Decision Making 25: 520–533.

Drummond M, Pang F. 2001. Transferability of economic evaluation results. In Economic Evaluation in Health Care: Merging Theory with Practice, Drummond M, McGuire A (eds). Oxford University Press: Oxford; 256–276.

Drummond M, Sculpher M, Torrance GW, O’Brien B, Stoddart GL. 2005. Methods for the Economic Evaluation of Health Care Programmes. Oxford University Press: New York.

Evers S, Goossens M, de Vet H, van Tulder M, Ament A. 2005. Criteria list for assessment of methodological quality of economic evaluations: Consensus on Health Economic Criteria. International Journal of Technology Assessment in Health Care 21(1): 240–245.

Godber E, Robinson R, Steiner A. 1997. Economic evaluation and the shifting balance towards primary care: definitions, evidence and methodological issues. Health Economics 6: 275–294.

Greenhalgh T, Kristjansson E, Robinson V. 2007. Realist review to understand the efficacy of school feeding programmes. British Medical Journal 335: 858–861.


Jackson N, Waters E, for the Guidelines for Systematic Reviews of Health Promotion and Public Health Interventions Taskforce. 2005. Guidelines for Systematic Reviews of Health Promotion and Public Health Interventions. Deakin University, for the Cochrane Health Promotion and Public Health Field: Melbourne, Australia.

Jefferson T, Mugford M, Gray A. 1996. An exercise on the feasibility of carrying out secondary economic analyses. Health Economics 5: 155–165.

Jefferson T, Vale L, Demicheli V. 2002. Methodological quality of economic evaluations of health care interventions – evidence from systematic reviews. In Evidence-based Health Economics: from Effectiveness to Efficiency in Systematic Review, Donaldson C, Mugford M, Vale L (eds). BMJ Books: London; 67–88.

Kelly M, McDaid D, Ludbrook A, Powell J. 2005. Economic Appraisal of Public Health Interventions (HDA Briefing Paper). Health Development Agency: London.

Kraemer H, Frank E, Kupfer D. 2006. Moderators of treatment outcomes: clinical, research, and policy importance. Journal of the American Medical Association 296: 1286–1289.

Kravitz R, Duan N, Braslow J. 2004. Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. The Milbank Quarterly 82(4): 661–687.

Kristiansen I, Gosden T. 2002. Evaluating economic interventions: a role for non-randomised designs? In Evidence-Based Health Economics: from Effectiveness to Efficiency in Systematic Review, Donaldson C, Mugford M, Vale L (eds). BMJ Books: London; 114–126.

Lavis J, Davies H, Oxman A, Denis J-L, Golden-Biddle K, Ferlie E. 2005. Towards systematic reviews that inform health care management and policy-making. Journal of Health Services Research and Policy 10(Suppl. 1): 35–48.

Mugford M. 2002. Reviewing economic evidence alongside systematic reviews of effectiveness: example of neonatal exogenous surfactant. In Evidence-Based Health Economics: from Effectiveness to Efficiency in Systematic Review, Donaldson C, Mugford M, Vale L (eds). BMJ Books: London; 25–37.

Neymark N. 1998. Critical reviews of economic analyses in order to make health care decisions for cancer. Annals of Oncology 9: 1167–1172.

Nixon J, Khan KS, Kleijnen J. 2001. Summarising economic evaluations in systematic reviews: a new approach. British Medical Journal 322: 1596–1598.

Pawson R. 2002. Evidence-based policy: the promise of ‘Realist Synthesis’. Evaluation 8: 340–358.

Pawson R. 2006. Evidence-Based Policy: A Realist Perspective. Sage Publications: London.

Pawson R, Greenhalgh T, Harvey G, Walshe K. 2005. Realist review – a new method of systematic review designed for complex policy interventions. Journal of Health Services Research and Policy 10(Suppl. 1): 21–34.

Petticrew M, Roberts H. 2006. Systematic Reviews in the Social Sciences: A Practical Guide. Blackwell Publishing: Malden, MA.

Pignone M, Saha S, Hoerger T, Lohr K, Teutsch S, Mandelblatt J. 2005. Challenges in systematic reviews of economic evaluations. Annals of Internal Medicine 142(12): 1073–1079.

Popay J. 2006. Moving Beyond Effectiveness in Evidence Synthesis: Methodological Issues in the Synthesis of Diverse Sources of Evidence. National Institute for Health and Clinical Excellence: London.

Richardson G. 2007. Cost-effectiveness of interventions to support self-care. Ph.D. Thesis, University of York.

Russell LB. 2007. Is all cost-effectiveness analysis local? Medical Decision Making 27(3): 231–232.

Sculpher M, Claxton K, Akehurst RL. 2005. It’s just evaluation for decision-making: recent developments in, and challenges for, cost-effectiveness analysis. In Health Policy and Economics: Opportunities and Challenges, Smith P, Sculpher M, Ginnelly L (eds). Open University Press: Buckingham; 8–41.

Sculpher M, Claxton K, Drummond M, McCabe C. 2006. Whither trial-based economic evaluation for health care decision making? Health Economics 15(7): 677–687.

Sculpher M, Drummond M. 2006. Analysis sans frontières: can we ever make economic evaluations generalisable across jurisdictions? Pharmacoeconomics 24(11): 1087–1099.

Sculpher M, Pang F, Manca A, Drummond M, Golder S, Urdahl H, Davies L, Eastwood A. 2004. Generalisability in economic evaluation studies in healthcare: a review and case studies. Health Technology Assessment 8(49).

Shemilt I, Mugford M, Byford S, Drummond M, Eisenstein E, Knapp M, Mallender J, McDaid D, Vale L, Walker D. 2008. Chapter 15: incorporating economics evidence. In Cochrane Handbook for Systematic Reviews of Interventions Version 5.0.0, Higgins J, Green S (eds). The Cochrane Collaboration. (Available from: www.cochrane-handbook.org, updated February 2008.)

Shemilt I, Mugford M, Drummond M, Eisenstein E, Mallender J, McDaid D, Vale L, Walker D, The Campbell and Cochrane Economics Methods Group (CCEMG). 2006. Economics methods in Cochrane systematic reviews of health promotion and public health related interventions. BMC Medical Research Methodology 6(55): 1–11.

Shiell A, Hawe P, Gold L. 2008. Complex interventions or complex systems? Implications for health economic evaluation. British Medical Journal 336: 1281–1283.


Slavin R. 1995. Best evidence synthesis: an intelligent alternative to meta-analysis. Journal of Clinical Epidemiology 48: 9–18.

Thompson Coon J, Rogers G, Hewson P, Wright D, Anderson R, Cramp M, Jackson S, Ryder S, Price A, Stein K. 2007. Surveillance of cirrhosis for hepatocellular carcinoma: systematic review and economic analysis. Health Technology Assessment 11(34).

Weinstein MC. 2006. Recent developments in decision-analytic modeling for economic evaluation. Pharmacoeconomics 24: 1043–1053.

Weinstein MC, O’Brien B, Hornberger J, Jackson J, Johannesson M, McCabe C, Luce BR. 2003. Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR task force on good research practice: modeling studies. Value in Health 6(1): 9–17.

Welte R, Feenstra T, Jager H, Leidl R. 2004. A decision chart for assessing and improving the transferability of economic evaluation results between countries. Pharmacoeconomics 22: 857–876.
