
Surviving or Thriving: Quality Assurance Mechanisms to Promote Innovation in the Development of Evidence-Based Parenting Interventions

Matthew R. Sanders & James N. Kirby

© Society for Prevention Research 2014

Abstract Parenting interventions have the potential to make a significant impact on the prevention and treatment of major social and mental health problems of children. However, parenting interventions often fail to do so because program developers pay insufficient attention to the broader ecological context that influences the adoption and implementation of evidence-based interventions. This context includes the professional and scientific community, end users, consumers, and the broader sociopolitical environment within which parenting services are delivered. This paper presents an iterative stage model of quality assurance steps to guide ongoing research and development, particularly activities related to program innovation, including theory building, intervention development, pilot testing, efficacy and effectiveness trials, program refinement, dissemination, and planning for implementation and political advocacy. The key challenges associated with each phase of the research and development process are identified. Stronger consumer participation throughout the entire process, from initial program design to wider community dissemination, is an important but often ignored part of the process. Specific quality assurance mechanisms are discussed that increase accountability and professional and consumer confidence in an intervention and the evidence supporting its efficacy.

Keywords Parenting intervention · Innovation · Quality assurance · Prevention

Innovations or adaptations in both the content and process of delivering interventions often evolve in the context of seeking better solutions to unmet needs faced by particular client groups. Paul (1967) noted that the historical impetus behind the development of evidence-based interventions was the underlying question: “What treatment, by whom, is most effective for this individual with that specific problem, and under which set of circumstances?” This impetus continues to lead parenting researchers to develop interventions for an increasingly diverse range of populations and, in the process, to demonstrate the remarkable robustness of the intervention model. However, a conceptual framework to facilitate program adaptation and adoption has been lacking. Specifically, a framework is needed to guide the research and development process from initial theory building to the scaling up of interventions for wide-scale implementation (Axford and Morpeth 2013; Little 2010). This paper aims to fill that gap by describing a quality assurance process designed to facilitate program innovation through the different phases of the research and development cycle. The system of Triple P—Positive Parenting Programs is used as an example to illustrate how different elements of the model can be applied, from initial program development and program adaptation through to international dissemination.

M. R. Sanders · J. N. Kirby
The University of Queensland, Brisbane, Australia

J. N. Kirby
e-mail: [email protected]

M. R. Sanders (*)
Parenting and Family Support Centre, The University of Queensland, Brisbane, QLD 4072, Australia
e-mail: [email protected]

Prev Sci. DOI 10.1007/s11121-014-0475-1

Five decades of experimental clinical research by a range of researchers has demonstrated that structured parenting programs based on social learning models are among the most efficacious and cost-effective interventions available to promote the mental health of children (National Research Council and Institute of Medicine 2009). Empirically supported programs, such as the system of Triple P—Positive Parenting Program (Sanders 2012), the Incredible Years Program (Webster-Stratton 1998), Parent-Child Interaction Therapy (Fernandez and Eyberg 2009), the Parent Management Training-Oregon Model (PMTO; Forgatch et al. 2013), and Helping the Non-Compliant Child (Forehand and McMahon 1981), all share a common social learning theory basis and incorporate behavioral, cognitive, and developmental principles and concepts.

Contemporary parenting programs began as individually administered interventions (Hanf 1969; Patterson et al. 1975) and evolved into programs using a group delivery format (Webster-Stratton 1998); then, adjunctive interventions targeting a broader range of adjustment problems of parents were developed and evaluated. These included adjunctive interventions for reducing marital conflict over parenting and for parents’ personal adjustment problems (e.g., Dadds et al. 1987). As a public health approach to parenting support evolved, parenting interventions began targeting entire populations of parents, and a wider range of delivery formats was developed to increase the population reach of parenting programs (Prinz et al. 2009). The Triple P system was the first comprehensive multilevel system of parenting support that blended universal and targeted interventions. Briefly, Triple P is a multilevel system of parenting support aimed at preventing behavioral, emotional, and developmental problems in children and adolescents by enhancing the knowledge, skills, and confidence of parents (Sanders 2012; see Fig. 1 for an illustration of the Triple P system). The public health model of Triple P led to the development of brief, low-intensity programs that had much greater population reach than traditional individual and group programs. These low-intensity interventions included the use of mass media such as television and radio programs on parenting (Calam et al. 2008), large group seminars (Sanders et al. 2009), specific discussion groups (Morawska et al. 2011), and, more recently, online parenting interventions (Sanders et al. 2012).

Fig. 1 The Triple P model of graded reach and intensity of parenting and family support services

Paralleling these efforts, preventive parenting programs have been applied to an increasingly diverse and complex range of parents and children. This has included, but is not limited to, the Incredible Years Program with mothers experiencing depression (Hutchings et al. 2012), Parent-Child Interaction Therapy with military families (Gurwitch et al. 2013), the Parent Management Training-Oregon Model for single or recently separated parents (Forgatch et al. 2013), and Triple P for grandparents (Kirby and Sanders 2014). Moreover, many parenting programs have shown benefits with a broad range of culturally diverse parents, such as African-American, Caucasian, and European families (Olds et al. 1998; Sanders 2012; Webster-Stratton 1998).

The examples above illustrate how a core set of cognitive-behavioral principles and clinical consultation procedures to promote positive parenting can be adapted over time to address a diverse range of child and family problems. What has been missing is a conceptual roadmap to guide the process of development, adaptation, and tailoring to the needs of consumers.

Quality Assurance and the Program Development Process

A focus on quality assurance first came to prominence in industrial manufacturing in the 1930s and during mass production in World War II, primarily in the USA (Shewhart 1931). Quality assurance (QA) refers to the processes used to create and maintain reliable standards for deliverables. Quality assurance relates to activities before production work begins and is typically performed while the product is being developed (Crosby 1984). In contrast, quality control (QC) procedures refer to quality-related activities used to verify that deliverables are of acceptable quality and that they are complete and correct (Stein and Heikkinen 2009). Quality control activities are performed after the product is developed. In the context of developing an intervention program designed to solve a specific problem, an iterative process entailing both quality assurance and quality control steps is needed to ensure that the program meets the quality standards increasingly demanded by the field of prevention science.
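
The distinction can be made concrete by tagging each development activity with the phase in which it occurs. The following minimal sketch (our illustration, in Python; all activity names are hypothetical, not drawn from any actual Triple P workflow) represents QA activities, performed before and during development, alongside QC activities, performed on the finished deliverables:

    from dataclasses import dataclass
    from enum import Enum


    class Phase(Enum):
        """When in the development cycle an activity is performed."""
        QA = "before/during development"   # builds quality in
        QC = "after development"           # verifies the deliverables


    @dataclass
    class QualityActivity:
        description: str
        phase: Phase


    # Illustrative activities for a parenting-program development project.
    activities = [
        QualityActivity("Specify theoretical change mechanisms", Phase.QA),
        QualityActivity("Draft session-by-session protocol manual", Phase.QA),
        QualityActivity("Audit completed materials against the protocol checklist", Phase.QC),
        QualityActivity("Verify outcome data are complete and correct", Phase.QC),
    ]

    for a in activities:
        print(f"[{a.phase.name}] ({a.phase.value}): {a.description}")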

Phases in the Development Process

Several models have been proposed to describe the research and development process (e.g., Institute of Medicine 1994; National Institutes of Health (NIH) 2013; Bosch et al. 2011). Many evidence-based parenting programs are attempting to articulate a clear conceptual framework to aid in the development and scaling up of parenting programs to ensure that end users and consumers in the community receive access to evidence-based programs (e.g., PMTO—Forgatch et al. 2013; Nurse-Family Partnership—Olds et al. 2003). Figure 2 presents a schematic representation of a pragmatic model we have employed in the development, testing, and subsequent dissemination of our parenting work involving the Triple P system (Sanders 2012). The model aims to guide both the QA and QC procedures used in the ongoing development of interventions. Iteration in the model is shown by the two double-headed arrows that guide program developers from theory building through to eventual dissemination and implementation. The model is iterative in that each step builds on the previous step and incorporates the views of end users (practitioners and agencies) and consumers (parents and children) regarding the appropriateness, feasibility, cultural relevance, and usefulness of the intervention. As part of the iterative process, program developers need to be attuned to the ecological context within which the program will be deployed.
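
To make the iterative logic of the model explicit, the sketch below (an illustration under our own assumptions, not tooling used by the authors) encodes the stages of Fig. 2 as an ordered sequence through which development normally advances, with a feedback step that returns to an earlier stage whenever consumer or end-user input indicates rework:

    # Stages of the iterative development model (Fig. 2).
    STAGES = [
        "Theory building",
        "Program development and design",
        "Initial feasibility studies and program refinement",
        "Efficacy trials",
        "Effectiveness trials",
        "Program refinement",
        "Scaling up of intervention",
        "Dissemination and implementation",
    ]


    def next_stage(current: int, rework_needed: bool, rework_target: int = 0) -> int:
        """Advance to the next stage, or loop back to an earlier stage when
        consumer/end-user feedback indicates the work must be revisited."""
        if rework_needed:
            return rework_target  # iterate: return to an earlier stage
        return min(current + 1, len(STAGES) - 1)


    # Example: efficacy-trial feedback sends developers back to redesign.
    stage = STAGES.index("Efficacy trials")
    stage = next_stage(stage, rework_needed=True,
                       rework_target=STAGES.index("Program development and design"))
    print(STAGES[stage])  # -> Program development and design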

The development process outlined in Fig. 2 may seem time consuming to service systems seeking to access programs rapidly. However, a balance is needed between meeting service system demands for programs that work and the need to develop a credible evidence base to justify the dissemination and scaling up of interventions (Winston and Jacobsohn 2010). A clearly defined pragmatic framework can facilitate program development, evaluation, and translation for developers, enabling greater transparency and efficiency. The pressure to disseminate programs prematurely, with insufficient evidence, can do more harm than good and does a disservice to parents, children, and the community.

Building a Theoretical Basis for an Intervention

A crucial part of the QA procedure for intervention development is to ensure that interventions are built on solid theoretical foundations. These foundations include having a clear theoretical framework that informs the specific types of intervention procedures used and the component parts of the intervention (IOM 2009). Although the most effective parenting interventions, including the Triple P system, evolved from a common social learning, cognitive-behavioral, and functional analysis framework, some programs also incorporate principles and procedures drawn from other theories, including attachment theory, developmental theory, cognitive social learning and self-regulation theory, and public health models of intervention (Lieberman and Zeanah 1999; Sanders 2012; Webster-Stratton 1998). Having a clear theoretical framework helps to clarify the cognitive, behavioral, and affective change mechanisms employed by the intervention. Thus, it is important for interventions to clearly distinguish between the developmental theory that has been employed (i.e., the theory about the development or maintenance of the problem and how parenting will change the problem) and the action theory underpinning the program (i.e., the theory about how change in parenting will be facilitated, for example, through self-regulation processes).

Theories are also needed to describe the model of professional training employed (Sanders and Murphy-Brennan 2010). This involves considering how practitioners are selected, trained, and supervised, and how practitioners, commissioners of services, and line managers are informed about and engaged in the implementation process. All components of an intervention should have a clear theoretical basis, making them easier to evaluate and allowing the mechanisms of change that may explain intervention effects to be more readily identified (Kazdin and Nock 2003).

Program Development and Design

Recognition that an adaptation of an existing program or a new program is needed can stem from a variety of sources, including epidemiological studies (where available) that help define the extent of the problem in populations of interest. A systematic review that identifies current prevention and treatment programs for the problem is useful for identifying potentially modifiable protective and risk factors (National Institutes of Health (NIH) 2013). Research on cultural diversity and the implications of cultural differences relevant to a program is useful in identifying implementation challenges in working with target groups (Morawska et al. 2011). Conjoint analysis is a commonly used tool for determining the value consumers assign to the different features that contribute to a program or service. For example, Spoth and Redmond (1993) used conjoint analysis to determine parents’ preferences for specific components of family-focused prevention programs (e.g., meeting time, facilitator background) in order to enhance practical aspects of program delivery. Furthermore, consumer preference surveys can be used to gather information about the challenges, concerns, needs, and preferences of target groups (Sanders et al. 2012). For example, Metzler et al. (2012) used an online survey during the program development stage to pilot test the acceptability of a prototypical episode of a video series on positive parenting, focusing on taking children shopping. Finally, several studies have used focus groups with the intended population of interest and with professionals to help improve “the ecological fit” of a new program to the target population (e.g., Whittingham et al. 2012).

Fig. 2 A 10-part iterative process of program design and development, incorporating the perspective of consumers. Stages: theory building; program development and design; initial feasibility studies and program refinement; efficacy trials; effectiveness trials; program refinement; scaling up of intervention; dissemination and implementation. Parents and children participate as consumers, and practitioners and agencies as end users, throughout.

A further QA step is to develop intervention manuals for use in pilot studies (Chambless and Ollendick 2001). During pilot studies, clearly described session-by-session activities are developed, including group or individual session activities, protocol adherence checklists, and parent materials (e.g., workbooks). At this early stage, it is critical to reach agreement regarding authorship in order to avoid subsequent disputes. In our center, as the Triple P system of interventions is owned by The University of Queensland, all staff and students working on Triple P projects are required to assign copyright in any new Triple P program materials to the university. This policy ensures that a program can be disseminated under an existing licensing and publication agreement between the university and a dissemination organization.
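
As a concrete illustration of what a protocol adherence checklist can feed into, the following sketch (with hypothetical checklist items, not an actual Triple P form) scores a session as the percentage of prescribed activities the practitioner delivered:

    # Hypothetical session checklist: activity -> was it delivered?
    session_checklist = {
        "Review homework from previous session": True,
        "Introduce this week's parenting strategy": True,
        "Model the strategy / show video example": False,
        "Parent rehearsal with feedback": True,
        "Set between-session practice task": True,
    }


    def fidelity_score(checklist: dict) -> float:
        """Proportion of prescribed activities delivered, as a percentage."""
        return 100 * sum(checklist.values()) / len(checklist)


    print(f"Session fidelity: {fidelity_score(session_checklist):.0f}%")  # -> 80%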

Initial Feasibility Testing and Program Refinement

Pilot Studies

Once an intervention protocol has been developed, and before it is subjected to further evaluation through randomized clinical trials, it is useful to pilot test the actual protocols, including all materials, with individual cases or, more formally, using controlled single-case or intrasubject replication designs (Baer et al. 1968). Initial feasibility testing is the first opportunity to apply QC procedures to the developed intervention.

The advantage of this early QC step is that the likely effects of the intervention can be determined, including the extent to which change occurs on primary outcome measures, the timing of observed changes (rapid or gradual), and whether changes across different outcome variables are synchronous or desynchronous (Kazdin 2003).

Pilot studies also afford program developers the opportunity to learn how the program is received by end users (e.g., practitioners and agencies) as well as consumers (parents and children). This can be achieved by including focus groups or questionnaires as part of the pilot trial aimed at examining whether the program is deemed acceptable, culturally appropriate, and useful. Furthermore, during initial feasibility testing the developer is alerted to any implementation difficulties, including process issues, timing of activities, consumer acceptability and appropriateness of materials, and the sequencing of within-session tasks and exercises (Kazdin and Nock 2003). The evaluation of a program also requires appropriate outcome measures to be selected (Rounsaville et al. 2001). This includes considering various forms of evaluation, such as psychometrically sound self-report questionnaires, observational measures, monitoring forms, DSM-5 or ICD-10 symptom reductions, and population indicators (e.g., reduced levels of founded cases of child maltreatment) (Chambless and Ollendick 2001).
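
One common way to quantify whether an individual family’s pre-post change on such a questionnaire exceeds measurement error, which the authors do not prescribe but which fits this early QC step, is the Jacobson-Truax reliable change index (RCI). A minimal sketch with hypothetical scores:

    import math


    def reliable_change_index(pre: float, post: float,
                              sd_pre: float, reliability: float) -> float:
        """RCI = (post - pre) / SE_diff, where SE_diff is derived from the
        measure's standard deviation and its test-retest reliability."""
        se_measurement = sd_pre * math.sqrt(1 - reliability)
        se_diff = math.sqrt(2) * se_measurement
        return (post - pre) / se_diff


    # Hypothetical scores on a child-behavior problem scale (lower = better).
    rci = reliable_change_index(pre=130.0, post=105.0, sd_pre=25.0, reliability=0.85)
    print(f"RCI = {rci:.2f}  (|RCI| > 1.96 suggests reliable change)")  # -> -1.83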

Program Refinement

The initial feasibility testing provides the first opportunity for program developers to refine the program in light of the results obtained from the quantitative and qualitative feedback. This might require modifications to specific program content and delivery in order to assist with successful implementation. For example, the steps outlined in protocol adherence checklists might need to be detailed further in order to best measure program fidelity. Moreover, the most appropriate and least burdensome evaluation measures can also be selected at this point.

Efficacy Trials

Efficacy trials examine the beneficial effects of a program under optimal conditions of delivery (Flay et al. 2005). After initial feasibility testing and program refinement, the developed program should be evaluated in a randomized controlled trial, which is commonly implemented by the program developers; this is also referred to as the “proof of concept” phase (Valentine et al. 2011). The foundation trial should follow best-practice guidelines such as those detailed by CONSORT (Altman et al. 2001). Glenton et al. (2011) recommend that efficacy trials incorporate qualitative and quantitative research methodologies to better gauge what works and what does not work during the intervention, to assist with the interpretation of the findings, and to help with implementation and intervention design. Efficacy trials also provide the opportunity to examine potential theoretical mediators, as well as the obtained behavioral outcomes of the intervention. In determining the impact that a certain variable (e.g., participation in a Triple P program) has on the outcome of interest (e.g., changes in parenting style), it is important to examine not only the direct relationship between the two variables but also any mediation or moderation that occurs as a result of other variables. Knowing more about the role of mediators and moderators is important in improving the effectiveness of interventions (Schrepferman and Snyder 2002).
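
A minimal sketch of how such a mediation question can be examined with the product-of-coefficients approach is shown below (our illustration on simulated data; the paper does not prescribe a specific estimator). Here x is randomized program participation, m is change in parenting style (the candidate mediator), and y is the child behavior outcome:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 400
    x = rng.integers(0, 2, n)                    # randomized to program or control
    m = 0.5 * x + rng.normal(size=n)             # participation shifts parenting
    y = 0.6 * m + 0.1 * x + rng.normal(size=n)   # parenting change drives outcome
    df = pd.DataFrame({"x": x, "m": m, "y": y})

    a = smf.ols("m ~ x", df).fit().params["x"]        # path a: x -> m
    model_y = smf.ols("y ~ x + m", df).fit()
    b = model_y.params["m"]                           # path b: m -> y, adjusting for x
    direct = model_y.params["x"]                      # direct effect c'
    total = smf.ols("y ~ x", df).fit().params["x"]    # total effect c

    print(f"indirect (a*b) = {a * b:.2f}, direct = {direct:.2f}, total = {total:.2f}")

Moderation would be tested analogously by adding an interaction term (e.g., y ~ x * m) and examining whether the program effect differs across levels of the moderator.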

Effectiveness Trials

Programs that are disseminated need to be robust under everyday service delivery circumstances (Little 2010). Among prevention scientists, effectiveness trials refer to the effects a program achieves under real-world conditions (Flay et al. 2005). Effectiveness trials specifically permit the exploration of program outcomes when the program is delivered as part of usual service delivery in community settings. Through effectiveness trials, programs can be assessed for their robustness, and implementation enablers and barriers can be identified (Flay et al. 2005). Effectiveness trials also provide an opportunity to conduct the first cost-effectiveness analysis of the program (Foster et al. 2008). Specific programs within the Triple P system of interventions have had numerous effectiveness, service-based, and cost-effectiveness evaluations, for example, the Level 4 Group Triple P program (e.g., Dean et al. 2003; Gallart and Matthey 2005).

Challenges in Conducting Service-Based Evaluations

Service-based evaluations can provide valuable insights regarding the level of practitioner support and supervision required to ensure fidelity, how to promote practitioners’ program use, and the organizational conditions necessary to ensure the program works (Sanders and Murphy-Brennan 2010). Special challenges in conducting service-based evaluations of new interventions include resistance from practitioners and agencies regarding the establishment of comparison or control conditions and the use of randomization (Sanders and Murphy-Brennan 2010). Where there are existing well-established services for families, some practitioners can feel quite threatened when their “care as usual” is being evaluated. Projects can be sabotaged by staff or line managers who resent the implication that something different needs to be done to address the needs of families. Examples include failing to refer families to a program, using the media to voice disapproval about threats to the status quo, complaining about extra data collection requirements, or questioning the cultural relevance of an intervention before it has been tried with a specific population of parents. Researchers can also undermine projects by failing to ensure that practitioners in the study are adequately supervised and adhere to intervention protocols, that participants are randomized properly, that data are collected in accordance with the research design, and that the analyses and interpretation of results are handled appropriately (Eisner 2009). Strong line management support and well-defined clinical governance procedures can help prevent these potential problems, and appropriate supervision mechanisms and staff support are needed to execute a robust effectiveness trial.

Program Refinement

Refining interventions is an important QA step to ensure that the intervention meets the needs of the intended target group. Each time an intervention is evaluated, an opportunity is created to revise, reflect on, or refine intervention protocols. It is rare for trial prototypes of clinical procedures not to require further refinement before wider dissemination (Chambless and Ollendick 2001). This process can involve soliciting feedback from clients and practitioners concerning their experience of the program, including the readability of any written material, the relevance and usefulness of the examples used, the types of activities involved, and the appropriateness and authenticity of video material (Winston and Jacobsohn 2010). Program developers can use focus groups, assessments of variability in implementation, and findings from outcome evaluations to determine what program refinement is necessary.

Scaling Up Interventions for Dissemination

Dissemination refers to the process of taking evidence-based interventions from the research laboratory and delivering them to the community (Sanders 2012). Dissemination requires a well-developed set of consumer materials (such as manuals, workbooks, and DVDs), as well as professional training programs to train practitioners to deliver the intervention (Sanders and Murphy-Brennan 2010). A common problem faced by program developers when scaling up an intervention is that it can be difficult to do so in a university context. We therefore recommend that developers consult with a university’s technology transfer company to formalize an agreement concerning intellectual property rights and license arrangements with organizations (purveyors) that have the capacity to manage the dissemination process. The dissemination organization is then responsible for the dissemination process, which includes publishing resources and materials, video production and training, providing program consultation and technical support, and meeting QA standards and QC measures in the delivery of the intervention.


It is important to build science communication procedures into the dissemination process. Science communication refers to the process of engaging with media outlets (e.g., journalists, newspapers, television, radio, and web/online services) and describing and communicating clearly the scientific development process and the outcomes of the research (Weigold 2001). Researchers are notoriously poor at communicating clearly with the public about their discoveries or innovations; it is therefore important that investigators develop a communications policy that defines who is able to speak to the media about trial findings and at what stage.

Determining the Costs and Benefits of Interventions

Cost-effectiveness analyses should be conducted before scaling up and translating evidence-based programs to service systems (Little 2010). Cost-effectiveness analyses can influence whether policy makers and other potential adopters will take up the program, as they need to know whether investment in the program will have financial benefits for their constituency. Parenting interventions tend to fare well in cost-benefit analyses. For example, Aos et al. (2011) conducted a careful economic analysis of the costs and benefits of implementing the Triple P system using only indices of improvement in rates of child maltreatment (out-of-home placements and rates of abuse and neglect). Their findings showed that, for an estimated total intervention cost of $137 per family, if only 10 % of parents received Triple P there would be a positive benefit of $1237 per participant, with a benefit-to-cost ratio of 9.22. The benefit-to-cost ratio would be even higher when higher rates of participation are modelled.
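
The arithmetic behind a benefit-cost ratio is simply monetized benefits divided by program costs. The sketch below uses the per-family figures quoted above; note that because the published numbers are rounded and reported on slightly different bases, the ratio computed this way (about 9.0) does not exactly reproduce the reported 9.22:

    # Worked benefit-cost arithmetic using the Aos et al. (2011) figures
    # quoted in the text (per-family values, USD).
    cost_per_family = 137.0       # estimated total intervention cost
    benefit_per_family = 1237.0   # monetized benefit from reduced maltreatment

    ratio = benefit_per_family / cost_per_family
    print(f"Benefit-cost ratio = {ratio:.2f} : 1")  # ~9.03; reported as 9.22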

Dissemination and Implementation

Developing an Implementation Framework

An implementation framework is needed to disseminate a program effectively (Fixsen et al. 2005). This includes engaging with systems and potential partners; developing contracts and commitments from partners in order to meet desired goals; developing the plan for implementation with the target system; and building training and accreditation days within the system (Shapiro et al. 2012). An implementation framework has been developed for the Triple P system (Brown and McWilliam 2012). The framework includes a range of specific tools for use by program consultants working with agencies to guide each stage of the implementation process (e.g., how to conduct a line manager briefing, or how to estimate population reach from different levels of investment in training).

When disseminating a program internationally, it is important to establish a local evidence base at the site where the program is being implemented. For example, we have collaborated with many institutions to identify interested and competent researchers to conduct local evaluations of specific programs within the Triple P system to help build a local evidence base. Not only is sustainability more likely with local evidence of impact, but strategic alliances can also be built to increase the total pool of researchers around the world, contributing to the cumulative evidence base on parenting programs. Such an approach ensures that the program is responsive to local needs, fosters a spirit of openness and critical evaluation, and builds the local partnerships needed to sustain an intervention (Sanders 2012). To maintain the local community of providers and researchers, it is important to create links to the broader research community through international conferences (e.g., the Helping Families Change Conference, the international conference for Triple P) and international networks (e.g., the Triple P Research Network, www.tprn.net/), to further enable continued research collaborations and investment.

Lessons from Large-Scale Implementations

The implementation of a large-scale rollout of the Triple P system as a public health intervention is a major undertaking. While a program can be well designed and demonstrably effective when implemented properly, it may still fail if it is not “systems ready.” Systems readiness refers to the extent to which a host organization has in place an appropriate implementation plan to support the introduction of the program to the organization and the target community (Little 2010). If a program is attempted in a delivery system and it fails, the program may be criticized for not being systems ready. On the other hand, some programs can flourish in less-than-perfect systems, as they become a vehicle for the successful transformation of a dysfunctional system of care (Fixsen et al. 2005). When programs are developed and initially evaluated in a different setting from the environment of adoption, this can strengthen the programs’ ability to be robust and adapt to local conditions. However, many of the systems into which a program could be delivered are not ready to accept any innovation or new evidence-based practice. The National Implementation Research Network (NIRN 2012) has developed useful guidelines for both purveyor organizations and host organizations about the issues that need to be addressed in implementing any evidence-based practice.

Fidelity and Flexibility of Delivery

Critics of evidence-based practices often see programs as prescriptive, making it difficult for practitioners to be responsive to the perceived needs of individual clients. Overzealous fidelity monitoring enhances this perception. Monitoring that involves videotaping sessions and coding practitioner-client interactions for compliance with a protocol, which is often used in clinical trials to ensure intervention protocols are implemented properly, may add to clinicians’ perceptions that manualized interventions are inflexible and not responsive to client need. Mazzucchelli and Sanders (2010) have argued that manualized treatments need not imply rigid delivery. They argued that interventions can be delivered both with fidelity and with responsivity to a client group, and they provided specific examples of the kinds of variations in both the content and the process of delivery that developers would consider high and low risk.

Adaptation of Programs

Ongoing investment in research supporting a program is highly desirable to ensure that programs evolve in response to ongoing evidence of effectiveness and that innovations are made in response to the changing needs of the parent population. Without constant review, there is a risk that programs become outdated and no longer relevant to the needs of the target population. Program developers must be willing to review and, where necessary, modify their program in order to remain flexible and robust. The overarching theory underpinning the program, and the broader, more general strategies adopted, are likely to remain constant, thus allowing replication research to occur (Valentine et al. 2011). However, modifications to specific strategies based on emerging research can and should be made.

One way to foster innovation is to include greater consumer feedback in the research, development, and dissemination process. The participatory action research (PAR) paradigm (Whyte et al. 1989) and diffusion of innovations theory (Rogers 1995) are two models that advocate the direct involvement of consumers in determining the research questions, designs, methods, analyses, and products used (Torre and Fine 2006). Such an approach ensures that program developers more readily identify enablers of and barriers to parental participation (e.g., the time commitment required; the cost, location, and timing of programs), and steps can be taken to ensure that programs are tailored or customized to the needs and preferences of parents.

Maintaining Quality of Implementation

Many practitioners and organizations delivering parenting programs are undertrained, poorly supervised, underpaid, and underresourced, with insecure funding from one year to the next. Maintaining high-quality implementation of the intervention within the organization adopting it is important to ensure that the fidelity of the program is maintained (Fixsen et al. 2005). The organization adopting the intervention needs to be responsive to managing the training process to minimize program drift. This requires building a partnership with the program developers to help ensure implementation success (Brown and McWilliam 2012). In the case of Triple P, measures set in place to help prevent program drift and maintain quality of implementation include training practitioners with standardized materials (including participant notes, training exercises, and training DVDs demonstrating core consultation skills), practitioners becoming part of a Triple P practitioner network, and accreditation being required of all practitioners. Triple P International (TPI), as a purveyor organization, manages all aspects of the training program, including the initial training, post-training support, and follow-up technical assistance. To maintain the quality of implementation, TPI liaises with the organization to review implementation success; to receive feedback on areas that need assistance, supervision, or improvement; and to provide ongoing technical support. TPI is an independent company that has been licensed by The University of Queensland to publish the Triple P program and to disseminate it worldwide.

The Translation of Evidence-Based Parenting Programs into Practice

When independent evidence-based lists or regulatory bodies endorse programs, this helps inform governments about which evidence-based programs to adopt. Agencies and governments look to independent evidence-based lists as unbiased informants regarding programs’ effectiveness when making decisions about policies on childhood issues, such as how to prevent child maltreatment, how to reduce child-specific disorders such as behavioral problems and conduct disorder, and how to reduce abusive or coercive parenting practices. Endorsement of evidence-based programs by independent evidence-based lists is a critical step in helping translate these programs into practice. Through initiatives such as these, practitioners are increasingly being funded to employ evidence-based programs when delivering services to parents.

Promoting Innovation Through Quality Assurance Mechanisms

Developers of programs should provide evidence to show that a program works if it is to be included on reputable evidence-based lists of effective practices (e.g., Blueprints for Violence Prevention 2012; National Academy of Parenting Research 2011). An example of an evidence-based list is the Blueprints for Healthy Youth Development list of evidence-based interventions for reducing antisocial behavior. There has been increasing focus on the need for programs to be evaluated by researchers independent of the program developers (Eisner 2009). However, if programs are to thrive and evolve over time, it is desirable that developers have an ongoing involvement in research, development, and evaluation.


The Roles of “Developer Led” and “Independently Led” Evaluations

When building an evidence base for a field of research (e.g., parenting interventions) or for specific programs within a field of research (e.g., Triple P), there is a complementary need for both developer led and independently led evaluations. However, determining what constitutes a developer led or independently led evaluation is a complicated task. Thus, it is important to operationally define the two roles and then examine how they complement each other.

Program developers are individuals who initiate the original idea, develop the program, and continue to own the program through a copyright agreement, a license agreement, or some form of patent protection. Developer led research occurs when the program developers themselves evaluate the developed program (Sherman and Strang 2009). Within the field of prevention science, program developers are most often involved in the early evaluation of interventions and provide the foundational evidence for the program (Webster-Stratton 1998; Sanders 2012; Kazdin and Blase 2011). In these early stages of evaluation, program developers must ensure that the evidence supporting the practice is reliable, robust, and transparent and is produced in a manner that minimizes the potential for bias. Once proof-of-concept evidence is achieved (Valentine et al. 2011), program developers move towards replication research, and this is where the complementary process of independent evaluation is most valuable.

Independently led research is difficult to define. Independently led research occurs when the program developer is not involved in any stage of the research, is not an author on the subsequent publication, and the research is conducted at an institution independent of the program developer. Many factors need to be considered when determining whether a study is independently led, including who conducted the study; where the study was conducted; who the contributing authors were and at which institutions; who was responsible for the conceptual design of the study, measure selection, analysis, write-up, and interpretation of findings; and whether the developer or the organization providing approved training of staff was consulted during the evaluation process.

Independent evaluations are important for several reasons, as they are a form of replication research. They help control for conflict of interest (COI) and some forms of bias, and they help identify issues or problems with program implementation. Commonly, independent evaluations are conducted under more heterogeneous conditions and therefore provide a useful test of the robustness of intervention effects (Sherman and Strang 2009). One argument for independent evaluations pertains to the management of potential COI, whether financial or ideological, that can occur when program developers lead evaluation trials (Eisner 2009). It is therefore important for developer led evaluations to include mechanisms designed to avoid or minimize bias. Such safeguards comprise, but are not limited to, including COI statements in publications; registering trials prospectively on clinical trial databases (e.g., ClinicalTrials.gov, http://www.clinicaltrials.gov/); publishing the prospective trial protocol in a peer-reviewed journal; and maintaining an open data repository where independent evaluators conducting systematic reviews or meta-analyses can have open access to the original data. However, it should be noted that the intention of including COI statements in published work is not to eliminate conflict of interest, but to give readers the opportunity to digest the material and form their own view of the value of the work.

Independent evaluations also have limitations. As with any study, erroneous conclusions occur when interventions are implemented with inadequate fidelity, when findings are selectively reported, when there is a failure to accurately report the actual level of developer involvement, and when independent findings are themselves not replicated and are at variance with other available studies. Furthermore, independent evaluations are not free from potential bias. Sherman and Strang (2009) outline a variety of factors that could bias an independent evaluation, such as skepticism, financial or organizational pressures to show that programs do or do not work, the evaluators’ prediction of null findings or negative results for a program, the independent evaluators’ affiliation with a competing program, or the independent evaluation being corrupted by a desire to disprove the value of a respected or popular program. While one form of evaluation is not considered “better” than the other, both need to safeguard against bias.

Developers could facilitate independent evaluations by “signing off” on the fidelity of a planned implementation. This would follow approved competency-based training of the staff implementing the program in a planned independent evaluation. Independent evaluators can then proceed with their evaluation without further developer involvement. However, independent evaluators should also aim to demonstrate the level of fidelity attained during the trial by incorporating fidelity assessment measures. This two-step process could facilitate both program developer research and independently led research while maintaining high fidelity to the program procedures. The importance of facilitating quality independent research is underscored by some evidence-based lists stipulating that an independent replication of a program is required before it is considered to have demonstrated appropriate efficacy or effectiveness (Blueprints for Violence Prevention 2012).

A recent systematic review and meta-analysis of Triple P (Kirby and Sanders 2014) included 101 studies, 68 of which were randomized controlled trials of specific programs within the Triple P system (e.g., the Level 4 Group Triple P program), and 66 of which were peer-reviewed publications. A range of moderators was examined, including developer involvement. Developer involvement was classified into two categories: (a) any developer involvement or (b) no involvement. Seventy papers were categorized as having some level of developer involvement, whereas 31 papers had no developer involvement. Using structural equation modeling, the meta-analysis revealed that both developer led studies and studies with no developer involvement produced significant small to medium effect sizes for a range of child and parent outcomes (Kirby and Sanders 2014). This is the first time a meta-analysis has examined the level of developer involvement as a moderator of potential intervention effects. Importantly, both developer led and independent evaluations showed similar positive effects on a range of child and parent outcome measures.
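
The logic of such a moderator analysis can be illustrated with a simple inverse-variance subgroup summary (a sketch with made-up effect sizes; Kirby and Sanders (2014) used structural equation modeling, a more sophisticated approach): pool the effect sizes within each category of developer involvement and compare the pooled estimates:

    import numpy as np


    def pooled_effect(d: np.ndarray, se: np.ndarray) -> tuple:
        """Fixed-effect, inverse-variance weighted mean effect size and its SE."""
        w = 1 / se**2
        mean = np.sum(w * d) / np.sum(w)
        return mean, np.sqrt(1 / np.sum(w))


    # Hypothetical standardized mean differences and standard errors.
    subgroups = {
        "developer involved": (np.array([0.45, 0.38, 0.52]), np.array([0.10, 0.12, 0.09])),
        "no involvement":     (np.array([0.40, 0.35]),       np.array([0.11, 0.14])),
    }

    for label, (d, se) in subgroups.items():
        mean, se_pooled = pooled_effect(d, se)
        print(f"{label}: d = {mean:.2f} (SE {se_pooled:.2f})")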

Monitoring and Strengthening the Evidence

Program developers need to monitor the changing evidence base of their program. This involves cataloging all research conducted by the program developers and by other independent researchers. We have developed an easily accessible website to house all known theoretical and empirical studies of the program (www.pfsc.uq.edu.au/research/evidence) to enable other individuals, organizations, and systems to readily access the evidence base related to the program. The website facilitates the exchange of ideas and makes the collective knowledge about the program openly available. The website also enables meta-analytic researchers to find all known published papers on the program. Meta-analyses play a key role in documenting the overall effects of a program on specific outcome constructs (e.g., parenting) and can help evidence-based lists determine the efficacy or effectiveness of programs.

The Role of Critical Appraisal and Ongoing Innovation

Ongoing research and evaluation mean that no program can rest on its laurels (Winston and Jacobsohn 2010). The impetus for changing a program comes from evidence showing inadequate outcomes with a specific client group, feedback from practitioners or from parents as consumers, and cross-fertilization from one area of research to another. This critical, analytic approach is a dynamic process that should constantly strive for self-improvement. A single, or indeed several, well-conducted studies will never be the final word on the effects of a program. Although the program refinement process for developed programs is about making modifications to specific strategies based on emerging research, the overarching theory underpinning the program is likely to remain constant, thus allowing replication research to occur (Valentine et al. 2011). For example, many parenting programs were developed and tested in the 1970s and 1980s with the baby boomer generation, under the sociopolitical conditions prevailing at that time. Since then, there have been major societal changes that have affected families, including the development of the Internet, increases in working hours, greater participation of women in the paid workforce, increases in migration, and the global financial crisis. As interventions evolve to meet current conditions and to address the contemporary problems of families raising children in the age of technology, the evidence base needs to be refreshed.

In the case of the Triple P system, the Level 4 program was initially delivered either individually or in a group setting. However, applying a consumer perspective to program design, Metzler and colleagues (2012) found that parents preferred receiving parenting information online, compared with more traditional clinic-based delivery with a practitioner. These and similar findings from outside the USA led to the development of an online version of Triple P, which was found to be effective in a randomized controlled trial for parents of children with early conduct problems (Sanders et al. 2012). Through this type of innovation in program delivery, Triple P has tried to remain relevant to parents; it also illustrates the iterative program development process that has fostered the continuing evolution of Triple P. Future innovation of the Triple P system will involve the continued evaluation of specific programs in low-income countries and with underrepresented populations in high-income countries (such as indigenous populations). Through continued evaluations with these new populations, enablers of and barriers to program content and delivery will be identified and, as a result, further program refinement will ensue to ensure that programs are attuned to the ecological context within which they are deployed.

Lessons Learned from Large-Scale Implementation

Innovations in program development, and the evaluations of those programs, often occur in a university or clinical environment. Translating the knowledge gained in these institutional contexts to end users (e.g., parents, children, and practitioners) can be difficult. Institutional and environmental barriers that contribute to poor translation include intellectual property agreements, preexisting licensing arrangements for product delivery, a lack of effective communication channels between university departments and agencies, limited access to at-risk populations, and a lack of funding opportunities for implementation evaluations (Fixsen et al. 2005). Possible avenues available to researchers to reduce the impact of these barriers include liaising with a university’s technology transfer company in the early stages of product development, creating partnerships with agencies when applying for grants from funding bodies, and learning from programs in other fields (e.g., pharmaceutical, medical, engineering) that take translation processes seriously and have largely succeeded.


When implementing interventions on a large scale, there can often be implementation variability, for example, in fidelity, participant responsiveness, and adaptation, all of which can affect the outcomes achieved (e.g., Eisner 2009; Little et al. 2012). For large-scale implementations, Triple P now uses an implementation framework developed specifically for the Triple P system (Brown and McWilliam 2012). Through the use of specific implementation frameworks (e.g., NIRN 2012), implementation variability can be minimized and measured, resulting in a more informative indication of whether an intervention is systems ready.

Implications and Conclusions

A major implication of these quality assurance issues is that developers working in research-intensive settings need to become more focused on the end user throughout program development and evaluation. If programs are to survive and flourish over time, there needs to be constant evolution and investment in research and development. Without such investment, programs become stale, are seen as irrelevant to the modern generation of parents, or are seen to apply concepts and procedures that fail to reflect advances in knowledge relevant to understanding specific problems or client populations. On the other hand, a vibrant, thriving research and development group, working in collaboration with others, can create outstanding programs that have great potential to benefit children, families, and society for years ahead.

References

Altman, D. G., Schulz, K. F., Moher, D., Egger, M., Davidoff, F., Elbourne, D., et al. (2001). The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine, 134, 663–694.

Aos, S., Lee, S., Drake, E., Pennucci, A., Klima, T., Miller, M., et al. (2011). Return on investment: Evidence-based options to improve statewide outcomes (Document No. 11-07-1201). Olympia: Washington State Institute for Public Policy.

Axford, N., & Morpeth, L. (2013). Evidence-based programs in children’s services: A critical appraisal. Children and Youth Services Review, 35, 268–277.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97.

Blueprints for Violence Prevention. (2012). Retrieved from http://www.colorado.edu/cspv/

Bosch, M., Tavender, E., Bragge, P., Gruen, R., & Green, S. (2011). How to define ‘best practice’ for use in knowledge translation research: A practical, stepped and interactive process. Journal of Evaluation in Clinical Practice. doi:10.1111/j.1365-2753.2012.01835.x

Brown, J., & McWilliam, J. (2012). An implementation framework for the Triple P-Positive Parenting Program. Paper presented at the 15th Helping Families Change Conference, Los Angeles, CA. Retrieved from http://www.ausimplementationconference.net.au/presentations/4A-McWilliam.pdf

Calam, R., Sanders, M. R., Miller, C., Sadhnani, V., & Carmont, S. A. (2008). Can technology and the media help reduce dysfunctional parenting and increase engagement with preventative parenting interventions? Child Maltreatment, 13, 347–361.

Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685–716.

Crosby, P. B. (1984). Quality without tears. Singapore: McGraw-Hill.

Dadds, M. R., Sanders, M. R., Behrens, B. C., & James, J. E. (1987). Marital discord and child behavior problems: A description of family interactions during treatment. Journal of Clinical Child & Adolescent Psychology, 16, 192–203.

Dean, C., Myors, K., & Evans, E. (2003). Community-wide implementation of a parenting program: The South East Sydney Positive Parenting Project. Australian e-Journal for the Advancement of Mental Health, 2, 179–190. doi:10.5172/jamh.2.3.179

Eisner, M. (2009). No effects in independent prevention trials: Can wereject the cynical view? Journal of Experimental Criminology, 5,163–183. doi:10.1007/s11292-009-9071-y.

Fernandez, M., & Eyberg, S. (2009). Predicting treatment and follow-upattrition in parent-child interaction therapy. Journal of AbnormalChild Psychology, 37, 431–441. doi:10.1007/s10802-008-9281-1.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R.M., &Wallace, F.(2005). Implementation Research: A Synthesis of the Literature.Tampa, FL: University of South Florida, Louis de la Parte FloridaMental Health Institute, The National Implementation ResearchNetwork (FMHI Publication #231).

Flay, B. R., Biglan, A., Boruch, R. F., González Castro, F., Gottfredson, D.,Kellam, S., et al. (2005). Standards of evidence: Criteria for efficacy,effectiveness and dissemination. Prevention Science, 6, 151–175.

Forehand, R., & McMahon, R. J. (1981). Helping the noncompliant child: A clinician’s guide to parent training. New York: Guilford.

Forgatch, M. S., Patterson, G. R., & Gewirtz, A. H. (2013). Looking forward: The promise of widespread implementation of parent training programs. Perspectives on Psychological Science, 8, 682–694.

Foster, E. M., Prinz, R. J., Sanders, M. R., & Shapiro, C. J. (2008). The costs of a public health infrastructure for delivering parenting and family support. Children and Youth Services Review, 30, 493–501.

Gallart, S. C., & Matthey, S. (2005). The effectiveness of Group Triple P and the impact of the four telephone contacts. Behaviour Change, 22, 71–80.

Glenton, C., Lewin, S., & Scheel, I. B. (2011). Still too little qualitative research to shed light on results from reviews of effectiveness trials: A case study of a Cochrane review on the use of lay health workers. Implementation Science, 6, 53. doi:10.1186/1748-5908-6-53.

Gurwitch, R., Fernandez, S., Pearl, E., & Chung, G. (2013). Utilizing parent-child interaction therapy to help improve the outcome of military families. Retrieved from http://www.apa.org/pi/families/resources/newsletter/2013/01/parent-child-interaction.aspx.

Hanf, C. (1969). A two-stage program for modifying maternal controlling during mother-child interaction. Paper presented at the meeting of the Western Psychological Association, Vancouver.

Hutchings, J., Bywater, T., Williams, M. E., & Whitaker, C. (2012). Improvements in maternal depression as a mediator of child behaviour change. Psychology, 3, 795–801. doi:10.4236/psych.2012.329120.

Institute of Medicine. (1994). Reducing risks for mental disorders: Frontiers for preventive intervention research. Washington, DC: National Academy Press.

Kazdin, A. E. (2003). Research design in clinical psychology (4th ed.). Needham: Allyn & Bacon.

Kazdin, A. E., & Blase, S. L. (2011). Rebooting psychotherapy research and practice to reduce the burden of mental illness. Perspectives on Psychological Science, 6, 21–37. doi:10.1177/1745691610393527.

Kazdin, A. E., & Nock, M. K. (2003). Delineating mechanisms of change in child and adolescent therapy: Methodological issues and research recommendations. Journal of Child Psychology and Psychiatry, 44, 1116–1129.

Kirby, J. N., & Sanders, M. R. (2014). A randomized controlled trial evaluating a parenting program designed specifically for grandparents. Behaviour Research and Therapy, 52, 35–44.

Lieberman, A. F., & Zeanah, C. H. (1999). Contributions of attachment theory to infant–parent psychotherapy and other interventions with infants and young children. In J. Cassidy & P. R. Shaver (Eds.), Handbook of attachment: Theory, research, and clinical applications (pp. 555–574). New York: Guilford.

Little, M. (2010). Looked after children: Can existing services ever succeed? Adoption and Fostering Journal, 34, 3–7. doi:10.1177/030857591003400202.

Little, M., Berry, V., Morpeth, L., Blower, S., Axford, N., Taylor, R., et al. (2012). The impact of three evidence-based programmes delivered in public systems in Birmingham, UK. International Journal of Conflict and Violence, 6, 260–272.

Mazzucchelli, T. G., & Sanders, M. R. (2010). Facilitating practitioner flexibility within evidence-based practice: Lessons from a system of parenting support. Clinical Psychology: Science and Practice, 17, 238–252.

Metzler, C. W., Sanders, M. R., Rusby, J. C., & Crowley, R. N. (2012). Using consumer preference information to increase the reach and impact of media-based parenting interventions in a public health approach to parenting support. Behavior Therapy, 43, 257–270.

Morawska, A., Haslam, D., Milne, D., & Sanders, M. R. (2011). Evaluation of a brief parenting discussion group for parents of young children. Journal of Developmental and Behavioral Pediatrics, 32, 136–145.

National Academy of Parenting Research (NAPR). (2011). Retrieved from http://www.parentingresearch.org.uk/Default.aspx.

National Implementation Research Network. (2012). Retrieved from http://nirn.fpg.unc.edu/.

National Institutes of Health (NIH). (2013). Retrieved from http://prevention.nih.gov/phases/default.aspx.

National Research Council & Institute of Medicine. (2009). Preventing mental, emotional, and behavioral disorders among young people: Progress and possibilities (M. E. O’Connell, T. Boat, & K. E. Warner, Eds.). Washington, DC: National Academies Press.

Olds, D., Henderson, C. R., Cole, R., Eckenrode, J., Kitzman, H., Luckey, D., et al. (1998). Long-term effects of nurse home visitation on children’s criminal and antisocial behavior: Fifteen-year follow-up of a randomized controlled trial. Journal of the American Medical Association, 280, 1238–1244.

Olds, D. L., Hill, P. L., O’Brien, R., Racine, D., & Moritz, P. (2003). Taking preventive intervention to scale: The nurse-family partnership. Cognitive and Behavioral Practice, 10, 278–290.

Patterson, G. R., Reid, J. B., Jones, R. R., & Conger, R. E. (1975). A social learning theory approach to family intervention: Volume I. Families with aggressive children. Eugene: Castalia Publishing Co.

Paul, G. L. (1967). Strategy of outcome research in psychotherapy. Journal of Consulting Psychology, 31, 109–118.

Prinz, R. J., Sanders, M. R., Shapiro, C. J., Whitaker, D. J., & Lutzker, J. R. (2009). Population-based prevention of child maltreatment: The U.S. Triple P system population trial. Prevention Science, 10, 1–12.

Rogers, E. M. (1995). Diffusion of innovations. New York: Free Press.

Rounsaville, B. J., Carroll, K. M., & Onken, L. S. (2001). A stage model of behavioral therapies research: Getting started and moving on from stage I. Clinical Psychology: Science and Practice, 8(2), 133–142.

Sanders, M. R. (2012). Development, evaluation, and multinational dissemination of the Triple P-Positive Parenting Program. Annual Review of Clinical Psychology, 8, 1–35.

Sanders, M. R., & Murphy-Brennan, M. (2010). Creating conditions for success beyond the professional training environment. Clinical Psychology: Science and Practice, 17, 31–35.

Sanders, M. R., Prior, J., & Ralph, A. (2009). An evaluation of a brief universal seminar series on positive parenting: A feasibility study. Journal of Children’s Services, 4, 4–20.

Sanders, M. R., Baker, S., & Turner, K. M. T. (2012). A randomized controlled trial evaluating the efficacy of Triple P Online with parents of children with early-onset conduct problems. Behaviour Research and Therapy, 50, 675–684.

Schrepferman, L., & Snyder, J. (2002). Coercion: The link between treatment mechanisms in behavioral parent training and risk reduction in child antisocial behavior. Behavior Therapy, 33, 339–359.

Shapiro, C. J., Prinz, R. J., & Sanders, M. R. (2012). Facilitators and barriers to implementation of an evidence-based parenting intervention to prevent child maltreatment: The Triple P-Positive Parenting Program. Child Maltreatment, 17, 86–95. doi:10.1177/1077559511424774.

Sherman, L. W., & Strang, H. (2009). Testing for analysts’ bias in crime prevention experiments: Can we accept Eisner’s one-tailed test? Journal of Experimental Criminology, 5, 185–200. doi:10.1007/s11292-009-9073-9.

Shewhart, W. A. (1931). Economic control of quality of manufactured product. New York: D. Van Nostrand.

Spoth, R., & Redmond, C. (1993). Identifying program preferences through conjoint analysis: Illustrative results from a parent sample. American Journal of Health Promotion, 8, 124–133.

Stein, Z., & Heikkinen, K. (2009). Models, metrics, and measurements in developmental psychology. Integral Review, 5, 4–24.

Torre, M. E., & Fine, M. (2006). Researching and resisting: Democratic policy research by and for youth. In S. Ginwright, J. Cammarota, & P. Noguera (Eds.), Beyond resistance: Youth activism and community change: New democratic possibilities for policy and practice for America’s youth (pp. 269–285). New York: Routledge.

Valentine, J. C., Biglan, A., Boruch, R. F., Castro, F. G., Collins, L. M., Flay, B. R., et al. (2011). Replication in prevention science. Prevention Science, 12, 103–117. doi:10.1007/s11121-011-0217-6.

Webster-Stratton, C. (1998). Preventing conduct problems in Head Start children: Strengthening parenting competencies. Journal of Consulting and Clinical Psychology, 66, 715–730. doi:10.1037/0022-006X.66.5.715.

Weigold, M. F. (2001). Communicating science: A review of the literature. Science Communication, 23, 164–193. doi:10.1177/1075547001023002005.

Whittingham, K., Wee, D., Sanders, M. R., & Boyd, R. (2012). Predictors of psychological adjustment, experienced parenting burden and chronic sorrow symptoms in parents of children with cerebral palsy. Child: Care, Health and Development, 39, 366–373. doi:10.1111/j.1365-2214.2012.01396.x.

Whyte, W. F., Greenwood, D. J., & Lazes, P. (1989). Participatory action research: Through practice to science in social research. American Behavioral Scientist, 32, 513–551.

Winston, F. K., & Jacobsohn, L. (2010). A practical approach for applying best practices in behavioural interventions to injury prevention. Injury Prevention, 16, 107–112. doi:10.1136/ip.2009.021972a.
