Don’t Mourn: Organize. Reviving Mental Health Services Research for Healthcare Quality Improvement

Kimberly E. Hoagwood, New York University

The structure, organization, management, and design of the mental health system are changing profoundly as new healthcare policies reshape its configurations. This special issue is a call to action for the mental health services research field. The articles represent an important attempt to identify specific concepts, constructs, and findings from psychosocial treatment research about fidelity and integrity of treatment and align them with healthcare quality. However, the current structure and processes for deriving quality indicators place other demands on the extant research base. These will challenge this migration unless changes are made in leadership around consistent measurement strategies, payment mechanisms to support quality, and attention to technological infrastructure development. The mental health services research field should be proactive. Pediatric issues need special attention, especially as applied to community-based services for children and their families.

Key words: children’s mental health services, dissemination and implementation, quality of care, treatment integrity. [Clin Psychol Sci Prac 20: 120–126, 2013]

Joe Hill’s dying words are purported to have been: “Don’t mourn: Organize.” An activist union organizer for the Wobblies, Hill was persecuted, jailed, and executed by firing squad in 1915; his statement to his followers the night before he died exemplifies turning imminent demise into concerted action.

The specialty mental health system as it has been structured over the past 30 years is slowly dying. One might state it more optimistically and say it is being transformed. But whatever lens one uses, the way in which this system has been organized, managed, financed, and designed as a separate specialty sector is changing profoundly. These changes have been brought about by a series of new national policies, including the Affordable Care Act and the Mental Health Parity Act, as well as expansion of a powerful healthcare establishment, accompanied by the financial quicksands of solo practice.

This special issue is a call to action for the mental health services research field. It takes decades of scientific findings on effective psychosocial treatments and fidelity measurement, and then it reconfigures this work in terms of healthcare quality indicators, benchmarking, and system redesign—all elements of the new world order of health care. It resuscitates and makes relevant a body of important scientific work that might otherwise be peripheralized into an early demise.

It may be helpful to reflect briefly on the context of recent healthcare changes and their implications for children’s mental health services.

BACKGROUND

The Institute of Medicine (IOM) 2001 report Crossing the Quality Chasm: A New Health System for the 21st Century (Committee on Quality of Health Care in America, 2001) initiated national attention to the need for reform of an ailing healthcare system. This and subsequent reports (Committee on Quality of Health Care in America, 2006) outlined a framework and set of constructs for improving healthcare quality, including six components of effective care: safe, effective, timely, efficient, equitable, and patient-centered. Also included in these reports were recommendations for action: reporting of quality indicators on public websites; pay-for-performance programs for hospitals and physicians; and encouragement for agencies and organizations to develop, test, and promote quality measures.

In the 2006 IOM report (Committee on Quality of Health Care in America, 2006), specific barriers to the integration of mental health and primary care were identified. These included (a) fewer objective and standardized metrics for screening and diagnosing mental health and substance use disorders than for general health conditions, (b) the insufficiency of the evidence base on which to ground quality measures, and (c) the absence of strategies for adopting and implementing quality measures.

Quality measurement was given full thrust with the passage of the Affordable Care Act of 2010, which incorporated additional quality initiatives and incentives. It launched a series of quality measurement activities, including some applicable to mental health and substance use disorders. In 2010, a notice in the Federal Register recommended an initial core set of health quality measures for Medicaid-eligible adults for voluntary use by state Medicaid programs. This core set of 51 measures included 11 specifically focused on mental health and substance use disorders.

Under the Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA), healthcare quality measures for children were authorized for development. Their use is to be voluntary in Medicaid and Children’s Health Insurance Programs (CHIP). Led by the Agency for Healthcare Research and Quality (AHRQ), an initial core set of quality measures was submitted, and those and a new set are being refined and tested as part of the Pediatric Quality Measures Program (Zima et al., in press) that includes seven so-called “Centers of Excellence.” The use of the final set of measures to monitor quality of care in children’s health may have traction by virtue of financial incentives that will promote “meaningful use” under the Electronic Health Records Incentive Program.

During this same time frame, the National Quality Forum (NQF) was given federal funding for endorsing measures that could be used to assess child healthcare quality. The NQF led a process to create standard criteria for evaluating consensus standards for child health and mental health. While the approaches in CHIPRA and in NQF are different, the criteria being applied to evaluate the appropriateness of the proposed quality measures are similar, and both raise questions about feasibility, thresholds for evidence, metrics for outcomes, and methodological issues. Of note, there are currently nine unique measures of the quality of child mental health care in CHIPRA and NQF combined (Zima et al., in press). Thus, the recent healthcare policies and financial incentives are yielding rapid development of quality indicators for children who receive treatment in the public sector (Zima et al., in press).

THE POINT OF IT ALL

The important point about this background is this: The emphasis on quality measurement is part of a package of healthcare changes that are fundamentally altering the way in which all health services, including mental health services, will be billed, paid for, and delivered. The reconstruction of the healthcare system refashions health as a large umbrella under which mental health and substance abuse services are subsumed. This is a good thing in that mental health becomes a part of the continuum of health rather than a separate specialty sector disconnected from the larger healthcare system. I believe a cogent argument can be made that this integration may reduce attitudes of stigma, reinforce attention to empirically based practices that yield positive outcomes, and signal a shift toward continuous, coordinated, and person-centered services.

CHALLENGE FOR CHILDREN’S MENTAL HEALTH

The challenge for the field of child and adolescent mental health services is that its knowledge base has been constructed from 30 years of studies focused largely on delineating risk factors for the development of behavioral/emotional problems, on prevention, and on treatment development studies that have followed traditional efficacy-to-effectiveness trajectories; only very recently have studies attended to processes for installing these programs in real-world settings. The emphasis has been largely on development of programs—often complicated, protracted, and expensive ones—with much less attention to their fit for the real world. The yield from this work, while substantive, has not been directly actionable or relevant to the refashioned healthcare system.

In other areas of health, in contrast to mental health, health services studies have accumulated a knowledge base that is more directly positioned for these system changes. Studies of chronic pediatric health conditions such as diabetes, asthma, and cystic fibrosis have identified specific outcomes, measurement systems, and quality indicators derived from empirical work that can be embedded in monitoring systems and used to track healthcare quality. For example, pediatric organ transplantation outcomes are measured via indices that include regularly tracked organ function, longevity, years post-transplant, and, of course, survival rates. Pediatric asthma care is tracked via performance metrics that have been established by commissions that oversee health outcomes (Children’s Asthma Care Performance Measure Set, 2006).

The research base on children’s mental health, on the other hand, has concentrated on studies that have not led directly to the development of these kinds of practical benchmarks. What is important about this special issue is that it takes a different approach: it describes how the substantive body of psychosocial treatment and services research on impact and fidelity can be made relevant for the revamped healthcare system.

SOME HIGHLIGHTS

The editors (Southam-Gerow and McLeod) not only have assembled an outstanding group of articles but have also offered a reconceptualization of psychosocial treatment research for dissemination and implementation work. They (Southam-Gerow & McLeod, 2013) offer three models that are conceptually linked and that together reposition this body of work for practical application in quality improvement. They suggest that one line of work within a dissemination and implementation research agenda is to assess the integrity of psychosocial treatment implementation. One of the shifts that this implies is the definition of treatment integrity in terms of specific practice elements, rather than program elements. They describe treatment integrity with respect to four components—treatment adherence, treatment differentiation, therapist competence, and relational elements (Perepletchikova & Kazdin, 2004, 2005; Waltz, Addis, Koerner, & Jacobson, 1993). This is a critical shift and one that makes the treatment evidence base on psychosocial treatment impact amenable to uptake, relevant for decision makers, and specific enough to be traced back to therapist behaviors and traced forward to client outcomes.

McLeod, Southam-Gerow, Tully, Rodríguez, and Smith (2013) describe the same components of treatment integrity and propose that these indices can also be used in feedback systems and for benchmarking. This again is a very important shift, one that enables research findings that might appear elegant but irrelevant to be used to refine measurement approaches and to develop tools that can improve quality. They describe the distillation and matching model of Chorpita and Daleiden (Chorpita & Daleiden, 2009; Chorpita, Daleiden, & Weisz, 2005; Embry & Biglan, 2008) as a way to redirect adherence toward practice element profiles—that is, the actionable aspects of evidence-based treatments for different clinical disorders and client characteristics (Garland, Hawley, Brookman-Frazee, & Hurlburt, 2008). As opposed to the traditional psychological research trajectory of taking research findings about mediators and peering more closely and minutely into the nuances of each, this approach takes components and asks whether they can be used to gauge quality. It is a subtle but profound shift.
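
A minimal, hypothetical sketch may help picture this kind of practice-element benchmarking. None of it is drawn from the cited studies: the element names, the target profile, the benchmark value, and the scoring rule are all illustrative assumptions.

```python
# Illustrative sketch only: scoring observed sessions against a hypothetical
# practice-element profile and comparing the result with an assumed benchmark.
# Element names, the target profile, and the 0.80 benchmark are invented.

TARGET_PROFILE = {"psychoeducation", "exposure", "cognitive_restructuring", "homework_review"}
ADHERENCE_BENCHMARK = 0.80  # hypothetical benchmark proportion

def session_adherence(observed_elements: set) -> float:
    """Proportion of target practice elements observed in one session."""
    return len(observed_elements & TARGET_PROFILE) / len(TARGET_PROFILE)

def therapist_feedback(sessions: list) -> dict:
    """Average adherence across a therapist's sessions, flagged against the benchmark."""
    scores = [session_adherence(s) for s in sessions]
    mean_adherence = sum(scores) / len(scores) if scores else 0.0
    return {
        "mean_adherence": round(mean_adherence, 2),
        "meets_benchmark": mean_adherence >= ADHERENCE_BENCHMARK,
    }

if __name__ == "__main__":
    example_sessions = [
        {"psychoeducation", "exposure", "homework_review"},
        {"exposure", "cognitive_restructuring", "play"},
    ]
    print(therapist_feedback(example_sessions))
    # {'mean_adherence': 0.62, 'meets_benchmark': False}
```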

Garland and Schoenwald (2013) delve into the important migration of psychosocial treatment research (involving specification of treatment procedures and methods of provider training, as well as clinical supervision) into the arena of quality improvement. They provide an exhaustive review of fidelity methods within psychosocial treatment and catalog the use of these methods. They point out that the most effective fidelity methods are active, experiential instruction (as opposed to reliance on passive didactic methods or review of a manual alone) and ongoing clinical supervision that includes review, based on observational data, of the actual practices a therapist uses with clients. Based on their review, they find that both experiential training and review of observational data are used to implement only a small percentage of treatments (21.6%), and neither is used in about one-third of the studies. Interestingly, use varied widely across treatment modalities, with the most widely used treatments (cognitive-behavioral therapy) having the least inclusion of these methods (4%). However, they also found that three-fourths of the effective methods used to implement treatments had been used within community-based settings and by therapists who reflect the training and background of those in community settings (e.g., therapists with master’s degrees). This speaks to the practical application of empirically based fidelity methods in the messy world of community practice.

Schoenwald, Mehta, Frazier, and Shernoff (2013) use a well-established supervision framework and approach from the decades of work on multisystemic therapy (MST) and adapt it for a real-world, community-based service study called Links. The supervision approach was modeled on the assumptions, structure, process, and content of MST supervision, a well-validated approach. The authors describe the adaptation process, including psychometric evaluation of an instrument to index fidelity to the Links agency supervision process. The detailed discussion of the process to capture the content of the Links case summary notes and to map these processes onto extant adherence and intervention strategies is both creative and extremely constructive for its application to other community-based services.

Hogue, Ozechowski, Robbins, and Waldron (2013) make an important contribution by highlighting how the localization of evidence-based practice (EBP) quality assurance is both doable and necessary for sustained impact. They describe three types of innovation and provide specific examples of how to use these innovations to localize and make practical the adoption of EBPs. The innovations include adaptation of observational fidelity methods for therapist self-report and supervisor observation of EBPs, which is critical for creating feasible, simpler methods that can easily be used in community settings. The other innovations are benchmarking methods for continuous tracking of EBP fidelity strength and development of local clinical expertise grounded in a data system that includes outcome data. These kinds of approaches make the extensive work on treatment fidelity accessible for community-based quality improvement.

Regan, Daleiden, and Chorpita (2013) provide a broad, detailed, and very important framework for integrity measurement that builds on concepts of fidelity but goes well beyond them. The integrity approach is a way of thinking about system redesign with attention to data generation and use, with the goal of managing uncertainty. The authors describe how to use data-driven decision making by comparing observed and expected values across multiple domains, levels of analysis, types of services (episodes and events), and purposes (clinical and administrative). The latter is especially important, as service agencies and providers are increasingly being held to standards that exact both high-quality clinical care and efficient business practices. This article makes a very important contribution because it describes in great detail an approach to decision management based on a review of different types of evidence (literature, theory, history, local comparisons) for different purposes to generate a flexible system that enables decision makers to balance local needs with generalizable knowledge—a visionary approach to be sure.
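
To picture what such observed-versus-expected comparisons might look like in a data system, here is a minimal, hypothetical sketch; the domains, expected values, and tolerance are illustrative assumptions rather than the framework the authors propose.

```python
# Illustrative sketch only: flagging discrepancies between observed and expected
# values for a mix of clinical and administrative domains. The domains, expected
# values, and 15% tolerance below are invented for illustration.

EXPECTED = {
    "sessions_attended_per_month": 3.0,       # clinical, hypothetical target
    "outcome_measure_completion_rate": 0.90,  # clinical, hypothetical target
    "claims_submitted_within_30_days": 0.95,  # administrative, hypothetical target
}

def flag_discrepancies(observed: dict, tolerance: float = 0.15) -> list:
    """List the domains where observed values fall short of expected values
    by more than the given relative tolerance, or where data are missing."""
    flags = []
    for domain, expected_value in EXPECTED.items():
        value = observed.get(domain)
        if value is None:
            flags.append(f"{domain}: no data recorded")
        elif value < expected_value * (1 - tolerance):
            flags.append(f"{domain}: observed {value}, expected about {expected_value}")
    return flags

if __name__ == "__main__":
    for flag in flag_discrepancies({
        "sessions_attended_per_month": 1.5,
        "outcome_measure_completion_rate": 0.88,
    }):
        print(flag)
```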

HOWEVER, THE HARSH REALITIES …

The development of quality metrics and benchmarks within a population-based health system is an exceedingly complex, lengthy, contentious, and inconsistent process. It is a field in great flux and without clear national leadership. It rests on assumptions about the nature of evidence and the legitimate ways to derive quality metrics. It is also heavily dependent upon committees, forums, review groups, and consensus (usually lack thereof) about evidence. Inconsistent standards and absence of leadership are significantly hampering progress despite huge investments of time, expertise, and activity. These realities will challenge the mental health field. There are at least six major challenges.

First, the process of not just developing indicators but gaining approval for them is lengthy, inconsistent, and subject to the whims of scientific fashion. For example, the NQF consensus development process involves nine steps typically taking place over 12–18 months, and this is after the measures themselves have been tested. The steps include the following: (a) call for intent, (b) call for nominations, (c) call for candidate standards, (d) candidate consensus standards review, (e) public and member comment, (f) member voting, (g) Consensus Standards Approval Committee (CSAC) decision, (h) board ratification, and (i) a 30-day appeals period (Zima et al., in press). Currently, fewer than 5% of the NQF’s list of more than 650 vetted indicators specifically relate to care for individuals with mental health and substance use disorders.

Another example: As part of the CHIPRA project funded by AHRQ, our team was asked to develop a set of indicators addressing adolescent depression management. Based on a review of all major guidelines, evidence reviews, and extensive advice from multiple advisory panels including family partners, clinicians, and researchers, we identified a logic model and measurement approach including (a) screening and assessment, (b) treatment options and initiation of treatment, and (c) symptom monitoring, treatment course, and remission. Testing this set of indicators will take at least a year and will involve selection of sites, chart audits, development of specifications for analysis of electronic health record (EHR) data, and potentially Medicaid claims analyses as well (Scholle, Sampsel, Davis, & Schor, 2009; Zima et al., in press). The actual indicators will be specific to one diagnostic condition for one group of youth (adolescents). And of course, there is no guarantee that these indicators will be approved or adopted.
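
To give a sense of what specifying and testing one such indicator against EHR or claims data entails, here is a minimal, hypothetical sketch; the indicator definition (documented symptom monitoring within 90 days of a new depression diagnosis), the field names, and the records are illustrative assumptions, not the CHIPRA measure specifications.

```python
# Illustrative sketch only: computing a hypothetical quality indicator, the share
# of adolescents with a new depression diagnosis who have a documented
# symptom-monitoring contact within 90 days. All records below are invented.
from datetime import date

records = [
    {"id": 1, "diagnosis_date": date(2012, 3, 1), "monitoring_dates": [date(2012, 4, 15)]},
    {"id": 2, "diagnosis_date": date(2012, 5, 10), "monitoring_dates": []},
    {"id": 3, "diagnosis_date": date(2012, 6, 2), "monitoring_dates": [date(2012, 11, 1)]},
]

def monitored_within(record: dict, window_days: int = 90) -> bool:
    """True if any monitoring contact falls within the window after diagnosis."""
    return any(
        0 <= (d - record["diagnosis_date"]).days <= window_days
        for d in record["monitoring_dates"]
    )

def indicator_rate(population: list) -> float:
    """Proportion of the denominator population meeting the numerator criterion."""
    return sum(monitored_within(r) for r in population) / len(population)

if __name__ == "__main__":
    print(f"Symptom-monitoring indicator: {indicator_rate(records):.0%}")  # 33%
```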

A second challenge is that the evidence deemed suitable for quality metrics must have robust outcomes clearly linked to well-specified care processes. So while, as the authors in this special issue argue, implementation processes are critical to disseminability, the implementation process actually receives less immediate attention in the quality improvement development world than does a clear link between the proposed indicator and outcomes. In some ways, quality improvement is returning us to our roots: the emphasis on outcomes is foremost, and only then is there attention to processes for getting to outcomes. Our work in children’s mental health has amassed evidence about treatment outcomes, and the distribution and types of outcomes have changed dramatically in the past 15 years (Hoagwood et al., 2012), although less so for services and studies of service context. Unfortunately, the evidence base on the processes for getting to outcomes, especially for children, adolescents, and their families (care coordination, patient activation), is limited. So the mental health field has an uphill battle to generate quickly the kind of data needed to install processes linked to outcomes into the revised health system.

Third, the standards of evidence used to vet new proposed indicators rest on a traditional, linear, and hierarchical view of EBPs. While the articles in this special issue—especially those by Regan et al. and McLeod et al.—are pushing new approaches, they are likely to threaten the traditions in adult medical care that are currently driving the processes by which quality indicators are being developed, tested, and approved.

Fourth, indicators being developed for CHIPRA and NQF are largely diagnosis specific. While there are some that attend more to care processes, in general they are linked to specific diagnosable conditions. In the mental health field, particularly children’s mental health, diagnoses are changeable, rarely occur singly, and are often suspect because they are driven by payor protocols rather than by evidence-based assessment practices.

Fifth, developing the workforce that can be held to the quality metrics recommended by the authors in this special issue will require substantial investment of training dollars by the health system. Under new fiscal models in several states, including New York, training dollars will not necessarily be available for this kind of retooling of clinical and supervisory staff. Thus, there are significant issues of feasibility that will need to be addressed.

Sixth, implementing the kinds of changes described by the authors in this special issue will depend on a digital technical infrastructure; this is a sine qua non for making these kinds of quality improvements. The mental health system, and in particular the children’s system, is far behind the rest of the healthcare field in having this capacity. The incentives that exist to jump-start this process in other areas of health care are not being applied to children’s mental health. Community clinics in some parts of New York State, a progressive state in many respects, still use modems.

DON’T MOURN: ORGANIZE

There are several specific actions that will promote the development of quality indicators in children’s mental health. First, the IOM or a federal agency should develop a national quality measurement strategy that will provide consistency in standards and clearly defined parameters for what constitutes “evidence,” taking into account broader definitions (Kravitz, Duan, & Braslow, 2004; West et al., 2008) and reflecting the differences between pediatric populations and adult populations. A framework is needed to provide a clear path for the development of an empirical base that takes into account outcomes that expand beyond symptoms to include family, workplace, and local contexts (Hoagwood et al., 2012). Currently, many different agencies, committees, and competing groups are developing standards without a coordinated strategy.

Second, there is a need for significant payment reform to help support these quality improvement efforts for mental health. Specifically:

• Payment for children to begin therapy without a specific diagnosis.
• Payment for screening for postpartum depression in well-child visits in the first two years of life.
• Payment to support team-based care.
• Payment for parent training in behavior management.
• Payment for collateral services, including physician attendance at team meetings with families.
• Payment for care coordination with school and other agencies.

Third, there is a need to amp up attention to digital technologies to support mental health service delivery. Both the federal government and state governments should:

• Develop guidance on IT exchange of information (Health Information Exchanges) and conflict resolution for children.
• Resolve false confidentiality barriers.
• Develop guidance documents on the use of online behavioral health services.
• Create consistent national telemedicine licensing and payment rules.

CONCLUSION

These articles represent an important attempt to identify specific constructs, concepts, theories, and research findings from psychosocial treatment studies that can be used to promote system design in mental health services and align it with quality improvement efforts in health care. This is a redirection of the knowledge base toward an important and practical end. The challenges, however, are formidable, given the current context of quality improvement efforts and the tendencies to sideline the unique issues of children with mental health needs. As a field we have to be proactive in distilling extant research findings toward the goal of quality performance and system design and in developing a new and more relevant research agenda that takes us well beyond evidence-based programs and toward a population-based approach that follows the utilitarian principle of the greatest good for the greatest number.

REFERENCES

Children’s Asthma Care Performance Measure Set. (2006, February). Retrieved from http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/Childrens+Asthma+Care+%28CAC%29+Performance+Measure+Set.htm

Chorpita, B. F., & Daleiden, E. L. (2009). Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of Consulting and Clinical Psychology, 77(3), 566–579.

Chorpita, B. F., Daleiden, E. L., & Weisz, J. R. (2005). Identifying and selecting the common elements of evidence based interventions: A distillation and matching model. Mental Health Services Research, 7(1), 5–20.

Committee on Quality of Health Care in America, Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies Press.

Committee on Quality of Health Care in America, Institute of Medicine. (2006). Improving the quality of health care for mental and substance-use conditions: Quality chasm series. Washington, DC: National Academies Press.

Embry, D. D., & Biglan, A. (2008). Evidence-based kernels: Fundamental units of behavioral influence. Clinical Child and Family Psychology Review, 11(3), 75–113.

Garland, A. F., Hawley, K. M., Brookman-Frazee, L., & Hurlburt, M. S. (2008). Identifying common elements of evidence-based psychosocial treatments for children’s disruptive behavior problems. Journal of the American Academy of Child & Adolescent Psychiatry, 47(5), 505–514.

Garland, A., & Schoenwald, S. K. (2013). Use of effective and efficient quality control methods to implement psychosocial interventions. Clinical Psychology: Science and Practice, 20(1), 33–43.

Hoagwood, K., Jensen, P. S., Acri, M. C., Olin, S. S., Lewandowski, E., & Herman, R. J. (2012). Outcome domains in child mental health research since 1996: Have they changed and why does it matter? Journal of the American Academy of Child & Adolescent Psychiatry, 51(12), 1241–1260. doi:10.1016/j.jaac.2012.09.004

Hogue, A., Ozechowski, T. J., Robbins, M. S., & Waldron, H. B. (2013). Making fidelity an intramural game: Localizing quality assurance procedures to promote sustainability of evidence-based practices in usual care. Clinical Psychology: Science and Practice, 20(1), 60–77.

Kravitz, R. L., Duan, N., & Braslow, J. (2004). Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. The Milbank Quarterly, 82, 661–687. doi:10.1111/j.0887-378X.2004.00327.x

McLeod, B. D., Southam-Gerow, M. A., Tully, C. B., Rodríguez, A., & Smith, M. M. (2013). Making a case for treatment integrity as a psychosocial treatment quality indicator for youth mental health care. Clinical Psychology: Science and Practice, 20(1), 14–32.

Perepletchikova, F., & Kazdin, A. E. (2004). Assessment of parenting practices related to conduct problems: Development and validation of the Management of Children’s Behavior Scale. Journal of Child and Family Studies, 13, 385–403.

Perepletchikova, F., & Kazdin, A. E. (2005). Treatment integrity and therapeutic change: Issues and research recommendations. Clinical Psychology: Science and Practice, 12(4), 365–383. doi:10.1093/clipsy.bpi045

Regan, J., Daleiden, E. L., & Chorpita, B. F. (2013). Integrity in mental health systems: An expanded framework for managing uncertainty in clinical care. Clinical Psychology: Science and Practice, 20(1), 78–98.

Schoenwald, S. K., Mehta, T. G., Frazier, S. L., & Shernoff, E. S. (2013). Clinical supervision in effectiveness and implementation research. Clinical Psychology: Science and Practice, 20(1), 44–59.

Scholle, S. H., Sampsel, S. L., Davis, N. E. P., & Schor, E. (2009). Quality of child health: Expanding the scope and flexibility of measurement approaches. Issue Brief (Commonwealth Fund), 54, 1–10.

Southam-Gerow, M. A., & McLeod, B. D. (2013). Advances in applying treatment integrity research for dissemination and implementation science: Introduction to special issue. Clinical Psychology: Science and Practice, 20(1), 1–13.

Waltz, J., Addis, M. E., Koerner, K., & Jacobson, N. S. (1993). Testing the integrity of a psychotherapy protocol: Assessment of adherence and competence. Journal of Consulting and Clinical Psychology, 61(4), 620–630. doi:10.1037/0022-006X.61.4.620

West, S. G., Duan, N., Pequegnat, W., Gaist, P., Des Jarlais, D. C., Holtgrave, D., … Mullen, P. D. (2008). Alternatives to the randomized controlled trial. American Journal of Public Health, 98(8), 1359–1366. doi:10.2105/AJPH.2007.124446

Zima, B. T., Murphy, J. M., Scholle, S. H., Hoagwood, K., Sachdeva, R. C., Mangione-Smith, R., … Jellinek, M. (in press). National quality measures for child mental health care: Background, progress and next steps. Pediatrics.

Received December 11, 2012; accepted December 12, 2012.
