Rewarding Provider Performance: Key Concepts, Available Evidence, Special Situations, and Future Directions
R. Adams Dudley, MD, MBA
Institute for Health Policy Studies
University of California, San Francisco
Support: Agency for Healthcare Research and Quality, California HealthCare Foundation, Robert Wood Johnson Foundation


TRANSCRIPT

Page 1: Title Slide

Rewarding Provider Performance: Key Concepts, Available Evidence, Special Situations, and Future Directions
R. Adams Dudley, MD, MBA
Institute for Health Policy Studies
University of California, San Francisco
Support: Agency for Healthcare Research and Quality, California HealthCare Foundation, Robert Wood Johnson Foundation

Page 2: Outline of Talk

Dudley 2005

• Review of obstacles to using incentives (using the example of public reporting)
• Summary of available data
• Addressing the tough decisions
• If we have time, consider the value of outcomes reports

Page 3: Project Overview

Goals:
• Describe employer hospital report cards
• Explore what issues determine success

Qualitative study:
• 11 communities
• 37 semi-structured interviews with hospital and employer coalition representatives
• Coding and analysis using NVivo software

See Mehrotra A, et al. Health Affairs 22(2):60.

Page 4: 11 Communities

Seattle, N Alabama, Detroit, Cleveland, Indianapolis, Orlando, E Tennessee, Memphis, S Central Wisconsin, Maine, Buffalo

Page 5: Summary of Report Cards

• Only 3 report cards begun before 1998
• Majority use mortality and LOS outcomes; patient surveys also common
• Majority use billing data
• 4 of 11 communities: no public release

Page 6: 4 Issues Determining Success

1. Ambiguity of goals
2. Uncertainty on how to measure quality
3. Lack of consensus on how to use data
4. Relationships between local stakeholders

Page 7: Hospitals Skeptical of Employer Goals

Hospitals don’t trust employers; they suspect their primary interest is still cost:

• “An organization that has been a negotiator of cost, first and foremost, that then declares it’s now focused on quality, is a hard sell.”
• “Ultimately, you’re going to contract with me or not contract with me on the basis of cost. Wholly. End of story.”

Ambiguity of Goals

Page 8: Process vs. Outcome Debate

Clinicians: Process measures more useful.
• “We should have moved from outcomes to process measures. Process measures are much more useful to hospitals who want to improve.”

Employers: Outcomes better, process measures unnecessary.
• “People want longer-lasting batteries. Duracell doesn’t stand there with their hands on their hips and say, ‘Tell us how to make longer-lasting batteries.’ That’s the job of Duracell.”

Uncertainty on How to Measure Quality

Page 9: The Case-Mix Adjustment Controversy

Clinicians: Forever skeptical that case-mix adjustment is good enough:
• “[The case-mix adjustment] still explained less than 30 percent of the differences that we saw…”

Employers: We cannot wait for perfect case-mix adjustment:
• “My usual answer to that is ‘OK, let’s make you guys in charge of perfect, I’m in charge of progress. We have to move on with what we have today. When you find perfect, come back, and we’ll change immediately.’ ”

Uncertainty on How to Measure Quality

Page 10: Low Level of Public Interest a Positive Trend?

Low levels of consumer interest, at least initially.

One interviewee felt slow growth is better:
• “Food labeling is the right metaphor. You want some model which gets to one and a half to three percent of the people to pay attention. This gives hospitals time to fix their problems without horrible penalties.… But if they ignore it for five years all of a sudden you’re looking at a three or four percent market share shift.”

Lack of Consensus on How to Use Quality Data

Page 11: Market Factors

• “Market power does not remain constant. Sometimes purchasers are in the ascendancy and at other times, providers are in the ascendancy, like when hospitals consolidate. And that can vary from community to community at a point in time, too.”

Relationships Between Local Stakeholders

Page 12: Key Elements of an Incentive Program

• Measures acceptable to both clinicians and the stakeholders creating the incentives
• Data available in a timely manner at reasonable cost
• Reliable methods to collect and analyze the data
• Incentives that matter to providers

Page 13: CHART: California Hospital Assessment and Reporting Task Force

A collaboration between California hospitals, clinicians, patients, health plans, and purchasers

Supported by the California HealthCare Foundation

Page 14: Participants in CHART

All the stakeholders:
• Hospitals: e.g., HASC, hospital systems, individual hospitals
• Physicians: e.g., California Medical Association
• Consumers/Labor: e.g., Consumers Union, California Labor Federation
• Employers: e.g., PBGH, CalPERS
• Health Plans: e.g., Blue Shield, WellPoint, Kaiser
• Regulators: e.g., JCAHO, OSHPD, NQF
• Government Programs: CMS, Medi-Cal

Page 15: How CHART Might Play Out

[Flow diagram: Clinical measures, IT or other structural measures, and patient experience and satisfaction measures are collected via administrative data, specialized clinical data collection, H-CAHPS “Plus” scores, and surveys with audits. A data aggregator produces one set of scores per hospital and reports to hospitals, to health plans and purchasers, and to the public.]

Page 16: CHART Measures

For public reporting in 2005-06:
• JCAHO core measures for MI, CHF, pneumonia, and surgical infection from chart abstraction
• Maternity measures from administrative data
• Leapfrog data
• Mortality rates for MI, pneumonia, and CABG

Page 17: CHART Measures

For piloting in 2005-06:
• ICU processes (e.g., stress ulcer prophylaxis), mortality, and LOS by chart abstraction
• ICU nosocomial infection rates by infection control personnel
• Decubitus ulcer rates and falls by annual survey

Page 18: Tough Decisions: General Ideas and Our Experience in CHART

• Offered not because we’ve done it correctly in CHART, but as a basis for discussion

Page 19: Tough Decision #1: Collaboration vs. Competition?

• Among health plans
• Among providers
• With legislators and regulators

Page 20: Tough Decision #2: Same Incentives for Everyone?

• Does it make sense to set up incentive programs that are the same for every provider?
– This would be the norm in other industries if providers were your employees, but unusual in many other industries if you were contracting with suppliers.

Page 21: Tough Decision #2: Same Incentives for Everyone? (continued)

• But providers differ in important ways:
– Baseline performance/potential to become a top provider
– Preferred rewards (more patients vs. more $)
– Monopolies and safety-net providers
• But do you want the complexity?

Page 22: Tough Decision #3: Encourage Investment?

• Much of the difficulty we face in starting public reporting or P4P comes from the lack of flexible IT that can cheaply generate performance data.
• Similarly, much QI is best achieved by creating new team approaches to care.
• Should we explicitly pay for these changes, or make the value of these investments an implicit factor in our incentive programs?
• This can be achieved by pay-for-participation, for instance.

Page 23: Tough Decision #4: Moving Beyond HEDIS/JCAHO

• No other measure sets are routinely collected and audited as a current cost of doing business.
• If you want public reporting or P4P on new measures, you must balance data collection and auditing costs against information gained:
– Administrative data involve less data collection cost but equal or greater auditing costs.
– Chart abstraction involves much more expensive data collection but equal or less auditing.

Page 24: Tough Decision #4: Moving Beyond HEDIS/JCAHO (continued)

• If purchasers/policymakers drive the introduction of new quality measurement costs, who pays and how?
• So, who picks the measures?

Page 25: Tough Decision #5: Use Only National Measures, or Local?

• Well, this is easy: national, right?
• Hmmm. Have you ever tried this? Is there any “there” there? Are there agreed-upon, non-proprietary data definitions and benchmarks? Even with the National Quality Forum?
• Maybe local initiatives should be leading national ones?

Page 26: An Example of Collaboration: C-Section Rates in CHART

• Initial measure: total C-section rate (NQF)
• Collaborate/advocate within CHART:
– Some OB-GYNs convinced the group to develop an alternative: the C-section rate among nulliparous women with singleton, vertex, term (NSVT) presentations
• Collaborate with hospitals:
– NSVT is not traditionally coded, so Medical Records personnel need to be trained

Page 27: Tough Decision #6: Use Outcomes Data?

• An especially important issue as sample sizes get small, that is, when you try to move from groups to individual providers in “second generation” incentive programs.
• If we can’t fix the sample size issue, we’ll be forced to use general measures only (e.g., patient experience measures).

Page 28: Outcome Reports

Some providers are concerned about random events causing variation in reported outcomes that could:
• Ruin reputations (if there is public reporting)
• Cause financial harm (if direct financial incentives are based on outcomes)

Page 29: An Analysis of MI Outcomes and Hospital “Grades”

• From California hospital-level risk-adjusted MI mortality data:
– Fairly consistent pattern over 8 years: 10% of hospitals labeled “worse than expected,” 10% “better,” 80% “as expected”
– Processes of care for MI were worse among hospitals with higher mortality and better among those with lower mortality
• From these data, calculate mortality rates for the “worse,” “better,” and “as expected” groups

Page 30: Probability Distribution of Risk-Adjusted Mortality Rate for the Mean Hospital in Each Subgroup

[Chart: overlapping probability distributions of risk-adjusted mortality outcomes for poor-, good-, and superior-quality hospitals and for all hospitals in the model. Subgroup means: poor 17.1%, good 12.2%, superior 8.6%; low trim point 7.6%, high trim point 16.6%.]

Scenario #3: 200 patients per hospital; trim points calculated using a normal distribution around the population mean, 2 tails, each with 2.5% of the distribution contained beyond the trim points.
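The overlap in this scenario can be sketched numerically. The Monte Carlo simulation below takes the subgroup means (17.1%, 12.2%, 8.6%) and trim points (7.6%, 16.6%) from the slide, and assumes each hospital's deaths among 200 patients are independent Bernoulli events at the subgroup's true rate; everything else is an illustrative assumption, not the deck's actual model. It estimates how often a hospital at each subgroup mean would be labeled "better," "as expected," or "worse" on a single year's measurement:

```python
import random

random.seed(0)

N_PATIENTS = 200   # patients per hospital (Scenario #3, from the slide)
LOW_TRIM = 0.076   # labeled "better than expected" below this rate
HIGH_TRIM = 0.166  # labeled "worse than expected" above this rate
TRIALS = 20_000

def classify(true_rate):
    """Label one simulated measurement of a hospital with the given true mortality rate."""
    deaths = sum(random.random() < true_rate for _ in range(N_PATIENTS))
    observed = deaths / N_PATIENTS
    if observed < LOW_TRIM:
        return "better"
    if observed > HIGH_TRIM:
        return "worse"
    return "as expected"

for label, true_rate in [("superior", 0.086), ("good", 0.122), ("poor", 0.171)]:
    counts = {"better": 0, "as expected": 0, "worse": 0}
    for _ in range(TRIALS):
        counts[classify(true_rate)] += 1
    shares = {k: round(v / TRIALS, 3) for k, v in counts.items()}
    print(label, shares)
```

Under these assumptions, a hospital at the "superior" mean is labeled "as expected" most of the time and essentially never "worse," while a hospital at the "poor" mean is flagged "worse" only a little more than half the time: random variation blurs any single-year label.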

Page 31: 3 Groups of Hospitals with Repeated Measurements (3 Years)

[Chart: predictive values of 3-year star scores. For each star score from 3 to 9, the proportion of total hospitals is shown separately for superior-, expected-, and poor-quality hospitals; Scenario #3.]
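The effect of repeated measurement can be sketched the same way. This extension assumes, purely for illustration, that each yearly measurement awards 1 star ("worse"), 2 stars ("as expected"), or 3 stars ("better") against the trim points above, and that three independent years are summed into a 3-9 star score to mirror the chart's axis; the actual star-scoring rule in the analysis may differ:

```python
import random

random.seed(0)

N_PATIENTS = 200
LOW_TRIM, HIGH_TRIM = 0.076, 0.166  # trim points from the slide

def yearly_stars(true_rate):
    """1 star if labeled worse, 3 if better, 2 otherwise, for one simulated year."""
    deaths = sum(random.random() < true_rate for _ in range(N_PATIENTS))
    observed = deaths / N_PATIENTS
    if observed > HIGH_TRIM:
        return 1
    if observed < LOW_TRIM:
        return 3
    return 2

def three_year_score(true_rate):
    """Sum of three independent yearly star scores; ranges from 3 to 9."""
    return sum(yearly_stars(true_rate) for _ in range(3))

TRIALS = 10_000
for label, rate in [("superior", 0.086), ("expected", 0.122), ("poor", 0.171)]:
    scores = [three_year_score(rate) for _ in range(TRIALS)]
    dist = {s: round(scores.count(s) / TRIALS, 3) for s in range(3, 10)}
    print(label, dist)
```

With three years pooled, the score distributions of the three subgroups pull apart: a superior-quality hospital almost never accumulates the low star scores typical of a poor-quality one, which is the "repeating measures reduces the impact of chance" point made in the conclusions.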

Page 32: Outcomes Reports and Random Variation: Conclusions

• Random variation can have an important impact on any single measurement
• Repeating measures reduces the impact of chance
• Provider performance is more likely to align along a spectrum than to be lumped into two groups whose outcomes are quite similar
• Providers on the superior end of the performance spectrum will almost never be labeled poor

Page 33: Conclusions

• Many tough decisions ahead
• Nonetheless, paralysis is undesirable
• Collaborate on the choice of measures
• Everyone is frustrated with the limited (JCAHO and HEDIS) measures; we need to figure out how to fund collecting and auditing new measures
• Consider varying incentives across providers