Monitoring hospital performance: Development and validation of the hospital and critical care outcome prediction equation (HOPE & COPE) models for adult acute hospitals

Chief investigator: Graeme Duke
Data analyst & report author: Anna Barker

The Northern Clinical Research Centre

October 2008

CONTENTS

List of Tables
List of Figures
Abbreviations
1. Executive summary
2. Background
   2.1 Hospital performance monitoring
   2.2 Prior research by the research team
   2.3 Comorbidity indexes
3. Methods
   3.1 Statistical analyses
   3.2 Project organisation and data collection
4. Cohort 1: HOPE Victorian Major hospitals
   4.1 Abstract
   4.2 Development results
   4.3 Validation results
      4.3.1 Model 4-2: Demographic, admission and Modified WHO principal diagnosis
      4.3.2 Model 20-2: Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis
5. Cohort 2: COPE Victorian Intensive care hospital admissions
   5.1 Abstract
   5.2 Development results
   5.3 COPE model validation results
6. Summary
   6.1 Use of the WHO based principal diagnosis classification grouping
   6.2 Inclusion of comorbidity information
   6.3 Future directions
Appendix 1: HOPE phase 3 Principal diagnosis groups
Appendix 2: WHO cause of death principal diagnosis groups
Appendix 3: Modified WHO principal diagnosis groups

List of Tables

Table 1: Inpatient episodes and deaths by cohort
Table 3: Univariate regression analysis results for age variables 04-05 development data—M-HOPE
Table 4: 23 Tertiary, metropolitan and regional ICU hospitals Victorian hospitals N=294,767 (inpatient episodes)
Table 5: Model 4-2 Demographic, admission and Modified WHO principal diagnosis
Table 5: Model 19-2 Demographic, admission, Charlson comorbidities and Modified WHO principal diagnosis
Table 7: Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis
Table 7: Validation data 1-Model 4-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital group
Table 8: Validation data 2-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital group
Table 9: Validation data 1-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital group (deaths >200)
Table 10: Validation data 2-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital group (deaths >200)
Table 11: Validation data 1-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital group (deaths <200)
Table 12: Validation data 2-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital group (deaths <200)
Table 14: Validation data 1-Model 20-2 Demographic, admission and Modified WHO principal diagnosis by hospital group
Table 15: Validation data 2-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital group
Table 16: Validation data 1-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital group (deaths >200)
Table 17: Validation data 2-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital group (deaths >200)
Table 18: Validation data 1-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital group (deaths <200)
Table 18: Validation data 2-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital group (deaths <200)
Table 19: Univariate regression analysis results for age variables 04-05 development data—COPE
Table 21: 23 Tertiary, metropolitan and regional ICU hospitals Victorian hospitals (ICU episodes)
Table 21: Model 4-3 Demographic, admission, cardiac surgery, mechanical ventilation and Modified WHO principal diagnosis variables
Table 22: Validation data 1-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital group

Table 23: Validation data 2-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital group
Table 24: Validation data 1-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital group (deaths >200)
Table 25: Validation data 2-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital group (deaths >200)
Table 26: Validation data 1-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital group (deaths <200)
Table 28: Validation data 2-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital group (deaths <200)

List of Figures

Figure 1: Data set and cohort flow chart
Figure 2: Plot of mortality rates by age—HOPE
Figure 3: Plot of mortality rates by age—COPE

Abbreviations

AUC Area under the receiver operating characteristic curve

CI Confidence interval

COPE Critical care Outcome Prediction Equation

DHS Department of Human Services

DRG Diagnosis related groups

H-L Hosmer-Lemeshow statistic

HOPE Hospital outcome prediction equation

ICD-10AM Australian modification of the International Classification of Diseases, Version 10

ICU Intensive care unit

M_ICU Metropolitan hospital with on-site ICU

N Study sample size (in-patient episodes)

NCRC Northern Clinical Research Centre

OR Odds ratio

P Probability value

R2 McFadden's pseudo R-squared

RACF Residential aged care facility

R_ICU Regional hospital with on-site ICU

ROC Receiver operating characteristic curve

SD Standard deviation

SE Standard error

SMR Standardised mortality ratio

T Tertiary referral hospital

x2 Chi square

VAED Victorian admitted episodes data set

WHO World Health Organisation

1. Executive summary

For the majority of patients admitted to hospital the preferred and expected outcome is a return to health and independence. Whilst it is not possible to guarantee the survival of all patients, it is important to know that a minimum standard of care is provided by the health system and that mortality does not exceed the benchmark. The standard of hospital care has a significant influence on patient survival [1]. Several high-profile examples have recently occurred in Australia highlighting the need to monitor hospital performance.

Mortality prediction systems facilitate benchmarking of outcomes across hospitals, providing a quality control process by which hospitals that are under-performing or over-performing relative to their peers can be identified and special cause review prompted [2]. The goal of comparative benchmarking is to facilitate improvements in patient care, and measuring quality of care is a key component of improving the quality of care [3].

Mortality prediction models provide an estimate of the predicted number of deaths based on predetermined variables, such as patient age or diagnosis. The predicted number is then compared to the actual number of deaths; if a significant difference between the predicted and actual deaths exists, it suggests that clinical performance is deviating from the benchmark.

The Northern Clinical Research Centre has developed two mortality prediction models. The Hospital Outcome Prediction Equation (HOPE) model predicts in-hospital mortality for adults admitted to major Victorian public hospitals. The model incorporates six basic variables: demographic (age and gender), admission (emergency, inter-hospital transfer and residential aged care facility (RACF)) and diagnostic (ICD-10) variables.

The Critical care Outcome Prediction Equation (COPE) model predicts in-hospital mortality for adults admitted to an intensive care unit (ICU) in major Victorian public hospitals. The COPE model includes demographic (age), admission (emergency, inter-hospital transfer, RACF), diagnostic (ICD-10) and clinical (mechanical ventilation use and cardiac surgery) variables.

Importantly, both models are based on factors present at, or prior to, admission. This renders them less likely to be influenced by clinical interventions. In addition, all the variables except for age are categorical (dichotomous). This makes the data easier to collect and less prone to error.

In light of recent international research which has identified that comorbidity indices provide a useful means of predicting the people most at risk of dying in hospital [4, 5], the aim of the current study was to develop and validate modified HOPE and COPE models which include comorbidity information. In addition, a World Health Organisation (WHO) based system for grouping principal diagnoses was explored in an effort to standardise the principal diagnosis classification process.

The HOPE model was developed on data from 294,767 adult (>17 years) inpatient episodes from 23 major Victorian public hospitals over a 12-month period (July 2004–June 2005). A HOPE model that combined age, gender, admission characteristics and principal diagnosis variables was found to have excellent discrimination and calibration. External validation on two samples (validation 1 data: 303,247 episodes between July 2005 and June 2006, and validation 2: 311,541 episodes between July 2006 and June 2007) confirmed that the model's discrimination and calibration were stable.

We found that the addition of comorbidity variables to this model offered no significant improvement in predictive performance. There are a number of possible explanations for this finding. The HOPE model which included the WHO based principal diagnosis classification system had comparable predictive performance to the original HOPE model, which had been based on a locally derived system of diagnosis classification. The new HOPE model was more parsimonious than the original model and so offers greater benefits with respect to clinical utility, statistical stability and generality.

The COPE model was developed on data from 17,405 adult (>17 years) inpatient ICU episodes from 23 major Victorian public hospitals over the same 12-month period (July 2004–June 2005). A COPE model that combined age, gender, admission characteristics, use of mechanical ventilation, cardiac surgery as the primary procedure and principal diagnosis was found to have excellent discrimination and calibration. External validation confirmed that the model's discrimination and calibration were stable in two validation cohorts (validation data 1: 17,309 episodes between July 2005 and June 2006, and validation data 2: 17,522 episodes between July 2006 and June 2007). Similar to the HOPE model, the newly developed COPE model utilising the WHO based principal diagnosis classification was found to have comparable predictive performance and was more parsimonious than the original COPE model, which included the locally derived diagnosis classification, again affording greater clinical utility and statistical stability.

The findings of this research indicate that routinely collected administrative data can be used to predict in-hospital mortality risk with high levels of discrimination and calibration. Model utility has been improved through the use of a WHO based diagnosis classification system. The inclusion of comorbidity information did not improve the models' predictive performance and therefore there appears to be no added value in including it in the HOPE and COPE models. The developed models provide a practical method for monitoring hospital performance.

2. Background

2.1 Hospital performance monitoring

Mortality prediction systems have several applications, such as clinical research and monitoring clinical performance [6, 7]. Mortality outcome prediction models aim to predict death during a hospital admission on the basis of a given set of variables collected at admission. Mortality prediction systems have been advocated as a means of evaluating the performance of hospitals [4, 6, 8, 9].

To date the majority of research on monitoring quality outcomes through the use of in-patient mortality prediction models has been limited to specific clinical subgroups, such as those admitted with an acute myocardial infarction (AMI) [3, 7, 8], pneumonia [3] and heart failure [3], those who received specific procedures such as coronary artery bypass grafting (CABG) [8, 10], repair of abdominal aortic aneurysms [8] and colorectal excision for cancer, and clinical sub-populations such as intensive care patients [9]. Models which incorporate all patients admitted to hospital have not been developed in Australia. As such, the monitoring of hospital performance has been limited to specific clinical sub-groups which constitute a relatively small proportion of the greater hospital in-patient population, and may therefore be considered a relatively limited method by which hospital performance can be measured and benchmarked.

It is widely recognised that for valid and meaningful performance benchmarking based on observed mortality rates, risk adjustment is essential, as risk profiles are rarely uniform across hospitals due to case-mix variation [2, 8]. Such differences may be the source of differences in mortality rates, and therefore adjustment for these factors is required if meaningful comparisons are to be made.

2.2 Prior research by the research team

The pilot version of the Hospital Outcome Prediction Equation model (HOPE Phase 1) was developed and validated in 2005 by Dr. Graeme Duke, using the Victorian Admitted Episode Dataset (VAED) from The Northern Hospital. During 2006 and 2007 the Northern Clinical Research Centre (NCRC), led by Dr. Graeme Duke, applied the HOPE methodology to the state-wide VAED from all acute public hospitals (HOPE phases 2–3). This model was validated in the 23 major adult public hospitals.

The aim of the HOPE model is to provide one method for monitoring hospital performance. The HOPE model developed in 2007 included demographic (age and gender), admission (emergency, inter-hospital transfer and residential aged care facility (RACF)) and diagnostic (ICD-10) variables. The final validation of this model, in 384,489 patient episodes from 23 major Victorian public hospitals, found the model had high calibration (SMR 0.98–1.02) and discrimination (AUC 0.87–0.88).

The Critical care Outcome Prediction Equation (COPE Phase 1) was developed and validated in 2005 by Dr. Graeme Duke, in a similar way to the HOPE model, using VAED data from The Northern Hospital. The COPE model is methodologically similar to the HOPE model but is specific to those hospital separations that include some period of time in an intensive care unit (ICU). The COPE model is therefore specific to this subgroup of the HOPE model's population. Since the outcome of critically ill patients is more likely to be influenced by changes in the quality of hospital care, the COPE model complements the HOPE model and provides additional insight into the standard of hospital care.

The COPE model was refined in 2007 and included demographic (age), admission (emergency, inter-hospital transfer, RACF), diagnostic (ICD-10) and clinical (mechanical ventilation use and cardiac surgery) variables. The final validation, in a dataset of 17,880 patient episodes from all 23 major Victorian public hospitals with on-site intensive care facilities, found the model had high calibration (SMR 1.00–1.01) and discrimination (AUC 0.83–0.84). The COPE model compared favourably to the APACHE-III model (SMR 0.83–0.86; AUC 0.87–0.88), which is the current "gold standard" used by the majority of Victorian and Australian ICUs.

In light of recent international research which identified that comorbidity indexes provide a useful means of predicting the people most at risk of dying in hospital [4, 5], the aim of the current study was to modify and validate HOPE and COPE models to include comorbidity information. In addition, an international WHO-based (ICD-10) system for grouping diagnoses was also investigated.

2.3 Comorbidity indexes

Comorbidities are defined as coexisting medical conditions that are distinct from the primary condition under investigation [11]. Over recent decades several comorbidity indices have been devised to provide a measure of the severity of comorbidities as a numeric score [4, 11-14]. The most commonly used indexes are those developed by Charlson [12] and Elixhauser [14].

The Charlson index is the most common index used to control for comorbidity in health outcomes studies. The original Charlson index was developed for use with medical records and consisted of 19 different diseases weighted according to disease severity as 1, 2, 3, or 6. The index has since been adapted to include 17 comorbidities with weights based on disease burden, yielding a total maximum score of 37.

A more recent comorbidity index is that from Elixhauser [14]. The Elixhauser index measures the effect of 30 different comorbid conditions. The index distinguishes comorbidities from complications by considering only secondary diagnoses unrelated to the principal diagnosis through the use of diagnosis related groups (DRGs). For example, a patient with a claim for congestive heart failure would have this condition coded as a comorbidity only if the medical record did not contain a DRG for cardiac disease.

The Elixhauser index was run first using pre-period hospital claims alone and then using pre-period hospital and physician claims. Although DRGs are not available within physician claims, it was thought that many comorbid conditions would be missed if these data were not included. The Elixhauser score is calculated as the sum of comorbid conditions present. It is unweighted, yielding a maximum total score of 30. Both the Charlson and Elixhauser index scores have previously been found to be strongly associated with inpatient death [5, 15].
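To illustrate the mechanics of such a weighted index, the following sketch (Python) shows how a Charlson-style score could be derived from an episode's ICD-10 codes: each code is matched to a condition by prefix, each condition is counted once, and the condition weights are summed. The condition map shown is a deliberately tiny, hypothetical subset used only for illustration, not the published coding algorithm applied in this study.

# Illustrative Charlson-style scoring. The prefix -> (condition, weight) map is a small
# hypothetical subset for demonstration only; the study used previously published
# ICD-10 coding algorithms [16].
CHARLSON_MAP = {
    "I21": ("myocardial_infarction", 1),
    "I50": ("congestive_heart_failure", 1),
    "J44": ("chronic_pulmonary_disease", 1),
    "E10": ("diabetes", 1),
    "C34": ("malignancy", 2),
    "C78": ("metastatic_solid_tumour", 6),
}

def charlson_score(icd10_codes):
    """Sum the weights of distinct Charlson conditions found in a list of ICD-10 codes."""
    conditions = {}
    for code in icd10_codes:
        for prefix, (condition, weight) in CHARLSON_MAP.items():
            if code.startswith(prefix):
                conditions[condition] = weight   # each condition is counted once
    return sum(conditions.values())

# Example episode: AMI, heart failure and metastatic lung cancer -> 1 + 1 + 2 + 6 = 10
print(charlson_score(["I21.0", "I50.9", "C34.1", "C78.0"]))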

3. Methods

3.1 Statistical analyses

All analyses were performed using Stata software (Version 10: Intercooled; StataCorp, Texas, www.stata.com).

Aims

The objective of this study was to develop and validate two models to predict in-hospital mortality in:

1. Adult inpatients admitted to major Victorian public hospitals (HOPE model).
2. Adult inpatients admitted to intensive care units in major Victorian public hospitals (COPE model).

The primary aims of this study were to:

1. Develop and validate a HOPE model which includes demographic, admission, principal diagnosis and comorbidity variables, and to determine if the inclusion of comorbidity information and a WHO based principal diagnosis classification improved the predictive performance of the previously developed HOPE model.
2. Develop and validate a COPE model which includes demographic, admission, principal diagnosis, cardiac surgery and mechanical ventilation variables, and to determine if the use of a WHO based principal diagnosis classification improved the predictive performance of the previously developed COPE model.
3. Validate the HOPE and COPE models in independent data sets to ascertain the models' generality and stability.
4. Validate the HOPE and COPE models developed from all major Victorian public hospitals in individual hospitals, to ascertain the models' generality and stability at the individual hospital level.

It is anticipated that if the models are found to be predictive in individual hospitals they will provide, amongst other applications, a mechanism by which hospital performance can be monitored and benchmarked across Victoria through the use of statistical process control charts.

3.2 Project organisation and data collection

The HOPE and COPE Phase 4 project was conducted by the Northern Clinical Research Centre. The project was funded by, and conducted in cooperation with, the Victorian Department of Human Services (DHS). Data were derived from the VAED and obtained from DHS under a confidentiality agreement. De-identified data on adult (>17 years) inpatient episodes for all patients consecutively admitted to Victorian public hospitals between 01 July 2004 and 30 June 2007 were obtained. Data included demographic (age, gender), admission (admission type and source), diagnostic (ICD-10 coded diagnoses), clinical (procedures performed, use of mechanical ventilation, admission to intensive care) and discharge (discharge date and separation mode) variables.

Participants and setting

Data were separated into development and validation data sets. Episodes for adults discharged between 01 July 2004 and 30 June 2005 were used as the model development dataset. Episodes for adults discharged between 01 July 2005 and 30 June 2006 (validation 1 dataset) and between 01 July 2006 and 30 June 2007 (validation 2 dataset) were used as the model validation data.
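As a minimal sketch of this split (Python/pandas, with hypothetical file and column names rather than the actual VAED layout), the development and validation cohorts could be derived from discharge dates, and the COPE subset from ICU hours, as follows.

import pandas as pd

# Hypothetical episode-level extract; the real VAED field names and layout differ.
episodes = pd.read_csv("vaed_episodes.csv", parse_dates=["discharge_date"])

def cohort(df, start, end):
    """Episodes discharged within [start, end], inclusive."""
    mask = (df["discharge_date"] >= start) & (df["discharge_date"] <= end)
    return df[mask]

development  = cohort(episodes, "2004-07-01", "2005-06-30")   # model development
validation_1 = cohort(episodes, "2005-07-01", "2006-06-30")   # external validation 1
validation_2 = cohort(episodes, "2006-07-01", "2007-06-30")   # external validation 2

# COPE cohort: any period of time in ICU during the episode ("ICU Hours" > 0)
cope_development = development[development["icu_hours"] > 0]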

The HOPE cohort included all adult in-patient episodes from the 23 major (tertiary, metropolitan and regional with on-site intensive care) public hospitals. These data were used to develop and validate the HOPE model.

The COPE cohort included all adult in-patient episodes associated with any period of time in an intensive care unit of the same hospital, defined as "ICU Hours" greater than zero in the VAED. These data were used to develop and validate the COPE model.

Table 1 shows the number of inpatient episodes and deaths included in each analysis cohort, and Figure 1 the data flow chart.

Table 1: Inpatient episodes and deaths by cohort

                 Development            Validation 1           Validation 2
                 July 2004–June 2005    July 2005–June 2006    July 2006–June 2007
HOPE
  Episodes       294,767                303,247                311,541
  Deaths         7,196                  7,280                  7,324
  Mortality      2.44%                  2.40%                  2.35%
COPE
  Episodes       17,405                 17,309                 17,522
  Deaths         2,054                  2,091                  2,155
  Mortality      11.80%                 12.08%                 12.30%

Figure 1: Data set and cohort flow chart

[Flow chart: for each period (Development: July 2004–June 2005; Validation 1: July 2005–June 2006; Validation 2: July 2006–June 2007), all major hospital adult inpatient episodes form the HOPE cohort and all major hospital adult ICU episodes form the COPE cohort. Note: Major = tertiary, metropolitan and regional ICU hospitals.]

The demographic characteristics of the study cohorts are presented in Table 2.

Table 2: Development sample population demographics and characteristics by cohort

                                                                    HOPE (N=294,767)       COPE (N=17,405)
                                                                    Mean       SD          Mean       SD
Age (years)                                                         56.76      21.06       61.63      17.56
                                                                    Freq.      %           Freq.      %
Deaths                                                              7,196      2.44        2,054      11.80
Admission source
  Transfer from mental health residential facility                  26         0.01        —          —
  Admission from private residence                                  277,899    94.28       14,894     85.57
  Transfer from RACF                                                1,787      0.61        36         0.21
  Statistical admission                                             998        0.34        100        0.57
  Transfer from acute/extended care/rehabilitation/geriatric centre 14,057     4.77        2,375      13.65
Admission type
  Emergency admission                                               172,012    58.36       9,763      56.09
  Planned admission                                                 51,394     17.44       3,981      22.87
  Maternity                                                         21,082     7.15        65         0.37
  Other emergency admission                                         21,397     7.26        2,170      12.47
  Statistical admission                                             998        0.34        100        0.57
  Other planned admission                                           27,884     9.46        1,326      7.62
Gender
  Male                                                              141,372    47.96       7,027      40.37
  Female                                                            153,395    52.04       10,378     59.63
Hospital type
  Major metropolitan acute hospitals with on-site ICU               106,925    36.27       3,665      21.06
  Major regional base hospitals with on-site ICU                    61,751     20.95       5,089      29.24
  Tertiary referral metropolitan acute hospitals                    126,091    42.78       8,651      49.70

Note: SD = standard deviation; Freq. = frequency; N = number of inpatient episodes.

Predictor variables

The primary outcome for all analyses was death during the index episode (VAED separation mode code = "D"). The candidate variables for inclusion in the model included demographic (age and gender), diagnostic (ICD-10 coded principal diagnosis, Elixhauser and Charlson comorbidity index scores and individual comorbidities) and admission (emergency admission, inter-hospital transfer and transfer from a residential aged care facility) details. Cardiac surgery and mechanical ventilation were also candidate variables for inclusion in the COPE model.

The principal diagnosis is defined in the VAED as "the diagnosis established after study to be chiefly responsible for occasioning the patient's episode of care in hospital or attendance at the health care facility." The first ICD-10 diagnostic field of the VAED is reserved for the "principal diagnosis" (VAED prefix code = P, "primary diagnosis"). One principal diagnosis was recorded for each patient episode. Three methods for grouping principal diagnoses were used in the analysis:

1. The original diagnosis groupings from the HOPE 2007 analysis (Appendix 1);
2. The WHO diagnosis grouping for cause of death (Appendix 2); and
3. A modified WHO diagnosis code grouping for cause of death (Appendix 3).

The modified WHO diagnosis grouping was created by removing diagnosis groups occurring infrequently in the Victorian hospital population, by collapsing diagnosis groups with similar mortality risks, and by expanding diagnosis groups where risks were found to vary within the group.

Comorbid conditions are optional fields in the VAED and can be entered into any of the forty available diagnostic fields except the first (which is reserved for the "principal diagnosis"). Comorbid diagnoses are given the prefix A, P or C. Prefix "A" implies the comorbid condition was present on admission but did not necessarily require alteration of therapy. Prefix "P" implies the condition required diagnostic or therapeutic intervention. Prefix "C" implies a complication that arose after admission and required alteration of therapy. Comorbidities with prefix "A" were used to calculate the comorbidity information; "P" and "C" prefixed comorbidities were excluded. The explanation for this is given below.

In order for a comorbidity to be coded as an additional diagnosis it needs to meet at least one of the following criteria:

(a) commencement, alteration or adjustment of therapeutic treatment;
(b) the need for diagnostic procedures;
(c) increased clinical care and/or monitoring.

Comorbid diagnoses that are present on admission but do not fulfil any of the above three criteria are optional inclusions in the VAED. This means that the VAED does not necessarily include all comorbid conditions present for each patient. This has important implications for their inclusion in predictive models.

Serious or severe comorbid conditions often increase the risk of death and are included in many other predictive models for this reason. If comorbid conditions are not reported in all patients, this may potentially introduce bias if these variables are included in a predictive model. If complications or comorbid conditions requiring therapeutic intervention are included, this may also introduce bias into the model. This bias is more important if the predictive model is to be used to monitor quality of care or clinical performance. The inclusion of complications and comorbidities requiring treatment is likely to improve the performance of a model used solely to predict mortality, but it will impair its ability to monitor quality of care. A simple example may assist in explaining this:

    Mr Smith has COPD and is admitted to Hospital A with an acute myocardial infarct (AMI). He is given timely and optimal treatment, improves rapidly and does not require additional treatment for his COPD. Mr Brown is an identical patient admitted to Hospital B, but his diagnosis and treatment are delayed and the quality of care is poor. This leads to an exacerbation of his COPD for which he needs additional therapy. Both patients survive. A predictive model for monitoring hospitals A and B includes comorbidity conditions as one of the variables. Both patients are coded with the principal diagnosis of AMI. Mr Brown's COPD is also included as a comorbidity (with "P" prefix) but Mr Smith's COPD is not. Therefore the same predictive model is likely to suggest that Mr Brown had a higher predicted mortality than Mr Smith, and that Hospital B performed as well as, if not better than, Hospital A.

Previously published ICD-10 coding algorithms were used to calculate the Charlson and Elixhauser comorbidity indices and individual comorbidities [16]. Up to 39 admission comorbidity ICD-10 diagnosis codes were recorded for each patient episode (VAED prefix code = A, "associated condition").
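The sketch below (Python) illustrates the intent of this step: only prefix "A" (present on admission) diagnosis codes contribute, and the unweighted Elixhauser-style score is the count of distinct conditions found. The prefix-to-condition map is a small hypothetical subset used for illustration, not the published algorithm [16] or the real VAED field layout.

# Hypothetical ICD-10 prefix -> condition map (illustrative subset of the 30 Elixhauser conditions).
ELIXHAUSER_MAP = {
    "I50": "congestive_heart_failure",
    "I48": "cardiac_arrhythmia",
    "J44": "chronic_pulmonary_disease",
    "N18": "renal_failure",
    "E66": "obesity",
}

def elixhauser_count(coded_diagnoses):
    """coded_diagnoses: list of (prefix, icd10_code) pairs from the additional-diagnosis fields.
    Only prefix "A" (condition present on admission) contributes; "P" and "C" are excluded."""
    present = set()
    for prefix, code in coded_diagnoses:
        if prefix != "A":
            continue
        for icd_prefix, condition in ELIXHAUSER_MAP.items():
            if code.startswith(icd_prefix):
                present.add(condition)
    return len(present)   # unweighted sum of distinct comorbid conditions

# Example: heart failure and arrhythmia present on admission, plus an in-hospital complication ("C")
print(elixhauser_count([("A", "I50.0"), ("A", "I48.0"), ("C", "J44.9")]))  # -> 2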

Model development

Stage 1: Variable selection

Response frequency

Cross tabulations of the frequency of all the independent variables by the outcome variable (death) were examined to identify floor and ceiling effects of these variables and to provide information about their discriminative capacity. Items with categories with fewer than 5 responses were not considered suitable for regression analysis due to limited estimability. The model development process involved a series of four steps to identify suitable risk factors for inclusion in the prediction model (Figure 3).

1. Investigation of response frequency for nominal variables. Action: retain if >5 responses per category.
2. Univariate analysis. Action: retain if P<0.25.
3. Check for collinearity and multicollinearity. Action: retain the risk factor with the greater McFadden pseudo R2.
4. Multiple regression analysis (using backwards selection). Action: retain if P<0.05.
5. Examination of final model predictive validity:
   • Area under the receiver operating characteristic curve (AUC/c-index)
   • Goodness of fit: Hosmer-Lemeshow (H-L)
   • Standardised mortality ratio (SMR)

Figure 3: Model development steps for selection of candidate variables

For each candidate variable a logistic regression model was estimated. Candidate variables for the multiple regression model included variables that were found to have an association with death (P<0.25) on univariate analysis.
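A simplified sketch of this stage 1 screen (Python with statsmodels; hypothetical file and variable names, not the Stata code used in the study) is shown below: the response-frequency check flags categories with fewer than 5 responses, and each candidate is then fitted alone against in-hospital death, retaining those with P<0.25 and recording McFadden's pseudo R2 for later comparisons.

import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("development_cohort.csv")     # hypothetical episode-level extract
candidates = ["age", "male", "emergency_admission", "interhospital_transfer", "from_racf"]

# Stage 1a: response-frequency check for nominal variables (retain if >5 responses per category)
for var in ["male", "emergency_admission", "interhospital_transfer", "from_racf"]:
    counts = pd.crosstab(data[var], data["died"])
    if (counts < 5).any().any():
        print(f"{var}: a category has fewer than 5 responses; limited estimability")

# Stage 1b: univariate logistic regression screen (retain if P < 0.25)
screened = []
for var in candidates:
    fit = sm.Logit(data["died"], sm.add_constant(data[[var]])).fit(disp=0)
    if fit.pvalues[var] < 0.25:
        screened.append((var, fit.pvalues[var], fit.prsquared))   # prsquared = McFadden pseudo R2

print(pd.DataFrame(screened, columns=["variable", "P", "McFadden_R2"]))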

Patient age, the only continuous variable, was plotted against the death rate to determine whether the relation, if any, was linear, or whether the variable should be categorised (dichotomous or ordinal response) or transformed. Transformed or categorised continuous variables were then examined on univariate analysis and the transformation/categorisation was assessed by the relative gain in McFadden's pseudo R-squared (R2). The variable with the greater McFadden pseudo R-squared (R2) was retained.

Multicollinearity

Multicollinearity (or collinearity) occurs when two or more independent variables in the model are approximately determined by a linear combination of other independent variables in the model. The degree of multicollinearity can vary and can have different effects on the model. When perfect collinearity occurs, that is, when one independent variable is a perfect linear combination of the others, it is impossible to obtain unique estimates of the regression coefficients with all the independent variables in the model.

Moderate multicollinearity is fairly common, since any correlation among the independent variables is an indication of collinearity. When severe multicollinearity occurs, the standard errors for the coefficients tend to be very large (inflated), and sometimes the estimated logistic regression coefficients can be highly unreliable.

Collinearity between candidate variables was assessed through construction of a two-way correlation matrix, and multicollinearity was assessed by investigation of the tolerance and variance inflation factors. Correlations of greater than 0.40 were considered evidence of collinearity [17]. The tolerance provides an indication of how much collinearity a regression analysis can tolerate, and the variance inflation factor (VIF) an indication of how much of the inflation of the standard error could be caused by collinearity. The tolerance for a particular variable is 1 minus the R2 that results from the regression of that variable on the other variables. The corresponding VIF is 1/tolerance. If all of the variables are orthogonal to each other (completely uncorrelated), both the tolerance and VIF are 1. If a variable is very closely related to other variable(s), the tolerance goes to 0 and the variance inflation gets very large. For collinear risk factors, the risk factor with the greater McFadden pseudo R-squared (R2) was retained and the other discarded from the analysis.
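The following sketch (Python; hypothetical variable names, with ordinary least squares used for the auxiliary regressions as in the usual tolerance/VIF definition) computes the pairwise correlations, tolerance and VIF exactly as described: each variable is regressed on the others, the tolerance is 1 minus the resulting R2, and the VIF is its reciprocal.

import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("development_cohort.csv")                  # hypothetical extract
candidates = ["age", "emergency_admission", "interhospital_transfer",
              "from_racf", "charlson_score", "elixhauser_score"]

corr = data[candidates].corr()        # pairwise correlations; |r| > 0.40 flagged as collinear [17]
print(corr.round(2))

for var in candidates:
    others = [c for c in candidates if c != var]
    aux = sm.OLS(data[var], sm.add_constant(data[others])).fit()   # auxiliary regression
    tolerance = 1.0 - aux.rsquared                                  # tolerance = 1 - R^2
    vif = 1.0 / tolerance                                           # variance inflation factor
    print(f"{var:25s} tolerance={tolerance:.3f}  VIF={vif:.2f}")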

Model development: Stage 2

Candidate variables that remained after stage 1 were entered into a backwards selection multiple regression. After backward selection, only variables with a significant (P<0.05) association with death were retained in the final model. Odds ratios (OR) were chosen to describe the relationship between predictors and in-hospital death.
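A backward-elimination loop consistent with this description might look like the following (Python with statsmodels; the variable names and the simple one-at-a-time dropping rule are assumptions for illustration, not the exact Stata procedure used): fit the full logistic regression, drop the least significant predictor while any P>=0.05 remains, and report odds ratios for the surviving variables.

import numpy as np
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("development_cohort.csv")                  # hypothetical extract
retained = ["age", "male", "emergency_admission", "interhospital_transfer", "from_racf"]

while True:
    fit = sm.Logit(data["died"], sm.add_constant(data[retained])).fit(disp=0)
    pvals = fit.pvalues.drop("const")                 # ignore the intercept
    worst = pvals.idxmax()
    if pvals[worst] < 0.05 or len(retained) == 1:     # stop when all predictors are significant
        break
    retained.remove(worst)                            # drop the least significant predictor

odds_ratios = np.exp(fit.params.drop("const"))        # OR = exp(coefficient)
print(pd.DataFrame({"OR": odds_ratios, "P": fit.pvalues.drop("const")}))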

The key attributes of a model are discrimination (the accuracy of the ranking in order of probability of death) and calibration (the extent to which the model's prediction of the probability of death reflects the true risk of death) [6].

Calibration and discrimination

Calibration is a measure of how well predicted probabilities agree with actual observed risk. When the average predicted risk within subgroups of a prospective cohort matches the proportion that actually experience the outcome event, the model is well calibrated. The Hosmer-Lemeshow statistic compares these proportions directly and is a popular method by which model calibration is assessed. The Hosmer-Lemeshow goodness-of-fit statistic is computed as the Pearson chi-square from the contingency table of observed frequencies and predicted frequencies. A well calibrated model as measured by Hosmer and Lemeshow's test will yield a large, non-significant (>0.05) P-value [18, 19].

The grouping method used for the calculation of the Hosmer-Lemeshow statistic was based on the percentiles (usually deciles) of the estimated probabilities (C10 statistic), as opposed to the fixed cut-point method (H10 statistic), due to the presence of a large number of observations with low probability values [19]. It has been previously reported that the Hosmer-Lemeshow test is sensitive to sample size, with one author demonstrating an inverse relationship between sample size and the Hosmer-Lemeshow p value [20]. In acknowledgement of this relationship, and to gain more information about the calibration of the developed models, contingency tables of the observed and predicted values for the deciles used in calculation of the Hosmer-Lemeshow statistic were examined. In addition, the Hosmer-Lemeshow test for the HOPE models was also calculated on three random samples of 2,500 episodes drawn from the original development data.

Model calibration was also assessed by the standardised mortality ratio (SMR), where SMR = observed deaths / expected deaths. An SMR value of 1 indicates perfect calibration, while a value of <1 indicates that the model is over-classifying patients' mortality risk and a value of >1 that it is under-classifying risk. The 95% confidence interval for the SMR was calculated, and the inclusion of 1.0 in the confidence interval was interpreted as the absence of any significant difference between the number of observed and predicted deaths.
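To make these two calibration measures concrete, the sketch below (Python; the decile construction, degrees of freedom and the Poisson-type confidence interval are stated assumptions rather than the study's exact implementation) computes the decile-based Hosmer-Lemeshow statistic and the SMR with an approximate 95% confidence interval from observed outcomes and predicted probabilities.

import numpy as np
import pandas as pd
from scipy.stats import chi2

def hosmer_lemeshow(y_obs, p_pred, groups=10):
    """Decile-of-risk (C statistic) version of the Hosmer-Lemeshow test."""
    df = pd.DataFrame({"y": y_obs, "p": p_pred})
    df["decile"] = pd.qcut(df["p"], groups, labels=False, duplicates="drop")
    obs = df.groupby("decile")["y"].sum()       # observed deaths per decile
    exp = df.groupby("decile")["p"].sum()       # expected deaths per decile
    n = df.groupby("decile")["y"].count()
    # Pearson chi-square over events and non-events in each risk decile
    hl = (((obs - exp) ** 2 / exp) + (((n - obs) - (n - exp)) ** 2 / (n - exp))).sum()
    dof = len(obs) - 2                           # conventional df; the report used 8 or 10
    return hl, chi2.sf(hl, dof)

def smr(y_obs, p_pred):
    """Standardised mortality ratio with an approximate (Poisson-based) 95% CI."""
    observed, expected = np.sum(y_obs), np.sum(p_pred)
    ratio = observed / expected
    se = np.sqrt(observed) / expected            # rough large-sample approximation
    return ratio, (ratio - 1.96 * se, ratio + 1.96 * se)

# Example use: hl, p = hosmer_lemeshow(deaths, model_probabilities)
#              ratio, ci = smr(deaths, model_probabilities)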

Discrimination is a measure of how well the model can separate those who do and those who do not experience the event of interest [19]. In the example of mortality prediction models, if the mortality risk scores for those who die are all higher than for those who do not die, the model discriminates perfectly. Discrimination is most often measured by the area under the receiver operating characteristic (ROC) curve, or AUC statistic.

The ROC curve is a plot of the sensitivity of a model on the y axis against 1–specificity on the x axis [21]. The area under the ROC curve (AUC) provides a measure of the overall predictive accuracy [22]. The AUC ranges from 0.5 (no predictive ability) to a theoretical maximum of 1, representing perfect prediction [23]. A value above 0.8 was accepted as indicating satisfactory discrimination.
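As an illustration of how this is computed (Python with scikit-learn, a stand-in for, not a reproduction of, the routine used in the report), the AUC is obtained directly from observed outcomes and predicted probabilities; it equals the probability that a randomly chosen death received a higher predicted risk than a randomly chosen survivor.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical outcomes (1 = died in hospital) and model-predicted probabilities.
deaths = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
probs  = np.array([0.02, 0.10, 0.65, 0.05, 0.40, 0.08, 0.01, 0.90, 0.20, 0.03])

auc = roc_auc_score(deaths, probs)                  # area under the ROC curve (c-index)
fpr, tpr, thresholds = roc_curve(deaths, probs)     # points for a sensitivity vs 1-specificity plot
print(f"AUC = {auc:.2f}")                           # a value above 0.8 taken as satisfactory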

Model validation

External validation of the model was performed on the validation datasets. AUC, H-L and SMR values obtained for the validation data sets were compared to those obtained for the development data, and homogeneity of the values was taken as evidence of the models' stability and generality. H-L values were calculated based on 8 degrees of freedom in the development data and 10 degrees of freedom in the validation data [ref?].
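A minimal sketch of this validation workflow (Python; hypothetical file and variable names, and only the AUC and SMR shown for brevity) fits the model on the development cohort, applies the fitted coefficients to a validation cohort, and compares the resulting performance statistics.

import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Hypothetical workflow: fit on the development cohort, then score a validation cohort.
development = pd.read_csv("development_cohort.csv")
validation = pd.read_csv("validation1_cohort.csv")
predictors = ["age", "male", "emergency_admission", "interhospital_transfer", "from_racf"]

fit = sm.Logit(development["died"], sm.add_constant(development[predictors])).fit(disp=0)
probs = fit.predict(sm.add_constant(validation[predictors]))   # predicted death probabilities

auc = roc_auc_score(validation["died"], probs)                  # discrimination
smr = validation["died"].sum() / probs.sum()                    # calibration (observed / expected)
print(f"Validation AUC = {auc:.2f}, SMR = {smr:.2f}")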

4. Cohort 1: HOPE Victorian Major hospitals

4.1 Abstract

OBJECTIVE: To develop and validate a model for predicting inpatient death in major acute adult public hospitals based on demographic and clinical data.

DESIGN: Retrospective audit of hospital maintained administrative datasets: development-external validation study. Predictive model development involved a systematic process of variable reduction followed by multiple logistic regression.

SETTING: Twenty-three major Victorian public hospitals, Australia.

PATIENTS: Adult (>17 years) inpatient episodes, excluding day procedure cases, were included in the analyses. Data included 294,767 episodes between July 2004 and June 2005 (development data), 303,247 episodes between July 2005 and June 2006 (validation data 1) and 311,541 episodes between July 2006 and June 2007 (validation data 2).

MAIN OUTCOME MEASURES: Deaths in hospital. Performance of the models was assessed by the area under the receiver operating characteristic curve (AUC), measuring discrimination, and by Hosmer-Lemeshow statistics and standardised mortality ratios (SMRs), assessing model calibration.

RESULTS: There were 7,196 deaths in the development data, 7,280 in the validation 1 data and 7,324 in the validation 2 data. A model that combined age, gender, admission characteristics and principal diagnosis based on a WHO classification was found to have excellent discrimination (AUC=0.86) and satisfactory calibration (H-L=28.70, P=0.0000). This model had comparable predictive performance to the previously developed HOPE model despite containing fewer variables (39 v 64). External validation of the new HOPE model confirmed that the model's discrimination and calibration were stable (validation data 1: AUC=0.86, H-L x2=39.60, P=0.0000, SMR=0.97; validation data 2: AUC=0.86, H-L x2=49.95, P=0.0000, SMR=0.94). Individual hospital analyses found high levels of discrimination (AUC≥0.80) in 22 of 23 (95.7%) hospitals in the validation 1 and 2 data. Good levels of calibration (H-L P>0.05) were found in 13 of 23 (56.5%) hospitals in the validation 1 data and 17 of 23 (73.9%) hospitals in the validation 2 data. Examination of the SMR also revealed high levels of calibration (SMR 95% CI includes 1) in 17 of 23 (78.2%) hospitals in the validation 1 data and 21 of 23 (91.3%) hospitals in the validation 2 data set. Adding Elixhauser or Charlson comorbidities to the model did not improve predictive performance (Elixhauser model: AUC=0.86, H-L=44.58, P=0.0000; Charlson model: AUC=0.86, H-L=45.66, P=0.0000).

CONCLUSIONS: Routinely collected administrative data can be used to predict in-hospital mortality risk with high levels of discrimination and calibration. Use of the WHO based principal diagnosis classification resulted in a model with fewer variables and equal predictive performance to the previously developed HOPE model, which used a locally derived primary diagnosis classification. Adding Elixhauser or Charlson comorbidity information to the model did not improve predictive performance. The developed model provides a useful method for monitoring hospital performance.

4.2 Development results

Age Variable

Figure 2 is a plot of mortality rates by age. It revealed a relatively linear relationship between age and mortality rate. However, the markedly increased risk at older ages and the relatively uniform rate at younger ages suggested that a quadratic relationship might exist between age and mortality. Examination of the R2 for age and for the quadratic transformation of age, however, found no significant gain in explained variation with the quadratic transformation of age (Table 2). Thus age was chosen as the candidate variable to use in the step-wise regression procedure.

[Figure 2: Plot of mortality rates by age—HOPE. Mortality rate (0–0.15, y axis) plotted against age in years (17–97, x axis).]

Table 2: Univariate regression analysis results for age variables 04-05 development data—M-HOPE

         Odds Ratio (95% CI)   P       R2
Age      1.06 (1.06–1.06)      0.000   0.0998
Age2     1.00 (1.00–1.00)      0.000   0.0970

Models Investigated

In cohort 2 (Major Victorian public hospitals), 20 models were estimated. The models included combinations of the demographic, admission and diagnostic variables. The details of these models are presented in Table 3.

Table 3: 23 Tertiary, metropolitan and regional ICU Victorian hospitals, N=294,767 (inpatient episodes). For each model, AUC and H-L x2 (P) are reported for the full development data ("all data") and for three random samples of 2,500 episodes.

Model 1-2: Demographic and admission
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF
  AUC: all data 0.79; random samples 0.76, 0.77, 0.78
  H-L x2 (P): all data 25.62 (0.0012); random samples 10.04 (0.4368), 17.38 (0.0664), 16.47 (0.0870)

Model 2-2: Demographic, admission and HOPE 2007 principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, HOPE 2007 principal diagnosis (59)
  AUC: all data 0.88; random samples 0.84, 0.82, 0.86
  H-L x2 (P): all data 28.07 (0.000); random samples 11.18 (0.3436), 10.17 (0.4257), 4.34 (0.9306)

Model 3-2: Demographic, admission and WHO principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, WHO principal diagnosis (35)
  AUC: all data 0.85; random samples 0.83, 0.82, 0.85
  H-L x2 (P): all data 55.30 (0.000); random samples 5.86 (0.8273), 10.36 (0.4098), 15.86 (0.1037)

Model 4-2: Demographic, admission and Modified WHO principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Modified WHO principal diagnosis (34)
  AUC: all data 0.86; random samples 0.83, 0.82, 0.85
  H-L x2 (P): all data 47.88 (0.000); random samples 9.07 (0.5254), 8.29 (0.6008), 7.84 (0.5565)

Model 5-2: Demographic, admission and Charlson comorbidity score
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Charlson score
  AUC: all data 0.80; random samples 0.77, 0.76, 0.79
  H-L x2 (P): all data 27.06 (0.0007); random samples 6.21 (0.7970), 16.08 (0.0973), 15.08 (0.1290)

Model 6-2: Demographic, admission and Elixhauser comorbidity score
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Elixhauser score
  AUC: all data 0.80; random samples 0.76, 0.76, 0.79
  H-L x2 (P): all data 33.62 (0.000); random samples 10.67 (0.3838), 16.05 (0.0983), 13.01 (0.2231)

Model 7-2: Demographic, admission and Charlson comorbidities
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Charlson comorbidities (7)
  AUC: all data 0.80; random samples 0.77, 0.77, 0.79
  H-L x2 (P): all data 33.62 (0.000); random samples 7.33 (0.6941), 14.69 (0.1439), 16.94 (0.0756)

Model 8-2: Demographic, admission and Elixhauser comorbidities
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Elixhauser comorbidities (11)
  AUC: all data 0.80; random samples 0.76, 0.77, 0.79
  H-L x2 (P): all data 33.69 (0.000); random samples 8.58 (0.5722), 16.69 (0.0815), 16.16 (0.0952)

Model 9-2: Demographic, admission, Charlson comorbidity score and HOPE 2007 principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Charlson score, HOPE 2007 principal diagnosis (59)
  AUC: all data 0.87; random samples 0.84, 0.82, 0.86
  H-L x2 (P): all data 26.95 (0.0007); random samples 11.82 (0.2974), 6.83 (0.7411), 2.64 (0.9887)

Model 10-2: Demographic, admission, Elixhauser comorbidity score and HOPE 2007 principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Elixhauser score, HOPE 2007 principal diagnosis (59)
  AUC: all data 0.87; random samples 0.84, 0.82, 0.86
  H-L x2 (P): all data 28.19 (0.000); random samples 12.30 (0.2653), 10.36 (0.4097), 3.47 (0.9680)

Model 11-2: Demographic, admission, Charlson comorbidities and HOPE 2007 principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Charlson comorbidities (5), HOPE 2007 principal diagnosis (58)
  AUC: all data 0.87; random samples 0.84, 0.82, 0.86
  H-L x2 (P): all data 26.95 (0.000); random samples 11.77 (0.3007), 12.81 (0.2347), 5.47 (0.8576)

Model 12-2: Demographic, admission, Elixhauser comorbidities (10) and HOPE 2007 principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Elixhauser comorbidities (10), HOPE 2007 principal diagnosis (59)
  AUC: all data 0.87; random samples 0.84, 0.82, 0.86
  H-L x2 (P): all data 30.82 (0.000); random samples 11.27 (0.3366), 12.20 (0.2716), 6.82 (0.7419)

Model 13-2: Demographic, admission, Charlson comorbidity score and WHO principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Charlson score, WHO principal diagnosis (35)
  AUC: all data 0.85; random samples 0.83, 0.81, 0.85
  H-L x2 (P): all data 65.25 (0.000); random samples 7.35 (0.6918), 10.73 (0.3790), 14.26 (0.1617)

Model 14-2: Demographic, admission, Elixhauser comorbidity score and WHO principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Elixhauser score, WHO principal diagnosis (38)
  AUC: all data 0.85; random samples 0.83, 0.82, 0.85
  H-L x2 (P): all data 60.96 (0.000); random samples 6.04 (0.8117), 9.69 (0.4684), 10.99 (0.3583)

Model 15-2: Demographic, admission, Charlson comorbidities and WHO principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Charlson comorbidities (8), WHO principal diagnosis (35)
  AUC: all data 0.85; random samples 0.83, 0.81, 0.85
  H-L x2 (P): all data 76.56 (0.000); random samples 6.34 (0.7855), 7.86 (0.6424), 15.46 (0.1160)

Model 16-2: Demographic, admission, Elixhauser comorbidities and WHO principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Elixhauser comorbidities (11), WHO principal diagnosis (35)
  AUC: all data 0.85; random samples 0.83, 0.82, 0.85
  H-L x2 (P): all data 77.53 (0.000); random samples 5.71 (0.8392), 9.62 (0.4745), 12.92 (0.2283)

Model 17-2: Demographic, admission, Charlson comorbidity score and Modified WHO principal diagnosis (36 dx variables)
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Charlson score, Modified WHO principal diagnosis (36)
  AUC: all data 0.86; random samples 0.83, 0.81, 0.85
  H-L x2 (P): all data 47.57 (0.000); random samples 10.91 (0.3646), 8.41 (0.5887), 4.74 (0.9077)

Model 18-2: Demographic, admission, Elixhauser comorbidity score and Modified WHO principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Elixhauser score, WHO principal diagnosis (34)
  AUC: all data 0.86; random samples 0.83, 0.81, 0.85
  H-L x2 (P): all data 47.20 (0.000); random samples 9.07 (0.5257), 8.43 (0.5873), 8.70 (0.5609)

Model 19-2: Demographic, admission, Charlson comorbidities and Modified WHO principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Charlson comorbidities (6), Modified WHO principal diagnosis (34)
  AUC: all data 0.86; random samples 0.83, 0.81, 0.86
  H-L x2 (P): all data 45.66 (0.000); random samples 15.58 (0.1122), 7.99 (0.6299), 2.89 (0.9839)

Model 20-2: Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis
  Variables retained: Age (years), Gender, ED admission, Inter-hospital transfer, Admitted from RACF, Elixhauser comorbidities (9), Modified WHO principal diagnosis (34)
  AUC: all data 0.86; random samples 0.83, 0.81, 0.86
  H-L x2 (P): all data 44.58 (0.000); random samples 16.12 (0.0961), 8.86 (0.5453), 3.77 (0.9573)

Note: AUC = area under the receiver operating curve; H-L = Hosmer-Lemeshow; x2 = Pearson's chi square; P = probability value.

A model including demographic and admission only variables (Model 1-2) was found to have the

lowest discrimination of the models tested (Table 3). The addition of either the Charlson or Elixhauser

comorbidity scores or individual comorbidities did not improve discrimination (Table 3: Models 5-

2–8-2). Models including demographic, admission and principal diagnosis variables (Models 2-2–4-

2) were found to have greater discrimination than the demographic and admission only model

(Model 1-2). The HOPE 2007 principal diagnosis models had a substantially higher number of

diagnostic variables than the WHO and modified WHO models (e.g. 64 v 40 and 39, respectively).


The addition of either Charlson or Elixhauser comorbidity scores or individual comorbidities (Models

9-2–20-2) did not appear to change the discrimination or calibration of the principal diagnosis

models (Models 2-2–4-2). Based on these findings, Models 4-2, 19-2 and 20-2 were selected as the final models to be validated in validation data 1 and 2. The variables included in each of these models are displayed in Tables 4–6.
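The AUC and Hosmer-Lemeshow figures reported throughout Table 3 (and in the validation tables that follow) can be reproduced from a vector of observed deaths and model-predicted probabilities. The minimal sketch below uses a decile grouping and a chi-square reference distribution with (groups - 2) degrees of freedom; these are standard conventions and are assumed here, as the report does not spell out its exact calculation. The subsample step mirrors the random samples of 2,500 episodes reported alongside the full-data results, with placeholder data standing in for the actual episodes.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-square over groups (deciles) of predicted risk."""
    y, p = np.asarray(y), np.asarray(p)
    order = np.argsort(p)
    y, p = y[order], p[order]
    chunks = np.array_split(np.arange(len(p)), groups)
    stat = 0.0
    for idx in chunks:
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return stat, chi2.sf(stat, groups - 2)  # df = groups - 2 by convention

# Placeholder outcomes and predictions standing in for one fitted model
rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.5, size=10_000)
y = rng.binomial(1, p)

print("All episodes:", roc_auc_score(y, p), hosmer_lemeshow(y, p))

# Repeat on a random subsample of 2,500 episodes, as in Table 3
idx = rng.choice(len(p), size=2_500, replace=False)
print("Subsample:", roc_auc_score(y[idx], p[idx]), hosmer_lemeshow(y[idx], p[idx]))
```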


Table 4: Model 4-2 Demographic, admission and Modified WHO principal diagnosis

Odds Ratio (95% CI)  P

Male gender 0.86 (0.82–0.90) 0.000

Emergency admission 3.17 (2.92–3.45) 0.000

Admitted via inter–hospital transfer 1.90 (1.74–2.07) 0.000

Admitted from a RACF 1.70 (1.44–2.00) 0.000

Age (years) 1.05 (1.05–1.05) 0.000
WHO M–Diseases of the skin and subcutaneous tissue, musculoskeletal system and connective tissue 0.63 (0.52–0.75) 0.000

WHO M–TB 4.39 (2.18–8.82) 0.000

WHO M–Bacterial diseases 9.59 (8.39–10.95) 0.000

WHO M– Diseases of the genitourinary system 1.84 (1.60–2.11) 0.000

WHO M– Protozoal diseases 9.06 (3.88–21.14) 0.000

WHO M– Malignant neoplasm of lip, oral cavity and pharynx 4.76 (2.98–7.59) 0.000

WHO M– Malignant neoplasm of digestive organs 4.67(4.00–5.46) 0.000

WHO M– Malignant neoplasm of the respiratory and intrathoracic organs 9.38 (7.87–11.18) 0.000
WHO M– Gastric and duodenal ulcer and diseases of the intestine, peritoneum and other digestive 2.85 (2.52–3.22) 0.000

WHO M– Malignant neoplasm of breast and female genital organs 4.19 (3.03–5.79) 0.000

WHO M– Malignant neoplasm of male genital organs 2.34 (1.53–3.58) 0.000

WHO M– Malignant neoplasm of the urinary tract 2.39 (1.62–3.51) 0.000

WHO M– Malignant neoplasm of the eye, brain and other parts of the CNS 4.59 (2.98–7.08) 0.000
WHO M– Malignant neoplasms of ill–defined, secondary and unspecified sites 7.42 (6.52–8.45) 0.000
WHO M– Malignant neoplasms of lymphoid, haematopoietic and related tissues 5.68 (4.80–6.72) 0.000

WHO M– Specific procedures and follow–up care 0.28 (0.13–0.60) 0.001

WHO M– Neoplasms of uncertain or unknown behaviour 2.30 (1.48–3.60) 0.000

WHO M– Anaemias 0.61 (0.42–0.90) 0.012

WHO M– Shock and haemorrhage 32.88 (18.74–57.68) 0.000

WHO M– Type 2 Diabetes 2.35 (1.98–2.78) 0.000

WHO M– Malnutrition 12.96 (6.19–27.10) 0.000

WHO M– Fluid electrolyte and acid–base disorders 1.47 (1.07–2.03) 0.017
WHO M– Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified 0.44 (0.37–0.51) 0.000

WHO M– Disorders of the nervous system 1 0.21 (0.13–0.35) 0.000

WHO M– Disorders of the nervous system 2 5.43 (3.84–7.68) 0.000

WHO M– Diseases of the liver 11.05 (9.13–13.36) 0.000


WHO M–AMI 2.58 (2.30–2.90) 0.000

WHO M–IHD 0.35 (0.27–0.46) 0.000

WHO M– Other heart disease 2.83 (2.55–3.15) 0.000

WHO M– Chronic lower respiratory disease 1.74 (1.52–1.99) 0.000

WHO M– Cerebrovascular diseases 6.86 (6.24–7.53) 0.000

WHO M– Arterial disease 4.01 (3.37–4.78) 0.000

WHO M–Pneumonia 3.84 (3.46–4.26) 0.000

WHO M– Remainder of respiratory diseases 7.57 (6.66–8.60) 0.000

Note: WHO M = Modified WHO diagnosis grouping
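To illustrate how the odds ratios in Table 4 translate into an individual risk estimate, the sketch below converts each odds ratio to a log-odds coefficient and applies the result to a single hypothetical episode. The model intercept is not reported in this document, so the value used is a placeholder, and only a handful of the Table 4 terms are carried across; the sketch shows the mechanics of a logistic prediction rather than the actual HOPE equation.

```python
import math

# Selected odds ratios from Table 4 (Model 4-2). The intercept is NOT
# reported in this document; beta0 below is a hypothetical placeholder.
odds_ratios = {
    "male_gender": 0.86,
    "emergency_admission": 3.17,
    "interhospital_transfer": 1.90,
    "admitted_from_racf": 1.70,
    "age_per_year": 1.05,
    "whoM_pneumonia": 3.84,
}
beta = {name: math.log(orx) for name, orx in odds_ratios.items()}
beta0 = -7.0  # illustrative intercept only

# Hypothetical episode: 78-year-old man, emergency admission, pneumonia
episode = {
    "male_gender": 1,
    "emergency_admission": 1,
    "interhospital_transfer": 0,
    "admitted_from_racf": 0,
    "age_per_year": 78,   # age enters as a continuous variable
    "whoM_pneumonia": 1,
}
log_odds = beta0 + sum(beta[name] * episode[name] for name in beta)
predicted_risk = 1.0 / (1.0 + math.exp(-log_odds))
print(f"Predicted in-hospital death risk: {predicted_risk:.3f}")
```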


Table 5: Model 19-2 Demographic, admission, Charlson comorbidities and Modified WHO principal diagnosis

Odds Ratio (95% CI)  P

Male gender 0.86 (0.82–0.91) 0.000

Age (years) 1.05 (1.05–1.05) 0.000

Emergency admission 3.04 (2.79–3.30) 0.000

Admitted via inter-hospital transfer 1.90 (1.75–2.07) 0.000

Admitted from a RACF 1.61 (1.36–1.90) 0.000

Ch-Diabetes + Complications 0.87 (0.79–0.96) 0.005

Ch-Cancer 1.27 (1.07–1.51) 0.006

Ch-Congestive heart failure 2.17 (1.70–2.77) 0.000

Ch-Dementia 1.53 (1.33–1.76) 0.000

Ch-COPD 1.64 (1.27–2.13) 0.000

Ch-Mild liver disease 1.67 (1.24–2.25) 0.001

WHO M-Malignant neoplasm of the eye, brain and other parts of the CNS 3.77 (2.44–5.82) 0.000
WHO M-Malignant neoplasms of ill-defined, secondary and unspecified sites 5.71 (4.96–6.59) 0.000
WHO M-Malignant neoplasms of lymphoid, haematopoietic and related tissues 4.62 (3.88–5.50) 0.000

WHO M-Neoplasms of uncertain or unknown behaviour 1.88 (1.20–2.95) 0.006

WHO M-Anaemias 0.50 (0.34–0.73) 0.000

WHO M-TB 3.61 (1.79–7.25) 0.000

WHO M-Type 2 Diabetes 1.96 (1.64–2.34) 0.000

WHO M-Malnutrition 10.45 (4.99–21.91) 0.000

WHO M-Disorders of the nervous system 1 0.17 (0.10–0.29) 0.000

WHO M-Disorders of the nervous system 2 4.40 (3.10–6.23) 0.000

WHO M-AMI 2.11 (1.86–2.39) 0.000

WHO M-IHD 0.29 (0.22–0.38) 0.000

WHO M-Other heart disease 2.31 (2.06–2.59) 0.000

WHO M-Bacterial diseases 7.66 (6.65–8.82) 0.000

WHO M-Tachycardia, AF and arrhythmias 0.76 (0.62–0.95) 0.014

WHO M-Cerebrovascular diseases 5.60 (5.05–6.23) 0.000

WHO M-Arterial disease 3.20 (2.67–3.83) 0.000

WHO M-Pneumonia 3.09 (2.75–3.46) 0.000

WHO M-Other acute respiratory disease 0.60 (0.42–0.87) 0.007


WHO M-Chronic lower respiratory disease 1.43 (1.24–1.65) 0.000

WHO M-Remainder of respiratory diseases 6.04 (5.27–6.92) 0.000
WHO M-Gastric and duodenal ulcer and diseases of the intestine, peritoneum and other digestive 2.31 (2.03–2.64) 0.000

WHO M-Diseases of the liver 8.66 (7.11–10.56) 0.000
WHO M-Hernia, noninfective enteritis and colitis, and diseases of the intestine 0.67 (0.54–0.83) 0.000

WHO M-Disorders of the gallbladder, biliary tract and pancreas 0.76 (0.60–0.96) 0.021
WHO M-Diseases of the skin and subcutaneous tissue, musculoskeletal system and connective tissue 0.51 (0.42–0.61) 0.000

WHO M-Diseases of the genitourinary system 1.48 (1.28–1.71) 0.000
WHO M-Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified 0.36 (0.30–0.42) 0.000

WHO M-Shock and haemorrhage 26.21 (14.90–46.10) 0.000

WHO M-Injury 0.76 (0.66–0.86) 0.000

WHO M-Injury 0.23 (0.11–0.48) 0.000

WHO M-Protozoal diseases 7.44 (3.18–17.37) 0.000

WHO M-Malignant neoplasm of digestive organs 3.75 (3.19–4.40) 0.000

WHO M-Malignant neoplasm of the respiratory and intrathoracic organs 7.55 (6.30–9.04) 0.000

Note: WHO M = Modified WHO diagnosis grouping; Ch = Charlson comorbidity


Table 6: Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis

Odds Ratio (95% CI)  P

Male gender 0.87 (0.83–0.91) 0.000

Age (years) 1.05 (1.05–1.05) 0.000

Emergency admission 3.03 (2.79–3.29) 0.000

Admitted via inter-hospital transfer 1.89 (1.74–2.06) 0.000

Admitted from a RACF 1.68 (1.43–1.98) 0.000

El-Congestive Heart Failure 2.00 (1.56–2.56) 0.000

El-Liver Disease 1.55 (1.15–2.08) 0.004

El-Lymphoma 1.77 (1.02–3.07) 0.041

El-Cardiac Arrhythmias 1.32 (1.10–1.57) 0.002

El-Fluid and Electrolyte Disorders 2.12 (1.56–2.89) 0.000

El-Alcohol Abuse 1.29 (1.01–1.63) 0.038

El-Hypertension, Uncomplicated 0.86 (0.80–0.93) 0.000

El-Other Neurological Disorders 1.48 (1.14–1.92) 0.003

El-Chronic Pulmonary Disease 1.61 (1.24–2.09) 0.000

WHO M-Malignant neoplasm of the eye, brain and other parts of the CNS 3.78 (2.45–5.85) 0.000
WHO M-Malignant neoplasms of ill-defined, secondary and unspecified sites 6.03 (5.26–6.91) 0.000
WHO M-Malignant neoplasms of lymphoid, haematopoietic and related tissues 4.61 (3.88–5.49) 0.000

WHO M-Neoplasms of uncertain or unknown behaviour 1.85 (1.18–2.90) 0.007

WHO M-Anaemias 0.50 (0.34–0.73) 0.000

WHO M-TB 3.57 (1.78–7.18) 0.000

WHO M-Type 2 Diabetes 1.98 (1.66–2.37) 0.000

WHO M-Malnutrition 10.41 (4.97–21.80) 0.000

WHO M-Disorders of the nervous system 1 0.17 (0.10–0.29) 0.000

WHO M-Disorders of the nervous system 2 4.41 (3.11–6.25) 0.000

WHO M-AMI 2.17 (1.91–2.46) 0.000

WHO M-IHD 0.30 (0.23–0.39) 0.000

WHO M-Other heart disease 2.30 (2.05–2.59) 0.000

WHO M-Bacterial diseases 7.73 (6.71–8.90) 0.000

WHO M-Tachycardia, AF and arrhythmias 0.76 (0.62–0.95) 0.014

WHO M-Cerebrovascular diseases 5.72 (5.15–6.36) 0.000


WHO M-Arterial disease 3.20 (2.67–3.83) 0.000

WHO M-Pneumonia 3.11 (2.77–3.48) 0.000

WHO M-Other acute respiratory disease 0.61 (0.42–0.88) 0.008

WHO M-Chronic lower respiratory disease 1.43 (1.24–1.64) 0.000

WHO M-Remainder of respiratory diseases 6.09 (5.32–6.98) 0.000
WHO M-Gastric and duodenal ulcer and diseases of the intestine, peritoneum and other digestive 2.31 (2.02–2.63) 0.000

WHO M-Diseases of the liver 8.43 (6.90–10.30) 0.000
WHO M-Hernia, noninfective enteritis and colitis, and diseases of the intestine 0.67 (0.54–0.83) 0.000

WHO M-Disorders of the gallbladder, biliary tract and pancreas 0.76 (0.60–0.96) 0.021
WHO M-Diseases of the skin and subcutaneous tissue, musculoskeletal system and connective tissue 0.51 (0.42–0.61) 0.000

WHO M-Diseases of the genitourinary system 1.49 (1.29–1.73) 0.000
WHO M-Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified 0.36 (0.30–0.42) 0.000

WHO M-Shock and haemorrhage 25.73 (14.64–45.20) 0.000

WHO M-Injury 0.76 (0.67–0.87) 0.000

WHO M-Injury 0.23 (0.11–0.49) 0.000

WHO M-Protozoal diseases 7.52 (3.22–17.56) 0.000

WHO M-Malignant neoplasm of digestive organs 3.73 (3.18–4.39) 0.000

WHO M-Malignant neoplasm of the respiratory and intrathoracic organs 7.43 (6.20–8.91) 0.000

Note: WHO M = Modified WHO diagnosis grouping; El = Elixhauser comorbidity


4.3 Validation results

4.3.1 Model 4-2: Demographic, admission and modified WHO principal diagnosis

The results from validation data 1 by hospital group of Model 4-2 (Demographic, admission and

modified WHO principal diagnosis) are shown in Table 7.

The regional (R_ICU) hospital group was found to have the lowest SMR values and the associated

95% confidence interval did not include unity indicating that there were significantly (P<0.05) fewer

deaths occurring in this hospital group than were predicted to occur. The 95% confidence intervals

for the SMR for the metropolitan (M_ICU) and tertiary (T) hospital groups included unity indicating

that there was no significant (P>0.05) difference between the observed and predicted deaths

occurring in these groups.

The regional (R_ICU) hospital group was found to have the best discrimination (AUC=0.88), while the

tertiary hospital group was found to have the lowest discrimination (AUC=0.84).

Table 7: Validation data 1-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital group

Group Episodes Observed Predicted SMR (95% CI) AUC H-L
M_ICU 109,152 2779 2781.25 1.00 (0.96-1.04) 0.87 56.95 (0.0000)
R_ICU 63,257 1223 1338.24 0.91 (0.86-0.97) 0.88 36.80 (0.0001)
T 130,838 3278 3377.23 0.97 (0.94-1.01) 0.84 16.73 (0.0805)
TOTAL 303,247 7280 7496.72 0.97 (0.95-0.99) 0.86 39.60 (0.0000)
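For readers reproducing the SMR columns, the sketch below derives an SMR and an approximate 95% confidence interval from observed deaths and the sum of model-predicted deaths. The report does not state its interval formula, so the Poisson-based approximation used here is an assumption; applied to the M_ICU row of Table 7 it returns an interval matching the one shown above to two decimal places.

```python
import math

def smr_with_ci(observed: int, predicted: float, z: float = 1.96):
    """SMR with an approximate 95% CI, assuming Poisson-distributed deaths."""
    smr = observed / predicted
    se_log = 1.0 / math.sqrt(observed)   # SE of log(SMR) under the Poisson assumption
    return smr, smr * math.exp(-z * se_log), smr * math.exp(z * se_log)

# M_ICU row of Table 7: observed = 2779, predicted = 2781.25
smr, lo, hi = smr_with_ci(2779, 2781.25)
print(f"SMR {smr:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # approximately 1.00 (0.96-1.04)
```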

The results from validation data 2 by hospital group of Model 4-2 (Demographic, admission and

modified WHO principal diagnosis) are shown in Table 8.

The regional (R_ICU) and tertiary (T) hospital groups were found to have the lowest SMR values, and the associated 95% confidence intervals did not include unity, indicating that there were significantly (P<0.05) fewer deaths occurring in these hospital groups than were predicted to occur. The 95% confidence interval for the SMR for the metropolitan (M_ICU) hospital group included unity, indicating that there was no significant (P>0.05) difference between the observed and predicted deaths occurring in this group.

The regional (R_ICU) hospital group was found to have the best discrimination (AUC=0.88), while the

tertiary hospital group was found to have the lowest discrimination (AUC=0.84).

Table 8: Validation data 2-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital group


Group Episodes Observed Predicted SMR (95% CI) AUC H-L
M_ICU 112,126 2863 2902.43 0.99 (0.95-1.02) 0.87 42.31 (0.0000)
R_ICU 62,989 1194 1343.52 0.89 (0.84-0.94) 0.88 35.76 (0.0001)
T 136,426 3267 3516.34 0.93 (0.90-0.96) 0.84 32.10 (0.0004)
TOTAL 311,541 7324 7762.30 0.94 (0.92-0.97) 0.86 49.95 (0.0000)

Validation in hospitals with more than 200 fatalities per year

The results from validation data 1 for Model 4-2 (Demographic, admission and Modified WHO

principal diagnosis) in those hospitals which had more than 200 deaths during the observation

period are shown in Table 9.

Hospitals BKGO, BKIN, BOIN and BMLN were found to have the lowest SMR values and the 95%

confidence intervals did not include unity indicating that there were significantly (P<0.05) fewer

deaths occurring in these hospitals than were predicted to occur. In contrast, the SMR for hospital

BMEN was the highest and the 95% confidence interval did not include unity indicating that there

were significantly (P<0.05) more deaths occurring in this hospital than were predicted to occur.

Hospital BMLN was found to have the best discrimination (AUC=0.91), while hospital BKEN was

found to have the lowest discrimination (AUC=0.81).

Table 9: Validation data 1-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital (deaths >200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L
BKGO 24,134 653 732.74 0.89 (0.82-0.96) 0.82 36.64 (0.0001)
BNGR 24,420 621 571.76 1.09 (1.00-1.18) 0.84 14.57 (0.1486)
BKEN 21,315 608 569.79 1.07 (0.98-1.16) 0.81 53.92 (0.0000)
BLKN 20,328 561 523.27 1.07 (0.98-1.17) 0.86 28.04 (0.0018)
BKIN 20,040 553 613.89 0.90 (0.83-0.98) 0.87 11.38 (0.3290)
BLLN 19,421 544 552.95 0.98 (0.90-1.07) 0.85 9.60 (0.4765)
CMFN 19,344 544 503.13 1.08 (0.99-1.18) 0.88 35.03 (0.0001)
CKIN 21,854 457 481.28 0.95 (0.86-1.04) 0.88 17.42 (0.0655)
BOIN 18,787 378 498.39 0.76 (0.68-0.84) 0.87 37.68 (0.0000)
CLEO 16,939 341 328.90 1.04 (0.93-1.16) 0.89 6.84 (0.7407)
BMEN 11,304 333 290.98 1.14 (1.02-1.28) 0.87 24.97 (0.0054)
BMLN 17,373 304 345.87 0.88 (0.78-0.99) 0.91 31.50 (0.0005)
CKEN 10,575 241 226.06 1.07 (0.93-1.21) 0.87 14.10 (0.1683)
Minimum 0.76 0.81
Maximum 1.14 0.91

The results from validation data 2 in the same hospitals (which had more than 200 deaths during the

observation period) of Model 4-2 (demographic, admission and modified WHO principal diagnosis)

are shown in Table 10. Hospitals BKIN and BOIN were found to have the lowest SMR values and the

95% confidence intervals did not include 1.0 indicating that there were significantly (P<0.05) fewer

deaths occurring in these hospitals than were predicted to occur by the developed model.


Hospital BMLN was found to have the best discrimination (AUC=0.90), and hospital BKEN was found

to have the lowest discrimination (AUC=0.81).

Table 10: Validation data 2-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital (deaths >200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L
BKGO 24,624 733 780.10 0.94 (0.87-1.01) 0.83 9.35 (0.499)
BKEN 23,278 631 615.46 1.03 (0.95-1.11) 0.81 67.65 (0.000)
BLLN 19,638 612 575.41 1.06 (0.98-1.15) 0.85 10.33 (0.412)
BNGR 25,904 610 610.07 1.00 (0.92-1.08) 0.85 2.60 (0.989)
BKIN 20,139 552 632.46 0.87 (0.80-0.95) 0.87 19.82 (0.031)
BLKN 20,716 543 536.58 1.01 (0.93-1.10) 0.86 13.25 (0.210)
CMFN 19,860 521 508.90 1.02 (0.94-1.12) 0.88 16.28 (0.091)
CKIN 22,559 455 498.25 0.91 (0.83-1.00) 0.87 10.32 (0.413)
CLEO 18,731 371 376.71 0.98 (0.89-1.09) 0.89 12.01 (0.284)
BMLN 17,934 338 376.01 0.90 (0.80-1.00) 0.90 24.39 (0.006)
BMEN 11,368 330 294.54 1.12 (1.00-1.25) 0.87 18.41 (0.048)
BOIN 19,345 295 475.88 0.62 (0.55-0.70) 0.84 78.76 (0.000)
CKEN 10,577 208 223.34 0.93 (0.81-1.07) 0.86 6.59 (0.763)
Minimum 0.62 0.81
Maximum 1.12 0.90

Validation in hospitals with fewer than 200 fatalities per year

The results from both validation data 1 and 2 hospitals which had fewer than 200 deaths during the

observation period, of Model 4-2 (demographic, admission and Modified WHO principal diagnosis) are shown in Table 11 and Table 12. Due to the small number of deaths

included in these analyses the results should not be over-interpreted as their precision may be less

than optimal.

Table 11: Validation data 1-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital (deaths <200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L
BKFO 9,544 190 214.69 0.88 (0.76-1.02) 0.87 15.00 (0.1321)
BLFO 7,699 167 156.83 1.06 (0.91-1.24) 0.88 7.04 (0.7215)
BPIN 4,731 160 145.53 1.10 (0.93-1.29) 0.76 23.82 (0.0081)
COHN 7,684 152 160.26 0.95 (0.80-1.12) 0.86 8.45 (0.5853)
CLJN 5,828 124 129.78 0.96 (0.79-1.14) 0.89 5.84 (0.8287)
BLIN 5,605 110 121.32 0.91 (0.74-1.10) 0.87 6.93 (0.7325)
CNFN 5,296 67 112.83 0.59 (0.46-0.76) 0.87 23.51 (0.0090)
CKJN 4,150 62 81.30 0.76 (0.58-0.98) 0.89 11.41 (0.3268)
CLKN 3,764 62 78.28 0.79 (0.60-1.02) 0.91 11.66 (0.3081)
BKKO 3,112 48 56.87 0.84 (0.62-1.13) 0.93 11.38 (0.3290)
Minimum 0.59 0.76
Maximum 1.14 0.93


Table 12: Validation data 2-Model 4-2 Demographic, admission and Modified WHO principal diagnosis by hospital (deaths <200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L

BKFO 9,890 180 221.81 0.81 (0.70-0.94) 0.86 13.55 (0.194)

COHN 7,608 163 156.26 1.04 (0.89-1.22) 0.89 11.42 (0.326)

BPIN 4,456 139 138.40 1.00 (0.84-1.19) 0.79 9.01 (0.531)

BLFO 7,793 135 155.06 0.87 (0.73-1.03) 0.86 6.28 (0.791)

CNFN 5,179 107 118.78 0.90 (0.74-1.09) 0.88 7.42 (0.685)

BLIN 5,049 106 114.39 0.93 (0.76-1.12) 0.86 8.07 (0.621)

CLJN 5,863 104 125.03 0.83 (0.68-1.01) 0.90 9.51 (0.484)

CKJN 4,216 84 87.81 0.96 (0.76-1.19) 0.90 6.34 (0.785)

CLKN 3,690 67 82.02 0.82 (0.63-1.04) 0.89 8.70 (0.560)

BKKO 3,124 40 59.04 0.68 (0.48-0.93) 0.92 9.98 (0.442)

Minimum 0.68 0.79

Maximum 1.04 0.92


4.3.2 Model 20-2: Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis

The results from validation data 1 by hospital group of Model 20-2 (with demographic, admission,

Elixhauser comorbidities and Modified WHO principal diagnosis) are shown in Table 13.

The regional (R_ICU) hospital group was found to have the lowest SMR values and the associated

95% confidence interval did not include unity indicating that there were significantly (P<0.05) fewer

deaths occurring in this hospital group than were predicted to occur.

The 95% confidence intervals for the SMR for the metropolitan (M_ICU) and tertiary (T) hospital

groups included unity indicating that there was no significant (P>0.05) difference between the

observed and predicted deaths occurring in these groups.

The regional (R_ICU) hospital group was found to have the best discrimination (AUC=0.88), while the

tertiary hospital group was found to have the lowest discrimination (AUC=0.85).

Table 13: Validation data 1-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital group

Group Episodes Observed Predicted SMR (95% CI) AUC H-L
M_ICU 109,152 2779 2746.97 1.01 (0.97–1.05) 0.87 55.51 (0.0000)
R_ICU 63,257 1223 1338.08 0.91 (0.86–0.97) 0.88 42.14 (0.0000)
T 130,838 3278 3361.10 0.98 (0.94–1.01) 0.85 15.27 (0.0541)
TOTAL 303,247 7280 7446.15 0.98 (0.95–1.00) 0.86 47.53 (0.0000)

The results from validation data 2 by hospital group of Model 20-2 (demographic, admission,

Elixhauser comorbidities and Modified WHO principal diagnosis) are shown in Table 14.

The regional (R_ICU) and tertiary (T) hospital groups were found to have the lowest SMR values, and the associated 95% confidence intervals did not include unity, indicating that there were significantly (P<0.05) fewer deaths occurring in these hospital groups than were predicted to occur. The 95% confidence interval for the SMR for the metropolitan (M_ICU) hospital group included unity, indicating that there was no significant (P>0.05) difference between the observed and predicted deaths occurring in this group.

The regional (R_ICU) hospital group was found to have the best discrimination (AUC=0.88), while the

tertiary hospital group was found to have the lowest discrimination (AUC=0.84).

Table 14: Validation data 2-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital group


Group Episodes Observed Predicted SMR (95% CI) AUC H-L
M_ICU 112,126 2863 2857.06 1.00 (0.96–1.04) 0.87 52.25 (0.0000)
R_ICU 62,989 1194 1348.44 0.89 (0.83–0.94) 0.88 36.95 (0.0000)
T 136,426 3267 3490.15 0.94 (0.90–0.97) 0.84 24.33 (0.0021)
TOTAL 311,541 7324 7695.64 0.95 (0.93–0.97) 0.86 59.30 (0.0000)

Validation in hospitals with more than 200 fatalities per year

The results from validation data 1 in the hospitals which had more than 200 deaths during the

observation period, of Model 20-2 (demographic, admission, Elixhauser comorbidities and Modified

WHO principal diagnosis) are shown in Table 15.

Hospitals BKGO, BKIN and BOIN were found to have the lowest SMR values and the 95% confidence

intervals did not include unity indicating that there were significantly (P<0.05) fewer deaths

occurring in these hospitals than were predicted to occur. In contrast, the SMR for hospitals BNGR

and BMEN were the highest and the 95% confidence interval did not include unity indicating that

there were significantly (P<0.05) more deaths occurring in these hospitals than were predicted to

occur.

Hospital BMLN was found to have the best discrimination (AUC=0.91), while hospitals BKGO and

BKEN were found to have the lowest discrimination (AUC=0.82).

Table 15: Validation data 1-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital (deaths >200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L
BKGO 24,134 653 718.68 0.91 (0.84–0.98) 0.82 28.97 (0.0003)

BNGR 24,420 621 566.38 1.10 (1.01–1.19) 0.84 18.96 (0.0151)

BKEN 21,315 608 578.25 1.05 (0.97–1.14) 0.82 48.37 (0.0000)

BLKN 20,328 561 515.42 1.09 (1.00–1.18) 0.86 20.29 (0.0072)

BKIN 20,040 553 611.05 0.91 (0.83–0.99) 0.87 16.77 (0.0326)

BLLN 19,421 544 547.32 0.99 (0.91–1.08) 0.85 10.08 (0.2594)

CMFN 19,344 544 497.77 1.09 (1.00–1.19) 0.88 27.10 (0.0007)

CKIN 21,854 457 491.87 0.93 (0.84–1.02) 0.88 15.29 (0.0537)

BOIN 18,787 378 490.50 0.77 (0.69–0.85) 0.86 34.52 (0.0000)

CLEO 16,939 341 325.81 1.05 (0.94–1.17) 0.89 8.76 (0.3632)

BMEN 11,304 333 290.25 1.15 (1.02–1.28) 0.87 25.26 (0.0014)

BMLN 17,373 304 337.61 0.90 (0.80–1.01) 0.91 29.80 (0.0002)

CKEN 10,575 241 222.90 1.08 (0.95–1.23) 0.87 17.28 (0.0273)

    Minimum 0.77 0.82  

    Maximum 1.15 0.91  

The results from validation data 2 in hospitals which had more than 200 deaths during the

observation period, of Model 20-2 (demographic, admission, Elixhauser comorbidities and Modified

WHO principal diagnosis) are shown in Table 16.


Hospitals BKIN, CKIN and BOIN were found to have the lowest SMR values and the 95%

confidence intervals did not include 1.0 indicating that there were significantly (P<0.05) fewer

deaths occurring in these hospitals than were predicted to occur by the developed model.

Hospital BKGO was found to have the best discrimination (AUC=0.93), and hospital BKEN was found

to have the lowest discrimination (AUC=0.82).


Table 16: Validation data 2-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital (deaths >200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L
BKGO 24,624 733 769.06 0.95 (0.88–1.03) 0.93 13.42 (0.0982)
BKEN 23,278 631 619.08 1.02 (0.94–1.10) 0.82 53.48 (0.0000)
BLLN 19,638 612 562.41 1.09 (1.00–1.18) 0.85 20.52 (0.0086)
BNGR 25,904 610 601.85 1.01 (0.93–1.10) 0.85 6.03 (0.6438)
BKIN 20,139 552 626.98 0.88 (0.81–0.96) 0.87 20.86 (0.0075)
BLKN 20,716 543 527.78 1.03 (0.94–1.12) 0.86 11.83 (0.1589)
CMFN 19,860 521 503.03 1.04 (0.95–1.13) 0.88 18.33 (0.0189)
CKIN 22,559 455 507.66 0.90 (0.81–0.98) 0.87 13.77 (0.0880)
CLEO 18,731 371 372.38 1.00 (0.90–1.11) 0.89 10.25 (0.2480)
BMLN 17,934 338 368.60 0.92 (0.82–1.02) 0.90 24.67 (0.0018)
BMEN 11,368 330 293.88 1.12 (1.00–1.25) 0.88 21.05 (0.0070)
BOIN 19,345 295 464.73 0.63 (0.56–0.71) 0.84 70.98 (0.0000)
CKEN 10,577 208 219.14 0.95 (0.82–1.09) 0.86 5.88 (0.6608)
Minimum 0.63 0.82
Maximum 1.12 0.93

Validation in hospitals with fewer than 200 fatalities per year

The results from both validation data 1 and 2 in hospitals which had fewer than 200 deaths during

the observation period, of the demographic, admission, Elixhauser comorbidities and Modified

WHO principal diagnosis model are shown in Table 17 and Table 18.

Due to the small number of deaths included in these analyses the results should not be over-

interpreted as their precision may be less than optimal.

Table 17: Validation data 1-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital (deaths <200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L
BKFO 9,544 190 215.41 0.88 (0.76–1.02) 0.87 12.58 (0.1270)

BLFO 7,699 167 155.15 1.08 (0.92–1.26) 0.88 9.44 (0.3066)

BPIN 4,731 160 137.18 1.17 (0.99–1.37) 0.77 29.14 (0.0003)

COHN 7,684 152 159.60 0.95 (0.80–1.12) 0.86 9.83 (0.2769)

CLJN 5,828 124 129.85 0.95 (0.79–1.14) 0.89 6.91 (0.5460)

BLIN 5,605 110 122.71 0.90 (0.73–1.08) 0.87 7.59 (0.4741)

CNFN 5,296 67 114.09 0.59 (0.45–0.75) 0.87 24.72 (0.0017)

CKJN 4,150 62 82.06 0.76 (0.58–0.97) 0.89 10.62 (0.2245)

CLKN 3,764 62 78.33 0.79 (0.60–1.02) 0.9 12.29 (0.1389)

BKKO 3,112 48 57.98 0.83 (0.61–1.10) 0.92 5.18 (0.7378)

Minimum 0.59 0.77

Maximum 1.17 0.92


Table 18: Validation data 2-Model 20-2 Demographic, admission, Elixhauser comorbidities and Modified WHO principal diagnosis by hospital (deaths <200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L

BKFO 9,890 180 225.48 0.80 (0.68–0.93) 0.86 13.81 (0.0868)

COHN 7,608 163 160.20 1.02 (0.86–1.19) 0.89 14.61 (0.0672)

BPIN 4,456 139 129.77 1.07 (0.90–1.27) 0.81 11.29 (0.1860)

BLFO 7,793 135 154.78 0.87 (0.73–1.04) 0.86 6.31 (0.6120)

CNFN 5,179 107 120.71 0.89 (0.72–1.08) 0.88 5.86 (0.6624)

BLIN 5,049 106 114.86 0.92 (0.75–1.12) 0.85 7.53 (0.4804)

CLJN 5,863 104 124.40 0.84 (0.68–1.02) 0.9 13.20 (0.1051)

CKJN 4,216 84 88.87 0.95 (0.75–1.18) 0.9 5.72 (0.6780)

CLKN 3,690 67 80.88 0.83 (0.64–1.06) 0.89 10.50 (0.2319)

BKKO 3,124 40 59.11 0.68 (0.48–0.93) 0.91 10.03 (0.2633)

Minimum 0.63 0.81

Maximum 1.12 0.93


5. Cohort 2: COPE Victorian Intensive care hospitaladmissions

5.1 Abstract

OBJECTIVE: To develop and validate a model for predicting inpatient death in people attending a

Victorian hospital intensive care unit (ICU) based on demographic and clinical data.

DESIGN: Retrospective audit of hospital maintained administrative datasets: development-external

validation study. Predictive model development involved a systematic process of variable

reduction followed by multiple logistic regression.

SETTING: Twenty-three major Victorian public hospitals, Australia.

PATIENTS: Adult (>17 years) ICU episodes were included in the analyses. Data included 17,405

episodes between July 2004 and June 2005 (development data), 17,309 episodes between July 2005 and June 2006 (validation data 1) and 17,522 episodes between July 2006 and June 2007 (validation data 2).

MAIN OUTCOME MEASURES: Deaths in hospital. Performance of models was assessed by the area under the receiver operating characteristic curve (AUC), measuring discrimination, and by Hosmer-Lemeshow statistics and standardised mortality ratios (SMRs), assessing model calibration.

RESULTS: There were 2,054 deaths in the development data, 2,091 in the validation 1 data and 2,155

in the validation 2 data. A model that combined age, gender, admission characteristics, use of


mechanical ventilation, cardiac surgery procedure and principal diagnosis was found to have

excellent discrimination (AUC=0.84) and calibration (H-L x2=49.45, P=0.0000). This model had

comparable predictive performance to the previously developed COPE model despite containing

fewer variables (28 v 41).

External validation of the new COPE model confirmed the model discrimination and calibration

was stable (validation data 1: AUC=0.84, H-L x2=27.07 P=0.025 & SMR=1.01; validation data 2:

AUC=0.81, H-L x2=65.53 P=0.0000 & SMR=1.00).

Adding Elixhauser or Charlson comorbidities to the model did not improve predictive performance

(Elixhauser model: AUC=0.84, H-L x2=48.75 P=0.0000; Charlson model: AUC=0.84, H-L x2=48.83

P=0.0000).

Individual hospital analysis found high levels of discrimination (AUC≥0.80) in 16 of 23 (69.6%)

hospitals in the validation 1 and validation 2 data. Good levels of calibration (H-L>0.05) were found

in 20 of 23 (87.0%) hospitals in the validation 1 data and 15 of the 23(65.2%) hospitals in the

validation 2 data. Examination of the SMR also revealed high levels of calibration (SMR 95%CI

includes 1) in 22 of 23 (95.7%) hospitals in both the validation 1 and 21 of 23 (91.3%) validation 2

data sets.

CONCLUSIONS: Routinely collected administrative data can be used to predict in-hospital mortality

risk for critically ill patients with high levels of discrimination and calibration. The developed model

provides a useful method for monitoring ICU performance. Use of the WHO based principal

diagnosis classification resulted in a model with fewer variables and equal predictive performance

to the previously developed COPE model which used a locally derived principal diagnosis

classification. Adding Elixhauser or Charlson comorbidity information to the model did not improve

predictive performance.


5.2 Development results

Figure 3: Plot of mortality rates by age (x-axis: age, 17–97 years; y-axis: mortality rate, 0–0.5)—COPE

Figure 3 is a plot of mortality rates by age. It revealed a relatively linear relationship between age and mortality rate. However, the markedly increased risk at older ages and the relatively uniform rate at younger ages suggested that a quadratic relationship may exist between age and mortality. Examination of the R2 for age and for the quadratic transformation of age, however, found no significant gain in explained variation with the quadratic transformation (Table 19). Thus untransformed age was chosen as the candidate variable to use in the step-wise regression procedure.


Table 19: Univariate regression analysis results for age variables 04-05 development data—COPE

Variable Odds Ratio (95% CI) P R2

Age 1.03 (1.03–1.04) 0.000 0.0363
Age2 1.00 (1.00–1.00) 0.000 0.0376
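The comparison summarised in Table 19 can be sketched as two univariate logistic fits whose pseudo-R2 values are compared. The report does not name the R2 variant used, so McFadden's pseudo-R2 is assumed below, and the data are simulated placeholders rather than the actual 04-05 development episodes.

```python
import numpy as np
import statsmodels.api as sm

def mcfadden_r2(y, X):
    """Fit a logistic model and return McFadden's pseudo-R2 (assumed variant)."""
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
    return 1.0 - fit.llf / fit.llnull

# Simulated placeholder data standing in for the 04-05 COPE development episodes
rng = np.random.default_rng(1)
age = rng.uniform(17, 100, size=17_405)
death = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-4.5 + 0.035 * age))))

age_c = age - age.mean()   # centring improves numerical conditioning of the quadratic fit
print("Age:        ", mcfadden_r2(death, age_c))
print("Age + Age^2:", mcfadden_r2(death, np.column_stack([age_c, age_c ** 2])))
# Little gain from the quadratic term supports retaining untransformed age.
```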

Models Investigated

In cohort 2 (ICU admissions to major Victorian public hospitals), 20 models were estimated. The models included

combinations of the demographic, admission and diagnostic variables. The details of these models

are presented in Table 20.

Table 20: Models investigated in the 23 tertiary, metropolitan and regional Victorian ICU hospitals (ICU episodes)

Columns: Model and variables entered; Variables retained in the model; AUC; H-L x2 (P)

1-3 Demographic, admission, cardiac surgery and mechanical ventilation
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation
AUC: 0.82; H-L x2 (P): 42.50 (0.000)

2-3 Demographic, admission, cardiac surgery, mechanical ventilation and HOPE 2007 principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. HOPE 2007 principal diagnosis (36)
AUC: 0.85; H-L x2 (P): 48.66 (0.000)

3-3 Demographic, admission, cardiac surgery, mechanical ventilation and WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Cardiac surgery; 4. Mechanical ventilation; 5. WHO principal diagnosis (23)
AUC: 0.84; H-L x2 (P): 43.86 (0.000)

4-3 Demographic, admission, cardiac surgery, mechanical ventilation and Modified WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. Modified WHO principal diagnosis (23)
AUC: 0.84; H-L x2 (P): 49.45 (0.000)

5-3 Demographic, admission, cardiac surgery, mechanical ventilation and Charlson comorbidity score
Variables retained: 1. Age (years); 2. ED admission; 3. Inter-hospital transfer; 4. Admitted from RACF; 5. Cardiac surgery; 6. Mechanical ventilation; 7. Charlson score
AUC: 0.82; H-L x2 (P): 44.10 (0.000)

6-3 Demographic, admission, cardiac surgery, mechanical ventilation and Elixhauser comorbidity score
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. Elixhauser score
AUC: 0.82; H-L x2 (P): 42.50 (0.000)

7-3 Demographic, admission, cardiac surgery, mechanical ventilation and Charlson comorbidities
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. Charlson comorbidities (2)
AUC: 0.82; H-L x2 (P): 41.96 (0.000)

8-3 Demographic, admission, cardiac surgery, mechanical ventilation and Elixhauser comorbidities
Variables retained: 1. Age (years); 2. ED admission; 3. Inter-hospital transfer; 4. Admitted from RACF; 5. Cardiac surgery; 6. Mechanical ventilation; 7. Elixhauser comorbidities (1)
AUC: 0.82; H-L x2 (P): 42.69 (0.000)

9-3 Demographic, admission, cardiac surgery, mechanical ventilation, Charlson comorbidity score and HOPE 2007 principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. HOPE 2007 principal diagnosis (36)
AUC: 0.85; H-L x2 (P): 48.66 (0.000)

10-3 Demographic, admission, cardiac surgery, mechanical ventilation, Elixhauser comorbidity score and HOPE 2007 principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. HOPE 2007 principal diagnosis (66)
AUC: 0.85; H-L x2 (P): 48.66 (0.000)

11-3 Demographic, admission, cardiac surgery, mechanical ventilation, Charlson comorbidities and HOPE 2007 principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. Charlson comorbidities (1); 7. HOPE 2007 principal diagnosis (36)
AUC: 0.85; H-L x2 (P): 49.40 (0.000)

12-3 Demographic, admission, cardiac surgery, mechanical ventilation, Elixhauser comorbidities and HOPE 2007 principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. Elixhauser comorbidities (1); 7. HOPE 2007 principal diagnosis (35)
AUC: 0.85; H-L x2 (P): 50.41 (0.000)

13-3 Demographic, admission, cardiac surgery, mechanical ventilation, Charlson comorbidity score and WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Inter-hospital transfer; 4. Admitted from RACF; 5. Cardiac surgery; 6. Mechanical ventilation; 7. Charlson score; 8. WHO principal diagnosis (23)
AUC: 0.84; H-L x2 (P): 45.02 (0.000)

14-3 Demographic, admission, cardiac surgery, mechanical ventilation, Elixhauser comorbidity score and WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Cardiac surgery; 4. Mechanical ventilation; 5. WHO principal diagnosis (23)
AUC: 0.84; H-L x2 (P): 43.86 (0.000)

15-3 Demographic, admission, cardiac surgery, mechanical ventilation, Charlson comorbidities and WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Cardiac surgery; 4. Mechanical ventilation; 5. Charlson comorbidities (1); 6. WHO principal diagnosis (23)
AUC: 0.84; H-L x2 (P): 43.24 (0.000)

16-3 Demographic, admission, cardiac surgery, mechanical ventilation, Elixhauser comorbidities and WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Cardiac surgery; 4. Mechanical ventilation; 5. Elixhauser comorbidities (1); 6. WHO principal diagnosis (23)
AUC: 0.84; H-L x2 (P): 43.21 (0.000)

17-3 Demographic, admission, cardiac surgery, mechanical ventilation, Charlson comorbidity score and Modified WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Cardiac surgery; 4. Mechanical ventilation; 5. Modified WHO principal diagnosis (22)
AUC: 0.84; H-L x2 (P): 49.45 (0.000)

18-3 Demographic, admission, cardiac surgery, mechanical ventilation, Elixhauser comorbidity score and Modified WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. WHO principal diagnosis (22)
AUC: 0.84; H-L x2 (P): 49.45 (0.000)

19-3 Demographic, admission, cardiac surgery, mechanical ventilation, Charlson comorbidities and Modified WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. Charlson comorbidities (1); 7. Modified WHO principal diagnosis (22)
AUC: 0.84; H-L x2 (P): 48.83 (0.000)

20-3 Demographic, admission, cardiac surgery, mechanical ventilation, Elixhauser comorbidities and Modified WHO principal diagnosis
Variables retained: 1. Age (years); 2. ED admission; 3. Admitted from RACF; 4. Cardiac surgery; 5. Mechanical ventilation; 6. Elixhauser comorbidities (1); 7. Modified WHO principal diagnosis (28)
AUC: 0.84; H-L x2 (P): 48.75 (0.000)

Note: AUC = Area under the receiver operating curve; H-L = Hosmer-Lemeshow; x2 = Pearson's Chi square; P = Probability value

A model including demographic, admission, cardiac surgery and mechanical ventilation variables

(Model 1-3) was found to have the lowest discrimination and calibration of the models tested

(Table 20). The addition of either the Charlson or Elixhauser comorbidity scores or individual

comorbidities did not improve calibration or discrimination (Table 20: Models 5-3–8-3).

Models including demographic, admission, cardiac surgery, mechanical ventilation and principal

diagnosis variables (Models 2-3–4-3) were found to have greater discrimination and calibration

than the demographic and admission only model (Model 1-3).

The HOPE 2007 principal diagnosis model had a substantially higher number of diagnostic variables

than the WHO and modified WHO models (e.g. 41 v 28) (Table 20). The addition of either Charlson

or Elixhauser comorbidity scores or individual comorbidities (Models 9-3–20-3) did not appear to

improve the discrimination or calibration of the models.


Based on these findings model 4-3 was selected as the final model to be validated in the validation

data 1 & 2. This model included demographic, admission, cardiac surgery, mechanical ventilation

and Modified WHO principal diagnosis variables (Table 21).

Table 21: Model 4-3 Demographic, admission, cardiac surgery, mechanical ventilation and Modified WHO principal diagnosis variables

Independent Variable  Odds Ratio (95% CI)  P

Age (years) 1.04 (1.04–1.05) 0.000
Emergency admission 1.99 (1.72–2.31) 0.000

WHO M-Shock and haemorrhage 3.36 (1.43–7.90) 0.005

Admitted from a RACF 2.51 (1.13–5.55) 0.024

WHO M-Poisoning 0.31 (0.19–0.51) 0.000

Mechanical ventilation use 6.65 (5.92–7.47) 0.000

Cardiac surgery procedure 0.09 (0.07–0.13) 0.000

WHO M-Bacterial diseases 2.69 (2.09–3.46) 0.000

WHO M-Pneumonia 1.77 (1.40–2.25) 0.000

WHO M-Protozoal diseases 3.86 (1.20–12.43) 0.024

WHO M-Diseases of the liver 2.77 (1.91–4.03) 0.000

WHO M-Malignant neoplasm of digestive organs 0.68 (0.50–0.93) 0.016

WHO M-Malignant neoplasm of the respiratory and intrathoracic organs 2.51 (1.57–4.02) 0.000

WHO M-Disorders of the gallbladder, biliary tract and pancreas 0.64 (0.45–0.92) 0.017

WHO M-Injury 0.74 (0.57–0.96) 0.022

WHO M-Remainder of respiratory diseases 1.43 (1.09–1.86) 0.009

WHO M-Malignant neoplasms of ill-defined, secondary and unspecified sites 3.22 (2.10–4.94) 0.000

WHO M-Malignant neoplasms of lymphoid, haematopoietic and related tissues 6.94 (4.70–10.25) 0.000

WHO M-Diseases of the genitourinary system 1.65 (1.11–2.44) 0.013

WHO M-Anaemias 8.07 (3.15–20.67) 0.000

WHO M-Type 2 Diabetes 1.61 (1.13–2.31) 0.009

WHO M-Disorders of the nervous system 1 0.21 (0.09–0.48) 0.000

WHO M-Disorders of the nervous system 2 4.60 (2.48–8.54) 0.000

WHO M-IHD 0.33 (0.21–0.51) 0.000

WHO M-Other heart disease 1.43 (1.17–1.76) 0.001

WHO M-Cerebrovascular diseases 2.33 (1.89–2.87) 0.000

WHO M-Arterial disease 0.65 (0.48–0.88) 0.006


5.3 COPE model validation results

The development results showed that a model combining age, gender, admission characteristics, use of mechanical ventilation, cardiac surgery procedure and principal diagnosis had excellent discrimination (AUC=0.84) and calibration (H-L x2=49.45, P=0.0000).

External validation confirmed the model discrimination and calibration was stable (validation data

1: AUC=0.84, H-L x2=27.07, P=0.025 & SMR=1.01; validation data 2: AUC=0.81, H-L x2=65.53, P=0.0000 &

SMR=1.00).

Individual hospital analysis found high levels of discrimination (AUC≥0.80) in 16 of 23 (69.6%)

hospitals in the validation 1 and validation 2 data. Good levels of calibration (H-L P>0.05) were found in 20 of 23 (87.0%) hospitals in the validation 1 data and 15 of 23 (65.2%) hospitals in the validation 2 data. Examination of the SMR also revealed high levels of calibration (SMR 95% CI includes 1) in 22 of 23 (95.7%) hospitals in the validation 1 data and 21 of 23 (91.3%) hospitals in the validation 2 data.

The associated 95% confidence intervals for the validation 1 and 2 data included unity indicating

that there was no significant (P>0.05) difference between the observed and expected deaths

occurring in the validation 1 and 2 cohorts.

Table 22: Validation data 1-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital group

Group Episodes Observed Predicted SMR (95% CI) AUC H-L

M_ICU 4217 640 630.53 1.02 (0.94–1.10) 0.82 21.30 (0.0191)

R_ICU 4315 337 374.32 0.90 (0.80–1.00) 0.82 26.98 (0.0026)

T 8772 1114 1073.15 1.04 (0.98–1.10) 0.84 21.32 (0.0190)

Total 17,304 2091 2078.00 1.01 (0.96–1.05) 0.81 65.53 (0.0000)

The results from validation data 1 by hospital group of Model 4-3 (demographic, admission, use of

mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis model)

are shown in Table 22. The 95% confidence intervals for the SMR for all hospital groups included

unity indicating that there was no significant (P>0.05) difference between the observed and

predicted deaths occurring in all hospital groups.

The tertiary hospital group was found to have the best discrimination (AUC=0.84), while the

metropolitan and regional ICU hospital groups were found to have the lowest discrimination

(AUC=0.82).


Table 23: Validation data 2-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital group

Group Episodes Observed Predicted SMR (95% CI) AUC H-L

M_ICU 4,302 631 645.74 0.98 (0.90–1.06) 0.82 16.00 (0.0997)

R_ICU 3,988 377 376.55 1.00 (0.90–1.11) 0.86 28.68 (0.0014)

T 9,232 1,147 1129.49 1.02 (0.96–1.08) 0.82 35.44 (0.0001)

Total 17,522 2,155 2151.78 1.00 (0.96–1.05) 0.83 25.07 (0.0250)

The results from validation data 2 by hospital group of Model 4-3 (demographic, admission, use of

mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis model)

are shown in Table 23. The 95% confidence intervals for the SMR for all hospital groups included

unity indicating that there was no significant (P>0.05) difference between the observed and

predicted deaths occurring in all hospital groups.

The regional hospital group was found to have the best discrimination (AUC=0.86), while the

metropolitan and tertiary hospital groups were found to have the lowest discrimination (AUC=0.82).

Validation in hospitals with more than 200 fatalities per year

The results from validation data 1 by hospitals which had more than 200 deaths during the

observation period, of Model 4-3 (demographic, admission, use of mechanical ventilation, cardiac

surgery procedure and Modified WHO principal diagnosis model) are shown in Table 24.

The 95% confidence intervals for the SMR for all hospitals with more than 200 deaths included unity

indicating that there was no significant (P>0.05) difference between the observed and predicted

deaths occurring in these hospitals. Hospital BNGR was found to have the best discrimination

(AUC=0.85), while hospital BKGO was found to have the lowest discrimination (AUC=0.78).

Table 24: Validation data 1-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital (deaths >200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L

BKEN 1,764 265 233.05 1.14 (1.00–1.29) 0.82 19.04 (0.0398)

BNGR 1,820 240 213.52 1.12 (0.98–1.28) 0.85 6.39 (0.7814)

BKGO 1,728 203 220.16 0.92 (0.80–1.06) 0.78 37.39 (0.0000)

The results from validation data 2 by hospitals which had more than 200 deaths during the

observation period, of Model 4-3 (demographic, admission, use of mechanical ventilation, cardiac

surgery procedure and Modified WHO principal diagnosis model) are shown in Table 25.

Hospital BKEN was found to have the highest SMR value and the 95% confidence interval did not include 1.0, indicating that there were significantly (P<0.05) more deaths occurring in this hospital

than were predicted to occur by the developed model. The 95% confidence intervals for the SMR

for hospitals BKGO and BNGR included unity indicating that there was no significant (P>0.05)


difference between the observed and predicted deaths occurring in these hospitals. Hospital

BNGR was found to have the best discrimination (AUC=0.82), and hospitals BKEN and BKGO were

found to have the lowest discrimination (AUC=0.80).

Table 25: Validation data 2-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital (deaths >200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L

BKEN 1,942 275 241.42 1.14 (1.01–1.29) 0.80 30.06 (0.0008)

BKGO 1,999 247 253.38 0.97 (0.85–1.11) 0.80 31.23 (0.0005)

BNGR 1,845 209 222.01 0.94 (0.82–1.08) 0.82 22.04 (0.0149)

Validation in hospitals with fewer than 200 fatalities per year

The validation data 1 and 2 results by hospitals which had fewer than 200 deaths during the

observation period, of the demographic, admission, use of mechanical ventilation, cardiac surgery

procedure and Modified WHO principal diagnosis model are shown in Table 26 and Table 27. Due

to the small number of deaths included in these analyses the results should not be over-interpreted

as their precision may be less than optimal.

Table 26: Validation data 1-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital (deaths <200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L

BOIN 1,064 153 143.43 1.07 (0.90–1.25) 0.82 10.78 (0.3746)

BLLN 790 152 141.96 1.07 (0.90–1.26) 0.83 12.15 (0.2749)

BKIN 617 135 127.81 1.06 (0.88–1.25) 0.78 8.30 (0.5997)

CKIN 1,384 129 130.82 0.99 (0.82–1.18) 0.84 9.28 (0.5055)

BLKN 1,012 124 132.17 0.94 (0.78–1.12) 0.79 20.23 (0.0272)

CMFN 595 92 91.83 1.00 (0.80–1.23) 0.80 6.42 (0.7785)

CLEO 759 91 97.25 0.94 (0.75–1.15) 0.79 8.76 (0.5548)

BMLN 721 89 89.42 1.00 (0.80–1.23) 0.81 8.71 (0.5594)

BLFO 683 64 60.03 1.07 (0.82–1.37) 0.87 16.30 (0.0915)

BMEN 520 58 63.12 0.92 (0.69–1.19) 0.83 6.19 (0.7991)

BLIN 806 46 55.94 0.82 (0.60–1.10) 0.83 10.51 (0.3970)

BKFO 150 45 39.05 1.15 (0.83–1.55) 0.75 7.09 (0.7166)

CKEN 177 35 36.20 0.97 (0.67–1.35) 0.82 3.53 (0.9660)

COHN 342 30 32.61 0.92 (0.61–1.32) 0.90 7.43 (0.6846)

CLKN 626 29 43.74 0.66 (0.44–0.96) 0.85 8.22 (0.6074)

CKJN 462 28 33.96 0.82 (0.54–1.20) 0.87 4.15 (0.9402)

BPIN 215 23 19.14 1.20 (0.75–1.82) 0.79 9.66 (0.4706)

CLJN 392 23 25.08 0.92 (0.57–1.39) 0.87 3.19 (0.9765)

CNFN 419 21 29.71 0.71 (0.43–1.09) 0.80 5.50 (0.8554)

BKKO 258 16 18.00 0.89 (0.50–1.46) 0.70 4.92 (0.8967)


Table 27: Validation data 2-Model 4-3 Demographic, admission, use of mechanical ventilation, cardiac surgery procedure and Modified WHO principal diagnosis by hospital (deaths <200)

Hospital Episodes Observed Predicted SMR (95% CI) AUC H-L

BLKN 1127 161 153.76 1.05 (0.89–1.23) 0.80 52.44 (0.0000)

BLLN 737 150 127.27 1.18 (0.99–1.39) 0.81 26.93 (0.0027)

BOIN 1,001 133 125.76 1.06 (0.88–1.26) 0.81 22.04 (0.0149)

CMFN 735 132 120.27 1.10 (0.91–1.31) 0.84 9.50 (0.4849)

CKIN 1,318 122 133.16 0.92 (0.76–1.10) 0.83 13.42 (0.2013)

BKIN 572 111 119.04 0.93 (0.76–1.13) 0.71 22.80 (0.0115)

CLEO 867 85 96.95 0.88 (0.70–1.09) 0.77 13.20 (0.2126)

BMLN 761 81 100.74 0.80 (0.64–1.00) 0.81 13.66 (0.1889)

BLFO 684 72 62.78 1.15 (0.89–1.45) 0.83 13.70 (0.1873)

BLIN 784 57 62.26 0.92 (0.69–1.19) 0.87 7.15 (0.7110)

BMEN 412 51 59.78 0.85 (0.63–1.13) 0.78 15.06 (0.1299)

CNFN 453 44 30.93 1.42 (1.03–1.92) 0.76 17.54 (0.0632)

BKFO 161 42 40.99 1.02 (0.73–1.39) 0.65 9.69 (0.4681)

CKEN 187 40 33.39 1.20 (0.85–1.64) 0.65 26.63 (0.0030)

CKJN 380 37 32.06 1.15 (0.81–1.60) 0.83 13.88 (0.1783)

CLKN 558 27 42.63 0.63 (0.41–0.93) 0.81 13.21 (0.2120)

COHN 83 23 21.57 1.07 (0.67–1.61) 0.76 11.05 (0.3538)

BPIN 218 21 21.69 0.97 (0.59–1.49) 0.84 10.94 (0.3619)

CLJN 405 19 30.48 0.62 (0.37–0.98) 0.86 7.74 (0.6539)

BKKO 293 16 19.46 0.82 (0.46–1.35) 0.84 5.62 (0.8464)


6. Summary

6.1 Use of the WHO based principal diagnosis classification grouping

Use of the WHO based principal diagnosis classification groupings resulted in HOPE and COPE models with fewer variables than the previously developed HOPE and COPE models, which were based on a locally derived diagnosis classification. Despite having a smaller number of variables, predictive performance was comparable to that of the previous HOPE and COPE models.

Model parsimony is a priority consideration in predictive model development as it minimises the chance of over-fitting and facilitates statistical stability, generality and clinical utility. The WHO-based diagnosis classification groupings are also considered to offer other advantages over the locally derived diagnosis classification, such as greater national and international acceptance.

6.2 Inclusion of comorbidity information

Adding Elixhauser or Charlson comorbidity information to the models did not improve predictive performance. This finding was surprising and novel in light of other research that has found comorbidity information to offer significant predictive ability in hospital mortality prediction models.

There are several possible explanations for this finding. First, models including demographic, admission and principal diagnosis information were found to have very high levels of predictive performance, and as such the inclusion of further diagnostic information such as comorbidities provided little added value. In essence, a predictive ceiling effect may have occurred.
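
As an illustration of how this incremental value can be assessed, the sketch below fits a baseline logistic regression model and a second model augmented with a block of comorbidity flags on simulated data, and compares their discrimination (AUC). This is not the analysis performed for this report: all variable names, effect sizes and data are hypothetical, and are constructed so that the comorbidity block adds little to an already well-discriminating baseline.

```python
# Illustrative sketch only: compare discrimination (AUC) of a baseline mortality
# model with and without a block of comorbidity flags.  All data are simulated
# and all names/effect sizes are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000

age = rng.integers(18, 95, n)           # demographic variable
emergency = rng.integers(0, 2, n)       # admission-type flag
diag_risk = rng.normal(0, 1, n)         # stand-in for principal-diagnosis risk
comorb = rng.integers(0, 2, (n, 5))     # five hypothetical comorbidity flags

# Simulated outcome: mortality driven mainly by age, admission and diagnosis,
# with only a weak comorbidity contribution (a "ceiling effect" scenario).
logit = -6 + 0.05 * age + 0.8 * emergency + 1.2 * diag_risk + comorb @ np.full(5, 0.1)
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_base = np.column_stack([age, emergency, diag_risk])
X_full = np.column_stack([X_base, comorb])

train = rng.random(n) < 0.5             # simple split-sample validation
for name, X in [("baseline", X_base), ("baseline + comorbidities", X_full)]:
    model = LogisticRegression(max_iter=1000).fit(X[train], death[train])
    auc = roc_auc_score(death[~train], model.predict_proba(X[~train])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```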


Second, there may be an association between comorbidities and readmission to hospital. Readmission may lead to clustering and auto-correlation of episode observations. Several readmissions by the same patient with the same comorbidity may dilute the statistical association of that comorbidity with eventual death, despite a strong clinical association between the two. We were unable to identify readmissions from the datasets used for this investigation. This hypothesis requires further investigation.

Third, the selection criterion for the diagnoses included in the calculation of the Charlson and Elixhauser comorbidity indices was the prefix to the diagnosis field. The models reported included only diagnoses prefixed "A". Use of other prefixed diagnoses, or combinations of these, may have produced different comorbidity scores and, in turn, models including more or fewer Charlson or Elixhauser comorbidities, with different regression coefficients and overall predictive characteristics. To investigate this, a preliminary analysis was undertaken in which all "A" and "P" prefixed diagnoses were included in the calculation of the comorbidity scores. The initial findings suggested that using "A" and "P" prefixed diagnoses produced a HOPE model that included more comorbidities but offered no greater predictive ability, and poorer calibration, than the model without comorbidities or the model including only "A" prefixed diagnoses. Further investigation into the change in predictive accuracy with inclusion of different diagnostic prefixes is required.
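
A minimal sketch of this prefix-based selection is shown below. It assumes a long-format diagnosis table with hypothetical column names (episode_id, prefix, icd10) and uses a deliberately truncated code-to-comorbidity mapping for illustration; in practice a full published mapping, such as the algorithms of Quan et al. [16], would be substituted, and the prefix values would be those recorded in the VAED.

```python
# Illustrative sketch only: derive comorbidity flags from prefixed diagnosis codes,
# restricting to a chosen set of prefixes (e.g. "A" only, or "A" and "P").
# Column names, example data and the truncated ICD-10 mapping are hypothetical;
# a full Elixhauser/Charlson mapping would be used in practice.
import pandas as pd

diagnoses = pd.DataFrame({
    "episode_id": [1, 1, 1, 2, 2, 3],
    "prefix":     ["P", "A", "A", "P", "A", "A"],   # diagnosis prefixes as recorded
    "icd10":      ["I21", "E112", "I500", "J441", "C80", "N179"],
})

# Truncated example mapping from three-character ICD-10 codes to comorbidity groups.
COMORBIDITY_MAP = {
    "I50": "congestive_heart_failure",
    "E11": "diabetes",
    "J44": "chronic_pulmonary_disease",
    "N17": "renal_failure",
    "C80": "metastatic_cancer",
}

def comorbidity_flags(df: pd.DataFrame, prefixes: set) -> pd.DataFrame:
    """Return one row per episode with 0/1 flags for each comorbidity group,
    using only diagnoses whose prefix is in `prefixes`."""
    subset = df[df["prefix"].isin(prefixes)].copy()
    subset["group"] = subset["icd10"].str[:3].map(COMORBIDITY_MAP)
    subset = subset.dropna(subset=["group"])
    flags = pd.crosstab(subset["episode_id"], subset["group"]).clip(upper=1)
    return flags.reindex(df["episode_id"].unique(), fill_value=0)

print(comorbidity_flags(diagnoses, {"A"}))        # "A"-prefixed diagnoses only
print(comorbidity_flags(diagnoses, {"A", "P"}))   # "A" and "P" prefixed diagnoses
```

Comparing the two calls shows how widening the prefix set can change the derived comorbidity flags, and hence which comorbidities enter the regression and with what coefficients.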

Fourth, the calculation of comorbidity indices from Victorian hospital administrative data may be

inaccurate. Despite international studies finding that hospital administrative data sets can be used

to accurately calculate Charlson and Elixhauser comorbidity scores, to our knowledge no similar

studies have been conducted in Australia.

Fifth, although several comorbidities can be included in the VAED, the coding of comorbidities is optional. This may have led to under-reporting of comorbidities, although we have no data to support this.

Finally, comorbidities may be more important in selected diagnoses than in the overall hospital

population. Several of these possible explanations warrant further investigation in the next phase.

6.3 Future directions

The findings of this study have led to the identification of several hypotheses that warrant

investigation and may further improve the performance of the HOPE and COPE models. These are:

• Use of variance adjustment to account for the presence of clustered, and likely autocorrelated, data.

• Investigation of the added predictive value of a quadratic transformation in the older age groups, e.g. age >40 years in the HOPE analysis and >65 years in the COPE analysis (an illustrative sketch of this approach follows this list).


• Investigation of the added predictive value of interaction terms in both the HOPE and COPE models.

• Investigation of the predictive value of survival models which incorporate time-to-

death analysis.

• Investigation of the predictive value of step-wise regression which uses Akaike

Information Criterion (AIC) or Bayesian Information Criterion (BIC) as the decision rule.

• Investigation of the predictive value of Bayesian shrinkage in the regression analysis

to account for the regression to the mean in mortality rates [3].

• Investigation of the model performance in condition-specific sub-groups of interest such as cardiac surgery, fractured neck of femur and stroke, or high-mortality diagnoses such as pneumonia, septicaemia and AMI.

• Investigation of the model performance in inter-state and international data sets and hospitals.

• Investigation of co-morbidity scores obtained from administrative data compared

with those obtained from manual chart audit.
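
As a starting point for two of these hypotheses, the sketch below (simulated data; hypothetical variable names, thresholds and coefficients) adds a quadratic term for age above a threshold to a logistic model and compares the candidate specifications by AIC.

```python
# Illustrative sketch only: compare a linear-age logistic model with one adding a
# quadratic term for age above a threshold (e.g. >40 years), using AIC as the
# decision rule.  Data, names and coefficients are simulated/hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000
age = rng.integers(18, 95, n).astype(float)
emergency = rng.integers(0, 2, n).astype(float)

# Simulated mortality risk that rises non-linearly in the older age groups.
excess_age = np.clip(age - 40, 0, None)
logit = -7 + 0.04 * age + 0.002 * excess_age ** 2 + 0.7 * emergency
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_linear = sm.add_constant(np.column_stack([age, emergency]))
X_quad = sm.add_constant(np.column_stack([age, excess_age ** 2, emergency]))

fit_linear = sm.Logit(death, X_linear).fit(disp=0)
fit_quad = sm.Logit(death, X_quad).fit(disp=0)

# A lower AIC favours the quadratic-age specification only if the extra term
# improves fit enough to justify the additional parameter.
print(f"AIC, linear age:    {fit_linear.aic:.1f}")
print(f"AIC, quadratic age: {fit_quad.aic:.1f}")
```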


References

1. Pitches, D.W., M.A. Mohammed, and R.J. Lilford, What is the empirical evidence that hospitals with higher risk-adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res, 2007. 7: p. 91.
2. Matheny, M.E., L. Ohno-Machado, and F.S. Resnic, Risk-adjusted sequential probability ratio test control chart methods for monitoring operator and institutional mortality rates in interventional cardiology. Am Heart J, 2008. 155(1): p. 114-20.
3. Werner, R.M. and E.T. Bradlow, Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA, 2006. 296(22): p. 2694-2702.
4. Martins, M. and R. Blais, Evaluation of comorbidity indices for inpatient mortality prediction models. J Clin Epidemiol, 2006. 59(7): p. 665-9.
5. Baldwin, L.M., C.N. Klabunde, P. Green, W. Barlow, and G. Wright, In search of the perfect comorbidity measure for use with administrative claims data: does it exist? Med Care, 2006. 44(8): p. 745-53.
6. Cook, D.A., Methods to assess performance of models estimating risk of death in intensive care patients: a review. Anaesth Intensive Care, 2006. 34(2): p. 164-75.
7. Scott, I.A., P.L. Thomson, and S. Narasimhan, Comparing risk-prediction methods using administrative or clinical data in assessing excess in-hospital mortality in patients with acute myocardial infarction. Med J Aust, 2008. 188(6): p. 332-6.
8. Aylin, P., A. Bottle, and A. Majeed, Use of administrative data or clinical databases as predictors of risk of death in hospital: comparison of models. BMJ, 2007. 334(7602): p. 1044.
9. Cook, D.A., S.H. Steiner, R.J. Cook, V.T. Farewell, and A.P. Morton, Monitoring the evolutionary process of quality: risk-adjusted charting to track outcomes in intensive care. Crit Care Med, 2003. 31(6): p. 1676-82.
10. O'Connor, G.T., S.K. Plume, E.M. Olmstead, et al., Multivariate prediction of in-hospital mortality associated with coronary artery bypass graft surgery. Northern New England Cardiovascular Disease Study Group. Circulation, 1992. 85(6): p. 2110-8.
11. Klabunde, C.N., A.L. Potosky, J.M. Legler, and J.L. Warren, Development of a comorbidity index using physician claims data. J Clin Epidemiol, 2000. 53(12): p. 1258-67.
12. Charlson, M.E., P. Pompei, K.L. Ales, and C.R. MacKenzie, A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis, 1987. 40(5): p. 373-83.
13. Di Bari, M., A. Virgillo, D. Matteuzzi, et al., Predictive validity of measures of comorbidity in older community dwellers: the Insufficienza Cardiaca negli Anziani Residenti a Dicomano Study. J Am Geriatr Soc, 2006. 54(2): p. 210-6.
14. Elixhauser, A., C. Steiner, D.R. Harris, and R.M. Coffey, Comorbidity measures for use with administrative data. Med Care, 1998. 36(1): p. 8-27.
15. D'Hoore, W., C. Sicotte, and C. Tilquin, Risk adjustment in outcome assessment: the Charlson comorbidity index. Methods Inf Med, 1993. 32(5): p. 382-7.
16. Quan, H., V. Sundararajan, P. Halfon, et al., Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Med Care, 2005. 43(11): p. 1130-9.
17. Van den Poel, D. and B. Larivière, Customer attrition analysis for financial services using proportional hazard models. European Journal of Operational Research, 2004. 157(1): p. 196-217.
18. Koenig, W., D. Twardella, H. Brenner, and D. Rothenbacher, Lipoprotein-associated phospholipase A2 predicts future cardiovascular events in patients with coronary heart disease independently of traditional risk factors, markers of inflammation, renal function, and hemodynamic stress. Arterioscler Thromb Vasc Biol, 2006. 26(7): p. 1586-93.
19. Hosmer, D.W. and S. Lemeshow, Applied logistic regression. 2nd ed. 2000, New York: Wiley. xii, 373 p.
20. Kramer, A.A. and J.E. Zimmerman, Assessing the calibration of mortality benchmarks in critical care: The Hosmer-Lemeshow test revisited. Crit Care Med, 2007. 35(9): p. 2052-6.
21. Akobeng, A.K., Understanding diagnostic tests 3: Receiver operating characteristic curves. Acta Paediatr, 2007. 96(5): p. 644-7.
22. Cook, N.R., Use and misuse of the receiver operating characteristic curve in risk prediction. Circulation, 2007. 115(7): p. 928-35.
23. Kirkwood, B.R. and J.A.C. Sterne, Essential medical statistics. 2nd ed. 2003, Malden, Mass.: Blackwell Science. x, 501 p.


Appendix 1: HOPE phase 3 Principal diagnosis groups

ICD-10 Description

A0 Intestinal Infections

A1 Tuberculosis

A3 Bacterial diseases

A4 Septicaemia & other bacterial disease

A8 CNS viral infections

B0 Viral skin/mucosal infections

B1 Viral hepatitis

B2 HIV disease

B3-4 Mycoses

B50-B64 Protozoal disease

C0-C14 Malignancy mouth/pharynx

C15-C21 Malignancy upper GIT

C22-C26 Malignancy of biliary & pancreatic

C3 Malignancy respiratory

C4 Malignancy bone/CT/skin

C5 Malignancy breast/female GU

C60-63 Malignancy prostate

C69-72 Malignancy CNS

C76-79 Malignancy, secondary

C8-9 Malignancy lymphoid/haemopoietic

D10-36 Benign neoplasia

D37-49 Uncertain neoplasia

D5 Anaemias

D6 Aplastic anaemia

D7 Other blood disease

D8 Immune disease

E0 Thyroid disease

E100-E109 Type I Diabetes

E110-E119 Type II Diabetes

E13-16 Other diabetic

E2-E3 Endocrine other

E4-85 Nutritional deficiencies & metabolic disease

E86-88 Fluid, electrolyte, acid-base disorders

F0 Dementias

F2-9 Psychiatric disease

G0 CNS Infection

G1-3 Degenerative CNS disease

G4 Epilepsy

G5-G6 Neuropathies

G7 Neuromyopathies

G8 Cerebral palsy

G9 Other CNS

H0-H5 Eye disease

H6-H9 Ear disease

I10-I15 Hypertensive diseases

I20 Angina

I21 AMI

I22-25 Congestive cardiac failure

I26-28 Pulmonary vascular disease

I3 Endocarditis


I40-43 Cardiomyopathy

I44-45 Conduction block

I46 Cardiac arrest

I47-48 Supraventricular arrhythmias

I49 Other cardiac arrhythmias

I5 Heart failure

I60-62 Intracranial haemorrhage (SAH, ICH)

I63-64 Stroke

I65-69 Other cerebrovascular disease

I7 Arteriopathies

I8 Venous & lymphatic diseases

I9 Other CVS

J0 URTIs

J1 Pneumonia, viral & bacterial

J2 Acute bronchitis

J3 Allergic URT disease

J40-44 COPD

J45-46 Asthma

J47 Bronchiectasis

J6-7 Pneumoconioses

J8 Interstitial lung disease

J90-94 Pleural disease

J95-99 Respiratory failure

K1 Salivary disease

K35-38 Appendicitis

K4 Herniae

K50-52 Enteritis, colitis (non-infective)

K55 Intestinal vascular disease

K56 Bowel obstruction

K57 Diverticular disease

K58-59 Dysfunctional bowel disease

K60-62 Anorectal diseases

K63 Other intestinal disease

K65-67 Peritonitis

K7 Liver disease

K80-81 Cholecystitis

K82-83 Biliary disease

K85-86 Pancreatitis

K9 Malabsorption

L0 Skin infections

L1-7 Skin disease

L8-9 Decubitus ulcers & other skin disease

M00-03 Infectious arthropathy

M05-36 Inflammatory arthropathy

M4 Kyphosis, scoliosis

M5 Spondylopathy

M6-7 Myositis, Synovial, Other Soft tissue disease

M80-85 Bone density disease

M86 Osteomyelitis

M87-99 Other bone/cartilage disease

N1 Tubulo-interstitial disease

N2 Urolithiasis

N3 Urethritis, cystitis


N40-42 Prostate disease

N43-50 Male genital diseases

N7-N9 Female genital diseases

R0 Investigation of cardiorespiratory symptoms

R1 Investigation of gastrointestinal symptoms

R2 Investigation of skin, muscle, movement disorders

R3 Investigation of urinary symptoms

R4 Investigation of cognitive, emotional disorders

R50-53 Investigation of fever, pain, headache

R55 Investigation of syncope

R56 Investigation of seizures

R57-58 Investigation of shock, haemorrhage

R7-9 Investigation of abnormal pathology, imaging test

S0 Head injury

S1 Neck injury

S2 Chest injury

S3 Abdominal, pelvic, spinal injury

S4 Upper arm injury

S5 Forearm injury

S6 Hand injury

S8 Knee, leg injury

T0-T14 Multiple injury

T2-31 Burns

T36-T50 Drug side effects/ poisoning

T51-65 Alcohol or other toxic substance

T66-T7 Environmental exposure: radiation, hypothermia, hypoxia

T80-81 Transfusion reaction

T82 Cardiovascular prosthetic complications

T83 Genitourinary prosthetic complications

T84-85 Orthopaedic prosthetic complications

T86 Transplant rejection

Z0-Z3 Examination of healthy person

Z4 Follow-up care

Z5-Z9 Health issue related to family history / socioeconomic / psychosocial circumstances


Appendix 2: WHO cause of death principal diagnosis groups

ICD-10 Description
A00-B99 Certain infectious and parasitic diseases
A00 Cholera
A09 Diarrhoea and gastroenteritis of presumed infectious origin
A01-A08 Other intestinal infectious diseases
A15-A16 Respiratory tuberculosis
A17-A19 Other tuberculosis
A20 Plague
A33-A35 Tetanus
A36 Diphtheria
A37 Whooping cough
A39 Meningococcal infection
A40-A41 Septicaemia
A50-A64 Infections with a predominantly sexual mode of transmission
A80 Acute poliomyelitis
A82 Rabies
A95 Yellow fever
A90-A94, A96-A99 Other arthropod-borne viral fevers and viral haemorrhagic fevers
B05 Measles
B15-B19 Viral hepatitis
B20-B24 Human immunodeficiency virus [HIV] disease
B50-B54 Malaria
B55 Leishmaniasis
B56-B57 Trypanosomiasis
B65 Schistosomiasis
A21-A32, A38, A42-A49, A65-A79, A81, A83-A89, B00-B04, B06-B09, B25-B49, B58-B64, B66-B94, B99 Remainder of certain infectious and parasitic diseases
C00-C14 Malignant neoplasm of lip, oral cavity and pharynx
C15 Malignant neoplasm of oesophagus
C16 Malignant neoplasm of stomach
C18-C21 Malignant neoplasm of colon, rectum and anus
C22 Malignant neoplasm of liver and intrahepatic bile ducts
C25 Malignant neoplasm of pancreas
C32 Malignant neoplasm of larynx
C33-C34 Malignant neoplasm of trachea, bronchus and lung
C43 Malignant melanoma of skin
C50 Malignant neoplasm of breast
C53 Malignant neoplasm of cervix uteri
C54-C55 Malignant neoplasm of other and unspecified parts of uterus
C56 Malignant neoplasm of ovary
C61 Malignant neoplasm of prostate
C67 Malignant neoplasm of bladder
C70-C72 Malignant neoplasm of meninges, brain and other parts of central nervous system
C82-C85 Non-Hodgkin's lymphoma
C90 Multiple myeloma and malignant plasma cell neoplasms
C91-C95 Leukaemia
C17, C23-C24, C26-C31, C37-C41, C44-C49, C51-C52, C57-C60, C62-C66, C68-C69, C73-C81, C88, C96-C97 Remainder of malignant neoplasms
D00-D48 Remainder of neoplasms
D50-D64 Anaemias
D65-D89 Remainder of diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism
E00-E88 Endocrine, nutritional and metabolic diseases
E10-E14 Diabetes mellitus
E40-E46 Malnutrition
E00-E07, E15-E34, E50-E88 Remainder of endocrine, nutritional and metabolic diseases
F10-F19 Mental and behavioural disorders due to psychoactive substance use
F20-F99 Remainder of mental and behavioural disorders
G00, G03 Meningitis
G30 Alzheimer's disease
G04-G25, G31-G98 Remainder of diseases of the nervous system
H00-H57 Diseases of the eye and adnexa
H60-H93 Diseases of the ear and mastoid process
I00-I09 Acute rheumatic fever and chronic rheumatic heart diseases
I10-I13 Hypertensive diseases
I20-I25 Ischaemic heart diseases
I26-I51 Other heart diseases
I60-I69 Cerebrovascular diseases
I70 Atherosclerosis
I71-I99 Remainder of diseases of the circulatory system
J10-J11 Influenza
J12-J18 Pneumonia
J20-J22 Other acute lower respiratory infections
J40-J47 Chronic lower respiratory diseases
J00-J06, J30-J39, J60-J98 Remainder of diseases of the respiratory system
K25-K27 Gastric and duodenal ulcer
K70-K76 Diseases of the liver
K00-K22, K28-K66, K80-K92 Remainder of diseases of the digestive system
L00-L98 Diseases of the skin and subcutaneous tissue
M00-M99 Diseases of the musculoskeletal system and connective tissue
N00-N98 Diseases of the genitourinary system
N00-N15 Glomerular and renal tubulo-interstitial diseases
N17-N98 Remainder of diseases of the genitourinary system
O00-O07 Pregnancy with abortive outcome
O10-O92 Other direct obstetric deaths
O98-O99 Indirect obstetric deaths
O95-O97 Remainder of pregnancy, childbirth and the puerperium
P00-P96 Certain conditions originating in the perinatal period
Q00-Q99 Congenital malformations, deformations and chromosomal abnormalities
R00-R99 Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified
V01-V99 Transport accidents
W00-W19 Falls
W65-W74 Accidental drowning and submersion
X00-X09 Exposure to smoke, fire and flames
X40-X49 Accidental poisoning by and exposure to noxious substances
X60-X84 Intentional self-harm
X85-Y09 Assault
W20-W64, W75-W99, X10-X39, X50-X59, Y10-Y89 All other external causes


Appendix 3: Modified WHO principal diagnosis groups

ICD-10 Description
A0 Intestinal infectious diseases
A1 Tuberculosis
A3-A4 Bacterial disease
B3-B4 Mycoses
B50-B64 Protozoal disease
C00-C14 Malignant neoplasm of lip, oral cavity and pharynx
C15-C26 Malignant neoplasm of digestive organs
C3 Malignant neoplasm of respiratory and intrathoracic organs
C4 Malignant neoplasm of bone, articular cartilage, skin or soft tissue
C5 Malignant neoplasm of breast or female genital organs
C60-C63 Malignant neoplasm of male genital organs
C64-C68 Malignant neoplasm of the urinary tract
C69-C72 Malignant neoplasm of eye, meninges, brain and other parts of central nervous system
C76-C80 Malignant neoplasms of ill-defined, secondary and unspecified sites
C81-C96 Malignant neoplasms of lymphoid, haematopoietic and related tissues
D10-D36 Benign neoplasms
D37-D48 Neoplasms of uncertain or unknown behaviour
D50-D64 Anaemias
E10 Diabetes mellitus: Type 1
E11 Diabetes mellitus: Type 2
E4 Malnutrition
E86-E88 Fluid, electrolyte, acid-base disorders
F0 Organic mental disorders, dementias and Alzheimer’s disease
G4-G6, G8 Diseases of the nervous system: 1
G7 & G9 Diseases of the nervous system: 2
I10-I13 Hypertensive diseases
I21 Acute myocardial infarction
I20-I25 Ischaemic heart diseases
I26-I52 Other heart diseases
I47-I49 Tachycardia, atrial fibrillation and arrhythmia
I6 Cerebrovascular diseases
I7 Arterial disease
J12-J18 Pneumonia
J0, J2 & J3 Other acute respiratory infections
J4 Chronic lower respiratory diseases
J6-J9 Remainder of diseases of the respiratory system
K25-K27, K55, K56, K63, K65-K67 & K9 Gastric and duodenal ulcer, and diseases of the intestine, peritoneum and other digestive diseases
K7 Diseases of the liver
K4, K51, K52, K57-K62 Hernia, non-infective enteritis and colitis and diseases of the intestine
K8 Disorders of the gallbladder, biliary tract and pancreas
L00-L98, M00-M99 Diseases of the skin and subcutaneous tissue, musculoskeletal system and connective tissue
N0-N3 Diseases of the genitourinary system
R0-R53 Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified
R57-R58 Shock and haemorrhage
S1-S9 Injury
T36-T39, T50, T4 Poisoning
T8 Complications of surgical treatment
Z5-Z9 Health issue relating to family history, socioeconomic and psychosocial circumstances