Evaluation – the 4 Ms: models, measures, monitoring and methods


Upload: health-and-care-innovation-expo

Posted on 14-Apr-2017

Category: Healthcare



TRANSCRIPT

Evaluation - the 4 Ms: models, measures, monitoring and methods

Chair: Professor Nick Harding OBE, Chair of the new care models evaluation oversight group

Reflections on evaluation and the New Care Models programme

Fraser Battye, Strategy Unit [email protected] / 077364 71057

This presentation is brief and broad. It covers three inter-related topics:

1: The nature of the NCM programme

2: The implied role for evaluation (especially local evaluation) given 1

3: Our experience of doing 2 (focus on Dudley MCP)

The Vanguards are a set of themed experiments

The programme defines problems and outlines models; the Vanguard sites set them in train

= multiple local tests of broadly described care models

Implied headline questions for evaluation: What are these models? (How) do they work? How might they be replicated?

The design of the NCM programme has further implications for evaluation

1: Local practice informs national model codification / development

Aid the process of description and definition (e.g. logic models); consider wider application of local findings

2: The care models are not ‘a thing’; the result can’t be ‘x works, do x’

Expose ‘active ingredients’ / combinations of interventions; local context is vital (predecessor efforts / local problems etc.)

3: There is a clear expectation of roll-out (STP process / associated target)

Avoid ‘do vs don’t do’ pronouncements; focus on practical improvements and ‘things to consider if adopting x / y / z’

The nature of the programme also creates pressures for local sites and their evaluators

Significant problems → high-profile response → planned roll-out of the response → demand for evidence of success

But overall, there are multiple opportunities for local evaluators to add real value to Vanguard sites

Clarify thinking and programme design: how will doing x address problem y?

Aid local programme and service implementation: a ‘live’ source of evidence

Support the development and replication of the models: what are they and how / when do they work?

Improve local evaluation capacity and culture: healthy agnosticism and learning (not audits and beatings)

(etc., etc.) All framed by NHSE striking a sensible balance between local and national evaluation

We are trying to realise these benefits through our evaluation work with Dudley’s MCP

Highly multi-disciplinary team:
o Strategy Unit (overall lead, quants expertise, NHS)
o Health Services Management Centre (academic rigour, broader lessons)
o ICF International (research expertise with consultancy focus)

o Guided by an overall local evaluation strategy and logic models
o A close ‘no surprises’ relationship: mutual confidence and respect

All Vanguards are complex and multi-component…so where to start? Dudley’s strategy defines three ‘levels’ of evaluation

Synthesised and summarised to extract lessons

(Helped by our wider work – Modality evaluation, NIHR project on MCP)

We have (forthcoming) early findings…until these are out, here are some general reflections on local evaluation in the programme

o Don’t rush to action, take time to understand and focus efforts to maximise value o Use short outputs, focused on ‘what to do next’ (not method and caveat heavy tomes) o Keep an eye on policy lessons as well as local practice (Site → Model → Policy) o Be appropriately modest about what standard of evidence can be produced in this

context o The NHS has an underdeveloped evaluation culture and an overdeveloped audit / blame

culture. Methodological and personal approaches fundamental to changing this

Evaluation of the new care model in North East Hampshire and Farnham

Our reflections and learningPaul Gray, Programme DirectorNorth East Hampshire and Farnham Vanguard

The changes we are making are designed to have three key impacts:

❶ Local people being happier, healthier and receiving more of the care they need at home or in the community.

❷ Better value for money for taxpayers, contributing £23M towards the £73M gap we face between the available resources and the costs of delivering care.

❸ Improved staff satisfaction and the ability of health and care providers to recruit and retain sufficient numbers of skilled staff to meet the needs of local people.

We gave considerable emphasis to the development of robust logic models for the programme and each of its key elements

We identified eight outcomes by which we judge our success, and 14 metrics (some existing, some new) to measure progress against them.

Patients report a significant improvement in their quality of life, particularly feeling less worried and low. Their health confidence increases significantly: they feel better able to manage their health. Their reported wellbeing improves significantly; in particular, they are happier. Their experience of using the service also improves.

Example outputs: quantitative and qualitative

Dashboards for metrics relating to logic models; continuous measurement; meaningful visualisation

Working with the Universities, R-outcomes, the CSU and the local team

Including patient experience, patient perception, wellbeing, staff experience, job confidence and job satisfaction

Elements of our evaluation programme

Attribution: ‘Healthy, Happy and at Home’ is complex; understanding the many contributing factors

Economic evaluation: easy to value changes in demand, but difficult to demonstrate cash released until models are replicated at scale

Engagement and learning: with replication at pace in mind; mainstream new sources of data / dashboards; quarterly symposiums to share evaluation

Our learning and reflections

We keep coming back to what we are trying to achieve – the logic model is a key foundation of the evaluation

Developing new measures as well as utilising existing ones – we have found the R-outcomes family of measures hugely helpful

Mainstreaming new data collection – metrics and processes

The challenge of creating a culture of evaluation

Working out how to rapidly replicate things that work, at scale

The pressure is on now to determine which interventions to fund locally in 2017/18

Q&A

Interested in evaluation?

Charles Tallack – Head of Operational Research and Evaluation at NHS England ([email protected])

Laura Freeman – New care models evaluation team at NHS England ([email protected])