Posted on 30-Mar-2015
Developing User-Valued Outcome Measures in Mental Health
Diana Rose, PhD
Co-Director, Service User Research Enterprise (SURE)
Institute of Psychiatry, King’s College London
The claim of neutrality in RCTs
- RCTs are considered the ‘gold standard’ in medicine
- Neutrality depends on blinding
- But is everything in an RCT neutral?
- Outcome measures are devised by clinicians and academics
- These may not be the outcomes that matter to service users
- We try to develop measures that are valued by service users (and others) in mental health
What is special about mental health?
- Uniquely in medicine, people with mental health problems may be detained and treated against their will
- This has led to strong service user input to research as well as service development
- In general medicine, outcomes are agreed between patients and clinicians: survival rates and quality of life
- Mental health service users may not agree on either outcomes or methods in research
- For example, symptom relief is prized by academics and clinicians, but may not be by service users
The Service User Research Enterprise (SURE)
- Team of 10 people located at the Institute of Psychiatry, King’s College London
- The majority are or have been mental health service users – ‘insider knowledge’
- The IoP is not an easy place to be, although more welcoming than the warnings I got before I went there
- Aims to do research from the service user perspective and to bring about change in treatments and services
SURE’s New Methods
- Consumer-centred systematic reviews
  - Electroconvulsive therapy (ECT)
  - New anti-depressant medication
- Participatory research
  - To reduce the power relations between researcher and researched
  - Researchers are service users
- This method has been used to develop user-valued outcome measures – many examples now
- Jo and Caroline will describe the most recent in the poster session
- The acute care study used the method with nurses as well
Attitude measures in social sciences
- Begin with focus groups
- Stratified sampling; market research agencies find participants
- Fairly low status in the development of an attitude scale
- Different to the use of focus groups in user-focused research
Overview of methods to develop user-valued outcome measures
- All involved are service users, including the researchers
- Reference group – some research expertise
- Focus groups – sampled from people who have experience of the treatment or service under investigation
- Expert panels
- Feasibility
- Psychometric testing
Focus Groups
- Made up of participants who have experienced the treatment or service in question
- Reference group drafts topic guide
- Pilot: to refine topic guide
- Main groups meet twice
  - Wave 1: discuss own experience, facilitated by the topic guide; transcribed and analysed with NVivo
  - Wave 2: respondent validation and further discussion
User Researchers’ Role
- Full qualitative analysis
- Inter-rater reliability
- Generate draft questionnaire
- Use own experience
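Where user researchers code transcripts independently, inter-rater reliability is often summarised with Cohen’s kappa, which corrects raw agreement for chance. A minimal sketch with hypothetical theme codes; the statistic, the function, and the data here are illustrative, since the presentation does not say which agreement measure SURE used:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical codes."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters coded at random with their own marginals
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes assigned to ten transcript excerpts by two user researchers
rater_1 = ["support", "stigma", "support", "meds", "stigma",
           "support", "meds", "stigma", "support", "meds"]
rater_2 = ["support", "stigma", "support", "stigma", "stigma",
           "support", "meds", "stigma", "support", "meds"]
print(round(cohens_kappa(rater_1, rater_2), 3))
```

Kappa values above roughly 0.6 are conventionally read as substantial agreement.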
Expert Panels
- Two panels: one comprising people who had been in the focus groups, and one independent
- All have received the treatment or service we are investigating – ‘experts in their own experience’
- Role of Expert Panels is to comment on the draft measure and add or delete items
- Close attention to the language used – is it the language of service users?
- Quite a lot of changes can be made at this stage
Feasibility
- Draft measure tested with ~40 people to see if it is:
  - Easy to complete
  - In their language
  - Comprehensive for them
- Usually still some changes, e.g. to wording
- The feasibility study makes changes to the measure as it proceeds
Reference Group
- Reference group meets for the second time at the conclusion of measure construction
- Opportunity to make final changes
Psychometrics from field trial samples
- The measure is now considered complete and ready for psychometric testing
- We use the criteria developed by the HTA (Fitzpatrick et al.), as these are more comprehensive than traditional psychometrics
- Qualitative as well as quantitative
- Test on a field trial sample, usually recruited from other parts of large projects
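One common quantitative check on a field trial sample is internal consistency, conventionally reported as Cronbach’s alpha. The slides do not list the specific statistics used, so the following is only a sketch on hypothetical ratings, with an illustrative function name and data:

```python
def cronbach_alpha(items):
    """Internal consistency; `items` is one list of scores per questionnaire item."""
    k, n = len(items), len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Alpha compares summed item variances with the variance of respondents' totals
    item_var_sum = sum(sample_var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / sample_var(totals))

# Hypothetical 1-5 ratings from six respondents on a three-item draft measure
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
]
print(round(cronbach_alpha(items), 3))
```

Values around 0.7 or higher are usually taken to indicate acceptable internal consistency.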
Test-retest reliability
- Independent sample of service users, ~50 people
- Complete the measure twice, with a 1- or 2-week interval between T1 and T2
- Mostly our measures are found to be reliable
- One might think this too difficult, for example, for people with psychoses, but it does seem to work
Criterion validity
- Usually no ‘gold standard’, because our measures have been generated in a novel way
- Compare our measures with others to identify points of similarity and difference
- For example, continuity of care – peer support
HTA – Qualitative criteria
- Acceptability (to users and as a measure itself)
- Asked:
  - Is it easy to complete?
  - Was it upsetting to fill in?
  - Did you enjoy filling it in?
- Generally our measures are acceptable to our respondents
More on participation
- For most research, participants never know what happens to the information they give
- We feed back results:
  - Presentations to groups of participants / user groups
  - A newsletter
- The little things:
  - Thank-you cards
  - Christmas cards
- Ask people if they are proud of the results of their efforts – they almost always say “yes”
Examples of studies where this methodology has been used
- Continuity of care (in press)
- Satisfaction with Cognitive Remediation Therapy for people with schizophrenia (published)
- Satisfaction with CBT for psychosis
- Experiences of acute in-patient care (ongoing – poster presentation this afternoon)
- The acute care study also used the same method with nurses
Challenges to user-led research 1 – Frank scepticism
Peter Tyrer, editor of the British Journal of Psychiatry, writes:
“The engine of user involvement, while welcome in principle, … may drive mental health research into the sand.”
Challenges 2 – Power
- Most of the projects we have been involved with are collaborative
- Nearly always headed up by professor(s) of psychiatry
- This can undermine user researchers
- Are you a researcher or are you a patient?
Challenges 3 – The charge of bias
- It is said, mostly implicitly, that user research is biased, anecdotal and carried out by people who are over-involved (ENMESH conference)
- We make no pretence of neutrality
- But all research comes from a certain standpoint
- User researchers are more explicit about this than mainstream researchers
- In my opinion the word ‘bias’ should be banished from research discourse, and all researchers should clearly say where they are coming from
Conclusion
- It is possible to develop psychometrically robust measures entirely from the perspective of service users
- This is a form of participatory research, which is rarely used in mental health
- It is also new within participatory research itself, as the researchers belong to the same community as the participants: they too are mental health service users