
© Health Libraries Group 2003. Health Information and Libraries Journal, 20 (Suppl. 1), pp. 45–52. Blackwell Publishing Ltd.

Clear-cut?: facilitating health librarians to use information research in practice

Andrew Booth* and Anne Brice†

*Information Resources Section, School of Health and Related Research, University of Sheffield, Sheffield, UK
†Department of Knowledge and Information Sciences, Public Health Resource Unit, Oxford, UK

Correspondence: Andrew Booth, Director of Information Resources and Senior Lecturer in Evidence-based Healthcare Information, School of Health and Related Research (ScHARR), University of Sheffield, Regent Court, 30 Regent Street, Sheffield S1 4DA, UK. E-mail: A.Booth@sheffield.ac.uk

Abstract

In 1999, staff at the universities of Sheffield and Oxford commenced an unfunded project to examine whether it is feasible to apply critical appraisal to daily library practice. This aimed to establish whether barriers experienced when appraising medical literature (such as lack of clinical knowledge, poor knowledge of research methodology and little familiarity with statistical terms) might be reduced when appraising research within a librarian's own discipline. Innovative workshops were devised to equip health librarians with skills in interpreting and applying research. Critical Skills Training in Appraisal for Librarians (CRISTAL) used purpose-specific checklists based on the Users' Guides to the Medical Literature. Delivery was via half-day workshops, based on a format used by the Critical Appraisal Skills Programme. Two pilot workshops in Sheffield and Oxford were evaluated using a brief post-workshop form. Participants recorded objectives in attending, their general understanding of research, and whether they had read the paper before the workshop. They were asked about the length, content and presentation of the workshop, the general format, organization and learning environment, whether it had been a good use of their time and whether they had enjoyed it. Findings must be interpreted with caution. The workshops were enjoyable and a good use of time. Although the scenario selected required no clinical knowledge, barriers remain regarding statistics and research methodology. Future workshops for librarians should include sessions on research design and statistics. Further developments will take forward these findings.

Introduction

The role of the health librarian in supporting evidence-based practice is well established.1,2 Increasingly, this role involves librarians in supporting critical appraisal3–5 by health professionals within their employing organization. Debate continues as to the extent to which librarians should themselves acquire critical appraisal skills.6,7 In Britain and North America, librarians have been involved from the beginning in organizing and supporting initiatives such as the Critical Appraisal Skills Programme and week-long Teaching Evidence-based Practice Workshops.8 At a local level, they complement critical appraisal skills workshops by training health practitioners to find the evidence. However, very few information professionals have undergone the intensive generic workshops to prepare them for facilitating the development of appraisal skills. Our collective experience with librarians in most of the NHS Regions in England, as well as at a national level, suggests three particular barriers to greater librarian participation in critical appraisal:

• a lack of clinical knowledge (the context);
• poor knowledge of research methods and designs (the methods);
• a lack of confidence in managing the statistics (the skills).

Recently, the evidence-based paradigm has migrated from medicine to social work, education and human resource management.9 In the year 2000, the term 'evidence-based librarianship' received increasing emphasis through articles in the health information literature and through exposure at several international conferences.10 Every day, library managers, and those working in 'technical' professional roles, face numerous decisions with regard to services and resources.11 'Should we concentrate on end-user training at the expense of mediated search services?', 'Should we introduce a clinical librarian initiative?', 'Should we subscribe to electronic journals instead of their printed equivalents?', 'Should I send my staff to a 1-day workshop or give them time to undertake a distance learning course?'. How do they resolve these decisions? By taking advice from colleagues, by following their professional judgement, by responding to anecdotal reports in the professional press: a myriad of ways developed through custom and practice. Evidence-based practice is an opportunity for the information profession to improve the quality of such decision making.

Background to the project

The seeds of the Critical Skills Training in Appraisal for Librarians (CRISTAL) Project can be traced to a meeting organized to advance a Librarian Development Programme for the NHS. Once it became apparent that funds would not be forthcoming to develop skills in interpreting and applying research evidence to health library practice, the authors decided to advance this agenda independently. They initiated an unfunded collaboration between the School of Health and Related Research (University of Sheffield) and the Health Care Libraries Unit (University of Oxford).12 Subsequently, a detailed proposal, to establish whether it is practicable and feasible for health care librarians to apply critical appraisal skills in their day-to-day practice, was firmed up amidst the appropriately evidence-based atmosphere of the Cochrane Collaboration Colloquium in Rome in October 1999. The CRISTAL Programme sought to capitalize on library professionals' own knowledge of the context for their work, to introduce them to a rudimentary knowledge of research design and to present necessary statistics in a way that was meaningful and non-threatening.

Guiding principles

The CRISTAL Programme was guided by two particular influences from mainstream evidence-based practice. Early in the development of evidence-based medicine, an Evidence-based Medicine Working Group had produced a series of Users' Guides to the Medical Literature.13 Each guide was designed to address a particular question type (e.g. diagnosis, therapy, etc.) or study design (e.g. systematic review). Furthermore, each Users' Guide was exemplified in a checklist to be used in carrying out the mechanics of critical appraisal. The CRISTAL project team believed that a series of similar guides, based on question types as opposed to study types, would enable librarians to ask meaningful questions of published research. A candidate list of question types included Evaluating User Education, Evaluating End User Searching, Assessing Clinical Librarian Projects, Identifying Information Needs, Evaluating Current Awareness Services and Assessing User Studies. From this list, two topics, Identifying Information Needs and Assessing User Studies (a generic guide examining studies that measure use of any library service), were prioritized for development on the basis of the prevalence of studies and the importance of their topics.

The other major influence on the project had been the authors' involvement in generic critical appraisal skills training for health professionals as pioneered by the Critical Appraisal Skills Programme (CASP) in Oxford.14 Features such as the use of workshops (typically of half-day duration), a problem-based scenario, the use of checklists and an integrated approach to evaluation all figured prominently in the delivery of the CRISTAL Programme. The methods pioneered by CASP were judged particularly appropriate for translation to the health information context because:

1 They are multidisciplinary in intent and have proved successful with all professional groups including doctors, nurses, Professions Allied to Medicine, Maternity Services Liaison Committees, health service managers and consumer groups.
2 They provide a standard approach to the important dimensions of reliability, applicability and validity.
3 They are familiar to many librarians already involved in supporting critical appraisal or finding-the-evidence workshops.

It was recognized that, as paralleled in the generic CASP programme, librarians would also need to be able to use databases to locate evidence in their professional literature (e.g. Library and Information Science Abstracts, Social Science Citation Index and …). However, this was flagged as a topic for subsequent development.

Based on experiences from these existing initiatives, the following operating principles were agreed:

1 Checklists would be specific to a type of study (e.g. information needs analysis, user study, etc.), not to a specific study design.
2 Checklists would share the standard three CASP dimensions regarding validity (i.e. appropriateness of methods to the research question), reliability (i.e. the rigour with which the actual study was carried out) and applicability (i.e. the usefulness of research to the user's own practice).
3 Checklists would be orientated towards the practitioner, not towards the creation of an academic tool.

Methods

The work was carried out in three phases:

1 Development of an initial critical appraisal tool.
2 'Cross-over' evaluation of the tool.
3 Workshop-based evaluation.

Development of an initial critical appraisal tool

The project team resisted the assumption that the health information sector could readily adopt an approach to evaluating research from generic health care without significant modification. A dual approach was employed to incorporate perspectives grounded in the theory and practice of the health information professional:

1 One investigator (ABo) reviewed existing critical appraisal checklists for their value to the proposed tools (an evolutionary approach).
2 The other investigator (ABr) started from an example of a study to be appraised and used this to suggest appropriate appraisal criteria within the domains of validity, reliability and applicability (a revolutionary approach).

‘Cross-over’ evaluation of the tool

The resultant two sets of criteria were then integrated into a single list, prior to the pilot stage. Investigator One (ABo) used the tool with the original study identified by Investigator Two. Meanwhile, Investigator Two (ABr) used the tool to appraise a second study of a similar type, identified by Investigator One. The integrated tool was then revised in the light of this 'cross-over' phase.

Workshop-based evaluation

Following the design and evaluation phases described above, two workshops were convened in Autumn 2000: one in Oxford and one in Trent. Two groups of librarians were chosen, primarily for convenience of access but also because, for historic reasons, they represented contrasting levels of prior involvement in critical appraisal ('CASP-aware librarians' and 'CASP-neutral librarians'). Between 10 and 20 librarians at each venue were invited to workshops facilitated by the two investigators. They received free training as part of a continuing professional development programme in return for commitment to the evaluation. At each workshop, participants used the Checklist for User Studies to appraise a paper. In the interests of consistency the same paper was used at each session (Fig. 1).15 A standard evaluation sheet was used at each workshop. Following the format of a standard CASP workshop, participants were asked to vote, at the beginning and end of the sessions, on the question posed by the scenario.

Figure 1 Scenario: Keeping a finger on the pulse


Evaluation

The original plans for evaluation, to be funded under the NHS Librarian Development Programme, were very comprehensive and involved multiple opportunities for evaluation, both of the instrument itself and of the workshop process:

1 Matched data comparing the assessments by the two investigators (ABo and ABr). This would allow us to quantify the extent of agreement between the two investigators using the so-called 'kappa statistic' (sketched after this list).
2 Matched data comparing assessments by a group of external evaluators. Again, it would be possible to use the kappa statistic to compare agreement between those involved in developing the instrument and those charged with interpreting it.
3 Intra-workshop and inter-workshop comparisons between the two workshops. Comparisons were planned between both groups using a standard evaluation form. Variables to be collected would include years as a health librarian, extent of prior experience of CASP methods, number and type of journals read each month, extent of academic qualifications and previous experience of research projects.
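For readers unfamiliar with it, the kappa statistic is the standard chance-corrected measure of agreement between two raters; the definition below is the conventional textbook formula rather than anything specific to the CRISTAL evaluation plan:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where \(p_o\) is the observed proportion of checklist judgements on which the two appraisers agree and \(p_e\) is the proportion of agreement expected by chance alone. As a purely hypothetical illustration (the numbers are invented, not drawn from this study): if two appraisers agreed on 18 of 24 judgements (\(p_o = 0.75\)) where chance would predict \(p_e = 0.50\), then \(\kappa = (0.75 - 0.50)/(1 - 0.50) = 0.50\), conventionally read as moderate agreement.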

Faced by the pragmatic considerations of an unfunded project, however, evaluation focused solely on data collected at the two workshops. Participants were asked questions concerning their objectives in attending the workshop, and whether these had been met, what their general understanding of research was, and whether they had read the paper before the workshop. They were also asked questions about the length, content and presentation of the workshop sessions and the general format, organization and learning environment. They were asked to indicate whether they felt that the workshop had been a good use of their time, and if they had enjoyed it.

Objectives in attending

The workshops were attended by 25 participants. Of these, 22 reported that their objectives in attending the workshop were to learn how to appraise a piece of library research, and 23 expected it to contribute to their general professional development. Twenty-one reported the objective of increasing their understanding of research, and 18 that of gaining expertise to pass on to colleagues (Table 1).

Prior knowledge

Two out of a total of 25 participants said that they had undertaken a lot of research, 14 had undertaken a little and nine had not undertaken any. Four reported that they read a lot of research papers as part of their job, with 17 reporting that they read a little and three reporting that they did not read research papers (Table 2). The assumption, made in planning, that the Oxford group would be more sophisticated with regard to appraisal seems to be borne out by the differential responses for both undertaking and reading research between the two sites.

Four participants reported using existing user guides to help in appraising research papers. These included CASP questions, the JAMA Users' Guides, How to Read a Paper,16 Information for Evidence-based Care,17 the Cochrane Collaboration Reviewers' Handbook, and a checklist devised by the Cochrane Non-Randomised Methods Group.

In the Oxford workshop, 10 participants had read the paper carefully and four had only skimmed it. The Sheffield participants were not required to look at the paper beforehand, in an attempt to control for prior familiarity with the paper.

Objectives achieved

Twenty-two of the 25 participants felt that the small group session had been the right length, with 19 reporting that they had understood the meaning of the questions, and four that they had not. Two questions from the appraisal checklist ('Are any limitations in the methodology (that might have influenced results) identified and discussed?' and 'What additional information do you need to obtain locally to assist you in responding to the findings of this study?') had proved the most difficult to answer.

In general, participants found that the feedback session following the small group work helped clarify areas of uncertainty: 23 reported this as having been achieved, and 19 felt that this session had been the right length.

Twenty-three of the 25 participants were happy with the workshop format. Nine felt that the workshop had been an excellent use of their time, 14 that it had been a good use of their time and two a fair use of their time. Interestingly, with regard to fulfilment of the learning objectives (Table 3), the Sheffield group felt that the session had met their objectives to a greater extent than did the Oxford group. This suggests that prior knowledge and familiarity with research increase participant expectations of a critical appraisal session. Nevertheless, responses for both groups were concentrated in the 'quite a lot' and 'very much' categories.

Table 1 Learning objectives.

                                                               Oxford (n = 14)   Sheffield (n = 11)
Learn how to critically appraise a piece of library research         12                 10
Increase understanding of research issues                            11                 10
Contribute to general continuing professional
  development programme                                              12                 11
Gain expertise to pass on to colleagues                              10                  8
Other                                                                 1*                 2†

*'Other' in this instance was 'to understand what was trying to be achieved in these workshops and stage of development' (CASP colleague).
†'Other' in this instance were 'to meet other librarians in the Region and gain confidence in my new role' and 'to enhance my knowledge of critical appraisal of librarianship issues'.

Table 2 General understanding of research.

Oxford (responses received: 14)                      Yes, a lot   Yes, a little   No
Have you undertaken research?                             2             9          3
Do you read research papers as part of your job?          3            11          0

Sheffield (responses received: 11)                   Yes, a lot   Yes, a little   No
Have you undertaken research?                             0             5          6
Do you read research papers as part of your job?          1             6          3 (+1 null response)

Table 3 Objectives achieved.

                                          Not at all         Not much          Quite a lot        Very much
                                          Oxf.  Sheff.       Oxf.  Sheff.      Oxf.  Sheff.       Oxf.  Sheff.
Learn how to critically appraise
  a piece of library research               0      0           0      0         10      4           3      7
Increase understanding of
  research issues                           0      0           3      0         11      7           0      4
Contribute to general continuing
  professional development programme        0      0           1      1         11      8           2      2
Gain expertise to pass on
  to colleagues                             0      0           1      0         11      7           0      2

Discussion

The pilot project addressed the need for a tool for appraising library-related literature and explored the feasibility of using workshops as an effective educational intervention. It has demonstrated that the appraisal tool, together with the workshop format, helped participants improve their understanding of research methods and their ability to use research to aid their decision making.

Two noteworthy factors reported by participants, associated with the ability to use the tool to make a judgement of the validity, reliability and applicability of the research paper, were prior knowledge of statistical techniques and research methodology. This was particularly an issue when dealing with questions 6 and 7 on the checklist. This finding has informed the development of a 1-day programme on evidence-based librarianship where the basic half-day critical appraisal workshop is augmented by two substantive sessions: Statistics for Petrified Librarians (STAPL) and Matching the Research Design to the Research Question. The 1-day format of this course has been run twice: once for health librarians in Wales and once for librarians in South-west England.

A further noteworthy factor associated with the ability to use the tool was having read the paper beforehand. Although 10 participants in the Oxford workshop reported that they had read the paper carefully beforehand, several commented that they would have liked to have received the paper earlier. These participants stated that they needed more preparation time if they were to reach a decision about the paper. This finding is not unequivocal, however, as experience suggests that, regardless of how much time is allowed for reading the paper, some will always consider it insufficient. Similarly, participants indicated different preferred learning styles with regard to the optimal size and interactivity of their group.

Several participants alluded to the difficulty of assessing statistics as a major block to appraising the paper; this confirms other observations made by the authors concerning the participation of librarians, and indeed of all professional disciplines, in general critical appraisal sessions. Additional pre-workshop tools and preparation may be necessary (for example, worksheets or glossaries for terminology) to enable participants to get the most from the learning possibilities in the workshop. One Oxford participant asked that basic terms be covered first, while a Sheffield participant suggested that we could perhaps 'give a reference beforehand to a text which explains unfamiliar statistical terminology'. Such texts do in fact exist, the book A–Z of Medical Statistics: a companion for critical appraisal18 being one such example. These observations suggest that library course curricula may need to consider incorporating statistics as a core competence for potential information professionals.

Comments received substantiated the choice of small group work as supportive, inclusive and discursive. However, it has not been possible to reinforce learning through ongoing interaction or follow-up.

At the Oxford workshop, the feedback sessions discussed why such large numbers of participants reported that they did not read papers as part of their job. Several factors contribute to this practice–research gap. There is a reported gap between the ideal availability of methodologically sound library and information science research, and the reality. Problems were noted in gaining access to a relevant resource base. Participants were keen to find good examples of research, and suggestions for future developments included the availability of CATS (Critically Appraised Topics).19 Participants also expressed a need to improve the depth of general critical appraisal skills throughout the whole profession. However, such a need must be placed in the context of recent thinking on evidence-based practice in general, which suggests that not all practitioners will be able to undertake the complete evidence-based process. Instead, all can aspire to better ways of getting appraised, synthesized research reports to their profession in a much more readily accessible format, linked to identified work-based questions.20

One additional complication of the pilot workshops, one that would not usually be encountered, is that they had two different, and not necessarily compatible, objectives: to appraise the value of the checklist itself and to use the checklist to appraise a paper. Usually instrument development and instrument use are distinct; some participants highlighted this as a possible area for confusion. Respondents also found some duplication in the checklist questions. The project team has identified a need to further validate the checklist, and refine it where necessary.

Conclusion

The findings from this unfunded pilot project suggest that there is indeed a need for critical appraisal sessions tailored to the specific needs of information professionals. Although the methods used do not differ substantially from those employed by generic critical appraisal sessions for health professionals, there does appear to be added value in using a checklist tailored to a particular information practice question and in considering a topic familiar to the audience. Nevertheless, librarians are no different from other professional groups in reflecting uncertainty with regard to statistical methods, interpretation of results and knowledge of research design. A useful spin-off from librarians acquiring critical appraisal skills within their own professional context might be that they would then feel more able to facilitate similar sessions with multidisciplinary groups within their organization.

Planned future developments

The CRISTAL project team has developed only two checklists to date, prioritized according to the volume and importance of the literature. However, developments in systematic reviews within health information suggest that other checklists may be easier to produce as a by-product of such reviews. For example, two systematic reviews presented at the Evidence-based Librarianship Conference in Sheffield, and included in this issue, will likely help to generate checklists for clinical librarianship and user training. Another suggestion is for a checklist to evaluate articles reporting the development of optimal filters. The authors intend to reproduce such checklists in their forthcoming book on Evidence-based Practice for Information Professionals. In the meantime, the two checklists, reported in their abridged form in this article (Tables 4 and 5), will be made available with supporting hints and full documentation from the Evidence-based Librarianship website at http://www.eblib.net.

Table 4 Twelve questions to help you make sense of a user study.

A. Is the study a close representation of the truth?
1. Does the study address a clearly focused issue?
2. Does the study position itself in the context of other studies?
3. Is there a direct comparison that provides an additional frame of reference?
4. Were those involved in collection of data also involved in delivering a service to the user group?
5. Were the methods used in selecting the users appropriate and clearly described?
6. Was the planned sample of users representative of all users (actual and eligible) who might be included in the study?

B. Are the results credible and repeatable?
7. What was the response rate and how representative was it of the population under study?
8. Are the results complete and have they been analysed in an easily interpretable way?
9. Are any limitations in the methodology (that might have influenced results) identified and discussed?

C. Will the results help me in my own information practice?
10. Can the results be applied to your local population?
11. What are the implications of the study for your practice? In terms of current deployment of services? In terms of cost? In terms of the expectations or attitudes of your users?
12. What additional information do you need to obtain locally to assist you in responding to the findings of this study?

Table 5 Twelve questions to help you make sense of an information needs analysis/information audit.

A. Is the study a close representation of the truth?
1. Does the study address a clearly focused issue?
2. Does the study position itself in the context of other studies?
3. Is there a direct comparison that provides an additional frame of reference?
4. Were those involved in collection of data also involved in delivering a service to the user group?
5. Were the methods used in acquiring data on information needs appropriate and clearly described?
6. Was the planned sample of users representative of all users (actual and eligible) who might be included in the study?

B. Are the results credible and repeatable?
7. What was the response rate and how representative was it of the population under study?
8. Are the results complete and have they been analysed in an easily interpretable way?
9. What attempts have been made to ensure reliability of responses?

C. Will the results help me in my own information practice?
10. Can the results be applied to your local population?
11. What are the implications of the study for your practice? In terms of current deployment of services? In terms of cost? In terms of the expectations or attitudes of your users?
12. What additional information do you need to obtain locally to assist you in responding to the findings of this study?

Once a substantial body of useful evidence-based materials has been identified as a result of this initiative, it may be possible, given the necessary resources, to pursue the authors' plan to develop a reference management database of reviews in librarianship/information work, entitled REVEL (REViews of Evidence in Librarianship).

Finally, an already tangible result of the production of these checklists has been their use in systematic review activities. For example, researchers from the Information Resources section in ScHARR have already used the checklist on information needs analysis in a review of the information needs of visually impaired persons, and the checklist on use studies in their review of clinical librarianship published in this issue. In creating synergies between critical appraisal and systematic review activities, and between checklists developed within health information and their wider use within health care, the future of evidence-based information practice will become increasingly clear-cut!

Acknowledgements

We gratefully acknowledge the participation and support of the librarians of the former Oxford and Trent Regions, particularly Jennie Kelson, who has shared with us in delivering the CRISTAL materials.

References

1 McKibbon, K. A. Evidence-based practice. Bulletin of the Medical Library Association 1998, 86, 396–401.
2 Tsafrir, J. & Grinberg, M. Who needs evidence-based health care? Bulletin of the Medical Library Association 1998, 86, 40–5.
3 Landrivon, G. & Ecochard, R. Principles of the critical appraisal of medical literature. Health Information and Libraries 1992, 3, 29–34.
4 Scherrer, C. S. & Dorsch, J. L. The evolving role of the librarian in evidence-based medicine. Bulletin of the Medical Library Association 1999, 87, 322–8.
5 Dorsch, J. L., Frasca, M. A., Wilson, M. L. & Tomsic, M. L. A multidisciplinary approach to information and critical appraisal instruction. Bulletin of the Medical Library Association 1990, 78, 38–44.
6 Jerome, R. N., Giuse, N. B., Gish, K. W., Sathe, N. A. & Dietrich, M. S. Information needs of clinical teams: analysis of questions received by the Clinical Informatics Consult Service. Bulletin of the Medical Library Association 2001, 89, 177–84.
7 Gray, M. National electronic Library for Health. Vine 1999, 115, 57–61.
8 Booth, A. Research. Health Libraries Review 2000, 17, 232–5.
9 Trinder, L. & Reynolds, S. (eds) Evidence-Based Practice: A Critical Appraisal. Oxford: Blackwell Science, 2000.
10 Booth, A. Spotlight on evidence-based librarianship. Bibliotheca Medica Canadiana 2002, 23, 84–5.
11 Booth, A. Asking questions, knowing answers. Health Information and Libraries Journal 2001, 18, 238–40.
12 Booth, A. & Brice, A. Research. Health Information and Libraries Journal 2001, 18, 175–7.
13 Guyatt, G. & Rennie, D. Users' Guides to the Medical Literature: Essentials of Evidence-Based Clinical Practice. Chicago, IL: American Medical Association, 2002.
14 Ibbotson, T., Grimshaw, J. & Grant, A. Evaluation of a programme of workshops for promoting the teaching of critical appraisal skills. Medical Education 1998, 32, 486–91.
15 Young, J. M. & Ward, J. E. General practitioners' use of evidence databases. Medical Journal of Australia 1999, 170, 56–9.
16 Greenhalgh, T. How to Read a Paper, 2nd edn. London: BMJ Publishing Group, 2001.
17 Roberts, R. Information for Evidence Based Care. Oxford: Radcliffe, 1999.
18 Pereira-Maxwell, F. A–Z of Medical Statistics: A Companion for Critical Appraisal. London: Arnold, 1998.
19 Wyer, P. C. The critically appraised topic: closing the evidence-transfer gap. Annals of Emergency Medicine 1997, 30, 639–40.
20 Guyatt, G. H., Meade, M. O., Jaeschke, R. Z., Cook, D. J. & Haynes, R. B. Practitioners of evidence-based care. Not all clinicians need to appraise evidence from scratch but all need some skills. BMJ 2000, 320, 954–5.
