

Foundations of Continuing Education

Assessment of Continuing Interprofessional Education: Lessons Learned

BRIAN SIMMONS, BSC, MMED, BM, FRCPC, FAAP; SUSAN WAGNER, BSC (SPA), MSC (CD), REG CASPLO, S-LP (C)

Although interprofessional education (IPE) and continuing interprofessional education (CIPE) are becoming established activities within the education of health professions, assessment of learners continues to be limited. Arguably, this is due in part to a lack of IPE and CIPE within the clinical workplace. The accountability of interprofessional teams has been driven by quality assurance and patient safety, though sound assessment of these activities has not yet been achieved. The barriers to team assessment in CIPE appear related to access and resources. Simulated team training and assessment are expensive, and because of staffing shortages, learning in clinical practice is often the only way forward, but is obviously not ideal. Despite these difficulties, the principles of assessment should be adhered to in any CIPE program. This article explores key issues related to the assessment of CIPE. It reflects on the processes of designing and introducing an IPE activity into an existing university curriculum and focuses on determining the purpose of assessment and the use of collaborative competencies to help determine assessment. The article also discusses the use of an assessment blueprint to ensure that learners are exposed to the relevant collaborative competencies. In addition, it discusses the use of multiple assessment methods and the potential of simulation in the assessment of CIPE.

Key Words: continuing interprofessional education, competencies, blueprints, assessment, team development, continuing education

Introduction

Assessment appears to be the “poor relation” in the education of pre- and postlicensure/professional interprofessional education (IPE) and continuing interprofessional education (CIPE). Teamwork and interprofessional collaboration are now common activities within health care and are expected to be included within the education of all health care professionals.1 However, measures demonstrating that learning of collaborative competencies has occurred are limited.2 Learning with, from, and about each other is a relatively new phenomenon, and measuring that learning of IPE competencies has occurred is a complex process.3

For the purposes of this article, assessment is distinguished from evaluation by being defined as the ability to measure what learning has occurred in a student or team (people), whereas evaluation looks at the value of the process of learning that has occurred, eg, programs or systems (things).4

Given its central role in IPE and CIPE, how might assessment play a more integral role in this type of learning, particularly as there is very little literature on assessment of IPE and CIPE? This article aims to reflect on the processes (educational, professional, and organizational) of designing and introducing an IPE activity into an existing university curriculum and how this might lead to assessment of continuing professional development or interprofessional practice.

Assessment Development

Drawing on the broader assessment literature,5 one can see that in developing a CIPE assessment the following questions need to be considered:

1. What is the purpose of assessment?
2. What will be assessed?
3. How will it be assessed?
4. Who will assess it?

Disclosures: The authors report none.

Dr. Simmons: Associate Professor and Staff Neonatologist, Sunnybrook Health Sciences Centre at Women’s College Hospital, Department of Paediatrics, Faculty of Medicine, University of Toronto, Faculty Lead—Assessment, Office of Interprofessional Education; Ms. Wagner: Senior Lecturer and Senior Coordinator of Clinical Education, Associate Member, School of Graduate Studies, Academy Associate, Department of Speech–Language Pathology, Faculty of Medicine, Faculty Lead—Curriculum and Placements, Office of Interprofessional Education, University of Toronto.

Correspondence: Brian Simmons, Office of Interprofessional Education, 750 Dundas Street West, Suite 302, Toronto, Ontario, M5S 1B2, Canada; e-mail: [email protected].

© 2009 The Alliance for Continuing Medical Education, the Society for Academic Continuing Medical Education, and the Council on CME, Association for Hospital Medical Education. • Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/chp.20031

JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, 29(3):168–171, 2009

Following the approach described by Crossley et al,5 the first step for program developers is to define the purpose of the assessment for the learning activity. Assessment can be used for feedback, to measure progress related to a CIPE program, to grade learners, for quality assurance purposes, and/or for feedback to the program developers. Often in IPE the learners are requested to complete questionnaires and self-assess to determine if there is a change in their attitudes and behaviors;1 this, however, does not determine if learning has actually occurred.

It should be noted that all the key components of the Crossley et al model5 are essential. Often the objectives are set, the participants are defined, and the methods for assessment are determined. However, the weakest link in this process is “what is being assessed?” This is the content validity of the assessment and must be related to the curriculum or course being taught (are we assessing what we think we should be assessing?). Without thoughtful representation of the content of the course/curriculum, balanced assessment does not occur.

Purpose of Assessment

In designing an assessment activity one needs to be clear about its purpose. Therefore, program developers need to ask themselves what assessment offers the learner and what “credit” could be given for their involvement in CIPE. There is a range of educational credits that can be given, such as certificates or diplomas. The use of such credits often provides learners with motivation for involvement in and commitment to an interprofessional activity such as CIPE.

Determining Team Performance

Within an IPE or CIPE program, working together in a collaborative manner is a key issue. However, determining which elements constitute an “adequate” team performance is a complex activity. Does team assessment require a centrally agreed standard? If so, how is a standard or pass mark determined?

The difficulties with determining adequate performance (or competence) can be illustrated by drawing upon some of the current work at the University of Toronto in creating a prelicensure or preprofessional IPE curriculum for 1200 students in 10 health science programs. At present, there are differences in the standards required in the 10 profession-specific programs. Some may designate 60%, 70%, or 80% as a pass; others may designate an A, B, C, or D as a pass. In introducing an IPE curriculum, the standard adopted must be equivalent across all programs. It is also important to develop a common exit point for all learners. As this is a complex assessment issue, it is not surprising that a formative approach is favored. However, such an approach means the assessment of team performance is limited, and often not regarded to be as meaningful as summative assessment.

In relation to continuing professional development and CIPE, professional regulatory bodies are determining the collaborative competencies for their own professional groups through frameworks such as the Royal College of Physicians and Surgeons of Canada “CanMEDS” framework.6 Although there is some similarity between these profession-specific competency-based approaches, there is generally little agreement to produce one shared competency-based framework, resulting in ongoing uncertainty about how to assess team performance.

What to Assess

Regardless of the standard, health professionals in CIPE programs need to know upon what learning outcomes they are being assessed. Miller’s pyramid allows us to examine the domains of assessment (see FIGURE 1).7

Within each learning activity, assessment must include knowledge, application of knowledge, performance of knowledge, and what we do in reality, which would be to develop a professional competence in practice: “knows,” “knows how,” “shows how,” and “does.” This approach can also be defined as content-specific assessment or domain-specific assessment.8

As we move up the pyramid there is increasing authenticity with progression from cognitive to behavioral assessment. However, each level is independent of the other, and also dependent upon content. The competencies for IPE and CIPE are domain-independent outcomes; they are related to each level of the pyramid and are independent of content. It is important that the team develops core competencies in these domains.

Bloom’s taxonomy, another framework used for assessment, attempts to show assessment of higher-order cognitive thinking.9 However, it does not express in its hierarchy the degree of authenticity that can be shown using Miller’s approach.7 The determination of authenticity is essential in the performance of IPE competencies for teams.

Development of a Blueprint

FIGURE 1. Miller’s pyramid: A competency-based cognitive/behavioral model.

A single assessment of a single IPE or CIPE activity provides only a limited understanding of the development of a collaborative approach to professional practice. It would be a very challenging task to use multiple assessment tools in a single IPE or CIPE activity. Therefore, the development of IPE and CIPE has to be balanced across learning activities (different learning activities will have different competency-based outcomes), and this will require the development of a blueprint to ensure all collaborative competencies are covered in the learning activities. Different competencies can be assessed by different methods, which will lead to a balanced assessment-based blueprint.10,11 Assessment methodologies could include self-assessment, peer assessment, case-based assessment, or team assessment.
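
A blueprint of this kind is, in effect, a matrix pairing collaborative competencies with the assessment methods that sample them. As a minimal illustrative sketch (the competency and method names below are hypothetical placeholders, not drawn from the article), such a matrix can be checked for gaps and for balance across methods:

```python
# Hypothetical assessment blueprint: collaborative competencies mapped to
# the assessment methods (self, peer, case-based, team) that sample them.
blueprint = {
    "roles_and_responsibilities": {"self", "peer"},
    "communication":              {"peer", "team"},
    "shared_decision_making":     {"case", "team"},
    "conflict_resolution":        {"team"},
}

def uncovered(blueprint):
    """Return competencies not yet sampled by any assessment method."""
    return [c for c, methods in blueprint.items() if not methods]

def coverage_by_method(blueprint):
    """Count how many competencies each method samples, to check balance."""
    counts = {}
    for methods in blueprint.values():
        for m in methods:
            counts[m] = counts.get(m, 0) + 1
    return counts
```

Here `uncovered` flags collaborative competencies that no learning activity assesses, and `coverage_by_method` shows whether the program leans too heavily on any single methodology.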

Assessment Tools

A reflective piece could be used as a self-assessment tool in order to determine if the students are aware of the roles, responsibilities, and relationships of the different professions. The limitation of this methodology is that students will both under- and overestimate their level of knowledge and understanding. As illustrated by Sargeant, definitions of self-assessment are not clear, and it is a complex process.12 It is unknown if teams can self-assess in an effective manner. Peer insight as an assessment tool addresses the “with, from, and about” definition, which gives more strength to understanding from the perspective of a different profession. However, not all professions will be involved in this process, and thus its reliability is limited. Case-based assessment tools can be used to test application of knowledge. As an activity, case-based assessment increases the opportunity for all professions to participate and interact, thereby increasing authenticity beyond self and peer assessment, but it still does not replicate “real life.” A team assessment competency performance tool has the most authenticity, as it will be closest to real life.

Team Assessment

Although, as discussed above, assessing individual learners in individual learning activities is a challenge in both IPE and CIPE, the ultimate aim must be to assess teamwork and collaboration. Attempting to assess teamwork can be problematic, especially when a group of learners is brought together, without preparation, and required to perform together as an interprofessional team. In practice, however, interprofessional teams may have worked together for many years. Nevertheless, team structure, size, and composition are varied, often due to different levels of professional and interprofessional experience and expertise.

Techniques to assess the interprofessional team are limited and are not readily available. According to Tuckman, teams usually need to “form,” “storm,” “norm,” and then “perform.”13 The time taken to move through these stages of development will depend upon a number of individual and organizational factors (eg, understanding of roles and responsibilities, management support for interprofessional collaboration). The model also assumes that team members all enter the team task environment at the same level of competency, which is unlikely.

One potentially useful approach to assessing interprofessional team performance, given the complexities outlined above, is to employ an objective structured clinical examination (OSCE). Symonds et al describe the use of an OSCE—the ITOSCE (interprofessional team objective structured clinical examination)—employing a mixed group of medical and midwifery learners rotating through a series of scenarios on common labor-room problems.14 Five team examination stations were developed with the use of a checklist related to the task and teamwork. Feedback related to problem-solving skills, knowledge, and attitude toward teamworking was reported to the learners by a facilitator. Symonds and colleagues note, however, that their ITOSCE was a formative exercise (and whether learning actually occurred was not determined); it was resource intensive and was logistically challenging to implement.14

In order to undertake a Team OSCE (TOSCE), learners must be given time to “form” in order to perform the task. The time period required to “form” has not yet been determined. Clearly, “forming” is dependent upon an understanding of roles, responsibilities, and relationships. Part of the forming is a briefing process to determine the roles that occur within the task requested, which will require an ability to self-reflect or use self-awareness skills that will need to be developed prior to the team assessment process.15

Although in well-established teams forming is not an issue,the introduction of a new member can mean that effort isneeded for the forming process to integrate the new member.

Schön showed us that “reflection in action” and “reflection on action” are crucial steps in the learning process of individuals.16 This, of course, must also be true of teams; if we add the additional step of “reflection before action,” this matches the process of team forming/norming, task performing, and team adjourning.

What use does team assessment have in IPE and CIPE? To develop a 5-station TOSCE simulation with 4–5 learners per station and a station length of 40–45 minutes will require a 4-hour process with a total of 20–25 learners. Given these needs, the content of the scenarios would be limited, which would diminish the validity, the reliability (dependent upon adequate rater training), and the acceptability due to cost. Although the practicability of the TOSCE may be limited, it has significant potential as a teaching and assessment tool. It also has potential in terms of educational impact (although this is likely to be formative rather than summative).
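
The logistics above can be verified with simple arithmetic; the sketch below uses only the figures given in the scenario (5 stations, teams of 4–5, stations of 40–45 minutes) and ignores changeover and briefing time, which is what rounds the session up toward 4 hours:

```python
# TOSCE logistics from the scenario above.
stations = 5
learners_per_team = (4, 5)    # min, max team size
station_minutes = (40, 45)    # min, max station length

# Each team rotates through all stations, so session length is
# stations * station length (excluding changeover/briefing time).
session_hours = (stations * station_minutes[0] / 60,
                 stations * station_minutes[1] / 60)

# With one team at each station at any moment, capacity per session is
# stations * team size.
total_learners = (stations * learners_per_team[0],
                  stations * learners_per_team[1])

print(session_hours)   # roughly 3.3 to 3.75 hours of station time
print(total_learners)  # 20 to 25 learners per session
```

The rotation itself accounts for about 3.3–3.75 hours; adding briefing and changeover yields the 4-hour process described in the text, serving at most 20–25 learners per run, which underlines the resource cost noted above.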

Concluding Comments

The assessment of learners in IPE and CIPE is fraught with conceptual and practical difficulties. Program planners and designers must be aware of outcomes (collaborative competencies) and design blueprints for both the programs and the assessment. They also need to use an assortment of methods and tools to ensure that learning has occurred across an assortment of domains. This has to be related to both content and process to increase the authenticity of IPE and CIPE processes. It must be done for both individuals and the interprofessional teams within which those individuals collaborate. A variety of evaluation tools have been used to determine self-reported satisfaction, attitude, or behavioral change in response to IPE interventions.1 However, this has not been translated to the assessment of learning in IPE. We must not lose the focus of defining our assessed outcomes. Thus we must find a way of determining “what we want to assess,” and therein lies the challenge for the future of IPE and continuing education.

Acknowledgments

I would like to thank and acknowledge Dr. Scott Reeves for his comprehensive review of this article and his helpful suggestions. A great deal of thanks must also go to Martina Esdaile for her research and administrative support for this article.

References

1. Barr H, Koppel I, Reeves S, Hammick M, Freeth D. Effective Interprofessional Education: Argument, Assumption and Evidence. London, United Kingdom: Blackwell; 2005.

2. Morison S, Stewart M. Developing interprofessional assessment. Learn Health Social Care. 2005;4(4):192–202.

3. Centre for the Advancement of Interprofessional Education (CAIPE). Principles of Interprofessional Education. London, United Kingdom: CAIPE; 2001.

4. Goldie J. AMEE Education Guide No. 29: Evaluating educational programmes. Med Teach. 2006;28(3):210.

5. Crossley J, Humphris G, Jolly B. Assessing health professionals. Med Educ. 2002;36:800–804.

6. Royal College of Physicians and Surgeons of Canada (RCPSC). CanMEDS Physician Competencies. Ottawa, Canada: RCPSC; 2005.

7. Miller G. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(suppl):S63–S67.

8. Van Der Vleuten CPM. Assessment past, present, and future: Theories and concepts. Keynote, Wilson Centre Research Day, November 2008.

9. Bloom B. Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook 1: Cognitive Domain. New York, NY: David McKay; 1971.

10. Hamdy H. Blueprinting for assessment of health care professionals. Clin Teach. 2006;3:175–179.

11. D’Eon M. A blueprint for interprofessional learning. Med Teach. 2004;26(7):604–609.

12. Sargeant J. Toward a common understanding of self-assessment. J Contin Educ Health Prof. 2008;28(1):1–4.

13. Tuckman B. Developmental sequence in small groups. Psychol Bull. 1965;63:384–399.

14. Symonds I, Cullen L, Fraser D. Evaluation of a formative interprofessional team objective structured clinical examination (ITOSCE): A method of shared learning in maternity education. Med Teach. 2003;25(1):38–41.

15. Singleton S, Smith F, Harris T, Ross-Harper R, Hilton S. An evaluation of the Team Objective Structured Clinical Examination (TOSCE). Med Educ. 1999;33:34–41.

16. Schön DA. Educating the Reflective Practitioner. San Francisco, CA: Jossey-Bass; 1987.

Lessons for Practice

• There are limited assessment approaches, methods, and tools for interprofessional education (IPE) and continuing interprofessional education (CIPE).

• Understanding and assessing the development of interprofessional teams is an essential component of any CIPE activity.

• Define the purpose of the assessment and ask what you are going to assess, how you are going to assess it, and who will be assessed.

• Define the outcomes expected for the CIPE activity and how those outcomes may be achieved, and examine how the outcomes can be measured, adhering to good assessment principles.
