Surveying the Landscape: An Overview of a Framework and Tools for Direct Observation and Assessment

André F. De Champlain, PhD
Acting Director, R&D
Assessment – Evolving Beyond the Comfort Zone
102nd Annual Meeting of the Council
Sunday, September 14th, 2014

DESCRIPTION

Presented by Dr. André De Champlain at the 2014 Annual Meeting

TRANSCRIPT

1. Surveying the Landscape: An Overview of a Framework and Tools for Direct Observation and Assessment
   André F. De Champlain, PhD
   Acting Director, R&D
   Assessment – Evolving Beyond the Comfort Zone
   102nd Annual Meeting of the Council
   Sunday, September 14th, 2014

2. Acknowledgments
   - Ian Bowmer, MD
   - Julien Lemay (PhD candidate)
   - Marguerite Roy, PhD
   - Fang Tian, PhD
   - Claire Touchie, MD, MHPE

3. Historical Reminder: Driving Force Behind MCC Projects
   - Assessment Review Task Force (ARTF)
   - Created in 2009 to undertake a strategic review of the MCC's assessment processes, with a clear focus on their purposes and objectives, their structure, and their alignment with the MCC's major stakeholder requirements

4. The ARTF Report (2011)
   - Recommendation 1: LMCC becomes the ultimate credential (legislation issue)
   - Recommendation 2: Validate and update the blueprint for MCC examinations
   - Recommendation 3: More frequent scheduling of the exams and associated automation
   - Recommendation 4: IMG assessment enhancement and national standardization (NAC & Practice Ready Assessment)
   - Recommendation 5: Physician performance enhancement assessments
   - Recommendation 6: Implementation oversight, including the R&D Committee priorities and R&D budget

5. ARTF Recommendation 2
   That the content of MCC examinations be expanded by:
   - Defining the knowledge and behaviours in all the CanMEDS roles that demonstrate competency of the physician about to enter independent practice
   - Reviewing the adequacy of content and skill coverage on the blueprints for all MCC examinations
   - Revising the examination blueprints and reporting systems with the aim of demonstrating that the appropriate assessment of all core competencies is covered and fulfills the purpose of each examination
   - Determining whether any general core competencies considered essential cannot be tested employing the current MCC examinations, and exploring the development of new tools to assess these specific competencies when current examinations cannot

6. Best Practice in Assessment

7. Why a New MCCQE Blueprint?
   - "When test content is a primary source of validity evidence in support of the interpretation for the use of a test for credentialing, a close link between test content and the professional requirements should be demonstrated." (Standard 11.3, 2014)
   - "If the test content samples tasks with considerable fidelity (e.g., actual job samples), or in the judgment of experts (examiners), correctly simulates job task content (e.g., OSCE stations), or if the test samples specific job knowledge, then content-related evidence can be offered as the principal form of evidence of validity." (p. 178)

8. What Validity Is
   - "Validity is an overall evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of interpretations and actions based on test scores or other modes of assessment." (Messick, 1989)
   - E.g., the admissions dean at your medical school has asked you to develop an MCQ exam that will be used to admit students to your undergraduate program
   - Evaluative judgment: the score on the admissions exam is a good predictor of MD Year 1 GPA
   - Empirical evidence: a high correlation between exam scores and MD Year 1 GPA
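To make the "empirical evidence" step in the admissions example concrete: that evidence would typically take the form of a correlation between admissions exam scores and subsequent MD Year 1 GPA for the same cohort. The following is a minimal sketch, using small invented arrays of scores and GPAs (not MCC data):

    # Hypothetical illustration of the "empirical evidence" step in the
    # admissions example above: correlate admissions exam scores with
    # MD Year 1 GPA for the same students. All values are invented.
    import numpy as np

    exam_scores = np.array([61, 74, 68, 82, 90, 55, 77, 84, 70, 66], dtype=float)
    year1_gpa = np.array([2.9, 3.4, 3.1, 3.7, 3.9, 2.6, 3.5, 3.8, 3.2, 3.0])

    # Pearson correlation between the two measures for the cohort.
    r = np.corrcoef(exam_scores, year1_gpa)[0, 1]
    print(f"Correlation between admissions exam scores and Year 1 GPA: r = {r:.2f}")

A high positive r, replicated across cohorts, would support the evaluative judgment above; a near-zero r would undermine it. Either way, the judgment attaches to the proposed use of the scores, not to the test itself.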
9. What Validity Is Not
   - There is no such thing as a valid or invalid test
   - Statements such as "my test shows construct validity" are completely devoid of meaning
   - Validity refers to the appropriateness of inferences or judgments based on test scores, given supporting empirical evidence

10. Our Main Validity Argument Based on MCC QE Scores
   - Does the performance (score) on the sample of items, stations, observations, etc. included in any assessment, as reflected by the MCC Qualifying Examination blueprint, correspond to my true competency level in those domains?
   - How accurately do my scores on a restricted sample of MCQs, OSCE stations, direct observations, etc. correspond to what I would have obtained on a much larger collection of tasks?

11. Our Main Validity Argument Based on MCCQE Scores
   - One way to assure that our judgments are as accurate as possible is to develop a blueprint, via a practice analysis, that dictates as clearly as possible what should appear as part of the MCC's Qualifying Examination decision
   - Practice analysis: a study conducted to determine the frequency and criticality of the tasks performed in practice
   - Blueprint: a plan which outlines the areas (domains) to be assessed in the exam (with weightings)
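To illustrate how a practice analysis can feed blueprint weightings: one common convention is to combine each domain's frequency and criticality ratings and normalize the results to percentages of the examination. The following is a minimal sketch under that assumption; the domain names, 1-5 rating scales, and figures are hypothetical and do not reflect the MCC's actual method or weights:

    # Hypothetical sketch: deriving blueprint weights from practice-analysis
    # ratings. Domain names and ratings are invented for illustration only.
    domains = {
        # domain: (mean frequency rating, mean criticality rating) on 1-5 scales
        "Health Promotion and Illness Prevention": (3.2, 3.5),
        "Acute Care": (4.1, 4.8),
        "Chronic Care": (4.5, 4.0),
        "Psychosocial Aspects of Care": (3.0, 3.6),
    }

    # Weight each domain by frequency x criticality, then normalize to 100%.
    raw = {name: freq * crit for name, (freq, crit) in domains.items()}
    total = sum(raw.values())
    weights = {name: round(100 * value / total, 1) for name, value in raw.items()}

    for name, weight in weights.items():
        print(f"{name}: {weight}% of the examination")

Whatever combination rule is chosen, the resulting weights are what tie the sampled exam content back to practice, which is the core of the validity argument on slides 10 and 11.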
12. Sources of Evidence That Informed the MCC QE Blueprint
   - Current Issues in Health Professional and Health Professional Trainee Assessment (MEAAC)
   - Supervising PGY-1 Residents (EPAs)
   - Incidence and Prevalence of Diseases in Canada
   - National Survey of Physicians and the Public

13. SME Panel Meeting: Defining the Task
   (diagram) Candidate Descriptions (D1 & D2); Blueprint; Test Specifications (D1 & D2)

14. Who Were Our SMEs?
   (diagram, grouped around the Blueprint)
   - MRA Rep of Council
   - Central Examination Committee
   - Objectives Committee
   - Test Committees
   - RCPSC
   - University Rep of Council
   - CFPC
   - PGME Deans
   - UGME Deans

15. Proposed Common Blueprint
   - Dimensions of Care: Health Promotion and Illness Prevention; Acute; Chronic; Psychosocial Aspects
   - Physician Activities: Assessment/Diagnosis; Management; Communication; Professional Behaviors

16. Anticipated Gaps
   - Based on a thorough classification exercise (micro-gap analysis), a number of anticipated gaps were identified based on the current MCCQE Part I and II exams
   - Physician activities: Communication, Professional Behaviors
   - Dimensions of care: Health Promotion and Illness Prevention, Psychosocial Aspects of Care

17. Anticipated Gaps
   - But at a higher, systems level (macro-gap level), what are some anticipated gaps in the MCC QE program?
   - Initial philosophical impetus for reviewing where gaps might occur, given current and future directions in medical education assessments, was provided by the Medical Education Assessment Advisory Committee (MEAAC)

18. Thinking Outside the Box: An Overview of the Medical Education Assessment Advisory Committee (MEAAC)

19. MEAAC Mandate
   - The Medical Education Assessment Advisory Committee (MEAAC) is an external panel of experts in their field who advise the Medical Council of Canada (MCC) directorate (Office of the CEO) and, through them, the relevant committees of Council, on ongoing and future assessment processes that are necessary to enable the MCC to meet its mission
   - Critical role: prepare a report and make recommendations, through the directorate, on new approaches to assessment which will enable the MCC to meet its mission

20. MEAAC Members
   - Kevin Eva (Chair)
   - Georges Bordage
   - Craig Campbell
   - Robert Galbraith
   - Shiphra Ginsburg
   - Eric Holmboe
   - Glenn Regehr

21. 3 MEAAC Themes
   1. The need to overcome unintended consequences of competency-based assessment (CBA)
      - CBA offers a focused framework that educators and MRAs can make use of for evaluative purposes, for the sake of improved patient care
      - CBA tends to decontextualize competency and oversimplify what competence truly means (e.g., context specificity)
      - Labeling a passing candidate as "competent"
      - CBA downplays the physician as an active, dynamic, multidimensional learner
   2. Implement QA efforts to facilitate learning and performance improvement
      - Emphasizes the notion of the physician as a continuous learner
   3. Authenticity in assessment practices

22. Recommendations
   1. Integrated and continuous model of assessment
      - E-portfolio
      - Linked assessments
      - Breaks the learning/exam cycle
   2. Practice-tailored modules
      - Tailoring assessments, feedback and learning to the current educational/practice reality (GME)
      - Linking EHRs to tailored exams for formative assessment and feedback purposes
      - Track the impact of feedback on longitudinal performance
   3. Authentic assessment
      - Create unfolding OSCE stations that mimic real practice
      - Determine how the MCC might broaden assessments to incorporate WBAs that have the potential to further inform MCC QE decisions

23. Some Challenges
   - Generalizability of performance
      - Balancing authenticity (validity) with reproducibility
      - What can we conclude from a limited sample of performances in a comprehensive scenario?
   - Better integration of performance data
      - How can low- and high-stakes information be better integrated to promote more accurate decisions as well as promote competent physicians?
      - Will require collaborative models with partners: MRAs, CFPC, RCPSC, medical schools, etc.

24. Direct Observation and Assessment in a High-stakes Setting
   - Growing call for the inclusion of performance data from assessment methods based on direct observation of candidates (workplace-based assessments, WBAs), not only in low-stakes but also in high-stakes decisions
   - Congruent with outcomes-based education and licensure/certification
   - WBAs are appealing in that they offer the promise of a more complete evaluation of the candidate ("Does" in Miller's pyramid)
   - The feedback component is appealing and part-and-parcel of WBAs

25. Challenges in Incorporating WBAs in High-stakes Assessment
   - Procedural challenges: sampling
      - Direct observation of undergraduate trainees occurs infrequently and poorly
      - Structured, observed assessments of undergraduates were done across clerkships for only 7.4%-23.1% of students (Kassebaum & Eaglen, 1999)
      - During any core clerkship, between 17%-39% of students were never observed (AAMC, 2004)
      - Good news: the Class of 2013 was more frequently observed than any past cohort, but still not satisfactory (AAMC, 2014); e.g., 14.2% of 2013 undergraduates report not being observed taking a history (Hx), and 11.6% were not observed doing a physical exam (PE)

26. Challenges of Incorporating WBAs in High-stakes Assessment
   - Procedural challenges: training
      - Observers are infrequently trained to use WBAs
      - High inter-rater variability (Boulet, Norcini, McKinley & Whelan, 2002)
      - Rater training sessions occurred for