Meeting of the Examination Resources Advisory Committee Boston Park Plaza Hotel
Boston, MA Wednesday, September 16, 2015
9:00 – 10:30 a.m.
Draft Agenda
1. Introduction and welcome
2. Approval of agenda
3. Review of draft minutes from the January 8, 2015 meeting
4. Review of committee charter (for information only, taken from the CLEAR Bylaws)
The Examination Resources and Advisory Committee shall:
a. provide examination guidance and assistance to the Board of Directors; other CLEAR
committees, subcommittees, special interest groups and working groups; and the
membership of CLEAR as requested;
b. develop and publish examination guidelines and other materials which would meet the
needs of CLEAR members; and
c. promote the development of examination standards, policies and procedures.
5. Review of CLEAR 2013-2015 strategic plan (for information only)
6. Review of statements of direction from the Board of Directors (Items 7-11)
7. Update on articles or webinar/conference presentations/resources on the following topics (previously identified by the committee):
a. Use of innovative assessment formats (completed and published on CLEAR website)
b. Testing issues in cross-jurisdictional mobility (survey deemed the topic not needed)
c. Accommodations and technology – when accommodations potentially compromise the exam (2015 CLEAR Conference Session)
d. Mobility (survey deemed the topic not needed)
e. Legal challenges (covered in CLEAR Exam Review: Legal Beat)
f. Upon their release, author a publication or commentary on the latest revisions to the Joint Standards, with a particular focus on the implications for regulatory agencies and organizations – Follow-up (Published in CLEAR Exam Review Spring 2015)
8. As the work under task 7 is completed, the committee is invited to identify additional topics for which webinar/conference presentations/publications can be developed
9. Review of Monographs:
a. Discussion of revisions, suggested changes by ERAC members and establishment of
deadlines for completion
i. Development, Administration, Scoring and Reporting of Credentialing Examinations:
Recommendations for Board Members, Revised Edition: August 2004
ii. Principles of Fairness
10. Support CLEAR Exam Review via the development of additional articles and features
11. Assist in the identification of CLEAR Regulatory News stories or Discussion Forum Quick Poll
questions of interest to the testing and measurement community
12. Other Business
a. Update on Demystifying Professional and Occupational Regulation
b. Discussion of meeting dates for 2016
13. Adjournment
Draft Minutes
Examination Resources and Advisory Committee
The Francis Marion Hotel, Charleston, SC
January 8, 2015, 3:00 p.m. Eastern
Those participating were:

Chair: Sandra Greenberg, Professional Examination Service
Vice Chair: John Pugsley, Pharmacy Examining Board of Canada
Members: Danny Breidenbach, Applied Measurement Professionals; Sara Cowling, Prometric; Ida Darragh, North American Registry of Midwives; Susan Davis-Becker, Alpine Testing Solutions; Chuck Friedman, Professional Examination Service; Jodi Herold, University of Toronto; Stacy Lawson, Prometric; Peter Mackey, CFA Institute; Steve Nettles, Applied Measurement Professionals; Linda Waters, Prometric; Sarah Wennik, Pearson VUE; Elizabeth Witt, Witt Measurement Consulting; Cynthia Woodley, Professional Testing, Inc.; Tony Zara, Pearson VUE; Jim Zukowski, 360training
Visitors: Claude Balthazard, Human Resources Professional Association; Francis Picherack, Petrine Consulting
Staff: Kelly McKown, CLEAR staff; Adam Parfitt, CLEAR staff

Chair Sandra Greenberg called the meeting to order at 3:05 p.m. Eastern. After introductions, the agenda was approved by consensus.

Approval of minutes from September 10, 2014 meeting

Sarah Wennik noted the need to correct the spelling of her name, and Elizabeth Witt asked for correction of her company name as well. With those changes noted, Jim Zukowski moved to approve the minutes as amended, with Steve Nettles seconding the motion. The motion passed.

Review of committee charter, CLEAR 2013-2015 organizational plan
The charter and strategic plan have been previously reviewed, and there were no further questions or comments.

Statement of Direction Task One: Finalize articles or webinar/conference presentations/publications on the following topics:

Alternative assessment formats – Jim Zukowski reported that the document had been submitted to CLEAR staff, and Adam Parfitt confirmed that it is available on the CLEAR website as well as having been announced on CLEAR News and on the member section. It was also sent out as an email blast. The committee suggested reviewing the website analytics to better understand how frequently the various resources are used, in order to inform future decisions regarding dissemination of information. CLEAR staff will follow up.

Testing issues in cross-jurisdictional mobility – Sandy Greenberg reported that 11 people responded to the Quick Poll survey. No single issue rose to prominence, though there were concerns about differences in scopes of practice.

Accommodations and technology – Discussion focused on the need to identify a go-to resource for issues surrounding accommodations. Tony Zara suggested that John Hosterman with Pearson VUE might be able to speak to some of the questions, and John Pugsley recommended Janet Carson, who spoke at the CLEAR Pittsburgh conference. Discussion regarding identifying a specific go-to resource person was tabled until the next meeting.

Statement of Direction Task Two: Identify additional topics for which webinars/conference presentations/publications can be developed

The committee will continue to identify any additional topics for which webinars, conference presentations, and publications can be developed.

Author a publication or commentary on the latest revisions to the Joint Standards, with a particular focus on the implications for regulatory agencies and organizations

The committee reported that Ron Rodgers will be analyzing the Joint Standards and George Gray will review issues arising from the Joint Standards in the next edition of the CLEAR Exam Review (CER).

Support CLEAR Exam Review via the development of additional articles and features

A recommendation was made that a Q&A or FAQ could be developed for questions surrounding the Joint Standards, to be included in the next CER. The Spring 2015 issue will focus on the Joint Standards, including Ron Rodgers' summary. The editorial committee would like more articles for the issue, and Cynthia Woodley indicated that she will talk to several psychometric colleagues regarding their views on the Joint Standards. Publication is anticipated for March.
Assist in the Identification of CLEAR Regulatory News stories or Discussion Forum Quick Polls

Prior to the midyear meeting, five questions were submitted. During in-meeting discussion, another 27 questions (appended to the minutes) were generated. Sandy Greenberg will work with CLEAR staff to develop a survey in which committee members can rank their top five questions, generating one Quick Poll question per month. Once the topics have been selected, the committee will ask members if they would be willing to synthesize the information in brief paragraphs for report back to the committee.

Other Business

Review of Monographs – Considerable discussion took place regarding the need to update The Americans with Disabilities Act: Information for Credentialing Examinations publication, not only to reflect any legal changes since the original publication but also to include information from the Canadian perspective. Steve Nettles suggested that the committee expand an existing 2015 conference proposal around the issue of accommodations to create a two-hour pre-conference workshop for September. Committee members agreed that this would also be a good method of identifying future experts and might help to identify additional emerging issues. The committee also agreed that Development, Administration, Scoring and Reporting of Credentialing Examinations: Recommendations for Board Members and Principles of Fairness need to be updated. Steve Nettles offered to lead the update of the Development, Administration, Scoring and Reporting of Credentialing Examinations publication, and Cynthia Woodley, Daniel Breidenbach, Jim Zukowski, and Jodi Herold offered to work on the revision as well. Cynthia Woodley also suggested that Principles of Fairness could easily be updated hand-in-hand with the other publication, and the sub-committee will proceed with this option.

Committee Meeting Schedule for 2015 – A committee teleconference will be scheduled within a month to follow up, and calls will be held with work groups.

Update on Demystifying Professional and Occupational Regulation – The document has been edited and reformatted, and references have been approved; it is waiting on the future trends section before being sent to a graphic artist. The document will be available by the September conference, with reduced pricing for members.

There being no further business, the meeting adjourned at 4:57 p.m. Eastern.
Respectfully submitted,
Kelly McKown
Actions of the CLEAR Examination Resources and Advisory Committee taken during its
January 8, 2015 meeting
1. With corrections from Elizabeth Witt and Sarah Wennik, approved minutes from September 10, 2014.
2. Developed a comprehensive list of questions that can be used in future Quick Polls.
3. Recommended a pre-conference session regarding ADA accommodations for the Annual Conference.
4. Identified a work group to revise Development, Administration, Scoring and Reporting of Credentialing Examinations: Recommendations for Board Members, possibly in conjunction with a revision of Principles of Fairness.
Task list resulting from the January 8, 2015 meeting of the
CLEAR Examination Resources and Advisory Committee
1. Support working group on revision of existing publications.
2. Establish a survey of Quick Poll questions.
3. Follow up with Tony Zara regarding contact with John Hosterman on ADA information, as well as with John Pugsley regarding contact with Janet Carson.
4. Review the analytics of CLEAR membership communication methods.
DEVELOPMENT, ADMINISTRATION, SCORING AND REPORTING OF
CREDENTIALING EXAMINATIONS:
Recommendations for Board Members
The Council on Licensure, Enforcement and Regulation
First Edition: March 1993
Second Edition: August 2004
Third Edition: September 2015
Original Copyright 1993
The Council of State Governments
3560 Iron Works Pike
P.O. Box 11910
Lexington, Kentucky 40578-1910
ISBN 0-87292-982-5 C-061-93
Second Edition Copyright 2004
Council on Licensure, Enforcement, and Regulation
Lexington, KY
Third Edition Copyright 2015
Council on Licensure, Enforcement, and Regulation
Lexington, KY
ACKNOWLEDGMENTS
The first edition of this document was the product of the contributions of many individuals
who served on the Examination Resources and Advisory Committees of The Council on
Licensure, Enforcement and Regulation (CLEAR) from 1991 through 1993. The initial drafts
of the document were prepared by:
Nancy J. Miller, RN, MS
NCLEX Program Manager
National Council of State Boards of Nursing
In January of 1992, Lee Schroeder, Ed.D., President, Schroeder Measurement, accepted
responsibility for the preparation of the subsequent draft of the 1993 First Edition of this
document. The following individuals graciously gave their time to provide valuable
contributions by writing portions of the document and by reviewing the numerous drafts:
Kara Schmitt, Ph.D.
Director of Testing Services
Michigan Department of Commerce

Barbara Showers, Ph.D.
Director of the Office of Examinations
Wisconsin Department of Regulation and Licensing

Kate Windom
Director of Testing Services
Applied Measurement Services

Jim Zukowski, Ed.D.
Assistant Director of Professional Licensing and Certification
Texas Department of Health
In addition, Educational Testing Service's General Counsel Stanford von Mayrhauser was
very helpful in reviewing one of the very final drafts and in helping to clarify several of the
clumsier explanations.
Appendix A displays the names of those individuals who served on the Examination
Resources and Advisory Committees of The Council on Licensure, Enforcement and
Regulation (CLEAR) from 1991 through 1993, and 2004-2005. These individuals had a
tremendous impact on this document through their comments, direction, and review.
2015 Acknowledgments
This document was originally published in 1993 by CLEAR, and was reviewed and updated
in 2004. It was prepared under the direction of the Examination Resources and Advisory
Committee (ERAC). Because of continuing developments in the testing industry, the ERAC
decided that a review was necessary to make appropriate revisions to be consistent with
current practice. The intent of this revision was to maintain the original purpose of this
document as a guideline for use by CLEAR member boards. The majority of the review and
revisions to this document were completed by the following individuals, listed in
alphabetical order:
Daniel H. Breidenbach
Program Director
Applied Measurement Professionals, Inc.
Larry Flint
Director of Operations
Applied Measurement Professionals, Inc.

Jodi Herold
Psychometric Consultant
University of Toronto
Steve Nettles
Program Director
Applied Measurement Professionals, Inc.
Cynthia Woodley
Vice President of Operations
Professional Testing, Inc.
Jim Zukowski
Consultant
360training
CLEAR Examination Resources and Advisory Committee 2014-2015

Chair: Sandra Greenberg, VP for Research & Advisory Services, Professional Examination Service
Vice-Chair: John Pugsley, Registrar/Treasurer, The Pharmacy Examining Board of Canada

Grady Barnhill, Director of Examination Programs, Nat'l Commission on Certification of PAs
Daniel H. Breidenbach, Program Director, Psychometrics, Applied Measurement Professionals Inc.
Sara Cowling, Client Services Manager, Prometric
Susan Davis-Becker, Senior Psychometrician, Alpine Testing Solutions
Chuck Friedman, Program Director, Professional Examination Service
Cathy Giblin, Registrar/Director, Registration Services, College & Assn of Registered Nurses of AB
Jodi Herold, Psychometric Consultant, University of Toronto
Christy Kivari, Exam Administrator, College of RNs of BC
Stacy Lawson, Team Lead, Healthcare Client Services, Prometric
Peter Mackey, Head, CFA Program & Exam Dev, CFA Institute
Tammy Murdoch, Manager, Registration Services, College of Registered Nurses of MB
Paul Naylor, Director, PDN Consulting, LLC
Steve Nettles, Program Director, Psychometrics, Applied Measurement Professionals, Inc.
Ron Rodgers, Director of Psychometric Services/President, CTS/Employment Research Institute
Mary Romelfanger, Associate Director, Institute for Optimal Aging
Linda Waters, Vice President, Prometric
Sarah Wennik, Senior Content Developer, Pearson VUE
Elizabeth Witt, Chief Consultant & Psychometrician, Witt Measurement Consulting
Ronald Wohl, Vice President, Wohl Communication Services Inc.
Cynthia Woodley, Vice President, Professional Testing Inc.
Anthony Zara, Vice President Assessment Solutions, Pearson VUE
Jim Zukowski, Consultant, 360training
Preface
The role and responsibilities of occupational and professional regulatory boards and
agencies in the development, administration, scoring and reporting of credentialing
examinations have become more complex, litigious, and fraught with peril. Assessment
instruments must reasonably comply with professional standards that ensure fairness and
integrity and do not unreasonably restrict access to the occupations and professions that
are regulated.
Protection of the public health, welfare, and safety continues to be the principal charge of
the credentialing boards. While expert in the art and science of the occupation or
profession that is regulated, credentialing boards have increasingly been perplexed by the
complexity of credentialing examinations. This document attempts to demystify the
process. The reader need not be a testing expert to grasp the terminology or standards
that are contained in this publication. Instead, in lay terms, the critical standards
necessary to promote an effective credentialing process are explored and defined.
We are indeed fortunate that national experts in the field of testing have given of their
time, energy, and expertise through CLEAR to create such a timely and important
resource. I trust that you will find that the simply defined standards contained here will
assist credentialing boards in effectively meeting their mission.
Henry A. Fernandez
CLEAR, President, 1992-93
As we enter the twenty-first century, the issues underlying the roles and responsibilities
remain constant even though the technology available to us grows by leaps and bounds.
As such, the words written by my predecessor remain true today.
Deanna Williams
CLEAR, President, 2003-04
(Insert new message from President here)
TABLE OF CONTENTS
I. Introduction ................................................................................................................................. 15
II. Test Development ...................................................................................................................... 18
A. Job/Practice Analysis ........................................................................................................ 18
B. Test Specifications ............................................................................................................ 19
C. Developing Objectively Scored Examinations ................................................................. 21
1. Multiple Choice Item Development ........................................................................... 21
2. Non-Written Examination Items (Oral, Practical, OSCE and Essay) Development ... 25
3. Assembling an Examination Form ............................................................................. 28
4. Standard Setting ......................................................................................................... 28
5. Timing the Examination ............................................................................................. 29
III. Test Administration .................................................................................................................. 30
A. Candidate Bulletin ............................................................................................................. 30
B. Accommodating Candidates With Disabilities ................................................................. 30
C. Test Administration Manual ............................................................................................. 35
D. Computer-Based Testing ................................................................................................... 37
IV. Statistical Analysis and Research ............................................................................................. 40
A. Item Analysis .................................................................................................................... 40
B. Test Analysis ..................................................................................................................... 43
C. Test Equating .................................................................................................................... 45
V. Scoring and Reporting ............................................................................................................... 46
A. Scoring .............................................................................................................................. 46
B. Reporting ........................................................................................................................... 47
VI. Examination Security................................................................................................................ 48
I. Introduction
Examinations play a vital role in the credentialing process. While the purpose of
credentialing examinations – assuring minimal competence to practice a profession at
the entry level – is generally well understood, the technical aspects of examinations
are less well known and often misunderstood.
Since the mission of regulatory agencies is public protection, it is essential that
members of these agencies understand the tasks that should be performed in the
development, administration, analysis, and reporting of credentialing examinations.
This monograph has been developed to help provide the information necessary to
understand the technical basis for many of the practices followed by testing
professionals.
Examinations for credentialing are by their nature “high stakes,” meaning that the
consequences of a mistake on the part of the examiner or examinee are significant,
namely the delay or elimination of the candidate from practice. Failure on the part of
an agency to observe the technical foundations of testing may result in errors such as
the denial of credentialing to a competent candidate or the credentialing of an
incompetent candidate.
Regulatory Boards are empowered to establish the criteria that credentialing
candidates must meet. Such criteria are designed to ensure that licensees possess the
appropriate knowledge and skills in sufficient degree to perform important occupational
activities safely and effectively. Frequently, one of these criteria is the ability to pass a
Board-specified examination. Whether the examination is developed by Board
administrative staff, developed by a state testing unit or by a test contractor for the
Board, or purchased from a nationally recognized testing organization, the Board
assumes certain responsibilities in connection with the examination. A major
responsibility of the Board is to ensure that the examination reflects the knowledge and
skills necessary for competent performance from the perspective of public protection.
The degree to which accumulated evidence supports specific interpretations of test
scores entailed by the proposed uses of a test is referred to as validity.
An examination used in credentialing plays a major role in the protection of the public
from incompetent practitioners. It also plays a crucial role in protecting the state and
the credentialing Board itself from the liability associated with failing to license a
competent practitioner. The validity and defensibility of a credentialing examination
depends upon two key criteria:
1. The examination must measure the competence required for safe
and effective entry level job performance.
2. The examination must distinguish between candidates who do and
do not possess this competence.
The first of these criteria is met by establishing a link between the items on the
examination and the tasks, knowledge, skills, and/or abilities (KSAs) essential to public
safety that are actually performed on the job. This linkage is initially established
through a job or practice analysis and maintained by ensuring that all test forms are
developed consistently with a test plan that accurately reflects the results of the job
analysis.
The second criterion is satisfied by establishing a minimum passing score that defines
the minimum level of competence, in terms of examination performance, required for
public protection.
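To illustrate how such a passing score might be derived, here is a minimal sketch of a modified Angoff calculation, one commonly used standard-setting method. The method choice, judge estimates, and item set are illustrative assumptions, not prescriptions from this document.

```python
# Minimal sketch of a modified Angoff standard-setting calculation.
# Each judge estimates, for every item, the probability that a minimally
# competent candidate answers it correctly. The passing score is derived
# from the mean of those estimates. All numbers here are hypothetical.

angoff_ratings = {
    # item_id: [judge1, judge2, judge3] probability estimates
    "item_01": [0.60, 0.55, 0.65],
    "item_02": [0.80, 0.75, 0.85],
    "item_03": [0.40, 0.50, 0.45],
}

item_means = {item: sum(r) / len(r) for item, r in angoff_ratings.items()}
raw_cut = sum(item_means.values())  # expected score of a minimally competent candidate

print(f"Recommended passing score: {raw_cut:.1f} of {len(angoff_ratings)} items")
```

In practice a Board would use many more items and judges, and would document the procedure as part of its validity evidence.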
In addition to test validity and defensibility, the Board also assumes responsibilities
concerning the reliability of the scores produced by the examination. An examination
is said to be reliable, or to generate reliable scores, if the results of the examination
are dependable and repeatable. That is, the examination should consistently pass
candidates who can demonstrate that they have the knowledge, skills, and/or abilities
needed to perform the job competently, and it should consistently fail candidates who
cannot demonstrate such knowledge, skills, and/or abilities.
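One widely used index of this consistency for right/wrong-scored examinations is the KR-20 reliability coefficient. The sketch below computes it over a small hypothetical response matrix; the statistic and data are offered as an illustration, not as a requirement stated in this document.

```python
# Minimal sketch: KR-20 reliability for dichotomously scored items.
# Rows are candidates, columns are items (1 = correct, 0 = incorrect).
# Data are hypothetical.

responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]

n_items = len(responses[0])
totals = [sum(row) for row in responses]

mean_total = sum(totals) / len(totals)
var_total = sum((t - mean_total) ** 2 for t in totals) / len(totals)

# p = proportion answering each item correctly; q = 1 - p
pq_sum = 0.0
for j in range(n_items):
    p = sum(row[j] for row in responses) / len(responses)
    pq_sum += p * (1 - p)

kr20 = (n_items / (n_items - 1)) * (1 - pq_sum / var_total)
print(f"KR-20 reliability: {kr20:.2f}")  # 0.80 for this toy data set
```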
A Board's failure to accept responsibility for the quality of the examination may result
in serious consequences. At a minimum, a Board may be embarrassed by its inability
to defend the procedures employed, even if these procedures are those of an outside
testing agency. More troubling consequences are possible, including the potential for
costly lawsuits filed against a Board by candidates who believe they have been
financially harmed by the Board’s use of inadequate testing procedures. It is
important to note that even in a situation where a Board elects to use an
examination developed by an outside agency, it remains the responsibility of
the Board to ensure that the examination meets acceptable standards. In
particular, it is important to realize that decisions based upon a given examination may
be valid in one setting and invalid in another. The Board, therefore, must
independently evaluate the appropriateness of using a published examination and
cannot rely solely on the evaluations of others.
In the following pages, recommendations are made that are designed to assist Boards
in developing and maintaining testing programs that meet the aforementioned
requirements. To help Boards evaluate existing examination programs, questions are
posed at the end of each section. These questions provide a method of applying the
recommendations found within the text.
Lastly, these recommendations are consistent with the Standards for Educational and
Psychological Testing developed by the American Educational Research Association, the
American Psychological Association and the National Council on Measurement in
Education, 2014.
II. Test Development
A credentialing examination should be job-related and should distinguish between
individuals who are at least minimally competent and those who are not minimally
competent. Experts in the field of test development have developed a process for
constructing examinations that exhibit these characteristics. This process is
described below:
A. Job/Practice Analysis
Job or practice analysis refers to the study of the elements of tasks, knowledge,
skill, and/or ability necessary for an individual to practice. Job/practice analysis
also refers to the determination of the tasks and/or KSAs that job incumbents
typically perform or possess and are significant to competent performance. Once
a job analysis is completed, test specifications are developed to guide the test
development process to ensure that the examination is job-related. In some cases,
an examination may be based on a national job analysis. In this situation, a Board
may still find it necessary to conduct an additional job analysis to ensure that the
findings of the national job analysis conform to the profession as it is practiced
within their jurisdiction.
An initial step in any job analysis study conducted in connection with a
credentialing program is the determination of the scope of practice as defined by
jurisdiction laws and regulations. Usually, tasks outside of the legally defined
scope of practice for a profession cannot be performed legally by members of the
group being regulated. Accordingly, laws and regulations defining scope of
practice establish the boundaries within which the profession must operate and,
thus, define the domain that must be represented in the credentialing
examination.
With the scope defined, it is important to determine how the test is to reflect
what is done on the job. There are several recognized methods for conducting
this aspect of a job analysis, and neither statute nor case law establishes which
must be used.
Professional standards intended to be consistent with legal requirements indicate
that empirical data should be collected from job incumbents by asking them to
evaluate the significance (e.g., frequency, importance, criticality) of each task
and/or KSA as it relates to safe practice. A job analysis is the method used to
gather this information and is the foundation of any examination program. If it is
not done properly, examinations built on the results are not defensible.
There are many methods for performing a job analysis.
An example of one way to collect such data is to have content experts compile a list of tasks
that entry level practitioners are required to do on the job. Next, these content experts may
develop a list of knowledge elements, skills, and/or abilities that are prerequisite to the
performance of these tasks. This list of tasks, knowledge, skills, and/or abilities may be
augmented and refined by interviews with job incumbents or by systematic observations of
entry-level practitioners on the job. Once the list has been compiled, a representative sample
of practitioners is solicited and asked to rate the significance of each task to the profession.
Various dimensions may be used, including the frequency with which they perform the tasks,
the importance of the tasks, the criticality or potential for harm in the event that a task is not
performed correctly, and the knowledge, skills, and abilities essential for public protection.
Such a survey methodology helps to identify the sub-collection of tasks, knowledge, skills,
and/or abilities that are most crucial to public protection. Additional information is presented
in the CLEAR Resource Brief entitled Regulators’ Role in Establishing Defensible Credentialing Programs.
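As a concrete illustration of the survey-summarization step, the sketch below combines hypothetical frequency and criticality ratings into a single significance index per task and applies an inclusion cutoff. The tasks, scales, weights, and cutoff are all invented; real studies choose and justify these deliberately.

```python
# Minimal sketch: summarizing job analysis survey ratings into a task
# significance index. Ratings (1-5 scales) and the weighting scheme are
# hypothetical; real studies select dimensions and weights deliberately.

tasks = {
    # task: (mean frequency rating, mean criticality rating)
    "Start IV infusion":       (4.2, 4.8),
    "Document patient intake": (4.9, 3.1),
    "Calibrate equipment":     (2.0, 4.5),
    "Order office supplies":   (3.5, 1.2),
}

INCLUSION_CUTOFF = 3.0  # tasks scoring below this are dropped from the blueprint

for task, (freq, crit) in sorted(tasks.items()):
    significance = 0.4 * freq + 0.6 * crit  # weight criticality more heavily
    status = "retain" if significance >= INCLUSION_CUTOFF else "drop"
    print(f"{task:<24} significance={significance:.2f} -> {status}")
```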
B. Test Specifications
After the job analysis has been completed and the Board has determined the
tasks required of minimally competent, entry-level practitioners, the test
specifications or blueprint for the examination can be created. The purposes of
test specifications are (1) to guide test developers in constructing examinations
that are consistent with the job analysis and ensure that each form of the
examination measures the same basic concepts, and (2) to provide information to
candidates about the content of the examination. Test specifications outline the
content of the examination and indicate the relative emphasis to be given to
various content areas.
Test specifications may also be multi-faceted, describing not only the distribution
of items among content or subject areas, but also among sub-content areas and
cognitive complexity levels. This distribution among cognitive levels will define
how the candidate will demonstrate the task, knowledge, skill, or ability being
tested, for example, by asking knowledge/recall level items, application level
items or analysis/evaluation level items within each content area. Adding the
second dimension, cognitive complexity, provides additional evidence of the
job-relatedness of the examination. The following is an example of a specification for
a 100-item examination. The specification has two facets: content areas
numbered one through five, and cognitive levels labeled "Knowledge,"
"Application," and "Analysis." Note that the job analysis that was used to develop
this set of test specifications indicated that one content area (Quality Assurance)
was not appropriate for testing within one cognitive level (Analysis).
Test Specifications - Medical Specialty Examination

CONTENT AREA                    KNOWLEDGE   APPLICATION   ANALYSIS   TOTAL ITEMS
1. Fluid and Electrolytes            5           14            3          22
2. Techniques and Equipment          3           12            4          19
3. Transfusion Therapy              11            9            4          24
4. Pharmacological Agents            7            6            7          20
5. Quality Assurance                 9            6            0          15
TOTAL                               35           47           18         100
The test specifications must also define the type of examination format that is to be
used (written: multiple choice, essay, or non-written: oral, performance [practical,
OSCE, etc.]). This decision will be based on practical considerations, logical reasoning,
past research, and/or the job analysis information, which may help to identify the
optimal way to measure each content area. These decisions will, of necessity, also be
mindful of the cost of various examination formats. Formats such as multiple choice
items may be more costly to develop but less costly to administer and score. Other
formats, such as performance tests, may be less costly to develop but are usually more
expensive to administer and score. For example, the administration and scoring may
become quite expensive per candidate if special facilities are required, multiple
examiners are needed, and the number of candidates is very small. Also, careful
attention must be paid to the overall scoring process to ensure all candidates are
scored fairly and reliably.
As additional forms of the test are constructed, each must meet the requirements of
the test specifications to ensure that candidates have a similar testing experience,
regardless of what form of the examination they take.
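Conformance with the test specifications can be checked mechanically each time a form is assembled. The following minimal sketch validates a hypothetical form's item counts against a few cells of a two-facet blueprint like the sample table above; the blueprint subset and items are invented.

```python
# Minimal sketch: verifying that an assembled form matches the test
# specifications (blueprint). Counts echo the sample two-facet
# specification above, abbreviated to a few cells for brevity.

from collections import Counter

# Required number of items per (content_area, cognitive_level) cell.
blueprint = {
    ("Fluid and Electrolytes", "Knowledge"): 2,
    ("Fluid and Electrolytes", "Application"): 1,
    ("Quality Assurance", "Knowledge"): 1,
}

# Each item in the assembled form carries its blueprint classification.
form_items = [
    {"id": "Q1", "area": "Fluid and Electrolytes", "level": "Knowledge"},
    {"id": "Q2", "area": "Fluid and Electrolytes", "level": "Knowledge"},
    {"id": "Q3", "area": "Fluid and Electrolytes", "level": "Application"},
    {"id": "Q4", "area": "Quality Assurance", "level": "Application"},  # misfiled
]

actual = Counter((item["area"], item["level"]) for item in form_items)

for cell, required in blueprint.items():
    if actual.get(cell, 0) != required:
        print(f"MISMATCH {cell}: need {required}, have {actual.get(cell, 0)}")
for cell in actual:
    if cell not in blueprint:
        print(f"UNEXPECTED cell {cell}: {actual[cell]} item(s) outside the blueprint")
```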
QUESTIONS TO ASK CONCERNING JOB ANALYSIS AND TEST SPECIFICATIONS
A. Has a job analysis been conducted recently?
1. A general rule of thumb is that a job analysis should be conducted every 5
years or so, with a range of 3-7 years.
B. Are all of the members of the Job Analysis Advisory Committee (AC) subject
matter experts (SMEs) and do they represent the diversity of the profession?
C. Does the job analysis include specific or detailed activities required for
competent performance in the occupation or profession at the entry
level, i.e., task, knowledge, skill, and/or ability (KSA) statements?
D. Did the AC use the results of the job analysis in a logical and rational
manner to develop the test specifications?
1. Competencies were retained or deleted based on the ratings collected
from the job analysis respondents.
2. Weighting (e.g., number of items) of the content areas was based
on the ratings collected from the job analysis respondents and the
AC’s interpretation of the data.
3. Cognitive level distributions of each section were based on the ratings
collected from the job analysis respondents.
E. Does the selected test format reflect the results of the job analysis?
F. Has the job analysis process been documented?
C. Developing Objectively Scored Examinations
1. Multiple Choice Item Development
Item development refers to the actual writing of examination items as well as
the review of the items from the perspectives of both conformance with sound
testing practice and factual accuracy. The development of test items is more
than assembling a group of experts, handing them a textbook, and having them
start writing. Sound practices for item development include thoughtful
selection of the experts who will develop the examination items, training
the experts on how to construct items that minimize the effect of
“test-wiseness,” and careful review of the newly written examination items.
An initial task in this process is to identify the individuals who will serve as item
writers. Usually these individuals are members of the profession who are
viewed as masters or content experts by their peers. Selected experts should
be thoroughly familiar with the content for which they will be constructing
examination items. They should be willing to dedicate the undistracted time
required to write examination items and be willing to adhere to non-disclosure
and security requirements of the Board. Successful incumbent workers and content
experts (trainers, textbook authors, researchers, etc.) often make good item
writers.
Objectively scored examination items include multiple-choice, true/false, and
matching examination items. All of these item formats may be used for
professional licensing examinations, with multiple-choice being used most often.
The test specifications will indicate the types of examination items that are
required and how many items are allocated to each content area or knowledge
area identified by the job/practice analysis.
Additional considerations in determining the number of examination items that
should be stored in the item bank or item pool include reviewing how often the
examination will be given, how many forms of the examination will be created,
how often the forms will be rotated, and how often the forms will be updated
and changed. Finally, on a routine basis, existing examination item pools
should be evaluated and new examination items constructed to replace
outdated, overused, poorly constructed or compromised items. Items
constructed at an item writing meeting should reflect the needs based on the
content outline and the distribution of items among categories set forth in the
test specifications.
Once identified, item writers must be trained to ensure that they understand the
specifications, are familiar with the characteristics of good test items, and are
comfortable with their role in the process. This usually means that the item
writers must be brought together in a group (or several groups in the case of a
large project). At these meetings, item writers are given an opportunity to
review the test specifications and discuss them with other item writers. Item
writers are given instruction in item writing techniques and provided with an
assignment telling them how many items should be written and what topic
areas in the specifications should be covered. Item writers may do their work in
a group setting, sharing items with one another, or they may work individually.
Item writing training includes teaching item writers about best practices in the
construction of the various types of objectively scored items. These best practices
contribute to the development of clear and concise items to help minimize candidate
confusion in understanding the examination items and reduce the chances that the
candidate can guess the right answer by minimizing “test-wiseness.” Test-wiseness is the
ability of a test taker to select the correct answer despite not knowing the content because
of hidden clues unknowingly embedded in the examination items. Finally, item writing
training covers the logistical techniques associated with the test items themselves
(how to enter the items directly into the item bank, how to use the paper item
submission forms, how the items are to be delivered, etc.).
Trained item writers are then assigned topics and goals for items to be written,
provided resources (if needed) such as textbooks, codes and regulations, and
allowed to begin writing new examination items.
Part of the role of the item writer is to indicate precisely where in the test
specification outline each newly written item is best classified. Item writers
should also indicate which answer is correct or best and if possible, cite a
reference that can verify the accuracy of the information measured in the item.
After the items have been written, they must be reviewed. Item review consists
of two types of reviews. One is the review of the item for clarity of grammar
and sentence structure, spelling, and adherence to sound item writing practices.
These types of reviews can be conducted by trained item reviewers or by other
trained non-content experts with language and grammatical expertise. The
other is the review of the item for technical accuracy and correct assignment of
the item to the specifications. These types of reviews are best conducted by
expert practitioners in the field. These experts can also review any editorial
changes made by non-content experts to ensure the meaning of items has not
been affected by the edits. Items should also be reviewed to ensure that they
do not inadvertently advantage or disadvantage any population subgroup.
Item reviews are ideally conducted in a secure setting. This is especially
important at this stage because the product from this activity is an item that is
considered ready to be placed on an examination either for actual scored use or
for pilot testing. During the item review process, the reviewers should confirm
the assignment of the item to the specifications, and they should verify the
accuracy of the correct response.
It is usually the case that a substantial number of items will fail to survive the
review process, but the resulting item pool should comprise well-written and
accurate test items which are clearly assigned to test specifications and
correctly keyed.
For this review process to have the greatest value, the decisions and judgments
of the review committee should be documented. Notes should be maintained
concerning the date of the review, whether or not the item was approved, the
nature of any edits deemed necessary by the committee, and, if not approved,
the reason why the item cannot be used on an examination. These notes
document the validity of the process by which the examination is constructed.
QUESTIONS TO ASK CONCERNING THE DEVELOPMENT OF MULTIPLE CHOICE TEST
ITEMS
1. Does the item measure an important task, knowledge, skill, or ability
required of a minimally competent, entry-level licensee?
2. Does the item reflect the goal of protecting the health, safety, and welfare of
the public?
3. Has an item pool been developed, and is it reviewed annually?
4. Does the item pool include items testing various levels of cognitive
complexity, e.g., knowledge, application and analysis, which is reflective of
the cognitive demands of the job and not just the recall of facts?
5. Have a sufficient number of items been developed for each element of the
content specifications? (Note that the number required will vary based on
frequency and mode of test administration.)
6. Are the items coded to reflect the content area of the specifications to which
they correspond?
7. Is each item referenced to available published material that confirms the
correct response?
8. Are the items worded clearly and concisely?
9. Is the language level appropriate for the candidate group?
10. Are items stated so that obtaining the correct answer does not rely on the
response to another item?
11. Is a complete item asked, allowing a candidate to form a tentative answer
without having read the options presented?
12. Have clues or tricks been eliminated from the items?
13. Have negatively stated items been used sparingly? When possible, such
items should not be used.
14. Are qualifying words (e.g., NOT, MOST, BEST) emphasized consistently (i.e.,
underlined or capitalized)?
15. Has the use of absolute qualifying words (i.e., always, all, never) been
avoided?
16. Have double negatives been omitted?
17. Has unfamiliar, figurative, literary or textbook language been avoided?
18. Have words which may have different meanings to different persons (i.e.,
some, often) been eliminated?
19. Are response options arranged in a reasonable order (e.g., alphabetically,
numerically, size order, etc.)?
20. Has the response option “all of the above” been avoided?
21. Has the response option “none of the above” been avoided?
22. Have words in the item that give clues as to the correct answer been
eliminated?
23. Are response options grammatically consistent with the structure of the item
(singular versus plural, male versus female)?
24. Have overlapping response options been eliminated? (e.g., if option A is
“Less than 104” and option B is “Less than 110,” and if A is the correct
response, then B is also correct.)
25. Are all response options parallel in length, form, and content?
26. Are all response options plausible?
27. Is there only ONE correct or best response option?
28. Is the item grammatically correct?
29. Have all words and phrases that may be considered offensive or harmful to
any subgroup been eliminated?
30. Have the items been reviewed and approved by both content experts and
measurement experts?
31. Do candidates have access to all the information required to answer the
items correctly during the test?
32. Should a minimally qualified or entry-level candidate be able to answer this
item?
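A few of the questions above (for example, 15, 20, and 21) can be screened automatically before expert review. This minimal sketch flags absolute qualifiers in the stem and banned response options; the sample item is invented, and most of the checklist still requires human judgment.

```python
# Minimal sketch: automated screening for a few mechanically checkable
# item-writing rules from the checklist above (absolute qualifiers,
# "all/none of the above" options). Most rules still need human review.

import re

ABSOLUTE_QUALIFIERS = re.compile(r"\b(always|never|all|none)\b", re.IGNORECASE)
BANNED_OPTIONS = {"all of the above", "none of the above"}

def lint_item(stem, options):
    """Return a list of warnings for one multiple-choice item."""
    warnings = []
    if ABSOLUTE_QUALIFIERS.search(stem):
        warnings.append("stem contains an absolute qualifier")
    for opt in options:
        if opt.strip().lower() in BANNED_OPTIONS:
            warnings.append(f"banned response option: {opt!r}")
    if len(set(o.strip().lower() for o in options)) != len(options):
        warnings.append("duplicate response options")
    return warnings

# Hypothetical item that violates two rules.
print(lint_item(
    stem="A licensee must always verify which of the following?",
    options=["Patient identity", "Dosage", "All of the above"],
))
```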
2. Non-Written Examination Items (Oral, Practical, OSCE and Essay) Development
Oral, practical, Objective Structured Clinical Examination (OSCE) and essay
items and examinations differ from objectively scored examinations with
respect to a number of points of comparison. A fundamental point to
remember is that non-written examinations are not objectively scored in the
same manner as multiple choice examinations. The process of scoring for
multiple choice examinations is obvious; the candidate's responses are
compared with the answer key and the number of correct responses is
counted. The focus of effort is the development of the examination itself.
With non-written examinations, in many (but not all) cases the process of
developing the examination is obvious. However, the focus of effort shifts
instead to the development of a standardized and reliable method of scoring
the examination.
For example, suppose a Board decided to develop a practical examination for
nurses, testing the ability to start an intravenous infusion. The development of
the examination may be straightforward: details concerning the task would
be decided, such as the location of the infusion site, type of catheter to be
used, condition of the patient, fluid to be infused, etc. These decisions would
effectively standardize the process and ensure that each candidate is asked to
complete essentially the same task and would help to assure the validity of the
examination.
With respect to scoring the examination, a number of potentially more difficult
questions need to be answered. How will a successful performance be defined?
How are variations in the sizes of the patients' veins to be taken into account?
How are variations in the patients' tolerance of discomfort to be taken into
account? How are variations in the level of cooperation demonstrated by the
patients to be taken into account? These questions demonstrate that even this
seemingly objective task may be affected by very subjective circumstances. It
is necessary to ensure that each candidate has the same opportunity to pass
the examination and, yet, it is also apparent that removing subjectivity from
the scoring process may be difficult or impossible. Removing as much
subjectivity as possible, however, is vital to maintaining the reliability of the
examination process.
Once issues about the nature of the performance are resolved, it is necessary
to confront the logistics of the administration and scoring process. In the case
of the practical examination described above, patients will be required. Can
each candidate be asked to bring a volunteer patient on which to attempt to
start an infusion or would this create a liability problem for the Board? Should
hospital patients be used? What liability does this approach incur? Equipment
is also an issue: Are candidates going to be asked to provide their own
infusion equipment, or will this introduce another subjective variable into the
testing process? Does the provision of equipment carry liability for the Board?
After the matter of who is to receive the infusion is determined, judges or
examiners must be selected. It is crucial that these individuals not only be
expert in the field but that they be willing to make their judgments in
conformance with the criteria already established. That is, it is important to
recognize that experts selected to serve as judges or examiners will have pre-
determined ideas about how a given task is to be performed. These ideas
may differ from the criteria already established for this examination, and
judges must be willing to be trained to understand and use the prescribed
criteria. It is necessary that all judges look at the task from the same
perspective to maximize the reliability of their decisions.
Different strategies for combining the judges' evaluations of candidate
performance may be used. To reduce subjectivity in the observation process,
at least two judges should observe each candidate's performance. Judges
should make independent observations and not discuss their findings with
other judges. Forms must be developed on which judges can record their
observations about the performance of the task. These forms must provide a
place to indicate each of the observations deemed appropriate to evaluating
the performance, plus space in which to record subjective observations which
may affect the performance.
One method of scoring a performance evaluation is to combine judges' ratings
of the same performance. The ratings may be combined in a number of ways.
One way is to add the ratings together. The pass-fail decision is based on the
sum of the scores of the two judges. Another method requires that both
judges agree on the candidate's pass-fail status.
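The minimal sketch below contrasts the two combination strategies just described: summing the judges' ratings against a combined cut score, and requiring both judges independently to assign a passing rating. The rating scale and cut scores are invented for illustration.

```python
# Minimal sketch of the two scoring strategies described above:
# (1) sum the two judges' ratings and compare to a combined cut score;
# (2) require both judges to independently assign a passing rating.
# Rating scale and cut scores are hypothetical.

RATING_CUT = 3  # a single judge's rating of 3+ (on a 1-5 scale) counts as a pass
SUM_CUT = 6     # a combined rating of 6+ counts as a pass under the sum rule

def sum_rule(judge1, judge2):
    return judge1 + judge2 >= SUM_CUT

def agreement_rule(judge1, judge2):
    return judge1 >= RATING_CUT and judge2 >= RATING_CUT

for j1, j2 in [(4, 4), (5, 2), (3, 3)]:
    print(f"ratings ({j1}, {j2}): "
          f"sum rule -> {'pass' if sum_rule(j1, j2) else 'fail'}, "
          f"agreement rule -> {'pass' if agreement_rule(j1, j2) else 'fail'}")
```

Note that the agreement rule is the more conservative of the two: a strong rating from one judge cannot offset a failing rating from the other.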
In the event that the judges disagree in their independent evaluation of a
candidate's performance, a procedure must be established to resolve the
discrepancy. Ideally, such a procedure would involve the evaluation of the
candidate's performance by a third judge. To make an additional evaluation of
a candidate's performance, it may be necessary for the candidate to repeat
the performance, and this may be difficult or impossible. The viewing of
candidate performance by a third judge, in person, or by reviewing an
electronic recording or documentation, may prove helpful in reaching a
resolution or consensus. In practice, when two judges disagree, they are often
asked to compare their independent ratings, identify areas of substantive
disagreement, and come to a joint decision.
Unlike a written examination, which can be filed away and reviewed as
necessary, a performance can be viewed only once. In some
"high stakes" settings, it may even be desirable to electronically record a
performance as a means of documenting what has occurred. Documentation of
judge decisions is also important in that this information serves as the basis of
efforts to evaluate the reliability of the scoring process.
While there are many differences, non-written examinations also have much in
common with objectively scored examinations. Both forms of examination
should be standardized so that all candidates have the same opportunity to
demonstrate competence. Both types of examination must have a minimum
passing standard, and the validity and reliability of the examination program
are crucial for both.
QUESTIONS TO ASK CONCERNING THE DEVELOPMENT OF NON-WRITTEN
EXAMINATION ITEMS AND TESTS
1. Is the behavior being measured something that could not be evaluated by
the use of a multiple choice or objectively scored examination?
2. Are the evaluators thoroughly trained prior to the examination
administration?
3. Are the evaluators free of conflicts of interest concerning the candidates?
4. Are there detailed criteria for evaluating and scoring?
5. Does each evaluator make an independent rating?
6. Are at least two independent evaluations made for each candidate?
7. Is the evaluation free of potentially biasing information about the
candidate which is not related to examination performance?
8. Has the examination session been documented (proctored, audio
recorded, or electronically recorded)?
3. Assembling an Examination Form
After a pool of acceptable test items has been constructed, the pool serves as
the basis for test construction. To assemble forms of the examination, items are
selected to meet the specifications. If the items have been pre-tested or used
previously, criteria may also be established for the statistical properties of the
items to be selected for the examination (for example, items may be selected
based on a particular difficulty level).
Once a selection of items deemed to meet all specification requirements is
made, the test is reviewed to determine that the items work well as a set and
that one item does not provide the answer to another item. The pattern of
correct responses should also be reviewed to balance the answer key so that
each response option is used as a correct response about the same number of
times and no single correct answer appears too frequently in any section.
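Such a key-balance review can be partly automated. The sketch below counts how often each option is keyed correct and finds the longest run of a repeated correct option; the answer key and the flagging thresholds are hypothetical choices made for the illustration:

```python
from collections import Counter

# Hypothetical answer key for an assembled form.
key = ["A", "C", "B", "D", "C", "A", "B", "D", "A", "C", "B", "B"]

# Each option should be keyed correct about equally often.
counts = Counter(key)
expected = len(key) / 4  # four response options assumed
for option in "ABCD":
    n = counts.get(option, 0)
    flag = "  <-- review balance" if abs(n - expected) > 1 else ""
    print(f"Option {option}: keyed correct {n} times "
          f"(expected ~{expected:.0f}){flag}")

# Flag long runs of the same correct option (run length 3 is our choice).
run = longest = 1
for prev, cur in zip(key, key[1:]):
    run = run + 1 if cur == prev else 1
    longest = max(longest, run)
print("Longest run of one option:", longest)
```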
QUESTIONS TO ASK ABOUT THE EXAMINATION FORM
1. Does the examination match the test specifications?
2. Has the examination been revised since the last administration?
3. Has the examination been reviewed by a content expert for item overlap or
items that may cue the correct response to other items?
4. Are patterns of correct answers avoided (e.g., an overly large proportion of
one response option being correct or long series of the same response
option being correct)?
4. Standard Setting
Of equal importance to the sound development of an examination form is the
determination of a standard, or minimum passing score, used to compare and
interpret the test scores. If adequate consideration is not given to how the
minimum passing score is determined, the score may not be legally defensible.
A defensible standard is clearly linked to minimal competence; judgmental or
performance-based approaches can be used to set a defensible standard.
Judgmental procedures, including those suggested by Angoff, Nedelsky, and
Ebel, are based upon the use of a panel of experts. The modified Angoff method
is the one most commonly used for credentialing examinations. Each panel
member estimates the percentage of minimally competent candidates who will
answer each item correctly. When the proportions estimated by the panel are
averaged across all items and judges, the result is a recommended minimum
passing score or standard.
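A minimal sketch of that computation follows; the panel size, item count, and ratings are hypothetical, and an operational study would involve many more items and structured judge training:

```python
# ratings[j][i] = judge j's estimate of the proportion of minimally
# competent candidates who would answer item i correctly (hypothetical).
ratings = [
    [0.60, 0.75, 0.80, 0.55],  # judge 1
    [0.65, 0.70, 0.85, 0.50],  # judge 2
    [0.55, 0.80, 0.75, 0.60],  # judge 3
]

n_items = len(ratings[0])
# Average each judge's estimates across items, then average across judges.
judge_means = [sum(row) / n_items for row in ratings]
cut_proportion = sum(judge_means) / len(judge_means)

print(f"Recommended standard: {cut_proportion:.3f} of the items "
      f"({cut_proportion * n_items:.1f} out of {n_items})")
```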
Two examples of inappropriate approaches to setting standards for credentialing
programs are the use of quotas or arbitrary standards. Quota-based systems,
the analog to “grading on the curve,” are approaches to standard setting that
use the performance of a fixed percentage of top-scoring candidates to set
the passing score. For example, such a standard might say that the upper 80
percent of the candidates will pass and the bottom 20 percent of the candidates
will fail. Such an approach has little to do with minimal competence since it may
be the case that all (or none) of the candidates are competent to practice.
Arbitrary standards are also problematic. An arbitrary standard is determined
on the basis of experience or intuition and can be compared to a teacher's policy
that 90 percent is required for an “A.” In contrast to a quota system where
some candidates must pass and some must fail, an arbitrary standard can have
the effect of passing all candidates or failing all candidates. This still has little to
do with minimum competence unless evidence exists to establish that
individuals scoring below the standard are not competent and individuals
scoring above the standard are competent. Standard setting is one of the areas
in which a Board should consult a measurement expert for guidance.
5. Timing the Examination
For nearly all credentialing examinations, a time limit is established for the
examination administration, ensuring that candidates will have an appropriate
amount of time to complete the examination. In some cases, a job analysis
may indicate that the amount of work done in a given period of time is
important to successful performance, as was once the case with typists, for
whom both the speed and accuracy of performance were important. Unless the
job analysis establishes a need to evaluate the quantity of correct work
completed in a fixed period of time, the time limit should be generous enough
that candidates can complete the examination comfortably within the allotted
time.
One useful guideline that can be followed is to allow at least one minute per
multiple choice item. Thus, a two hundred item examination would be initially
scheduled for about three hours and twenty minutes. Additional time may be
provided if many items involve calculations or the use of reference materials.
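As a worked example of this guideline (the per-item allowances below are illustrative assumptions):

```python
def scheduled_minutes(n_items: int, minutes_per_item: float = 1.0) -> float:
    """Apply the one-minute-per-item scheduling guideline."""
    return n_items * minutes_per_item

n = 200
base = scheduled_minutes(n)  # 200 minutes
print(f"{n} items -> {base:.0f} min "
      f"({int(base // 60)} h {int(base % 60)} min)")  # 3 h 20 min

# If many items involve calculations or reference materials, a larger
# per-item allowance (1.25 minutes here, our assumption) may be used.
print(f"With calculations: {scheduled_minutes(n, 1.25):.0f} min")
```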
III. Test Administration
A. Candidate Bulletin (Handbook)
Candidates should be provided detailed information about the examination prior to
the test administration. For many testing programs, a candidate receives a
candidate bulletin or handbook that explains all aspects of the examination process
as a part of the information sent with the examination application. The candidate
bulletin should describe the examination content in sufficient detail to provide
candidates with information to be used to prepare for the examination. Sample
items, along with suggestions for references the candidate may consult,
should also be included. The bulletin should describe the candidate application
procedures, including fees and refund policies, application deadlines and testing
dates, test sites, admission and administration procedures, length and timing of the
examination, and scoring and reporting procedures. In addition, federal law
mandates that a policy be in place (and described in the candidate handbook)
explaining the accommodations that will be made for candidates with disabilities
or special testing needs. To conserve paper, this information is typically made
available online and addressed in the examination application process. A
useful reference concerning the candidate bulletin is Principles of Fairness: An
Examination Guide for Credentialing Boards, available from CLEAR.
QUESTIONS TO ASK CONCERNING THE CANDIDATE HANDBOOK
1. Is a complete candidate bulletin (outlining all components of
examination development and administration) available to all
candidates?
2. Are the candidates informed of the scoring procedures which will be
used?
3. Are the candidates informed of the time frame for receiving the exam
results?
B. Accommodating Candidates With Disabilities
In the United States, the Americans with Disabilities Act (ADA) prohibits discrimination against
people with disabilities in employment, transportation, public accommodation,
communications, and governmental activities. The law was signed in 1990 and its text has
remained largely the same, although successive interpretations have expanded its coverage.
The law is grouped into titles covering five major categories. Boards and credentialing
organizations are most affected by Title III, the regulations for public accommodations and
public facilities. This is the portion of the law that organizations and test sponsors must
consider when establishing the processes involved in test administration.
The definition of disability according to ADA is “... a physical or mental impairment that
substantially limits a major life activity.” The definition is vague by design, intended to be as
inclusive as possible. In 2008, the ADA Amendments Act (ADAAA) was passed to better define
“a major life activity,” providing more clarity for legal proceedings and decisions. Over the
years, the US government has provided revised interpretations to help expand coverage as
needed. The definition of who is disabled now includes a time frame, which expands the
number of people who can be covered under ADA: an impairment no longer has to be
permanent. It needs only to limit a major life activity at the time, and going unaccommodated
may have lasting repercussions such as the loss of a job or promotion. When it comes to
testing candidates requesting ADA accommodations, it is up to the person applying to prove
the disability with a reasonable amount of documentation. What is “reasonable” is also open
to interpretation. Typically, a requirement of a diagnosis within the last five years is considered
reasonable. With a diagnosis more than five years old, the person will often have been
provided tools or have learned to function with the disability, so it no longer limits a major life
activity, although the diagnosis itself may be unchanged. Exceptions to the five-year term are
diagnoses that cannot change, such as blindness, loss of limbs, or other permanent disabilities
that would remain even with additional tools.
It is important to emphasize that accommodations are not meant to provide an advantage but
rather to ensure a “level playing field” for all candidates. The National Commission for
Certifying Agencies (NCCA) Standards address this stating that ADA accommodations “should
be reasonable and not compromise the fundamental nature of assessment or the validity of
the certification decision.” There are many standard accommodations, such as providing a
reader or reading software, allowing additional testing time, or testing in a distraction-free
environment. Other available accommodations can include larger screens/larger print,
adjusted lighting, special seating, or additional material for notes. Each accommodation
requires forethought and planning to ensure fairness. The accommodation must be
appropriate and professionally provided. When a candidate requests additional time, he or
she must qualify for this specific accommodation and provide appropriate supporting
documentation to ensure there is no unfair advantage. When providing a reader, test
administrators must consider the examination content. For example, a candidate sitting for a
respiratory care exam may need to be familiar with terms like alveoli, dyspnea, pleurodesis, or
tachycardia. If readers have no experience with medical terminology, they could easily
mispronounce the words, adversely affecting the candidate’s testing experience and
potentially the candidate’s performance.
There are occasions when a candidate requests special accommodations for a perceived
disability that is not covered by ADA. A common example is when candidates request extra
time or a reader when an exam is not available in their primary language. ADA does not
consider this a disability; therefore, additional time cannot be granted under ADA.
Boards and credentialing organizations may receive an occasional unique request for
accommodations that cannot be fully granted even though the candidate has valid
documentation of a disability. ADA specifies that the requested accommodations must be
reasonable and not cause undue hardship to provide. For example, a candidate with a large
number of severe allergies may request to have all proctors, supervisors, and other personnel
within the testing center bathe with a special hypoallergenic soap and use special cleaning
materials prior to the administration. While this would be considered an unreasonable
request, an organization may wish to take reasonable action to accommodate the candidate by
using organic cleaners in the testing center prior to testing, allowing the candidate to test
alone, and ensuring the proctor wears nothing scented during the administration. While
unreasonable requests are somewhat rare, it is important to handle each one with sensitivity,
understanding, and a desire to reach a reasonable outcome.
There are also many requests that are not traditionally covered under ADA but require further
consideration. Credentialing Boards must think about the potential outcome of denying a
request and if it could result in hardship or ultimately have adverse effects on the candidate,
testing sponsor, or testing vendor. For example, pregnancy is not considered a disability under
ADA. However, if a candidate is very late in their pregnancy and must take frequent breaks, the
Board should consider what effect a denial may have on their future. If the test is only given
once per year and that person cannot take the test without accommodations, it could result in
not getting a pay increase, loss of a promotion, or even loss of employment. In this situation,
the pregnancy could be considered a physical impairment that substantially limits a major life
activity. If the examination is given monthly, then accommodations may not be appropriate or
fair, since the candidate would have a timely opportunity to take the exam on another date.
These unique situations are part of the expanded definition of disability under the ADA.
To be fair and consistent, Boards and credentialing organizations should verify that their
process for applying for accommodations under the ADA is the same for everyone. Failure to
apply the same process to everyone will eventually lead to complaints and may result in legal
action. There should be one entity with clear responsibility for reviewing and approving
accommodation requests, typically the test sponsor or testing vendor. The Board or
credentialing organization must ensure that it has the approved paperwork and a proper,
current diagnosis from an appropriate professional, and that the approval/denial process is
consistent for all candidates. It is
also desirable to have more than one person within an organization with knowledge of ADA
laws. This allows for consultation regarding unique requests. It is also very important to
understand that accommodation applications and supporting documentation are medical
information and should be treated as private and secure, only being shared with the person(s)
making the decisions. The accommodation will need to be shared with those involved in the
administration of the examination (e.g., proctor, supervisor), but the reason or diagnosis
should not be provided to anyone outside of the approval process. While some guidelines are
black and white, organizations still face grey areas in determining if and when
accommodations are appropriate in a given case. Thus, a common-sense approach must be
applied to each request that falls outside the standard ones. Boards
and credentialing organizations must monitor changes and new interpretations of the ADA law
to ensure that their examinations are administered using procedures that provide comparable
conditions for all candidates.
In Canada, similar guidelines are available. The Canadian Human Rights Act defines
discrimination as follows:
An action or a decision that treats a person or a group negatively for reasons such as
their race, age, or disability. These reasons are known as grounds of discrimination.
Federal employers and service providers, as well as employers and service providers of
private companies that are regulated by the federal government, cannot discriminate
against individuals for these reasons.
These 11 grounds are protected under the Canadian Human Rights Act: race, national or
ethnic origin, colour, religion, age, sex, sexual orientation, marital status, family status,
disability, and a conviction for which a pardon has been granted or a record suspended.
There are several ways that a person could be discriminated against. The Canadian
Human Rights Act calls these “discriminatory practices.” The following seven
discriminatory practices are prohibited by the Canadian Human Rights Act when they
are based on one or more of the 11 grounds of discrimination:
- Denying someone goods, services, facilities or accommodation.
- Providing someone goods, services, facilities or accommodation in a way that treats
them adversely and differently.
- Refusing to employ or continue to employ someone, or treating them unfairly in the
workplace.
- Following policies or practices that deprive people of employment opportunities.
- Paying men and women differently when they are doing work of the same value.
- Retaliating against a person who has filed a complaint with the Commission or against
someone who has filed a complaint for them.
- Harassing someone.
Federal employers are not allowed to discriminate against their employees. In fact, they
are obligated to make every effort to accommodate an employee’s individual
circumstances that relate to protected grounds of discrimination. This is referred to as
the duty to accommodate.
Canadian credentialing Boards should consult their legal departments for guidance. Additional
information can be found at the following links:
http://www.chrc-ccdp.ca/eng
http://www.ccdonline.ca/en/socialpolicy/fda/cda
http://laws-lois.justice.gc.ca/eng/acts/E-5.401/
http://www.disabilitypolicy.ca/resourcesNational.php
C. Test Administration Manual
The test administration manual for either pencil/paper or computer-based testing
should outline complete information concerning the test administration process and
should be adhered to by the staff (proctors) who actually administer the
examination. The following paragraphs describe the important areas that should be
discussed in the test administration manual. These paragraphs, however, are
suggestive of the content of the manual and are provided as a tool to help in the
evaluation of such manuals. It is not suggested that the following paragraphs alone
would suffice as a test administration manual.
Admission: A copy of the admission document used to admit candidates to the
testing center, if one is used, should be reproduced in the administrators’ manual. The manual
should describe the procedure used to identify candidates, such as requiring proof
of identity with government issued picture identification such as a driver’s license or
passport. The manual should also describe how candidates are logged into the
center, typically by being asked to sign an entry roster. A sample of the entry
roster should also be included in the manual. Any personal materials and excess
clothing, including caps and hats, should be placed in an area away from the
candidates taking the examination. Seat assignment procedures should be prescribed, not left to the
discretion of the proctors or examinees. A random assignment of examinees to
seat locations is preferred.
Entry and Exit During the Administration: Some candidates will undoubtedly need
to leave the exam room to use the rest rooms during the examination. For this
reason, entrance and exit procedures must be described in the manual. For
example, examinees may be required to sign-out and surrender their test materials
before leaving the room. To re-enter the room, candidates may be required to sign-
in prior to being re-issued their materials. Samples of these sign-in and sign-out
materials should be included in the manual. Also, it is important to ensure that
when candidates leave the testing room, they turn in the same materials they were
provided at the beginning of the testing session. For computer-based
administrations, ensure that other candidates cannot have access to the computer
station that has been vacated. These actions can prevent the potential of one
candidate exchanging test materials with another candidate during a restroom
break.
Release of Examinees: Upon completion of the examination, candidates may be
asked to sign an exit roster and proctors should be required to document the serial
numbers of the materials surrendered by each candidate. Materials which are
turned in should be confirmed to be the same materials that were handed out. The
process for making these verifications should be described in the manual. For
computer-based administration, ensure that the candidate has logged out.
Irregularities: Irregularities include a host of unexpected events that can and will
occur, including defective test booklets, fires, power outages, candidates who
become ill, candidates who arrive without admission documents, and candidates
who are suspected of cheating. In the event of a suspected or actual irregularity,
the test administration staff must be aware of the procedures to follow. The
administration manual must clearly outline the procedures to be followed when
problems arise so that the security of the examination is maintained.
Shipment of Materials: The process by which test materials are to be returned from
testing centers is very important and should also be detailed in the manual. If this
process involves the use of commercial delivery services, such as Federal Express,
the container or envelope to be used for returning materials should be sent to the
testing supervisor with all address information already filled out. Test center staff
should be informed of the means of shipping the materials back to the Board office,
and they should be required to observe a deadline by which the shipment is to be
made. In all cases, shipments should be made via a secure, traceable means.
The administration manual should include the actual text of the directions to be
read to candidates. Information about timing must also be included. Clearly, the
test administration manual is a critical resource document for the administration of
the test.
QUESTIONS TO ASK CONCERNING PAPER AND PENCIL TEST ADMINISTRATION
1. Has a test site been selected that meets the following criteria?
a. adequate lighting/heating/cooling
b. adequate writing surfaces/seating
c. adequate rest room facilities
d. an area for candidate admission
e. adequate parking for staff and candidates
f. quiet work environment
g. a barrier-free environment to accommodate candidates with
disabilities
h. a clock in the room (or on the computer monitor) which is visible
to candidates
2. Are site locations convenient to most candidates?
3. Are candidates sent admission materials with directions to the test site,
and a description of the examination admission and administration
procedures, well in advance of the exam date?
4. Has a manual outlining examination procedures and candidate instructions
been developed and is it used for each administration?
5. Is there adequate testing staff (proctors) given the number of candidates
(i.e., one proctor for every 30-35 candidates)?
6. Is the testing staff trained in all examination administration
procedures, including the process for handling and documenting
problems at the test site (emergencies, late candidates, test booklet
misprinting, power failures, cheating)?
7. Have materials and supplies needed for the administration been
prepared and delivered to the site(s)?
8. Are materials and supplies kept secure before, during and after the
examination?
9. Are candidate identities verified with photo identification prior to admission?
Are the candidates’ belongings, including study materials, purses, hats, etc.,
moved away from the candidates taking the examination?
10. Are candidate rosters assembled for each site, and is a procedure used to
ensure that candidates do not select their own seats?
11. Are examination booklets and answer sheets distributed to candidates in a
secure manner? For computer-based testing, is the candidate comfortably
situated in the testing station?
12. Are candidates given sufficient time to complete the examination?
D. Computer-Based Testing
Computer-based testing (CBT) is a method of examination administration
that uses computers to deliver the examination instead of paper. There are
many different formats for CBT delivery and many considerations for the
various types of CBT delivery. The options for CBT include physical
considerations and process considerations.
In terms of physical considerations, there are options regarding where the
computer is located and how the candidates access the computer. These
options include the candidate accessing the CBT examination through one of the
private testing networks available through various CBT vendors. Private CBT
networks are test centers comprised of banks of computers where candidates
make appointments and schedule examinations that are then pushed down to
the examination center and monitored by the examination center staff. Another
option is the candidate accessing the examination at a test center such as at a
community college or university. These centers may offer paper and pencil
testing as well as regular student testing but can also be configured to accept a
regulatory examination. Candidates might also access the CBT examination on
a computer located at a non-test center location such as at the regulatory board
office or even at home on their own personal computer. Often CBT exams at
non-test center locations require at least one dedicated computer and a person
willing to manage the examination administration. Each of these physical
location options has cost, access, and security implications that must
be considered by the Board before selecting any one of them. It is important to
be sure that the security of the examination is maintained, and regardless of the
CBT method utilized, it is important that the examination is proctored and that
the candidate is monitored during the examination experience.
Process considerations include thinking about the software that will be used to
deliver the examination (most CBT vendors have proprietary software that will
be used on their CBT platform) as well as the method of CBT examination
delivery (fixed, random, or adaptive). Software platforms vary. Some CBT
vendors require specific software to be loaded onto the computer before an
examination can be sent. Others utilize the internet to deliver the examination
and only require internet access instead of loading any specific software onto
the computer. Each of these approaches has implications that the Board should
consider when selecting a CBT delivery platform.
The most common delivery method for CBT programs is linear, or fixed CBT.
Similar to paper and pencil testing but delivered on a computer, a fixed
set of test items is presented in a fixed order. This delivery method is the
simplest to use and the easiest to explain to candidates.
Random CBT delivery is similar to fixed CBT in that a fixed set of items is
administered; however, the order of the items is varied, or randomized, for each
candidate. This method of delivery offers a slight examination security
advantage, as candidates often assume they have been given a different
examination and are less likely to share information about test questions with
future candidates.
Additional delivery methods are available, e.g., adaptive testing and LOFT (linear-on-the-fly testing). However, the
discussion of these models is beyond the scope of this document. The essential
point for all CBT methods is that the examination is delivered by a computer
and candidate responses are collected electronically.
CBT also allows variations in the administration frequencies. Paper
examinations are often administered on set dates in set locations at some
regular interval (annually, biannually, quarterly, etc.). CBT allows more flexible
testing options including daily testing. These options are often referred to as
testing in windows or continuous testing. Windows testing is similar to paper
administration in that an examination window, lasting from a single day to
several weeks, is opened, and candidates are required to sit for the examination
during that window. Continuous testing allows candidates to schedule an
examination for any day that the CBT center is open. Continuous testing has a
higher risk of exposure of the examination items and therefore requires more
security considerations than testing in windows.
QUESTIONS TO ASK CONCERNING CBT TEST ADMINISTRATION
1. Has a test site been selected that meets the following criteria?
a. adequate lighting/heating/cooling
b. adequate writing surfaces/seating
c. adequate rest room facilities
d. an area for candidate admission
e. adequate parking for staff/candidates
f. quiet work environment
g. a barrier free environment to accommodate candidates with
disabilities
h. a clock in the room which is visible to candidates
2. Are site locations convenient to most candidates?
3. Are candidates sent admission materials with directions to the test site,
and a description of the examination admission and administration
procedures, well in advance of the exam date?
4. Has a manual outlining examination procedures and candidate instructions
been developed and is it used for each administration?
5. Is there adequate testing staff (proctors) given the number of candidates
(i.e., one proctor for every 30-35 candidates)?
6. Is the testing staff trained in all examination administration
procedures, including the process for handling and documenting
problems at the test site (emergencies, late candidates, test booklet
misprinting, power failures, cheating)?
7. Have materials and supplies needed for the administration been
prepared and delivered to the site(s)?
8. Are materials and supplies kept secure before, during and after the
examination?
9. Are candidate identities verified with photo identification prior to admission?
10. Are candidate rosters assembled for each site, and is a procedure used to
ensure that candidates do not select their own seats?
11. Are examination booklets and answer sheets distributed to candidates in a
secure manner? For CBT, is the candidate “linked” to the testing station?
12. Are candidates given sufficient time to complete the examination?
IV. Statistical Analysis and Research
Following the administration of an examination, it is necessary to evaluate the
examination from a statistical perspective. Two evaluations should be made. An item
analysis should be conducted to determine how effectively each test item functions. A
test analysis should be conducted to determine how effectively the test as a whole
functions.
While item and test analyses are highly technical matters, assistance can readily be
found. A variety of commercially available computer programs can calculate item and
test statistics. In addition, many universities have scoring services which can produce
item and test analyses, and often university faculty can be employed as consultants to
help calculate and interpret the results. Of course, testing contractors have the
capacity to perform these calculations and the expertise to interpret the results.
It should be kept in mind that the statistics described below are most meaningful when
the number of candidates on which the analysis is based is large (over 50). However,
even with as few as ten candidates, item and test statistics can still provide valuable
information about how the items and the test function. It should be recognized,
however, that when the number of candidates is small, the statistics are likely to vary
dramatically from one group of candidates to the next.
A. Item Analysis
Item analysis information includes indices of item difficulty and discrimination, as
well as an analysis of the role of each incorrect response alternative. Item difficulty,
usually referred to as an item’s p-value, indicates the proportion of the total
number of candidates who answered the item correctly.
P-values range from 0, when no candidates answer the item correctly, to 1.00,
when all candidates answer the item correctly. Items that are very difficult (have
very low p-values) should be reviewed to make sure that the proper option is
marked as the correct response (i.e., the key), that the content the item measures
is appropriate for the target candidate population, and that the item is free of
construction flaws. Items that are very easy (have p-values close to 1.0) should be
reviewed to ensure that they are measuring an important concept and to be certain
that the item does not contain an internal “clue” that reveals the correct response.
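The sketch below illustrates the p-value computation and this kind of flagging; the response matrix and the flagging thresholds (.30 and .95) are hypothetical choices for the illustration, not values prescribed here:

```python
# Rows are candidates, columns are items; 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 0, 1],
]

n_candidates = len(responses)
for item in range(len(responses[0])):
    p = sum(row[item] for row in responses) / n_candidates
    note = ""
    if p < 0.30:    # "very difficult" threshold (our assumption)
        note = "  <-- check key, content appropriateness, construction"
    elif p > 0.95:  # "very easy" threshold (our assumption)
        note = "  <-- check importance and internal clues"
    print(f"Item {item + 1}: p = {p:.2f}{note}")
```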
Item discrimination indicates how well the item helps to differentiate between high-
and low-scoring candidates. If an item tends to be answered correctly by individuals
who earn high scores on the examination, and in turn tends not to be answered
correctly by individuals who achieve low scores on the examination, the item is said
to discriminate positively. In general, positively discriminating items are functioning
well, separating more-able from less-able candidates. If low-scoring
individuals answer an item correctly in larger numbers
than high-scoring individuals, then the item discriminates negatively. Negatively
discriminating items should be reviewed closely to see if there is more than one
correct response, no correct response, or some semantic clue or trick that is
causing the higher scoring candidates to answer incorrectly. If high and low scoring
candidates answer the item correctly in equal numbers, the question is said to fail
to discriminate.
Item discrimination is often estimated by calculating the level of correlation
between test scores and item scores. In almost all cases, either the point-biserial or
biserial item-test correlations are used as indices of this correlation. These
correlation coefficients can range from -1.00, which represents perfect negative
discrimination, to +1.00, which represents perfect positive discrimination. Items
are generally judged to be discriminating well with indices of +.20 and higher.
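A minimal point-biserial sketch follows; the data are hypothetical, and note that the total scores here are uncorrected (they include the item being analyzed), whereas operational analysis software often corrects for this:

```python
import statistics

def point_biserial(item_scores, total_scores):
    """Correlation between a 0/1 item score and the total test score."""
    n = len(item_scores)
    p = sum(item_scores) / n  # proportion answering correctly
    q = 1 - p
    mean_correct = statistics.mean(
        t for i, t in zip(item_scores, total_scores) if i == 1)
    mean_all = statistics.mean(total_scores)
    sd_all = statistics.pstdev(total_scores)
    return (mean_correct - mean_all) / sd_all * (p / q) ** 0.5

item = [1, 1, 0, 0, 1, 0, 1, 1]            # one item's scored responses
totals = [38, 35, 20, 22, 33, 25, 36, 31]  # candidates' total scores
r = point_biserial(item, totals)
print(f"point-biserial = {r:.2f}",
      "(discriminating well)" if r >= 0.20 else "(review this item)")
```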
An analysis of the role of each incorrect response alternative includes the
calculation of the proportion of candidates selecting each incorrect response.
Incorrect responses, or distractors, that are selected by a very small proportion of
candidates may be implausible. Before using such an item on a new form of the
exam, these response options should be changed to make them more plausible.
Distractors that are selected by a large proportion of candidates may be too
plausible or even correct. In such cases, the item must be reviewed to ensure that
it is keyed correctly and does not have more than one correct answer.
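A short distractor-analysis sketch follows; the responses, the key, and the 5 and 30 percent thresholds are hypothetical illustrations:

```python
from collections import Counter

responses = list("BBADBBBBABBBDBBBABBB")  # one item's raw responses
key = "B"

counts = Counter(responses)
n = len(responses)
for option in "ABCD":
    prop = counts.get(option, 0) / n
    note = ""
    if option != key and prop < 0.05:  # implausible (our threshold)
        note = "  <-- implausible distractor: consider revising"
    if option != key and prop > 0.30:  # too plausible (our threshold)
        note = "  <-- too plausible: verify the key"
    label = " (key)" if option == key else ""
    print(f"{option}{label}: {prop:.2f}{note}")
```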
Item analysis information is useful for identifying ambiguity in items, implausible
answer choices, and items that are omitted (not answered) by a large number of
candidates. The identification of items which are incorrectly keyed or are flawed can
result in changes in candidate scores. For this reason, it is important that candidate
results not be distributed until an item analysis has been conducted, a review has
been made of items flagged in the analysis, and a decision made concerning the
scoring of those items.
Provisions should be made for storing the results of item analyses conducted after
each use of an item on an examination. Most commercially developed item banking
systems are able to accommodate this need. If examination results will be released
prior to conducting an item analysis on the current examination form (e.g., in a CBT
environment or on-site scoring situation), historical analyses can be used to verify
that items are appropriate for use in scoring. Note, however, that this does not
eliminate the need for conducting up-to-date item and test analyses. Additionally,
having ready access to historical data can allow for monitoring of “item drift” or
changes in the level of item difficulty resulting from over-exposure or security
breaches.
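A sketch of such a drift check appears below; the item identifiers, stored p-values, and the 0.10 flagging threshold are hypothetical:

```python
# Historical p-values from the item bank vs. p-values from the current
# administration (all values hypothetical).
historical = {"ITEM-001": 0.72, "ITEM-002": 0.55, "ITEM-003": 0.80}
current = {"ITEM-001": 0.74, "ITEM-002": 0.71, "ITEM-003": 0.79}

THRESHOLD = 0.10  # flag shifts larger than this for review

for item_id, hist_p in historical.items():
    cur_p = current[item_id]
    if abs(cur_p - hist_p) > THRESHOLD:
        print(f"{item_id}: p {hist_p:.2f} -> {cur_p:.2f}"
              "  <-- possible drift or exposure; review before reuse")
    else:
        print(f"{item_id}: p stable ({hist_p:.2f} -> {cur_p:.2f})")
```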
One approach to reducing the number of flawed items on examinations is to field
test or pilot test the items. To do so, a number of newly written items that do not
count towards a candidate’s score are included among the scored items on an
examination. Candidate responses to these items are tracked, but the responses do
not contribute to the examination scores. Candidates are generally not informed of
the location of the pilot test items in an effort to ensure that they respond to them
in the same manner as they do the scored items.
The scored items must still be evaluated statistically, to assure that they remain
sound. If the item statistics for pilot test items are acceptable, these items can be
included in future examinations as scored items. If the pilot test statistics are not
acceptable, the items should be reviewed, revised, and field tested again.
Comparisons of response options selected by high- and low-scoring candidates also
can improve the quality and validity of scores. An item on which a large number
of high scorers select an incorrect response should be revalidated to assure that
it has only one correct response; if a second option proves defensible, the item
may require scoring both options as correct. An item on which low scorers
perform better than high scorers may not be
measuring what was intended.
Finally, it may be desirable to determine if any items on the examination have
different levels of difficulty for major population subgroups. In the event that major
racial, ethnic, or gender subgroups exist (for example, if there are a minimum of
100 candidates in any such subgroup for a given administration), then performance
differences can be compared using statistical methods designed for such
comparisons. However, Differential Item Functioning (DIF) analysis is a complex
technical process. If the Board feels it is necessary, measurement experts should
be consulted.
B. Test Analysis
Test analysis refers to the statistical evaluation of the scores resulting from the
administration of the examination. A description of some of the frequently used
statistics follows:
Mean Score: The mean score is the simple arithmetic average of the test scores. A
mean score should be computed for the entire candidate population as well as for
important subgroups. Changes in the mean score from one administration to
another may signal changes in candidate capability or examination difficulty. Often,
large changes in overall score means are associated with scoring errors, so it is
important that score means be reviewed and the reasons for changes
investigated.
Score Standard Deviation: The standard deviation of the test scores is a measure of
the amount of variability among the scores. If there is a wide range of scores from
the lowest to the highest, the standard deviation will be large. If there is a narrow
range of scores, the standard deviation will be low. When the number of candidates
is large, the standard deviation will usually be very stable from one administration
to another. Large changes in the standard deviation may signal changes in the
nature of the candidate group or errors in scoring.
Test Reliability: As mentioned in Section 1, reliability is an index of the stability of
the test scores. Reliability indices range between 0 and 1.00, with higher numbers
being associated with a greater level of score stability. Reliability indices above .90
are considered very acceptable for most purposes, while indices less than .70
usually indicate an unacceptable level of score stability. Test reliability may usually
be increased by increasing the number of test questions or by substituting
questions nearer to middle difficulty (p = .70) and with higher levels of
discrimination for questions presently in the examination.
Standard Error of Measurement: Because examinations, no matter how well
constructed, are not perfect tools for the measurement of candidate performance,
the standard error is used to describe variability in individual test scores that occurs
because of imprecision in the test scores. If the standard error of measurement is
small, the test user can be confident that the test scores have a high degree of
accuracy. If the standard error of measurement is high, the associated test scores
may have a lower degree of accuracy. The standard error of measurement serves
as a reminder that a test is a less-than-perfect tool for the assessment of candidate
capabilities.
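The sketch below computes these quantities for a tiny hypothetical response matrix. This document does not prescribe a particular reliability index; KR-20, used here, is one common choice for dichotomously scored tests, and the low value it yields below is an artifact of the tiny illustrative data set:

```python
import statistics

# Rows = candidates, columns = items (1 = correct); all data hypothetical.
responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 1, 1],
]

totals = [sum(row) for row in responses]
mean = statistics.mean(totals)
sd = statistics.pstdev(totals)  # population standard deviation

# KR-20 reliability for dichotomous items: (k/(k-1)) * (1 - sum(pq)/var).
k = len(responses[0])
p_vals = [sum(row[i] for row in responses) / len(responses) for i in range(k)]
kr20 = (k / (k - 1)) * (1 - sum(p * (1 - p) for p in p_vals) / sd**2)

# The standard error of measurement follows from sd and reliability.
sem = sd * (1 - kr20) ** 0.5

print(f"mean = {mean:.2f}, sd = {sd:.2f}, KR-20 = {kr20:.2f}, SEM = {sem:.2f}")
```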
Score Frequency Distribution: The frequency distribution of scores is a count of the
number of candidates who achieve each possible score. Typically, frequency
distributions are very similar from one administration to another. By comparing
frequency distributions from two or more administrations, changes in the nature of
the candidate group can be identified. Large changes in frequency distributions may
be indicative of scoring errors or of changes in the nature of the candidate group.
Test analyses help to find possible errors, identify changes in the candidate group,
and characterize the stability of the scores resulting from the test administration. A
test analysis should be part of each test administration.
QUESTIONS TO ASK CONCERNING ITEM AND TEST ANALYSES
1. Were item analyses and a test analysis performed?
2. Are some candidates selecting each of the incorrect responses?
3. Is the item difficulty level reasonable (neither too easy nor too difficult)?
4. If the difficulty level for a given item is either very high or very low, is
the item testing an important task, knowledge, skill, or ability?
5. Are items that fail to discriminate between high and low scoring
candidates reviewed for flaws or errors?
6. Are problems, such as key errors, corrected prior to the final scoring?
7. Are item and test analysis results placed in secure storage for archival
purposes?
8. Have “questionable” items been removed or revised prior to the next
administration?
9. Have items on which low scorers out-perform high scorers or on which
high scorers select an intended wrong answer at more than chance
level (e.g., over 25% for a four-option item) been revalidated to assure
that the item has only one right answer?
10. Have scoring adjustments resulting from item revalidation, rekeying, or
multiple-keying been applied to all candidates in final scoring?
C. Test Equating
When a new form of a test is created, it is almost certain that the new form will be
slightly harder or slightly easier than the prior form of the test, despite the best
efforts of test developers to create forms that are equivalent in terms of content
and difficulty. Unless some process is applied to adjust for this difference in test
difficulty, candidates will be either disadvantaged or advantaged by the new form,
and pass/fail decisions will be made that might differ from form to form. Test
equating is the process by which this adjustment is made.
For example, suppose that two forms of the examination are developed and given
to two groups of candidates. Further, suppose that the average score observed for
one form, Form A, is higher than the average score observed for the other form,
Form B. Without further analysis, it is not clear whether this difference in average
scores occurs because Form A is easier than Form B, or because the group of
candidates taking Form A is more able than the group of candidates taking Form B.
It is even possible that the total observed difference in means is due to both a
difference in test difficulty and a difference in candidate capability. Equating
seeks to eliminate differences in test difficulty as a source of difference among
candidate scores.
If a test equating plan is implemented, the same minimum passing score can be
applied to any form of an examination despite unintentional differences in difficulty
among the forms. In other words, if Form B of the examination is harder than the
one on which the standard was set (Form A), the raw scores from Form B can be
adjusted upward to make them comparable to the established standard. The
opposite is also true; if the examination is easier than the one on which the passing
score was established, the raw scores from that examination can be adjusted
downward to make them comparable to the established standard. Alternatively, the
minimum passing score can be adjusted. In the given scenario, the minimum
passing score for Form B could be adjusted downward so that the performance
standard across the two forms is equitable.
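The sketch below illustrates the arithmetic of that adjustment under simple mean equating with a random-groups design; the scores are hypothetical, the method is only one of several, and operational equating should be carried out with a measurement expert:

```python
# Form A is the base form on which the passing score was set.
form_a_mean = 142.0    # mean raw score on Form A (hypothetical)
form_b_mean = 138.0    # mean raw score on Form B (hypothetical)
passing_score_a = 140  # minimum passing raw score set on Form A

# Under a random-groups assumption, the groups are equally able, so the
# difference in mean scores is attributed to form difficulty:
shift = form_a_mean - form_b_mean  # Form B is 4 raw points harder

# Either adjust Form B raw scores upward to the Form A scale ...
def equated_score(raw_b: float) -> float:
    return raw_b + shift

# ... or, equivalently, adjust the Form B passing score downward.
passing_score_b = passing_score_a - shift

print(equated_score(137.0))  # 141.0: passes against the Form A standard
print(passing_score_b)       # 136.0
```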
It should be kept in mind that equating cannot substitute for the development of
perfectly comparable forms. That is, test developers must always strive to ensure
that new forms of an examination are as equivalent as possible to earlier forms. In
addition, test forms must be developed with equating in mind. If this is done, then
equating can be employed to account for those differences in the difficulty of the
forms which are beyond the control of the test developer.
It must be recognized that each method of test equating is a complex statistical
process. Good commercial test equating software for personal computers is in
even shorter supply than item and test analysis software. Testing contractors or
measurement consultants from university scoring services and faculties can provide
assistance with test equating.
Small-volume programs may present special challenges for test equating. Stable,
valid equating typically requires at least 75 to 100 candidates per form. If
data must be aggregated over several administrations, or if a program tests fewer
than 150 candidates per year, it may not be possible to equate multiple test forms
using complete statistical models.
V. Scoring and Reporting
A. Scoring
Scanning equipment has made the in-house scanning of test answer sheets cost-
effective for paper and pencil objective (e.g., multiple choice) examinations.
Scanners can be linked to personal computers, and computer software can be
purchased to address the scoring and statistical analysis activities. For agencies
that prefer, testing contractors can be hired who provide scanning, scoring, and
analysis services. For those Boards that use national examinations, scoring of
examinations is the responsibility of the test provider. As many of these providers
use a CBT delivery system, scoring is provided with the test administration.
Scoring of non-written forms of examinations, such as essay, oral, OSCE, and
practical examinations, requires the establishment of methods to make the
process as objective as possible. For example, in scoring essay responses,
prototypic responses can be identified among the candidate papers to help
illustrate the difference between acceptable and unacceptable responses.
Individuals who serve as judges of the essays are then trained to compare
candidate responses to the prototypic responses and to decide if the response is
superior or inferior to those defining the minimum acceptable standard. Even
using such an approach, candidate papers should be read anonymously by more
than one judge, and if the two judges disagree, a third judge should be called
upon to offer yet another opinion.
In the case of performance tests, the task to be performed must be well defined
and must be relevant to the role being assessed. The task should be broken
down into components or sub-tasks and criteria should be established for judging
whether the candidate performs acceptably overall and with respect to each
component. Again, it is desirable to have more than one judge involved in the
evaluation process. In performance tests, such as an OSCE, and in essay tests, it is
important to analyze the reliability of the judgments to help assess the efficacy of
the examination process.
B. Reporting
The ultimate goal of the testing program is the production of a report telling each
candidate their results on the examination. Boards differ with respect to the
practice of providing passing candidates with a numerical score or simply
indicating to them that they have passed. A concern regarding the reporting of
numerical scores to passing candidates stems from the potential for third party
(e.g., employer) misuse of the scores in making decisions about credentialed
individuals. Therefore, that potential should be assessed carefully by a board
before a decision is made to report numerical scores to passing candidates. If the
primary purpose of the credentialing examination is demonstration that an
individual has developed sufficient mastery of a body of knowledge to practice a
particular profession, i.e., they have achieved a minimum level of competency,
then reporting a numerical score to passing candidates serves no additional
purpose. Failing candidates, on the other hand, should be provided with numerical
overall scores, as well as a breakdown of examination performance by content
area to help them identify the areas in which they are deficient.
A variety of summary reports can be generated to help identify any performance
difference that may exist among population subgroups identified by sex, ethnic
group, education, age or other variables. These reports may have several uses,
but are primarily used to determine if test performance differs among important
population subgroups.
Finally, there should be a means for candidates to address concerns about the
examination or the examination process. These concerns typically include
requests to rescore examinations, review an examination copy, or challenge a
question on the examination. While most Boards and their testing agencies do
not allow review of the entire examination or specific items, it is generally
considered good practice to allow candidates to comment on specific questions
while the test is being administered. However, it is important that all policies be
subjected to legal review to ensure that candidates are afforded due process.
Professional testing standards require that failing candidates receive diagnostic
information to help them prepare for subsequent attempts to take an exam. Diagnostic
reporting for subcategories of an examination should be based on the test
specifications. It is typically used to identify content areas in which a failing
candidate performed less well than in others, areas that may have
caused a candidate’s total score to fall below the minimum passing score. These content areas
should precisely and accurately match the test specifications and content outline that are
provided to candidates before the test to help them prepare for re-taking the examination.
Care should be taken to ensure the content area sub-scores are meaningful statistically, i.e.,
they represent a reasonable number of items such that they provide reliable feedback for a
specific content area.
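One practical screen for whether a sub-score rests on a reasonable number of items is the Spearman-Brown prophecy formula, which projects the reliability of a subset of items from the reliability of the full test. The sketch below works under that assumption; the item counts, full-test reliability, and any acceptance threshold are illustrative only.

```python
def projected_subscore_reliability(full_reliability, n_sub, n_total):
    """Spearman-Brown projection of a sub-score's reliability.

    full_reliability: reliability (e.g., KR-20) of the full test.
    n_sub, n_total: item counts for the content area and the whole test.
    """
    k = n_sub / n_total  # fraction of the test in this content area
    return k * full_reliability / (1 + (k - 1) * full_reliability)

# A 12-item content area on a 150-item exam with full-test reliability 0.90:
r = projected_subscore_reliability(0.90, 12, 150)
print(round(r, 2))  # ~0.42: likely too unreliable to report on its own
```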
VI. Examination Security
The examination itself must be thought of as an investment that can be lost if there is
a compromise in examination security. For this reason, examination security has been
discussed earlier in connection with various topics. Because examination security is so
important to the integrity of the examination process, it is useful to emphasize that
security must be maintained during the entire test development and administration
process, from question writing through the reporting of results.
Item Writing and Review: If item writers are permitted to work independently “at
home” (instead of in a structured item writing workshop), they should be reminded of
the importance of keeping their work materials in a secure place, such as a locked
drawer or, preferably, a safe. They should also be reminded to avoid discussing their
work with colleagues, friends, and especially potential candidates. A contract should
exist with item writers, committing them to the maintenance of confidentiality and
security. This contract should state that the items written are the property of the Board
or credentialing organization, and it should explicitly emphasize the fact that the item
writers should not use the items they write for any other purpose.
Item writers and reviewers must understand the necessity of avoiding even the
appearance of impropriety. Accordingly, item writers and reviewers should be required
to affirm that they are not engaged, and will not engage during the period of their
writing and review work on the examination, in preparing candidates to take the
examination. In addition, they should be informed of situations which may, in the
future, constitute a conflict of interest with the goals of the examination program.
Should any such situation occur, they should agree to remove themselves from service
on the program.
Once test items are written, it is recommended that they be reviewed by additional
content experts. Ideally, this review will involve bringing the reviewers to a meeting
and providing them with copies of the items only during the period of the review
meeting. The reviewers should also sign an agreement concerning the confidentiality of
the items.
Anyone who has access to secure test content during development or review of a
credentialing exam should be required to sign a non-disclosure agreement that affirms
his or her understanding of the importance of protecting test security.
Item Pool Maintenance: After items are submitted, any hard copy should be kept in a
secure place, such as a safe, with very limited access. If computer files are maintained,
access to the computers should be restricted by both physical and electronic
safeguards. Additionally, security should be maintained through the use of a password
to access the materials.
Ancillary Materials: Precautions should be implemented to ensure the security of all
test related materials. Candidate scores are confidential and should only be released
according to the organization’s policies. Test keys must be maintained as secure
documents, but this same concern should extend to item analysis reports as well since
these often contain records of correct answers. Testing programs can generate a
substantial volume of scrap paper, and precautions should be in place to ensure that
this waste paper is securely shredded. All materials that are to be saved should be
subject to a retention schedule which indicates how long the materials are to be kept
prior to being destroyed.
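A retention schedule can be represented as simply as a table mapping each material type to a holding period. The material names and periods in this sketch are hypothetical; the actual durations are a legal and policy decision for each board.

```python
from datetime import date, timedelta

# Hypothetical retention periods; actual durations are a board decision.
RETENTION_DAYS = {
    "answer_sheets": 5 * 365,
    "item_analysis_reports": 3 * 365,
    "scrap_paper": 0,  # shred immediately after the administration
}

def destruction_date(material: str, created: date) -> date:
    """Earliest date the material may be securely destroyed."""
    return created + timedelta(days=RETENTION_DAYS[material])

print(destruction_date("answer_sheets", date(2015, 9, 16)))
```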
Test Book Printing: Test booklets should be printed only by firms experienced in the
production of secure documents. It is important that the security measures of the
printer be reviewed periodically and that printing assignments be rotated among
printers on an unpredictable schedule, if possible. Test booklets should be printed with
unique serial numbers so that missing booklets can be readily identified. Booklets
should be sealed if possible.
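Unique serial numbers make reconciliation at each handoff a straightforward set comparison, as this minimal sketch illustrates (the serial-number format and checkpoint names are hypothetical):

```python
def missing_booklets(previous_checkpoint, current_checkpoint):
    """Serial numbers present at the last checkpoint but absent now."""
    return sorted(set(previous_checkpoint) - set(current_checkpoint))

shipped = {"B-1001", "B-1002", "B-1003", "B-1004"}
received_at_site = {"B-1001", "B-1002", "B-1004"}
print(missing_booklets(shipped, received_at_site))  # ['B-1003']
```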
Storage: After delivery by the printer and prior to distribution to test proctors, test
booklets should be stored in a vault or locked room with very limited access. As noted
earlier, test proctors should be told in the test administration manual how they are to
store test booklets once they receive them. Test administrators should be provided
with test booklets as close to the day of the examination as possible in order to
minimize the length of time the proctors must store the test materials. Such storage
should also be in a vault or locked room with very limited access.
Transportation: Test booklets should be sent to examination sites by traceable courier.
The return shipment should also be traceable.
Administrative Security: Earlier, in Section III.C., detailed information was provided on
the necessary contents of the Test Administration Manual. This includes entry and exit
procedures, methods for accounting for test materials, responsibilities of the proctor
staff, and the handling of irregularities and emergencies. One goal of all of these
procedures is to guarantee the security of the examination materials and the integrity
of the credentialing program.
Despite rigorous security procedures, a breach in test security may occur. A plan
should be devised in advance concerning responses to security problems that may
occur at crucial periods in the program, such as the loss of a test booklet immediately
before a test administration. The plan could range from simply canceling and
rescheduling the administration (if a small group is being tested) to elaborate
contingency arrangements for a national administration testing large numbers of
candidates.
QUESTIONS TO ASK CONCERNING TEST SECURITY
1. Has the item bank been kept secure?
a. Are paper copies locked in a limited access location?
b. Are electronic databases accessible only with passwords?
c. Are databases kept on stand-alone systems, not accessible through
networks? Are robust security protocols in place for networked
databases?
2. Is the camera-ready examination copy secure?
a. If typesetters are used, are they trustworthy? Do they understand
the importance of maintaining security through the typesetting
process? Are all master copies, including galleys, mechanicals, and
films returned or destroyed?
b. If desktop publishing is used for in-house production, are all ancillary
and extra materials (such as back-up data files, photocopies, copies
printed for editing and review) kept secure or destroyed?
c. Are unique versions of the test generated for each administration?
d. If unique versions of the test cannot be generated, are there
scrambled versions available for random distribution?
3. Is the printing and production process secure?
a. If printing is done by outside printers, are all copies kept secure
throughout the print run? Are all over-runs, misprints or errors
destroyed or returned? Are the master documents returned?
b. Are the test booklets coded with a serial number and sealed?
4. Are unique serial numbers printed on the test books? Are booklets
inventoried by serial numbers prior to shipping, upon receipt at the test site,
prior to distribution by proctors, prior to the return shipment, and upon
receipt at the home office?
5. Are test materials sent ONLY by secure, traceable courier?
6. Is the proctor staff fully briefed concerning the maintenance of test material
security upon receipt of the examination materials, during the examination,
and after the examination?
a. Is a complete proctor manual available that describes materials
check-in procedures, materials storage policy, materials distribution
to candidates, secure storage of extra materials, and retrieval and
return of materials?
b. Are candidates identified prior to admission with a photo?
c. Is a record kept of the test book serial number assigned to and
received from each candidate?
d. Are candidates escorted by proctors whenever they exit the testing
room? Are candidates kept from all outside communication (e.g.,
phones, and other candidates who have completed the examination
or who are still taking the examination)? Are restrooms monitored?
e. Is a simple and complete policy in place for the handling of any
suspected irregularities? Are written reports filed?
f. Is home office staff available for consultation during all examination
administrations?
7. Are used test booklets monitored?
a. checked for missing pages?
b. dated for storage and secure disposal?
8. Are readers and scribes assisting disabled candidates required to sign
releases guaranteeing that they will not sit for the examination within a
reasonable time, nor will they divulge any of the questions on the
examination?
9. Are test keys and statistical output kept secure?
10. Are content experts or board members instructed in the importance of
maintenance of security of all test materials and statistical output? Are they
asked to sign pledges of security or releases?
Principles of Fairness: An Examination Guide for Credentialing
Boards
Certification and licensure examinations (or generically, credentialing
examinations) may be subject to professional standards set forth in the following key
documents:
Standards for Educational and Psychological Testing, developed jointly
by the American Educational Research Association (AERA), the
American Psychological Association (APA), and the National Council on
Measurement in Education (NCME), which applies to testing in
general, and includes a chapter on testing in employment and
credentialing.1
Standards for the Accreditation of Certification Programs, developed by
the National Commission for Certifying Agencies, the accrediting body of
the Institute for Credentialing Excellence, for the approval of certification
programs.2
Guidelines for Computer-Based Testing, developed by the Association of
Test Publishers for those developing and administering computer based
tests.3
Code of Fair Testing Practices in Education, developed by the Joint
Committee on Testing Practices.4
Introduction
Principles of Fairness: An Examination Guide for Credentialing Boards has been
developed to address the specific needs of credentialing examinations. Whether a
test is developed and administered by the board itself or by an outside testing
company, and whether a test is in written, practical (i.e., performance), or oral
format, the principles listed are designed to be reasonable and, in many cases, based
on common sense. Their purpose is to assist boards in promoting fairness.
Principles of Fairness recommends testing procedures and practices designed
to enable the candidates to fully demonstrate their competence by means of an
examination (i.e., on a test). Emphasis is placed on providing essential information
to prepare the candidates before the test, providing a test administration
environment and procedures conducive to good performance during the test, and
providing timely and accurate scoring, standard setting, and reporting after the test.
The sequence of before (I), during (II), and after (III) is the basis for the major
divisions of this document. Fairness is also a central issue when developing the
test. Guidance on test development is presented in Development, Administration,
Scoring, and Reporting of Credentialing Examinations: Recommendations for Board
Members (CLEAR, 2015)5.
Credentialing examinations are high stakes tests, intended to assure that
individuals have adequate knowledge and/or skill to perform competently and
protect the public from harm. Therefore, the primary responsibility of credentialing
boards is to ensure the validity, reliability, security, and integrity of examinations.
However, these are high stakes tests for candidates as well. For many individuals, a
long-term investment of time, money, and effort has been expended to prepare for a
particular profession or occupation.
Failure to pass a credentialing examination may be a major career setback.
Considerable financial loss and perhaps, the need to change careers may result.
Therefore, failing an examination should be the result of only one phenomenon:
inadequate knowledge or skill. Failure based on inadequate information about the
testing process, for example, is unfortunate for both the candidate, in terms of lost
income, and for the public, in terms of lost access to a competent professional. Thus,
candidates, the public, and the professions are beneficiaries of a fair testing process.
Principles of Fairness is a product of review and input by many individuals
affiliated with CLEAR. The principles presented are those for which there appears to
be a general consensus of professional opinion.
This document was originally developed in 1993, underwent review and
revision in 2002, and again in 2015. This document is expected to be reviewed and
updated continually in response to new developments and new thinking in the field
of credentialing examinations.
CLEAR wishes to encourage credentialing boards to promote test fairness
assertively. Boards that wish to obtain copies of Principles of Fairness should contact
CLEAR (859/269-1289). The document is also available at the CLEAR web site.
Primary Authors
Leon J. Gross, Ph.D.
Director of Psychometrics and Research
National Board of Examiners in
Optometry
Commissioner, National Commission for Certifying
Agencies, National Organization for Competency
Assurance
Barbara Showers, Ph.D.
Director of Examinations
Wisconsin Department of Regulation and Licensing
Member, Board of Directors, Council on Licensure, Enforcement, and
Regulation
2002 Review and Edits
Cynthia D. Woodley, Ed.D.
Vice President, Operations
Professional Testing, Inc.
Member, Examination Resources and Advisory Committee,
Council on Licensure, Enforcement, and Regulation
2015 Review and Edits
Daniel H. Breidenbach
Program Director
Applied Measurement Professionals, Inc.
Steve Nettles
Program Director
Applied Measurement Professionals, Inc.
Jodi Herold
Psychometric Consultant
University of Toronto
Cynthia Woodley
Vice President of Operations Professional Testing, Inc.
Jim Zukowski
Consultant
360training
I. Before the Test
The foundation of test fairness is understanding the nature of the terms and
conditions that govern the test. Candidates should have this information before the
test in order to plan and prepare adequately. Credentialing boards should publish
and distribute a candidate handbook that contains all of the following information.
1. Job- or practice-related test content
A test content outline or blueprint conveys the scope and emphasis of the examination.
The outline should list the major content headings of the test and the percentage of
items (i.e., questions) administered in each. Test content should focus on the competence
required for effective practice. The basis for the test content outline (e.g., a job or
practice analysis) should be described.
2. Test Characteristics
Credentialing boards should specify whether the format of the test is written, oral, practical (i.e., performance), or computer-based. A sample test item representing each item format, as well as a sample answer sheet should be provided also where applicable. When the examination is administered on computer, information should be provided about the computer interface including sample screens and information regarding opportunities to practice using the computer.
3. Test Preparation Strategies
This information overlaps with the use of the test content outline and test blueprint.
Study references may be listed, as well as possible test-taking strategies (e.g., use of
test time).
4. Change of test content
Candidates should be informed in advance regarding when and how a test changes in
content, format, difficulty, or length. Changes should be based on changes in practice
and supported by evidence (see point 1).
5. Test question evaluation
Credentialing boards should indicate how test items, particularly those used for the
first time, are evaluated. Board policies for identifying and handling flawed or
defective test items should be indicated.
6. Bias
Credentialing boards are responsible for assuring that the test is fair to population
subgroups. Policies employed to address this issue should be indicated.
7. Time Limits
Candidates should be informed about the amount of testing time provided for each
portion of the examination. Where test times may vary (e.g., in computer adaptive
testing), candidates should be provided sufficient information regarding the length of
the exam and why test times vary.
8. Scoring
Information about scoring should include whether items are equally weighted and
whether there are penalties for incorrect responses. The types of test results reported
should be indicated (i.e., number correct, percentage, scaled, or pass-fail only). If non-
scored trial items are administered, their inclusion should be noted.
9. Pass-fail standard
There are three important issues here. First, candidates should be informed of the
pass-fail standard in advance of the test, if possible. Second, the basis or procedure
for determining the pass-fail standard should be provided. This includes the type of
judgment used, and the types of participants involved in the process (e.g.,
practitioner, academician). The standard should be based on a reasonable level of
performance judged to be needed for competent and safe practice. Third, information
should be given regarding the existence of one or multiple pass-fail standards and
options available to failing candidates for repeating portions, rather than the entire
examination.
10. Reporting test results
Credentialing boards should indicate when test results will be released to candidates and the nature of the results reported. The test result information
should include the pass-fail outcome and pass-fail standard. The information
may include the total test score obtained, the scoring scale (if applicable),
and whether any subscores or other related information is reported.
These latter data should be provided to all failing candidates and may be reported to
passing candidates. However, some boards have observed that these data have been
used inappropriately by third parties in evaluating the credentials of passing
candidates. Therefore, despite the desirability of test and subscore disclosure to all
candidates, test results reported to passing candidates do not have to be as extensive
in detail as the results reported to failing candidates.
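One way to implement this differential reporting, assuming hypothetical record fields, is to attach the numerical detail only when the candidate has failed:

```python
def build_score_report(candidate, passing_score):
    """Report pass/fail to everyone; add diagnostics only for failures."""
    report = {
        "candidate_id": candidate["id"],
        "passed": candidate["total_score"] >= passing_score,
        "passing_standard": passing_score,
    }
    if not report["passed"]:
        # Failing candidates get the detail needed to prepare for a retake.
        report["total_score"] = candidate["total_score"]
        report["content_area_scores"] = candidate["subscores"]
    return report
```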
11. Registration
Credentialing boards should announce and disseminate information about registering
for the examination well before any deadline dates. Procedures for applying, including
deadlines, eligibility criteria, availability of reasonable accommodations for persons
with disabilities, fees, cancellation, refunds, and test center locations, should also be
provided.
12. Test administration
Candidates should be informed when to arrive at the test center, the amount of time
required for checking in, when the test actually begins and ends, whether breaks are
allowed, and policies for late admission. Credentialing boards should be aware of days
of religious observance and avoid scheduling tests on those days, when possible.
Alternate test dates for religious observers may also be needed.
13. Test center services
Candidates should be informed in advance about availability of services at the test
center (e.g., parking, food concessions).
14. Candidate identification
Candidates should be informed about what proof of identity is required at the
examination site. Commonly, two forms of identification are required, one of
which must include a photograph of the candidate.
15. Security
Candidates should be informed of their obligations regarding test security. Candidates
should specifically avoid any collaborative or disruptive behavior during the test,
removal of test material, reproduction of test material, and discussion of test material
after the test. Duplicating test material may violate copyright laws. In addition,
candidates should be informed of their obligations to report a suspected violation.
Finally, candidates should be informed of score cancellation policies that the board may
exercise if a security break is suspected or revealed.
16. Supplies/equipment/human subjects brought to the test
Candidates should be informed about their obligations and options for supplies,
equipment, and human subjects brought to the test. Candidates should be informed
about what resources are available and/or provided at the site, if any. Written
examinations may require candidates to bring #2 pencils with erasers; some test
providers supply pencils. Some examinations allow candidates to bring calculators
and/or reference books. For these tests, candidates need to know which types of
calculators and/or reference books are permissible and which are considered
unacceptable. For performance tests (i.e., practical examinations), candidates may be
required to bring certain types of equipment or human subjects.
Credentialing boards should inform candidates of the consequences of failure to bring
needed supplies, equipment, or subjects. Finally, candidates should be informed of
any items not permitted, any prohibitions against sharing equipment or supplies, and
the consequences of incidents during the test, such as malfunction or breakage of
equipment.
17. Candidates with disabilities
Credentialing boards should comply with the Americans with Disabilities Act (ADA)6 of
1990 as amended in 20087. The ADA obligates boards to provide an accessible test
site to candidates with documented disabilities, and reasonable accommodations of
test administration procedures to enable such candidates to demonstrate accurately
their knowledge and skill. Credentialing boards should refer directly to the ADA for
details regarding compliance.
18. Candidate challenges and appeals
Candidates should be informed of what decisions are subject to appeal (e.g., decisions
on eligibility). Candidates should be informed of methods for commenting on or
challenging specific items, as well as what to expect after doing so. Most often,
examination materials are confidential, and candidates do not have the opportunity to
review their test or their responses. Comments and challenges are typically reviewed
by the credentialing board or their designees, but specific replies about particular
items are not sent to candidates.
Other available types of score verification, such as hand verification of computer
scoring, should be described. Retest policies and procedures should be provided also.
II. During the Test
The primary goal of fairness during the test is to enable candidates to
demonstrate fully their knowledge and skill. To promote this goal, care must be
taken to ensure that the environment and testing procedures are conducive to good
performance.
1. Physical site
The test center should be well lit, ventilated, free of distracting noises, and have
conveniently located restrooms. Upon request, the test center should be accessible to
candidates with disabilities. For written examinations, candidates should be provided
smooth writing surfaces and adequate space to work. In addition, sufficient space
should be provided between candidates to promote privacy and prevent collaboration.
2. Personnel
Proctors and examiners should be impartial and well trained in the test procedures so
that candidates are treated alike. Proctors and examiners should provide uniform and
clear instructions to all candidates and be trained carefully to distinguish acceptable
from unacceptable issues that may be discussed with candidates. The ratio of proctors
and examiners to candidates should be sufficient for effectively monitoring the
examination environment.
For some practical (performance) examinations, additional human subjects participate
as patients or clients. Care should be taken to assure the safety and dignity of the
subject and the candidate. In addition, the subjects’ physical or physiological
characteristics should be comparable, such that candidates are not disadvantaged by a
specific participant. Subjects should be impartial and trained for their specific roles to
avoid facilitating or hindering any candidate.
3. Security
Security policies and procedures should be designed to prevent premature access to test questions, collaboration during the test, or unauthorized notes from being used or taken. These occurrences may invalidate test scores and jeopardize the integrity of the test. Appropriate procedures include secure storage of tests, secure packaging and shipping, careful and continuous proctor observation, policies for handling suspected cheating on site, and procedures for distributing and collecting test materials to minimize the potential opportunities for theft and loss. Steps should be taken to prevent access to mobile phones and other electronic devices during the examination.
Computer-based testing raises an analogous set of security concerns. Transmittal of
examinations and candidate results must be encrypted. Computers and file-servers
must be protected from physical or electronic tampering. Systems and procedures
must be in place to address technical or operational problems in examination
administration.
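As one illustration only, and not a prescribed mechanism, encryption at rest for a results file can be handled with symmetric encryption such as that provided by the third-party Python cryptography package; transmission over a network would additionally rely on TLS.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # store securely, never alongside the data
cipher = Fernet(key)

results = b"candidate 4711: pass"
token = cipher.encrypt(results)          # safe to transmit or store
assert cipher.decrypt(token) == results  # recoverable only with the key
```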
More information about test security can be found in Development, Administration,
Scoring, and Reporting of Credentialing Examinations: Recommendations for Board
Members (CLEAR, 2015).
4. Test materials and equipment
Test booklets and answer sheets should be easy to read and understand. Candidates
should be instructed to verify that all pages are present. Examiners’ evaluation sheets
for practical or performance examinations should document scoring criteria clearly,
and should be formatted to minimize the potential for tabulation errors. If equipment
is provided to candidates for use during the test, it should be in proper working
condition, with suitable replacement if breakage or a malfunction occurs. Candidates
should be allowed to become familiar with the operation of the equipment.
5. Administration
Examinations should be administered at the scheduled time with as few delays as
possible. Policies concerning late admission and required identification should be
adhered to consistently. Candidates should be apprised of procedures governing
checking in and exiting, procedures during a possible emergency, and location and
use of restrooms.
Examination instructions should be presented in a clear and straightforward manner,
and all candidates should be allowed to ask questions for clarification. The instructions
should include procedures for handling claims of examination error, but should note
that candidate questions regarding specific test content are not appropriate to answer.
Finally, starting and ending times should be identified clearly and enforced. Where
possible, the time remaining should be displayed throughout the test and announced
periodically, particularly near the conclusion of the test.
III. After the Test
The testing process continues after the test is administered. The scoring,
standard setting, and reporting tasks should be handled in a timely manner, yet be
subjected to thorough quality control. Failure to handle these procedures accordingly
can undo all preceding efforts to promote fairness.
1. Timeliness of reporting test results
Test results should be reported as soon as reasonable after the administration of the
test, as professional plans may be contingent on the results. Reporting deadlines listed in the candidate handbook should be met.
2. Accuracy of scoring
Test scores must accurately reflect candidates’ performance on the test. Candidates
should be able to request verification of their scores, such as by hand scoring or review
of test performance. Scoring alterations as a result of a candidate’s identification of an
error should be applied to all candidates whose pass-fail status may be affected, not
only to the candidate who discovered the error.
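This principle implies rescoring the full candidate file after a key correction, not just the record of the candidate who raised the challenge. A minimal sketch with hypothetical data structures:

```python
def rescore_all(candidates, corrected_key, passing_score):
    """Re-apply a corrected key and flag all pass/fail status changes."""
    changed = []
    for c in candidates:
        new_score = sum(r == k for r, k in zip(c["responses"], corrected_key))
        new_passed = new_score >= passing_score
        if new_passed != c["passed"]:
            changed.append((c["id"], c["passed"], new_passed))
        c["score"], c["passed"] = new_score, new_passed
    return changed  # every candidate whose status changed, not just one
```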
3. Confidentiality of test results
Credentialing boards or their agents should release test results only to the candidate and only in writing. Policies should be in place to ensure that no other individuals or
institutions receive the scores identified by candidate name without the candidate’s
written permission.
4. Use/misuse of test results
Credentialing boards are responsible for promoting the use of results only for their
intended purposes. Most credentialing exams are intended to compare a candidate’s
performance against a standard criterion (i.e., to make a pass/fail decision). Using scores from such an examination to rank order examinees (e.g., for employment
selection) is a misuse of that test score.
Information regarding appropriate use of test results should be provided to those who receive the data. Researchers and program evaluators should be made aware of the
purpose and design of the test if results are provided to them.
5. Record retention
Candidates should be made aware of retention policies. Notice should be provided to
the candidate if examination performance records will become unavailable after a
certain time, as this may affect legal documentation and one’s credentialed status. Original documentation of examination performance, such as answer sheets and
product fabrications (for practical examinations), should be retained for at least as
long as the candidate has a legal right to challenge the examination results.
Subsequently, answer sheets and grading forms (for practical examinations) may be
stored in archival form (e.g., as microfiche).
6. Re-examination
Failing candidates should be permitted the opportunity to be re-examined at a future date and should be informed of the procedures for doing so. The test for repeating
examinees should be equivalent to the test for first-time candidates and meet the
same standards for development and administration.
Repeating examinees should be expected to meet the same test performance
standards as first-time examinees, but should not be identified as repeaters at the
examination, if possible. If a practical (performance) examination is administered,
different examiners than those who evaluated the candidate’s prior performance
should serve, if possible.
7. Candidate challenges and appeals
Candidates should be informed in advance of policies related to challenges,
comments, and appeals. Once candidates receive their results, it is extremely
important to follow and enforce those policies consistently. Refer to Principle I,
#18.
8. Reporting summary of test results
The credentialing board should provide statistical data highlighting important test
outcomes. These data, including overall pass-fail rate (i.e., the percent of candidates
who pass) and the number of newly credentialed individuals, should be available to
candidates, the public, and the profession.
Endnotes
1. American Educational Research Association, American Psychological
Association, and National Council on Measurement in Education (2014).
Standards for educational and psychological testing. Washington, DC:
American Educational Research Association. (American Educational Research
Association, 1430 K Street, NW, Suite 1200, Washington, DC 20005).
2. National Commission for Certifying Agencies (2004). Standards for the
Accreditation of Certification Programs. Washington, DC: NCCA. (Mailing
address: NCCA, 2025 M Street NW Suite 800, Washington DC 20036).
3. Association of Test Publishers (2002). Guidelines for Computer-Based Testing.
Washington, DC: ATP (Mailing address: ATP, 601 Pennsylvania Ave., N.W.
South Building, Suite 900 Washington, DC 20004).
4. Code of Fair Testing Practices in Education. (2004). Washington, DC: Joint
Committee on Testing Practices. (Mailing Address: Joint Committee on
Testing Practices, Science Directorate, American Psychological Association,
750 First Street, NE, Washington, DC 20002-4242).
www.apa.org/science/programs/testing/fair-code.aspx
5. Council on Licensure, Enforcement & Regulation (2015). Development,
Administration, Scoring, and Reporting of Credentialing Examinations:
Recommendations for Board Members. Lexington, KY: CLEAR. (Mailing
address: CLEAR, 403 Marquis Ave., Ste. 200, Lexington, KY 40502).
6. Americans With Disabilities Act of 1990, Pub. L. No. 101-336, 104 Stat. 328
(1990). www.ada.gov
7. ADA Amendments Act of 2008, Pub. L. No. 110-325 (2008).
Committee Members and Participants
Principles of Fairness is a product of the contributions of many individuals.
The first joint organizational meeting for this project, an open session held at
the mid-year CLEAR meeting in January 1992, was attended by the following
individuals from both organizations. These individuals provided direction for the
structure and content of the document.
Nadine Davis
President
National Organization for
Competency Assurance
Liaison Council for Certification for
Surgical Technologists
Michael Hamm
Executive Director
National Organization for
Competency Assurance
Leon Gross
Director of Psychometrics & Research
National Board of Examiners in
Optometry
Norman Hertz
Manager, Testing Unit
California Department of Consumer
Affairs
Brad Mallon
Director of Policy & Research
Colorado Department of Regulatory Agencies
Steve Nettles
Director, Research & Development
Applied Measurement Professionals
Lila J. Quero-Munoz
Executive Director of Testing
Georgia State Examining
Boards
Kara Schmitt
Director, Testing Services
Michigan Department of
Commerce
Lee Schroeder
President
Applied Measurement
Services Barbara Showers
Director of Examinations
Wisconsin Department of Regulation &
Licensing
Eric Werner
Director of Examination Services
Colorado Department of Regulatory
Agencies
Kate Windon
Assistant to the President
Applied Measurement Services
Jim Zukowski
Assistant Director, Professional Licensing
Division
Texas Department of Health
Prior to the review and approval by the respective boards of CLEAR and NOCA,
written comments and suggestions in response to initial drafts were solicited.
Drafts were mailed to staff and board members of national and state
credentialing boards and related associations affiliated with either CLEAR or
NOCA. Written responses were received from the following individuals.
Jennifer Bosma
Executive Director
National Council of State Boards of Nursing
Joan M. Bruening
Associate Coordinator
Cardiovascular Credentialing International
Steven K. Bryant
Executive Director
National Board of Respiratory Care
Janet H. Ciuccio
Director of Professional Ethics
American Speech Language Hearing Association
Patricia A. Clark
Registrar
Governing Board of Denture Therapists
Jerry L. Cripe
Education Director
State of Oregon
David R. Denton
Director, Health & Human Services Programs
Southern Regional Education Board
Phyllis Endrich
Director of Candidate Relations
Board for Certification in Pedorthics
Henry Fernandez
Deputy Commissioner for the Professions
New York State Education Department
James R. Fiedler
Director of Testing & Competency Assurance
American Medical Technologists
Charles Friedman
Assistant Vice President
American College Testing
Barbara Gabier
Licensing Supervisor
State of Alaska, Division of Occupational Licensing
Stephen L. Garrison
President
Association of Engineering Geologists
Madelaine Gray
Executive Director
American Occupational Therapy Certification Board
Donald Ross Green
CTB MacMillan/McGraw-Hill
J. Patrick Jones
Vice President of Programs
Professional Examination Service
Deborah Marquis Kelly
American Psychological Association
Alan G. Kraut
American Psychological Association
Barbara Bloom Kreml
Director, Department of Human Resources
American Hospital Association
Mary Lunz
Board of Registry
American Society of Clinical Pathologists
William L. Marcus
Deputy Attorney General
California Department of Justice
Virginia M. Maroun
Executive Director
Commission on Graduates of Foreign Nursing Schools
Judith Mastrine
Executive Director
Board of Dietetics, Ohio Department of Administrative Services
Bonnie McCandless
Director of Certification
AACN Certification Corporation
Nancy Miller
National Council of State Boards of Nursing
David Montgomery
Program Administrator
Nebraska Department of Health
Donna Mooney
Discipline Consultant
North Carolina Board of Nursing
Joseph A. Morrison
General Counsel, Office of the General Counsel
US Office of Personnel Management
Richard Morrison
Executive Director
Virginia Board of Health Professions
Paul D. Naylor
Director of Examination Services
Hoffmann Research Associates
David S. Nelson
Certification Program Manager
International Conference of Building Officials
Nancy Roylance
Executive Director
American Board of Opticianry, National Contact Lens Examiners
Cathy Rooney
Director
Health Occupations Credentialing
Bureau of Adult & Child Care
Department of Health & Environment
Gerald A. Rosen
Vice President
Professional Examination Service
Lorraine P. Sachs
Deputy Executive Director
National Association of State Boards of
Accountancy
Kara Schmitt
Director, Testing Services
Michigan Bureau of Occupational
& Professional Regulation
Craig Schoon
President
Professional Examination Service
George L. Shevlin
Commissioner, Bureau of Professional
& Occupational Affairs
Commonwealth of Pennsylvania,
Department of State
Benjamin Shimberg
Educational Testing Service
Kevin P. Sweeney
Psychometrician, Examinations Division
American Institute of Certified Public
Accountants
Daniel W. Szetela
Assistant Commissioner for Professional
Credentialing
The State Education Department/The
University of the State of New York
Robert E. Tepel
Secretary
Association of Engineering Geologists
Barbara Vilkomerson
Deputy Director, Teacher Programs &
Services
Educational Testing Service
Stanford von Mayrhauser
General Counsel
Educational Testing
Service
Patricia Wingo-Gass
Director, Health Related Boards
State of Tennessee Bureau of Manpower
& Facilities
Mimi Wong
Director of ABC Affairs
American Board for Certification in
Orthotics & Prosthetics
Finally, the authors wish to acknowledge the many individuals who attended
sessions for discussion and comment at the CLEAR Annual Meeting in September
1992 in Detroit, and at the NOCA Annual Meeting in December 1992 in Tucson.
The following individuals reviewed and edited the 2nd edition of this document.
Charles Barner
President
Regulatory Agency Management Systems
F. Jay Breyer
Managing Principal
The Chauncey Group International
Roberta Chinn
General Partner
HZ Assessments
Ida Darragh
Director of Testing
North American Registry of Midwives
Charles Friedman
Assistant Vice President
American College Testing
Sandra Greenberg
Vice President, Research & Development and Public Service
Professional Examination Service
Norman Hertz
Managing Partner
HZ Assessments
Jeffrey Kenney
President
Professional Development Partners
Casey Marks
Director of Testing Services
National Council of State Boards of Nursing
Rose McCallin
Director, Examination Services
Colorado Department of Regulatory Agencies, Division of Registrations
Fae Mellichamp
Senior Psychometrician
Professional Testing, Inc.
Paul Naylor
President
Gainesville Independent Testing Services
David Nelson
Certification Program Manager
International Conference of Building Officials
Steve Nettles
Vice President, Research & Development
Applied Measurement Professionals
Shawn O’Brien
Director, Assessment/Research
National Board for Certified Counselors
Fred Parker
Executive Director
Board of Law Examiners
Ron Rodgers
President
CTS/Employment Research & Development Institute
Michael Rosenfeld
Principal Research Scientist
Educational Testing Service, Inc.
Kara Schmitt
Consultant
KNK Consulting
Barbara Showers
Director, Education & Examinations
Wisconsin Department of Regulation & Licensing
Rina Sjolund
Assistant Vice President
American College Testing
Lynn Webb
Assessment Consultant
Elizabeth Witt
Senior Measurement Analyst
Promissor
Cynthia Woodley
Vice President for Operations
Professional Testing, Inc.
Tony Zara
Vice President, Professional Licensing & Certification
Pearson Professional Testing
Jim Zukowski
Division Director
Texas Professional Licensing & Certification, Department of Health