Formerly the National Association of Test Directors
TRANSCRIPT
FALL 2017
President’s Message
by Dr. Bonnie Strykowski

Greetings to our valued members and friends!
The National Association of Assessment Directors (NAAD) is celebrating its thirty-six-year
relationship with the National Council on Measurement in Education (NCME). At last year’s
meeting in San Antonio, Texas, NAAD and NCME presidents acknowledged our long history
and collaboration. Each organization renewed its commitment to our collaborative efforts
in the field of educational measurement. This year is no exception as we carry on the tradition of representing practitioners of measurement and assessment with our colleagues at
NCME.
The NAAD Executive Board is working on the 36th Annual NAAD Meeting activities to be
held in New York City from April 12-17, 2018 in conjunction with AERA and NCME. We will
have our annual business meeting with speakers as well as the symposium. The schedule
has not been finalized by AERA but we have requested a Saturday or Sunday morning time
slot for the business meeting. Dr. Geoff Maruyama, University of Minnesota and NAAD Vice President, is organizing the symposium this year: Creating the Capacity to Increase Understanding of What Works in Schools, How It’s Measured, and Why It Works. Geoff has invited NAAD district members to be presenters at the symposium. The format will include an
anchor paper by Dr. Maruyama with several shorter papers by district members, followed by a discussion. Other meeting activities, such as the joint luncheon with our colleagues at Directors of Research and Evaluation and our annual dinner, are in the planning stages.
In addition to the annual meeting activities, this year we are hosting several webinars, including two on Propensity Score Matching and one on Test Security. Details will be announced shortly. We also have our Monthly News, authored by Faith Connolly, and quarterly newsletters. We hope to see you in New York to celebrate our 36th year in person!
Inside this Issue:
Celebrating 35 years of collaboration with NCME!
President’s Message
Calendar of Events
Call for Nominations
Special Recognition
Contributing Author
Graduate Student Corner
NAAD Board 2017-18
Membership Application

National Association of Assessment Directors
Website: www.natd.org
Formerly the National Association of Test Directors
NCME 2017 Special Conference on Classroom Assessment and Large-Scale Psychometrics
September 12-14, 2017
University of Kansas, Lawrence, KS
Visit the NCME website to read a description of the conference and a link to the papers:
http://www.ncme.org/NCME
The “Do's and Don’ts” of Administering High-Stakes Tests in Schools—Webinar
Date: Tuesday, December 12, 12:00-1:00 PM EST
Presented by Dr. John Fremer, President of Caveon Consulting Services, Caveon Test Security

This one-hour webinar focuses on administering tests in schools and identifies ten "best practices" that apply to all high-stakes testing. The content is drawn from careful analyses of current testing practices by states, districts, and testing vendors.
In this webinar, you will learn:
• Ten Best Practices that apply to all high stakes testing
• What is required to be an effective test administrator
• How to promote fairness and validity in your testing programs
Link to Register: https://mpsaz.qualtrics.com/jfe/
Coming Soon!!
Webinars on Propensity Scoring will be offered by Dr.
Geoff Maruyama and his staff in the coming months. Stay
tuned for dates and times!
Several years ago, the NAAD Board of Directors established an award to recognize individuals who have made outstanding professional contributions in the area of applied educational assessment in schools. One or two individuals may be recognized each year, conditional on approval by the Board of Directors. Individuals recognized by this award are nominated from any of the myriad areas in the field of educational assessment including, but not limited to:
• Measurement theory
• Promotion of best professional practices
• Teaching of measurement and assessment
• Use of assessment information for curriculum, instruction, policy making, and communication with stakeholders
Please give serious consideration to nominating a worthy candidate by submitting the nominee’s name, contact information, and a brief statement of nomination to Dr. Bonnie Strykowski at [email protected] no later than December 15, 2017.
Past recipients of this award include:
Carole Perlman & Ed Drahozal (2003)
Steve Henry & Jim Bray (2004)
Joe Hansen (2005)
Joseph O’Reilly (2006)
Robert L. Linn (2007)
G. Gage Kingsbury (2008)
Peter Hendrickson (2009)
Michael Flicek & Zollie Stevenson, Jr. (2010)
Lee Baldwin & Lorrie Shepard (2011)
John Fremer & Phil Morse (2012)
Jamal Abedi & Michael Strozeski (2013)
Sherry Rose-Bond, Joe Willhoft, Laura McGiffert Slover (2014)
Neil Martin Kingston & Judy Levinson (2015)
Carlos Martinez & Bonnie Strykowski (2016)
Faith Connolly (2017)
Many thanks to Dr. Dale Whittington for her service to public education!
Always one to support advancement in measurement, Dr. Whittington currently serves on the board of directors for
the National Council on Measurement in Education (NCME), representing district/state practitioners. She co-chairs
NCME’s Classroom Assessment Task Force. Dale served as an officer and board member for the National Association
of Assessment Directors (NAAD/NATD), where she remains a long-standing member.
Dr. Whittington began her career in public education as a school media specialist in the Baltimore County Schools.
She moved into the private sector to work at ETS and the Psychological Corporation, then entered academia as an assistant professor at The University of Akron and Cleveland State University and as an associate professor at John Carroll University. She returned to public education in 2000 with the Shaker Heights City School District.
Dr. Whittington earned her bachelor’s degree from Harvard University; three master’s degrees from Simmons College School of Library Science (library science), Teachers College of Columbia University (teaching), and the University of Massachusetts Lowell (education administration); and her doctorate in psychology from Columbia University with a specialization in measurement, evaluation, and statistics.
Dale Whittington, Ph.D. will be retiring from her role as Director of Research and Accountability for the Shaker
Heights City School District in Ohio on November 30, 2017. We thank Dale for representing our interests with NCME
and for her service to public education. We wish her well in her next endeavors.
NEW FEATURE!
NAAD has now created a space where members can share personal and/or professional accomplishments or announcements. Please send this information to Leigh Bennett for inclusion in future newsletters at [email protected].
We look forward to sharing in your happiness.
Using Adaptive Tests to Improve Schools
G. Gage Kingsbury, Ph.D.
Adaptive tests are everywhere these days. K-12 students here in the U.S. may find their achievement measured by adaptive tests from Smarter Balanced, NWEA, FastBridge, Renaissance, Curriculum Associates, and others. Students in Australia, the Netherlands, New Zealand, China, and other countries may also have adaptive tests in their schools. If you want to enter the Armed Services, or become a nurse, or become a pharmacist, or be licensed or certified in any number of professions, you will take an adaptive test. In fact, if you are treated in a hospital, it is very possible that you will take an adaptive test to estimate how well you have recovered.
An adaptive test is fairly simple to describe. It is a test that adjusts in difficulty to match an individual’s performance level by selecting questions from a pool of questions with known characteristics. If the test taker does well, the questions that are chosen become more difficult and if the test taker does less well, the questions become easier. In a well-designed adaptive test, a person will be consistently challenged by the questions asked, without being frustrated by content that is too difficult or bored by content that is too easy.
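The loop just described can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions (a deterministic examinee and a crude step-adjustment estimator), not any vendor's operational algorithm, which would use item response theory and maximum-information item selection:

```python
def respond(true_ability, difficulty):
    # Toy examinee: answers correctly iff the item is no harder than their
    # true ability (a stand-in for a probabilistic response model).
    return true_ability >= difficulty

def run_adaptive_test(true_ability, item_difficulties, n_items=10):
    pool = list(item_difficulties)
    theta = 0.0  # provisional ability estimate
    step = 1.0   # adjustment size, shrunk as the test converges
    for _ in range(n_items):
        # The adaptive step: administer the unused item closest in
        # difficulty to the current estimate of the test taker's level.
        item = min(pool, key=lambda d: abs(d - theta))
        pool.remove(item)
        # Move the estimate up after a correct answer, down after a miss,
        # so the next item chosen is harder or easier accordingly.
        theta += step if respond(true_ability, item) else -step
        step = max(step * 0.7, 0.1)
    return theta
```

With a hypothetical pool of difficulties spread from -3 to 3, ten items are enough for the estimate to settle within a few tenths of a point of the simulated examinee's true level, and high and low performers each see a different, personally challenging set of items.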
The first adaptive tests were individually administered intelligence tests, developed in the early 1900s (e.g., Binet and Simon, 1905). A person taking one of these tests started with fairly simple questions and continued to get harder and harder questions until they were no longer able to answer them correctly. This type of test is still used, but it is expensive to administer and has only minor adaptive capabilities.
Adaptive testing was advanced dramatically by the efforts of two individuals. Fred Lord, working at Educational Testing Service during the 1960s and 1970s, developed the theoretical underpinnings for modern adaptive testing (Lord, 1970). Dave Weiss, working at the University of Minnesota in the 1970s, developed many of the first computer-delivered adaptive tests, did the primary work to extend adaptive tests from ability testing (Weiss, 1973) to educational achievement testing (Bejar and Weiss, 1978), and is still providing advances in the field. With the advent of the personal computer, adaptive testing became more accessible. In the 1980s, the first adaptive tests for K-12 were introduced in Portland, Oregon (Kingsbury, 1986) and the state of Maryland (Stevenson, 1984). By the early 2000s, adaptive tests were in widespread use. Adaptive testing is an overnight success story 100 years in the making.
As a result of its design, an adaptive test will provide more information about each test taker than we can get from a fixed-form test of the same length. This allows the test designer to a) shorten the test and still get good information about each test taker or b) leave the test the same length and obtain more information about every test taker. A high-quality adaptive test will provide approximately 50 percent more information than a fixed-form test. A less precise adaptive test like a multi-stage test will still provide much more information than a fixed-form test but may not have enough accuracy to measure growth for an individual student. Adaptive tests will also provide high information across much wider ranges of student achievement than a fixed-form test.
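In item response theory, the standard error of a score is the reciprocal square root of the test information, so an information gain translates directly into precision. The numbers below are hypothetical, chosen only to show the arithmetic of a 50 percent information advantage:

```python
import math

def sem(information):
    # Standard error of measurement from test information (IRT): SE = 1/sqrt(I).
    return 1.0 / math.sqrt(information)

# Hypothetical values: a fixed form yielding information 16 has an SEM of
# 0.25 on the logit scale; an adaptive test of the same length with 50%
# more information (24) cuts the SEM by a factor of 1/sqrt(1.5), about 18%.
fixed_sem = sem(16.0)
adaptive_sem = sem(24.0)
```

The same relationship explains the design trade-off above: a test builder can pocket the extra information as a shorter test at equal precision, or as tighter scores at equal length.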
So adaptive tests are popular and they have useful psychometric characteristics. They have provided me with a living and probably did the same for any number of psychometricians. That explains why I like adaptive tests, but what is in it for you as Assessment Directors? Adaptive tests are costly to develop, and much more difficult to administer than a fixed-form, paper-and-pencil test. What do adaptive tests do for the students, teachers, school districts, and states that make them worth the extra effort? That is the question we will consider here.
Adaptive testing and state departments
A state department has as its primary goal the success of the educational system in its state. However, since it is fairly removed from the actual schools and students, it needs to consider success on a macro level. The proficiency or growth of groups of students becomes the level of interest and reporting for the state. Trends across grades and years become important for strategic planning.
States commonly categorize student performance as “below basic”, “basic”, “proficient”, “advanced”, or similar proficiency levels. Accurately placing students into these proficiency levels becomes of paramount importance. Unfortunately, it is difficult for a fixed-form test to be accurate in making these decisions.
As an example, consider an actual state (remaining nameless) that had three cutoff points on their test that divided students into the four categories mentioned above for use under NCLB several years ago. “Below basic” corresponded to students below the 7th percentile. “Basic” included students from approximately the 7th percentile to the 35th percentile. “Proficient” included students from approximately the 36th percentile to the 95th percentile. “Advanced” included those students with scores above the 95th percentile. (The proficiency levels weren’t developed based on percentiles. This is just where they happened to end up.)
A very good proportion of the test content in this state was centered at the cutoff between “basic” and “proficient”, which allowed this decision to be made with high accuracy. Unfortunately, this left only a few items to cover the rest of the range of achievement, and made the other decisions quite inaccurate. Since many special services were directed at students in the “advanced” and “below basic” groups, this inaccurate placement caused unknown costs in misdirected services. Using an adaptive test would have improved the precision with which each of the decisions was made. This, in turn, would have saved the state money through correctly directing services, and helped the state’s students by identifying them for needed services more accurately.
States are also interested in examining growth in schools across years and across grades. Measuring growth is a very challenging psychometric problem, and difficulties measuring growth with fixed-form tests were identified decades ago (described in Cronbach and Furby, 1970). To oversimplify a complex problem, measuring individual growth between two points in time requires a test score at each of the points in time. This means that two error terms are included in any change score. Unless these two error terms are small, they overwhelm the amount of growth, and make the growth score quite unreliable. Using fixed-form tests to obtain the two test scores exacerbates the difficulty by allowing different amounts of error for students with different achievement levels. Unfortunately, larger sample sizes do not correct this difficulty, particularly if the schools being examined tend to have students with different achievement levels.
Using adaptive testing allows control of the error in each of the test scores, and limits the variability in error across achievement levels. If an adaptive test is used with time intervals which allow a reasonable amount of growth, and the standard error for student scores is controlled, measuring growth at the student level becomes practical. This allows the state to have a much more accurate view of growth at the school level.
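The point about the two error terms is easy to quantify. Because independent errors add in variance, the standard error of a pre/post difference score is the root sum of squares of the two scores' errors. The scale values below are hypothetical, purely to show how quickly score noise can swamp true growth:

```python
def growth_score_sem(sem_pre, sem_post):
    # Independent measurement errors add in variance, so the standard error
    # of a difference (growth) score is the root sum of squares.
    return (sem_pre ** 2 + sem_post ** 2) ** 0.5

# Hypothetical scale: 5 points of true annual growth.
fixed_noise = growth_score_sem(6.0, 6.0)     # ~8.5: noise exceeds the growth itself
adaptive_noise = growth_score_sem(3.0, 3.0)  # ~4.2: half the noise at each occasion
```

With the fixed-form errors, the growth signal is buried in noise for an individual student; halving the per-occasion error, as a well-targeted adaptive test can for students far from the form's center, brings individual growth measurement within reach.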
Adaptive testing and school districts
School districts provide guidance in data use, identify and implement instructional programs, and deal with resource allocation to the schools. They can benefit noticeably from the additional accuracy of an adaptive test.
As strategic planners for their schools, school districts need to understand the performance of students from one school to the next. They also need to understand whether schools are serving their entire student population well. For instance, it is quite common for high-performing schools to have low growth for their low-performing students (and sometimes for their entire student population). An adaptive test can help identify this kind of situation, since it provides more accurate scores for both high and low performers, and it also allows the accurate measurement of growth for these two groups of students.
As the face of the schools to the public, school districts can use adaptive testing data to accurately reflect trends in student achievement and growth, particularly for groups of special interest to stakeholders in the district. In many districts, these trends tend to jump around from one year to the next. This is due, in part, to cohort effects and small sample sizes for some groups. However, it is also due to scores being more accurate for some groups than for others. Specifically, a fixed-form test will provide scores that are more accurate for groups with scores close to the average, and less accurate for groups that diverge from the average. An adaptive test can reduce this last problem substantially, by providing accurate scores across a wide range of performance. The trends will still move around a bit if the groups are small, but the use of adaptive testing data should reduce the fluctuations.
Adaptive testing and teachers
Teachers are at the intersection of educational policy and students. They have the most interesting and most demanding job in all of education. As a result, their time is at a premium, and any aid they can get in improving the learning of their students is appreciated. While adaptive testing information should be combined with formative testing practices that get more detailed information about students each day, consider how an interim adaptive test might help a teacher.
People often mention how nice it would be to be a teacher because of that long summer vacation. I have always thought it is exactly the opposite. Teachers have to do a year’s worth of work and have less than a year to accomplish it. Instructional planning is an example of the time pressure a teacher faces. Identifying what went well in the previous school year and trying to plan for improvement in the next school year is difficult, and might benefit from targeted, accurate information. A question like, “How did my lowest performers grow last year?” would be helpful to ask. An adaptive test should be able to help answer this and similar questions with consistent accuracy.
In a more immediate manner, an adaptive test should enable a teacher to connect to resources that are at the appropriate level for their students. Examples of this can be seen in connections to the digital library from the Smarter Balanced interim assessment blocks, and connections to Khan Academy from NWEA’s MAP Growth. In both of these examples, the accuracy of the adaptive test is essential to create the appropriate connections. This can be used most directly in classrooms that are using flexible grouping by achievement level, but is also directly applicable to other personalized learning approaches.
Adaptive testing and students
We sometimes consider students to be the consumers of education, but without their active, passionate involvement, we wouldn’t have much of an education system. By intent, adaptive testing offers much to students.
Early research indicated that students were more motivated by adaptive tests than by fixed-form tests, and this effect was most noticeable for low-performing students (Pine, 1977). Now that we know much more about adaptive testing, these findings make perfect sense. A fixed-form test is commonly designed to challenge the students in the middle of the distribution (or possibly close to the proficiency level). This can be a fairly frustrating test for lower-performing students and may lower motivation. Recent research indicates that lower motivation may cause rapid guessing behavior, which in turn can result in invalid test scores. If the students are challenged but not frustrated by the test, as they are in an adaptive test, they may have increased motivation, and provide us with more valid test scores.
We already mentioned the capacity for an adaptive test to connect to appropriate instructional materials, and this characteristic helps the student as well as the teacher. Being able to find reasonable text to read, math materials to try out, or science experiments to perform should help a student with an interest in a subject to expand their horizons and stretch their knowledge base.
Finally, since an adaptive test that is well designed should allow the measurement of individual growth, creating and using growth targets can become a valuable tool in helping students take responsibility for their learning. If students participate in setting growth targets, they will try to surpass “average” growth, and if they actually achieve their desired growth, it can be quite an accomplishment. Using student-centered growth targets as a portion of the parent conference can be very valuable, particularly if the student is leading the conference.
Your next steps
I have not been exaggerating the capabilities of adaptive testing, but not all adaptive tests serve all purposes well. This paper is not just to introduce you to the capabilities of adaptive tests, but to help you to become pushier users of adaptive tests and the information they provide. Consider three examples:
Assigning students to a talented and gifted program: Since your district has limited resources to serve this group of students, and since the parents of these students are quite invested (I got all of those calls in my district, for some reason), accurate decisions are required. An adaptive test can provide scores to serve as at least a portion of the decision process and will almost always do a better job than a fixed-form test. However, depending on the quality of the adaptive test and the nature of the students in your district, the test may not be accurate enough for the decision that you need to make. If it is accurate enough, make sure that the district is using the test in the decision-making process. If you are not sure whether it is accurate enough for use, push your provider to give you the information. If it turns out that the test doesn’t provide enough information to help identify your high performers accurately, push your test provider to improve.
Connecting to instructional content: Most educational measurement scales allow some connection from test scores to instructional materials. Some of these are pretty gross, like the Lexile scale, which gives a very rough idea of the type of text that students might be able to read. Some are pretty detailed, like the Smarter Balanced and NWEA scales mentioned above, which both use adaptive test scores to link to content of appropriate difficulty for a student or group of students. While this is useful, it is still limited by the specific materials that are connected. If your test provider doesn’t provide links to content structures commonly in use in your district, consider pushing them to provide the links. Alternatively, if the content that you have available is too broad in its difficulty to be linked, consider looking for content more suitable for personalized instruction.
Measuring individual growth: As I mentioned above, one of the trickiest measurement problems we have is to measure change in one individual across time. This is made even more interesting by individual growth patterns. Most students have reading growth patterns that are non-linear. They don’t know the letters and words to begin with, so growth is hard to see. Then they learn how to decode, and growth starts happening rapidly. Then they begin to comprehend deeply and more completely. At this point their growth may slow, because they are incrementally adding vocabulary and deeper understanding. So, in order to measure this growth as it changes across time, a very accurate test is needed. For this purpose, not all adaptive tests are created equal. Ask your test provider how accurate the growth measurement is for the test, and push them to improve if the current accuracy doesn’t allow you to say something meaningful about the growth of each of your students after a year of instruction.
So, use adaptive tests, but not indiscriminately. Push your district, state, schools, and teachers to use them well. Also, consider your opportunity to push test providers in directions that will facilitate the decisions that you make every year in your district. An adaptive test is a student-centered test, and so it provides more information. If we can follow that path to create more student-centered educational systems, we are moving education in the right direction.
References
Bejar, I. I. & Weiss, D. J. (1978). A construct validation of adaptive achievement testing (Research Report 78-4). Minneapolis, MN: University of Minnesota, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory. Downloaded from IACAT.org.
Binet, A., & Simon, Th. A. (1905). Méthode nouvelle pour le diagnostic du niveau intellectuel des anormaux. L'Année Psychologique, 11, 191-244.
Cronbach, L. J., & Furby, L. (1970). How we should measure "change": Or should we? Psychological Bulletin, 74(1), 68-80.
Kingsbury, G. G. (1986). Computerized adaptive testing: A pilot project. In W. C. Ryan (ed.), Proceedings: NECC 86, National Educational Computing Conference (pp.172-176). Eugene OR: University of Oregon, International Council on Computers in Education.
Lord, F. M. (1970). Some test theory for tailored testing. In W. H. Holtzman (Ed.), Computer-assisted instruction, testing, and guidance (pp.139-183). New York: Harper and Row.
Pine, S. M. (1977). Reduction of test bias by adaptive testing. In D. J. Weiss (Ed.), Proceedings of the 1977 Computerized Adaptive Testing Conference. Minneapolis, MN: University of Minnesota, Department of Psychology, Psychometric Methods Program. Downloaded from IACAT.org.
Stevenson, J. (1984). Computerized adaptive testing in the Maryland Public Schools. MicroCAT News, 1-1, 1.
Weiss, D. J. (1973). The stratified adaptive computerized ability test (Research Report 73-3). Minneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program. Downloaded from IACAT.org.
G. Gage Kingsbury received the National Association of Test Directors Outstanding Contribution Award in 2008.
Renewing Our Support
NAAD is proud to count you among our members. Having you as a colleague enriches all of us. We welcome new members and encourage you to share information about NAAD membership with your colleagues. NAAD continues to maintain our original low membership fee of $20 for 2018. The annual membership renewal notice will be sent to current members in early December. If you don’t receive a notice, please contact Jennifer at [email protected] to update your contact information.
New and continuing members may pay the membership fee with a credit card through PayPal at http://nationalassociationoftestdirectors.org/registration/
Or you may print a copy of the membership form that appears in the newsletter and on the website and mail it with your 2018 payment to the address listed there.
Welcome to the Graduate Student Corner!
As you may or may not know, under the leadership of our current President, Bonnie Strykowski, the National Association of
Assessment Directors has re-introduced an initiative focused on building graduate student presence in the organization. As
such, NAAD is reaching out to graduate students interested in pursuing career paths in educational assessment, evaluation,
accountability, and more.
Members of NAAD have active national roles in the development of assessments and assessment policy, often making important contributions to assessment at the state and local levels. They manage assessment departments within school districts, teach and conduct research at colleges/universities, and contribute to the success of testing companies and educational foundations across the United States. In addition to networking opportunities with these prominent professionals, graduate students enjoy: reduced membership costs, access to unique educational resources (such as our upcoming webinar on High-Stakes Test Security), and opportunities to join various NAAD committees to help drive progress within the organization.
So, if you are a graduate student interested in learning more about NAAD (or if you know a student who could benefit from the possibilities associated with membership in this Association), please feel free to contact me directly at: sa-
We cannot do this without you! As you know, the NAAD membership is this organization’s strongest asset. So, please en-
courage graduate students you know to join us, and help ensure a bright future for the National Association of Assessment
Directors! Thank you!
Graduate Student Corner
Graduate Student Board Representative Sarah D. Newton, M.S.
University of Connecticut
The digest is a current, topical, electronic resource regarding various assessment information. You can access it using this link: https://paper.li/e-1446927012#/
NAAD is an association of professionals responsible
for assessment programs in public educational
settings. The membership is broadly representative
of North American schools.
NAAD Board 2017-2018
PRESIDENT
Bonnie Strykowski, Ph.D.
Analyst and Program Evaluator
Mesa Public Schools
Mesa, AZ
Term 2017-2018
DIRECTOR
Adrienne Bailey, Ph.D.
Senior Consultant, The Panasonic Foundation
Chicago, IL
Term 2015-2019
VICE-PRESIDENT/PRESIDENT-
ELECT
Geoff Maruyama, Ph.D.
Professor and Chair,
Department of Educational Psychology
University of Minnesota
Minneapolis, MN
Term 2017-2018
DIRECTOR
Elvia G. Noriega
Executive Director, Accountability & Continuous
Improvement
Richardson Independent School District
Richardson, TX
Term 2017-2021
IMMEDIATE PAST PRESIDENT
Mary Yakimowski, Ph.D.
Associate Professor, Department of Educational Leadership and Literacy
Assistant Dean and Certification Officer, College of Education
Sacred Heart University Tollant, CT
Term 2017-2018
DIRECTOR
Kyndra V. Middleton, Ph.D.
Associate Professor,
Educational Psychology
Howard University, Washington D.C.
Term 2014-2018
SECRETARY AND NEWSLETTER
EDITOR
Leigh A. Bennett
Assessment Supervisor
Loudoun County Public Schools
Ashburn, VA
Term 2016-2018
DIRECTOR
Faith Connolly, Ph.D.
Executive Director
Baltimore Education Research Consortium
Johns Hopkins University
Baltimore, MD
Term 2016-2020
TREASURER
Jennifer McCreadie
Director of Evaluation, Senior Research Scientist
Center for Equity and Excellence in Education
The George Washington University
Washington D.C.
Term 2017-2019
BOARD LIAISON FOR GRADUATE STUDENTS
Sarah D. Newton, MA
Doctoral Student, Measurement, Evaluation and Assessment
University of Connecticut
Term 2017-2018
Please go to http://nationalassociationoftestdirectors.org/registration/ to sign up online and pay via credit card/PayPal. To pay by check, print and complete the form below.
Membership categories
Active membership:
Active membership shall be members in good standing with current or former responsibility for educational testing programs; or from institutions involved in the construction or use of tests in settings not primarily for profit, including, but not limited to, school systems and school system research, assessment, testing, or evaluation departments.
Associate membership:
Associate membership shall be open to all persons who are interested in assessment and who subscribe to the purposes and objectives of the Association. Associate members in good standing shall be accorded all rights and privileges of membership except the right to hold office. Bona fide graduate students in full-time programs of study which are related to or include assessment may be admitted to Associate membership.
Honorary Emeritus membership: (no dues required)
The Board of Directors may from time to time elect an Honorary member. Honorary members are individuals who were formerly Active or Associate members for a minimum of five years and who, in the opinion of the Board of Directors of the Association, have made significant contributions to the Association. The Board shall nominate such an individual and, upon a majority of Active, Associate, and Honorary members casting written ballots in favor of the proposal, this individual shall be accorded honorary emeritus status for life and shall be exempt from paying dues. This status shall also include those individuals to whom the association’s Award for Outstanding Contributions to Educational Assessment has been conferred.
Title (optional): First Name: Last Name:
Organization (where you are employed or enrolled):
Mailing Address:
City: State/Province/Region:
Zip/Postal Code: Country: Telephone:
Email Address:
Membership Category:
□ Active ($20) □ Associate ($20) □ Graduate Student ($10) □ Honorary Emeritus (free)
Mail your membership application and your annual membership fee to:
Leigh Bennett, LCPS, 20 High Street, Round Hill, VA 20141
MEMBERSHIP FORM