
ACT Explore Technical Manual
2013–2014

ACT endorses the Code of Fair Testing Practices in Education and the Code of Professional Responsibilities in Educational Measurement, guides to the conduct of those involved in educational testing. ACT is committed to ensuring that each of its testing programs upholds the guidelines in each Code.

A copy of each Code may be obtained free of charge from ACT Customer Services (68), P.O. Box 1008, Iowa City, IA 52243-1008, 319.337.1429.

Visit ACT’s website at www.act.org.

© 2013 by ACT, Inc. All rights reserved.


Contents

Figures
Tables
Preface

Chapter 1  The ACT Explore Program
    Overview and Purpose of the ACT Explore Program
    Code of Fair Testing Practices in Education and Code of Professional Responsibilities in Educational Measurement
    Philosophical Basis for the Tests of Educational Development
    Administering the ACT Explore Program
        Administration Schedule
        Support Materials
        ACT’s Standards for Test Administration and Security

Chapter 2  The ACT Explore Tests of Educational Development
    Description of the ACT Explore Tests
        The English Test
        The Mathematics Test
        The Reading Test
        The Science Test
    Test Development Procedures
        Test Specifications
            Content Specifications
            Statistical Specifications
        Selection of Item Writers
        Item Construction
        Review of Items
        Item Tryouts
        Item Analysis of Tryout Units
        Assembly of New Forms
        Content and Fairness Review of Test Forms
        Review Following Operational Administration
    ACT Explore Scoring Procedures

Chapter 3  ACT’s College Readiness Standards and College Readiness Benchmarks
    ACT’s College Readiness Standards
        Description of the College Readiness Standards
        Determining the Score Ranges for the College Readiness Standards (1997)
        Developing the College Readiness Standards
            The Process
            Conducting an Independent Review of the College Readiness Standards
        Refining the College Readiness Standards for ACT Explore and ACT Plan (2001)
        Periodic Review of the College Readiness Standards
    Interpreting and Using the College Readiness Standards
    ACT’s College Readiness Benchmarks
        Description of the College Readiness Benchmarks
        Data Used to Establish the Benchmarks for the ACT
        Procedures Used to Establish the Benchmarks for ACT Explore and ACT Plan
    Intended Uses of the Benchmarks for Students, Schools, Districts, and States
    Interpreting ACT’s Test Scores with Respect to Both ACT’s College Readiness Standards and ACT’s College Readiness Benchmarks
    Alignment of ACT’s College Readiness Standards with the Common Core State Standards

Chapter 4  Technical Characteristics of the ACT Explore Tests of Educational Development
    Scaling and Norming
        Scaling Study (1999)
        Norming Study (2010)
        Sampling for the 1999 Study
            Sample Design and Data Collection
            Data Editing
            Weighting
            Response Rates
            Obtained Precision
        Scaling
            The Score Scale
            The Scaling Process
        Norming
        Sampling for the 2010 Norming Study
            Sample Design
            Data Editing
            Weighting
            Nonresponse and Bias
            Precision Estimates
            Norming Results
            Estimated ACT Plan Composite Score Ranges
    Equating
    Reliability
        Reliability, Measurement Error, and Effective Weights
    Validity
        Overview
        Measuring Educational Achievement
            Content Validity for ACT Explore Scores
            ACT Explore Test Scores
            Statistical Relationships Between ACT Explore, ACT Plan, and ACT Scores
            ACT Explore Scores and Course Grades
            Growth From Grade 8 to Grade 12
            ACT Explore and ACT Plan College Readiness Benchmarks

Chapter 5  The ACT Explore Interest Inventory and Other Program Components
    Interest Inventory
        Overview
        Score Reporting Procedure
        World-of-Work Map
        Norms
            Sampling
            Weighting
        Precision
        Representativeness
        Psychometric Support for UNIACT

Chapter 6  Reporting ACT Explore Results
    Standard Reporting Services
    Supplemental Reporting Services

References

Figures

3.1  Score Ranges for ACT Explore, ACT Plan, and the ACT
4.1  Conditional Standard Error of Measurement for ACT Explore Test Scores
4.2  Conditional Standard Error of Measurement for ACT Explore Subscores
4.3  Conditional Standard Error of Measurement for ACT Explore Composite Scores
4.4  Average Increase in ACT Plan Mathematics Scores from Taking Rigorous Mathematics Courses, Regardless of Prior Achievement
4.5  Average Increase in ACT Plan Science Scores from Taking Rigorous Science Courses, Regardless of Prior Achievement
4.6  Growth Trajectories Calculated at Grades 8 through 12 for Expected Within-School Average E(Yij), High School A, and High School B
4.7  Conditional Probability of Meeting/Exceeding an ACT English Score = 18, Given Students’ ACT Explore or ACT Plan English Scores
4.8  2011–2012 National ACT Explore-Tested 8th-Grade Students Likely to Be Ready for College-Level Work (in Percent)
5.1  World-of-Work Map
5.2  The ACT Interest Inventory Technical Manual Table of Contents

Tables

1.1  Components of ACT’s College and Career Readiness System
2.1  Content Specifications for the ACT Explore English Test
2.2  Content Specifications for the ACT Explore Mathematics Test
2.3  Content Specifications for the ACT Explore Reading Test
2.4  Content Specifications for the ACT Explore Science Test
2.5  Difficulty Distributions and Mean Discrimination Indices from the 1999, 2003, and 2005 ACT Explore Operational Administrations
3.1  Illustrative Listing of Mathematics Items by Score Range
3.2  Number of Items Reviewed During 1997 National Review
3.3  Percentage of Agreement of 1997 National Expert Review
3.4  Percentage of Agreement of 2000 National Expert Review
3.5  ACT’s College Readiness Benchmarks
4.1  Sample Sizes
4.2  Response Rates by Grade
4.3  Summary of Response Rates within Schools
4.4  Precision of the Samples
4.5  Original to New Scale Concordance
4.6  Response Information by School
4.7  Response Rates within Schools
4.8  Demographic Characteristics for Norming Sample and Population, Grade 8
4.9  Demographic Characteristics for Norming Sample and Population, Grade 9
4.10 Estimated Standard Errors for Proportions
4.11 Scale Score Summary Statistics for ACT Explore Tests and Subscores
4.12 ACT Explore 2010 National Norms for Fall Grade 8
4.13 ACT Explore 2010 National Norms for Spring Grade 8
4.14 ACT Explore 2010 National Norms for Fall Grade 9
4.15 Student Data Used in Constructing the Expected Fall Grade 10 ACT Plan Score Ranges
4.16 Observed Counts and Probabilities of Coverage for Composite Scores, ACT Explore Fall Grade 8 to ACT Plan Fall Grade 10
4.17 Observed Counts and Probabilities of Coverage for Composite Scores, ACT Explore Spring Grade 8 to ACT Plan Fall Grade 10
4.18 Observed Counts and Probabilities of Coverage for Composite Scores, ACT Explore Fall Grade 9 to ACT Plan Fall Grade 10
4.19 Observed Counts and Probabilities of Coverage for Composite Scores, ACT Explore Spring Grade 9 to ACT Plan Fall Grade 10
4.20 Estimated ACT Plan Fall Grade 10 Composite Score Prediction Intervals for ACT Explore Fall Grade 8 Composite Scores
4.21 Estimated ACT Plan Fall Grade 10 Composite Score Prediction Intervals for ACT Explore Spring Grade 8 Composite Scores
4.22 Estimated ACT Plan Fall Grade 10 Composite Score Prediction Intervals for ACT Explore Fall Grade 9 Composite Scores
4.23 Estimated ACT Plan Fall Grade 10 Composite Score Prediction Intervals for ACT Explore Spring Grade 9 Composite Scores
4.24 Estimated Reliabilities and Standard Errors of Measurement for ACT Explore Tests and Subscores
4.25 Scale Score Covariances and Effective Weights for Form A Grade 8
4.26 Scale Score Covariances and Effective Weights for Form A Grade 9
4.27 Scale Score Covariances and Effective Weights for Form B Grade 8
4.28 Scale Score Covariances and Effective Weights for Form C Grade 8
4.29 Correlations Between ACT Explore Test Scores and Subscores
4.30 Correlations, Observed and (Disattenuated), Between ACT Explore, ACT Plan, and ACT Test Scale Scores
4.31 Correlations Between ACT Explore Scores and Course Grades
4.32 Means and Correlations for ACT Explore and High School Grade Point Average
4.33 Composite Score Means for the Three Test Batteries, by Grade and Core Coursework, at Time of ACT Testing
5.1  Selected Demographic and Educational Characteristics of Grade 8 UNIACT Norm Group Students

Preface

This manual contains information, primarily of a technical nature, about the ACT Explore® program. The principal focus of this manual is to document the Explore program’s technical adequacy in light of its intended purposes. This manual supersedes the 2011 edition.

The content of this manual responds to requirements of the testing industry as established in the Code of Professional Responsibilities in Educational Measurement (NCME Ad Hoc Committee on the Development of a Code of Ethics, 1995), the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999), and the Code of Fair Testing Practices in Education (Joint Committee on Testing Practices, 2004).

ACT regularly conducts research studies as part of the ongoing formative evaluation of its programs. These studies are aimed at ensuring that the programs remain technically sound. Information gleaned from these studies is also used to identify aspects of the programs that might be improved or enhanced. The information reported in this manual was derived from studies that have been conducted for the Explore program since its inception. ACT will continue to conduct research on the Explore program and will report future findings in updated versions of this manual. Those who wish to receive more detailed information on a topic discussed in this manual, or on a related topic, are encouraged to contact ACT.

Qualified researchers wishing to access data in order to conduct research designed to advance knowledge about the Explore program are also encouraged to contact ACT. As part of its involvement in the testing process, ACT is committed to continuing and supporting such research, documenting and disseminating its outcomes, and encouraging others to engage in research that sheds light on the Explore program and its uses. Please direct comments or inquiries to Elementary and Secondary School Programs, ACT Development Area, P.O. Box 168, Iowa City, Iowa 52243-0168.

Iowa City, Iowa
May 2013

Chapter 1
The ACT Explore® Program

Overview and Purpose of the ACT Explore Program

Students in the eighth and ninth grades are at an exciting point in their lives—with a world of possibilities open to explore. The comprehensive ACT Explore® program helps students make the most of their opportunities and helps guide them in future educational and career planning.

Like all of ACT’s assessment programs, Explore is based on the belief that young people—and their parents, teachers, counselors, and school administrators—will make more productive plans and decisions if they have organized, relevant information available when they need it most.

Explore is an every-student program that assesses academic progress at the eighth- and ninth-grade levels, helps students understand and begin to explore the wide range of career options open to them, and assists them in developing a high school coursework plan that prepares them to achieve their post-high school goals.

Explore includes four 30-minute multiple-choice tests—English, Mathematics, Reading, and Science. Explore also collects information about student interests, needs, plans, and selected background characteristics that can be useful in guidance and planning activities.

ACT makes various materials about test preparation and the interpretation of test results available to Explore test takers and prospective test takers. An overview of the test and a selection of sample test questions are available to students online at www.act.org/explorestudent/. The Student Score Report each examinee receives after testing contains four sections: (a) Your Scores, (b) Your Plans, (c) Your Career Possibilities, and (d) Your Skills. The report is accompanied by a booklet, Using Your ACT Explore Results, which provides interpretive information about the test results and suggestions for making educational plans, building academic skills, and exploring occupations.

Explore functions as a stand-alone program or as the point of entry into the secondary-school level of ACT’s College and Career Readiness System—an integrated series of assessment programs that includes Explore, ACT Plan®, and the ACT® college readiness assessment. When used together, the assessments in the system give educators at the middle school and secondary school levels a powerful, interrelated sequence of instruments to measure student development from Grade 8 through Grade 12.

The Explore, Plan, and ACT programs are scored along a common scale extending from 1 to 36; the maximum score on Explore is 25, the maximum Plan score is 32, and the maximum ACT score is 36. Because they are reported on the same score scale, the programs’ assessment results inform students, parents, teachers, and counselors about individual student strengths and weaknesses while there is still time to address them.

The Explore, Plan, and ACT assessments provide information about how well a student performs compared to other students. They also provide standards-based interpretations through ACT’s College Readiness Standards—statements that describe students’ performance in terms of the knowledge and skills they have acquired. Because the College Readiness Standards focus on the integrated, higher-order thinking skills that students develop in Grades K–12 and that are important for success both during and after high school, the Standards provide a common language for secondary and postsecondary educators.

Using the College Readiness Standards, secondary educators can pinpoint the skills students have and those they are ready to learn next. The Standards clarify college expectations in terms that high school teachers understand, and they offer teachers guidance for improving instruction to help correct student deficiencies in specific areas. Explore, Plan, and ACT results can be used to identify students who are on track to being ready for college. ACT’s College Readiness Benchmark Scores—for English Composition, Algebra, Social Sciences, and Biology—were developed to help identify examinees who would likely be ready to do college-level work in these courses or course areas. Chapter 3 gives details about the College Readiness Standards and Benchmarks.

ACT’s College and Career Readiness System is designed to help students plan for further education and explore careers, based on their own skills, interests, and aspirations. Its results give schools a way to get students engaged in planning their own futures. When they know what colleges expect, in terms they can understand, students can take ownership and control of their information, and they can use it to help make a smooth transition to postsecondary education or training. Table 1.1 summarizes the system’s components.

Code of Fair Testing Practices in Education and Code of Professional Responsibilities in Educational Measurement

Since publication of the original edition in 1988, ACT has endorsed the Code of Fair Testing Practices in Education (Joint Committee on Testing Practices, 2004), a statement of the obligations to test takers of those who develop, administer, or use educational tests and test data. The development of the Code was sponsored by a joint committee of the American Association for Counseling and Development, Association for Measurement and Evaluation in Counseling and Development, American Educational Research Association, American Psychological Association, American Speech-Language-Hearing Association, and National Council on Measurement in Education to advance, in the public interest, the quality of testing practices.

The Code sets forth fairness criteria in four areas: developing and selecting appropriate tests, administering and scoring tests, reporting and interpreting test results, and informing test takers. Separate standards are provided for test developers and for test users in each of these four areas.

ACT’s endorsement of the Code represents a commitment to vigorously safeguard the rights of individuals participating in its testing programs. ACT employs an ongoing review process whereby each of its testing programs is routinely reviewed to ensure that it upholds the standards set forth in the Code for appropriate test development practice and test use.

Similarly, ACT endorses and is committed to complying with the Code of Professional Responsibilities in Educational Measurement (NCME Ad Hoc Committee on the Development of a Code of Ethics, 1995), a statement of professional responsibilities for those who develop assessments; market and sell assessments; select assessments; administer assessments; interpret, use, and communicate assessment results; educate about assessment; and evaluate programs and conduct research on assessments.

A copy of each Code may be obtained free of charge from ACT Customer Services (68), P.O. Box 1008, Iowa City, Iowa 52243-1008, 319.337.1429.

Table 1.1
Components of ACT’s College and Career Readiness System

Component                     | Grades 8/9 (Explore)                          | Grade 10 (Plan)                                                         | Grades 11/12 (ACT)
Career and education planning | Interest Inventory; Needs Assessment          | Interest Inventory; Course Taking; Needs Assessment                     | Interest Inventory; Course Taking and Grades; Student Profile
Objective assessments         | English; Mathematics; Reading; Science        | English; Mathematics; Reading; Science                                  | English; Mathematics; Reading; Science; Writing (optional)
Instructional support         | College Readiness Standards                   | College Readiness Standards                                             | College Readiness Standards
Evaluation                    | Summary Reports; Explore/Plan Linkage Reports | Summary Reports; Explore/Plan Linkage Reports; Plan/ACT Linkage Reports | Summary Reports; Plan/ACT Linkage Reports

Philosophical Basis for the Tests of Educational Development

The Explore multiple-choice tests of educational development share a common philosophical basis with the Plan and ACT tests. These three testing programs measure student development in the same curriculum areas of English, mathematics, reading, and science. In simplest terms, the principal difference between the three testing programs is that they focus on knowledge and skills typically attained at different times in students’ secondary school experience. The ACT, for college-bound eleventh and twelfth graders, focuses on knowledge and skills attained as the cumulative effect of school experience. Plan, intended for all tenth graders, focuses on knowledge and skills typically attained early in students’ secondary school experience (by Grade 10), and Explore, intended for all students in eighth and ninth grades, focuses on knowledge and skills usually attained by Grade 8; both tests also include knowledge and skills students are learning in these respective grades. That is, the tests emphasize what students have learned in the long term and also give examinees the chance to use knowledge and skills they currently are learning.

Because the content of the Explore tests of educational development is linked to the ACT framework, understanding the philosophical basis of the Explore tests requires an appreciation of the philosophical basis of the ACT.

The ACT tests of educational development are designed to measure how prepared students are to achieve the general academic goals of college. The principal philosophical basis for the ACT is that college preparedness is best assessed by measuring, as directly as possible, the academic skills that students will need in order to perform college-level work. Complexity is certainly a characteristic of such skills. Thus, the ACT tests are designed to determine how skillful students are in solving problems, grasping implied meanings, drawing inferences, evaluating ideas, and making judgments. In addition, the ACT tests of educational development are oriented toward the general content areas of college and high school instructional programs. The test questions require students to integrate the knowledge and skills they possess in major curriculum areas with the stimulus material provided by the test. Briefly, then, the philosophical basis for the ACT tests rests on two pillars: (a) the tests should measure academic skills necessary for education and work after high school, and (b) the content of the tests should be related to major curriculum areas.

Tests of general educational development are used in the ACT, Plan, and Explore because they were judged to satisfy, better than other types of tests, the diverse requirements of tests intended to facilitate the transition to high school, college, or work. By contrast, measures of examinee knowledge of specific course content (as opposed to curriculum areas) often do not provide a common baseline for comparisons of students, because courses vary so much across schools, and even within schools. In addition, course-specific tests may not measure students’ skills in problem solving and in the integration of knowledge from a variety of courses.

Tests of educational development can also be contrasted with tests of academic aptitude. The stimuli and test questions for aptitude tests are often purposefully chosen to be dissimilar to instructional materials, and each test within a battery of aptitude tests is usually designed to be homogeneous in psychological structure. Consequently, aptitude tests are often not designed to reflect the complexity of course work or the interactions among the skills measured.

Also, because tests of educational development measure many of the same skills that are taught in school, the best preparation for such tests should be course work. Thus, tests of educational development should send students a clear message that high test scores are not simply a matter of innate ability—they reflect a level of achievement that has been earned as a result of hard work and dedication in school.

Finally, the ACT, Plan, and Explore tests are intended to reflect educational goals that are widely accepted and judged by educators to be important for success in college and work. As such, the content of the tests is designed with educational considerations, rather than statistical and empirical techniques, given paramount importance. For example, content representativeness of the tests is more important than choosing the most highly discriminating items.

Administering the ACT Explore Program

Administration Schedule

The Explore tests (English, Mathematics, Reading, and Science) should be administered in a single session and will require approximately two hours thirty minutes to three hours total administration time, including the non-test portions. The Explore program is available year-round for administration by schools at a time of their choosing. Because national normative information (the percentage of examinees who scored at or below the examinee’s earned scale score) on Explore reports is based on students who took the Explore tests in a single session, interpretation of results from administration of the tests in two or more sessions should be done with caution. If local normative information is important to the test user, it is important that all students in the local population be tested under similar administration schedules.

Eighth-grade students who test in August through January will receive Fall Eighth-Grade Norms. Eighth graders who test in February through July will receive Spring Eighth-Grade Norms. Ninth-grade students will receive Fall Ninth-Grade Norms regardless of their test date. (If your school chooses to test ninth-grade students in the spring, keep in mind that these students will have had several more months of instruction than the norm group. Therefore, spring-tested ninth graders may show higher levels of achievement when compared to the fall-tested norm group than if they had tested in the fall.) Students who are below the eighth grade when they take Explore will receive Fall Eighth-Grade Norms on their student reports.
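Mechanically, a norm of this kind is just a cumulative percentage taken over a scale score frequency distribution. The short Python sketch below illustrates the idea only; the frequency counts are invented and are not actual ACT Explore norming data.

    # Minimal sketch: derive "percent at or below" norms from a scale
    # score frequency distribution. Frequencies are hypothetical, not
    # actual ACT Explore norming data.

    def percent_at_or_below(freq_by_score):
        """Map each scale score to the percent of examinees at or below it."""
        total = sum(freq_by_score.values())
        cumulative, norms = 0, {}
        for score in sorted(freq_by_score):
            cumulative += freq_by_score[score]
            norms[score] = round(100 * cumulative / total)
        return norms

    # Hypothetical frequencies for Explore scale scores 1-25
    freqs = dict(zip(range(1, 26),
                     [1, 2, 4, 8, 15, 30, 55, 90, 140, 200, 260, 310, 340,
                      340, 310, 260, 200, 140, 90, 55, 30, 15, 8, 4, 2]))
    norms = percent_at_or_below(freqs)
    print(norms[17])  # percent of this invented norm group at or below 17

In practice, ACT derives the reported norms from weighted national samples, as described in chapter 4.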

Students with physical or learning disabilities who cannot complete the Explore multiple-choice tests in the standard time limits, using standard test materials, may be tested under special conditions and/or using special testing materials available from ACT. It should be noted, however, that normative information is based on students who took the Explore tests in a single session under the standard time limits using standard test materials. Therefore, the normative information reported for students testing under special conditions should be interpreted with caution.


Support Materials

Explore includes a coordinated set of support materials to help students, parents, teachers, counselors, and administrators understand the purposes of the program and the information provided.

• The ACT Explore Supervisor’s Manual, designed to be used by Explore test supervisors, shows how the Explore program can be used to help students build a solid foundation for future academic and career success.

• The Guide for Interpreting Your ACT Explore Item-Response Summary Report explains how to use and understand the information in the Item-Response Summary Report.

• An introductory brochure for students and their parents, Why Take ACT Explore?, provides a brief overview of the Explore program and tips to help students do their best.

• Test materials are composed of Student Assessment Sets—the consumable materials needed to test one student—and reusable test books. Both are typically available in packages to test 30 students. One Supervisor’s Manual and copies of Directions for Testing are shipped with each order. Student Assessment Set prices include scoring and standard program reports produced by ACT. Reports are routinely shipped from ACT within two weeks of receipt of completed answer sheets.

• Each student who participates in Explore will receive Using Your ACT Explore Results, which includes sections on interpreting the Student Score Report, planning for high school and beyond, career possibilities, building academic skills, and coursework planning.

• College Readiness Standards help students, teachers, counselors, and others to more fully understand what students who score in various score ranges are likely to know and to be able to do in each academic area assessed: English, mathematics, reading, and science. The Standards are complemented by “ideas for progress,” which are suggestions for learning experiences that students might benefit from if they wish to progress to higher levels of achievement. The ACT Explore College Readiness Standards are discussed in chapter 3.

ACT’s Standards for Test Administration and Security

ACT provides specific guidelines for test supervisors and materials handling in order to maintain testing conditions as uniform as possible at all schools. Test supervisors are provided with a copy of the ACT Explore Supervisor’s Manual. This document provides detailed instructions about all aspects of test administration. Among other standard procedures, the manual includes instructions to be read aloud to the examinees. The instructions are to be read without departure from the specified text in order to maintain standard testing conditions.

Chapter 2
The ACT Explore Tests of Educational Development

Description of the ACT Explore Tests

Explore contains four multiple-choice tests—English, Mathematics, Reading, and Science. These tests are designed to measure students’ curriculum-related knowledge and the complex cognitive skills important for future education and careers. Explore results provide eighth- and ninth-grade students with the information they need to begin making plans for high school and beyond.

The fundamental idea underlying the development and use of these tests is that the best way to determine how well prepared students are for further education and for work is to measure as directly as possible the knowledge and skills needed in those settings. ACT conducted a detailed analysis of three sources of information to determine which knowledge and skills should be measured by Explore. First, the objectives for instruction in Grades 6 through 9 for all states that had published them were studied. Second, the textbooks on state-approved lists for courses in Grades 6 through 8 were reviewed. Third, educators in Grades 7 through 12 and at the postsecondary level were consulted to determine the knowledge and skills taught in Grades 6 through 8 that are prerequisite to successful performance in high school. Information from these sources helped to define a scope and sequence for each of the areas measured by Explore.

Curriculum study is ongoing at ACT. Curricula in each content area (English, mathematics, reading, and science) in the Explore tests are reviewed on a periodic basis. ACT’s analyses include reviews of tests, curriculum guides, and national standards; surveys of current instructional practice; and meetings with content experts (see ACT, ACT National Curriculum Survey® 2009, 2009).

The Explore tests are designed to be developmentally and conceptually linked to those of Plan and the ACT. To reflect that continuity, the names of the multiple-choice tests are the same across the three programs, and the Explore test scores are on the same score scales as those of Plan and the ACT. The programs are similar in their focus on critical thinking skills and in their common curriculum base. Specifications for the Explore program are consistent with, and should be seen as a logical extension of, the content and skills measured in the Plan and ACT programs.

The English Test

The Explore English Test (40 items, 30 minutes) measures the student’s understanding of the conventions of standard written English (punctuation, grammar and usage, and sentence structure) and of rhetorical skills (strategy, organization, and style). The test stresses the analysis of the kinds of prose that students are required to read and write in most middle and secondary school programs, rather than the rote recall of rules of grammar. The test consists of four prose passages, each accompanied by a number of multiple-choice test items. Different passage types are employed to provide a variety of rhetorical situations.

Some items refer to underlined portions of the passage and offer several alternatives to the portion underlined. The student must decide which choice is most appropriate in the context of the passage. Some items ask about an underlined portion, a section of the passage, or the passage as a whole. The student must decide which choice best answers the question posed. Many items offer “NO CHANGE” to the passage as one of the choices. The questions are numbered consecutively. Each item number refers to a correspondingly numbered portion underlined in the passage or to a corresponding numeral in a box located at the appropriate point in the passage.

Two subscores are reported for this test: a Usage/Mechanics subscore based on 25 items and a Rhetorical Skills subscore based on 15 items.

The Mathematics Test

The Explore Mathematics Test (30 items, 30 minutes) measures the student’s mathematical reasoning. The test emphasizes quantitative reasoning rather than memorization of formulas or computational skills. In particular, it emphasizes the ability to solve practical quantitative problems that are encountered in middle school and junior high courses. The items included in the Mathematics Test cover four cognitive levels: knowledge and skills, direct application, understanding concepts, and integrating conceptual understanding.

Students should have calculators available when taking this test and are encouraged to use the calculator they are most comfortable with. The test includes problems for which a calculator is clearly the best tool to use, and others where a non-calculator solution is appropriate. Some questions on the test may be solved with or without a calculator, with neither strategy clearly superior to the other. Students must choose when to use and when not to use calculators. A few restrictions do apply to the calculators that may be used. These restrictions can be found in the current year’s ACT Explore Supervisor’s Manual or on ACT’s website at www.act.org.

The items in the Mathematics Test are classified according to four content categories: Pre-Algebra, Elementary Algebra, Geometry, and Statistics/Probability.


The Reading Test

The Explore Reading Test (30 items, 30 minutes) measures the student’s level of reading comprehension as a product of skill in referring and reasoning. That is, the test items require students to derive meaning from several texts by (a) referring to what is explicitly stated and (b) reasoning to determine implicit meanings. Specifically, items will ask the student to use referring and reasoning skills to determine main ideas; locate and interpret significant details; understand sequences of events; make comparisons; comprehend cause-effect relationships; determine the meaning of context-dependent words, phrases, and statements; draw generalizations; and analyze the author’s or narrator’s voice or method. The test comprises three prose passages that are representative of the level and kinds of text commonly encountered in middle school and junior high curricula; passages on topics in the social sciences, prose fiction, and the humanities are included. Each passage is preceded by a heading that identifies what type of passage it is (e.g., “Prose Fiction”), names the author, and may include a brief note that helps in understanding the passage. Each passage is accompanied by a set of multiple-choice test items. These items do not test the rote recall of facts from outside the passage, isolated vocabulary questions, or rules of formal logic. Rather, the test focuses upon the complex of complementary and mutually supportive skills that readers must bring to bear in studying written materials across a range of subject areas.

The Science Test

The Explore Science Test (28 items, 30 minutes) measures scientific reasoning skills acquired up to Grade 8. The test presents six sets of scientific information, each followed by a number of multiple-choice test items. The scientific information is conveyed in one of three different formats: data representation (graphs, tables, and other schematic forms), research summaries (descriptions of several related experiments), or conflicting viewpoints (expressions of several related hypotheses or views that are inconsistent with one another). The items require students to recognize and understand the basic features of, and concepts related to, the information provided; examine critically the relationships between the information provided and the conclusions drawn or hypotheses developed; and generalize from given information to gain new information, draw conclusions, or make predictions.

The Science Test is based on the type of content that is typically covered in science courses through Grade 8. Materials are drawn from the life sciences, Earth/space sciences, and physical sciences. The test emphasizes scientific reasoning skills over recall of scientific content, skill in mathematics, or skill in reading.

Test Development Procedures

This section describes the procedures that are used in the development of the four tests described above. The test development cycle required to produce each new form of the Explore tests takes as long as two and one-half years and involves several stages.

Test Specifications

Two types of test specifications are used in the development of the Explore tests: content specifications and statistical specifications.

Content Specifications. Content specifications for the Explore tests were developed through the curricular analysis discussed in chapter 1. While care is taken to ensure that the basic structure of the Explore tests remains the same from year to year so that the scale scores are comparable, the specific characteristics of the test items used in each specification category are reviewed regularly. Consultant panels are convened to review the new forms of the test in order to verify their content accuracy and the match of the content of the tests to the content specifications. At this time, the characteristics of the items that fulfill the content specifications are also reviewed. While the general content of the test remains constant, the specific kinds of items in a specification category may change slightly. The basic structure of the content specifications for each of the Explore multiple-choice tests is provided in Tables 2.1 through 2.4.

Statistical Specifications. Statistical specifications for the tests indicate the level of difficulty (proportion correct) and minimum acceptable level of discrimination (biserial correlation) of the test items to be used.

The distribution of item difficulties was selected so that the tests will effectively differentiate among students who vary widely in their level of achievement. The tests are constructed to have a mean item difficulty in the mid-.60s for the Explore national population and a range of difficulties from about 0.20 to 0.95.

With respect to discrimination indices, the following standards are used when selecting Explore items: (1) items should have a biserial correlation of 0.20 or higher with scores on a test measuring comparable content, and (2) mathematics items in the difficulty range of 0.30–0.95 should have a biserial correlation of 0.30 or higher with scores on a test measuring comparable content.
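As an illustration of these two screening statistics, the Python sketch below computes difficulty as the proportion answering correctly and the biserial correlation from the standard formula r_bis = ((M_p - M_q) / s) * (p * q / y), where M_p and M_q are the mean criterion scores of examinees answering right and wrong, s is the criterion standard deviation, p is the item difficulty, q = 1 - p, and y is the unit-normal ordinate at the point dividing p and q. The response data are invented, and this is not ACT’s operational analysis code.

    # Minimal sketch of the two statistical screens: item difficulty
    # (proportion correct) and item-criterion biserial correlation.
    # Response data are hypothetical, for illustration only.

    from statistics import NormalDist, mean, pstdev

    def difficulty(item):
        """Proportion of examinees answering the item correctly (0/1)."""
        return sum(item) / len(item)

    def biserial(item, criterion):
        """Biserial r between a 0/1 item and a criterion (total) score."""
        p = difficulty(item)
        q = 1 - p
        right = [c for i, c in zip(item, criterion) if i == 1]
        wrong = [c for i, c in zip(item, criterion) if i == 0]
        y = NormalDist().pdf(NormalDist().inv_cdf(p))  # unit-normal ordinate
        return (mean(right) - mean(wrong)) / pstdev(criterion) * (p * q / y)

    item = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]             # 0/1 item responses
    total = [22, 17, 16, 20, 20, 23, 18, 21, 15, 19]  # criterion scores
    print(round(difficulty(item), 2))    # 0.7, inside the 0.20-0.95 band
    print(round(biserial(item, total), 2))  # about 0.62, clears the 0.20 floor

Unlike the point-biserial, the biserial assumes a normally distributed latent trait underlying the dichotomous item response, which is why the normal ordinate y appears in the formula.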

Selection of Item Writers

ACT contracts with item writers to construct the items for Explore. The item writers are content specialists in the disciplines measured by the Explore tests. Most are actively engaged in teaching. They teach at a number of different levels, from middle school to university, and at a variety of institutions, from small private schools to large public institutions. They represent the diversity of the population of the United States with respect to ethnic background, gender, and geographic location. ACT makes every effort to ensure that the item writers for Explore represent a cross section of educators in the United States.

Before being asked to write items for the Explore tests, potential item writers are required to submit a sample set of materials for review. An item writer’s guide that is specific to the content area is provided to each item writer. The guides include examples of items and provide item writers with the test specifications and ACT’s requirements for content and style. Included are specifications for fair portrayal of groups of individuals, avoidance of subject matter that may be unfamiliar to some groups of society, and nonsexist use of language.

Each sample set submitted by a potential item writer is evaluated by ACT Test Development staff. A decision concerning whether to contract with the item writer is made on the basis of that evaluation.

Each item writer under contract is given an assignment to produce a small number of multiple-choice items. The size of the assignment ensures that a diversity of material will be produced and that the security of the testing program will be maintained, since any item writer will know only a small proportion of the items produced. Item writers work closely with ACT test specialists, who assist them in producing items of high quality that meet the test specifications.

Table 2.1
Content Specifications for the ACT Explore English Test

Six elements of effective writing are included in the English Test. These elements and the approximate proportion of the test devoted to each are given below.

Content/Skills              Proportion of test    Number of items
Usage/Mechanics                    .64                  25
  Punctuation                      .15                   6
  Grammar and Usage                .20                   8
  Sentence Structure               .29                  11
Rhetorical Skills                  .36                  15
  Strategy                         .12                   5
  Organization                     .12                   5
  Style                            .12                   5
Total                             1.00                  40

a. Punctuation. The items in this category test the student’s knowledge of the conventions of internal and end-of-sentence punctuation, with emphasis on the relationship of punctuation to meaning (e.g., avoiding ambiguity, identifying appositives).

b. Grammar and Usage. The items in this category test the student’s understanding of agreement between subject and verb, between pronoun and antecedent, and between modifiers and the words modified; verb formation; pronoun case; formation of comparative and superlative adjectives and adverbs; and idiomatic usage.

c. Sentence Structure. The items in this category test the student’s understanding of relationships between and among clauses, placement of modifiers, and shifts in construction.

d. Strategy. The items in this category test the student’s ability to develop a given topic by choosing expressions appropriate to an essay’s audience and purpose; to judge the effect of adding, revising, or deleting supporting material; and to judge the relevance of statements in context.

e. Organization. The items in this category test the student’s ability to organize ideas and to choose effective opening, transitional, and closing sentences.

f. Style. The items in this category test the student’s ability to select precise and appropriate words and images, to maintain the level of style and tone in an essay, to manage sentence elements for rhetorical effectiveness, and to avoid ambiguous pronoun references, wordiness, and redundancy.


Table 2.2
Content Specifications for the ACT Explore Mathematics Test

The items in the Mathematics Test are classified according to four content categories. These categories and the approximate proportion of the test devoted to each are given below.

Mathematics content area    Proportion of test    Number of items
Pre-Algebra                        .33                  10
Elementary Algebra                 .30                   9
Geometry                           .23                   7
Statistics/Probability             .14                   4
Total                             1.00                  30

a. Pre-Algebra. Items in this category are based on operations with whole numbers, integers, decimals, and fractions. The topics covered include place value, square roots, scientific notation, factors, ratio, proportion, and percent. Formal variables are not used.

b. Elementary Algebra. The items in this category are based on operations with algebraic expressions. The operations include evaluation of algebraic expressions by substitution, use of variables to express functional relationships, solution of linear equations in one variable, use of real number lines to represent numbers, and graphing of points in the standard coordinate plane.

c. Geometry. Items in this category cover such topics as the use of scales and measurement systems, plane and solid geometric figures and associated relationships and concepts, the concept of angles and their measures, parallelism, relationships of triangles, properties of a circle, and the Pythagorean theorem. All of these topics are addressed at a level preceding formal geometry.

d. Statistics/Probability. Items in this category cover such topics as elementary counting and rudimentary probability; data collection, representation, and interpretation; and reading and relating graphs, charts, and other representations of data. These topics are addressed at a level preceding formal statistics.

Table 2.3
Content Specifications for the ACT Explore Reading Test

The items in the Reading Test are based on prose passages that are representative of the kinds of writing commonly encountered in middle school and junior high school curricula, including the social sciences, prose fiction, and the humanities. The three content areas and the approximate proportion of the test devoted to each are given below.

Reading passage content    Proportion of test    Number of items
Prose Fiction                     .33                  10
Social Sciences                   .33                  10
Humanities                        .33                  10
Total                            1.00                  30

a. Prose Fiction. The items in this category are based on short stories or excerpts from short stories or novels.

b. Humanities. The items in this category are based on passages from memoirs and personal essays, and in the content areas of architecture, art, dance, ethics, film, language, literary criticism, music, philosophy, radio, television, or theater.

c. Social Sciences. The items in this category are based on passages in anthropology, archaeology, biography, business, economics, education, geography, history, political science, psychology, or sociology.


Item Construction

The item writers must create items that are educationally important as well as psychometrically sound. A large number of items must be constructed because, even with good writers, many items fail to meet ACT’s standards. Each item writer submits a set of items, called a unit, in a content area.

Review of Items

After a unit is accepted, it is edited to meet ACT’s specifications for content accuracy, word count, item classification, item format, and language. During the editing process, all test materials are reviewed for fair portrayal and balanced representation of groups of society and nonsexist use of language. The unit is reviewed several times by ACT staff to ensure that it meets all of ACT’s standards.

Copies of each unit are then submitted to content and fairness experts for external reviews prior to the pretest administration of these units. The content reviewers are eighth-grade teachers, curriculum specialists, and college and university faculty members. These content experts review the unit for content accuracy, educational importance, and grade-level appropriateness. The fairness reviewers are experts in diverse educational areas who represent both genders and a variety of racial and ethnic backgrounds. These reviewers help ensure fairness to all examinees.

Any comments on the units by the content consultants are discussed at a panel meeting with all the content consultants and ACT staff, and appropriate changes are made to the unit(s). All fairness consultants’ comments are likewise reviewed and discussed, and appropriate changes are made to the unit(s).

Item Tryouts

The items that are judged acceptable in the review process are assembled into tryout units. Several units are then combined and placed into booklets. The Explore tryout units are administered in a special study under standard conditions to a sample of students selected to be representative of the total examinee population. Each examinee in the tryout sample is administered a tryout booklet from one of the four academic areas covered by the Explore tests. The time limits for the tryout units permit most of the students to respond to all items.

Table 2.4
Content Specifications for the ACT Explore Science Test

The Science Test is based on the type of content that is typically covered in early general science courses through Grade 8. Materials are drawn from the life sciences (such as biology, botany, ecology, health, human behavior, and zoology), Earth/space sciences (such as map reading, meteorology, geology, and astronomy), and physical sciences (such as simple chemical formulas and equations and other basic chemistry, weights and measures, and basic principles of physics). The test emphasizes scientific reasoning skills over recall of specific scientific content, skill in mathematics, or skill in reading. Minimal arithmetic and algebraic computations may be required to answer some questions. The three formats and the approximate proportion of the test devoted to each are given below.

Content area^a              Format                    Proportion of test    Number of items
Earth/Space Sciences        Data Representation              .43                  12
Life Sciences               Research Summaries               .36                  10
Physical Sciences           Conflicting Viewpoints           .21                   6
                            Total                           1.00                  28

^a All three content areas are represented in the test; there are three units in the life sciences, two units in the physical sciences, and one unit in the Earth/space sciences.

a. Data Representation. This format presents students with graphic and tabular material similar to that found in science journals and texts. The items associated with this format measure skills such as graph reading, interpretation of scatterplots, and interpretation of information presented in tables. The graphic or tabular material may be taken from published materials; the items are composed expressly for the Science Test.

b. Research Summaries. This format provides students with descriptions of one or more related experiments. The items focus on the design of experiments and the interpretation of experimental results. The stimulus and items are written expressly for the Science Test.

c. Conflicting Viewpoints. This format presents students with expressions of several hypotheses or views that, being based on differing premises or on incomplete data, are inconsistent with one another. The items focus on the understanding, analysis, and comparison of alternative viewpoints or hypotheses. Both the stimulus and the items are written expressly for the Science Test.


Item Analysis of Tryout Units

Item analyses are performed on the tryout units. For a given booklet, the sample is divided into low, medium, and high groups by the individual’s total tryout test score. The cutting scores for the three groups are the 27th and the 73rd percentile points in the distribution of those scores.

Proportions of students in each of the groups correctly answering each tryout item are tabulated, as well as the proportion in each group selecting each of the incorrect options. Biserial and point-biserial correlation coefficients between each item score (correct/incorrect) and the total score on the tryout unit are also computed.

The item analyses serve to identify statistically effective test questions. Items that are either too difficult or too easy, and those that fail to discriminate between students of high and low educational development as measured by their tryout test scores, are eliminated or revised. The biserial and point-biserial correlation coefficients, as well as the differences between proportions of students answering the item correctly in each of the three groups, are used as indices of the discriminating power of the tryout items.
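As an illustration of these statistics, the sketch below computes the three-group proportions and the point-biserial correlation for a single item; the names and data layout are hypothetical, and this is not ACT's operational code:

```python
import numpy as np

def tryout_item_analysis(responses, totals):
    """Illustrative tryout item analysis: split examinees into low/medium/
    high groups at the 27th and 73rd percentiles of the tryout total score,
    then report proportion correct per group and the point-biserial.
    `responses` is a 0/1 vector for one item; `totals` are tryout scores."""
    responses = np.asarray(responses, dtype=float)
    totals = np.asarray(totals, dtype=float)
    lo_cut, hi_cut = np.percentile(totals, [27, 73])
    groups = {
        "low": responses[totals <= lo_cut],
        "medium": responses[(totals > lo_cut) & (totals < hi_cut)],
        "high": responses[totals >= hi_cut],
    }
    props = {name: grp.mean() for name, grp in groups.items()}
    # Difference in proportion correct between high and low groups serves
    # as one index of discriminating power.
    props["high_minus_low"] = props["high"] - props["low"]
    # Point-biserial: Pearson correlation of the 0/1 item score with the total.
    pbis = np.corrcoef(responses, totals)[0, 1]
    return props, pbis
```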

Each item is reviewed following the item analysis. ACT staff scrutinizes items determined to be of poor quality in order to identify possible causes. Some items are revised and placed in new tryout units following further review. The review process also provides feedback that helps decrease the incidence of poor-quality items in the future.

Assembly of New Forms

Items that are judged acceptable in the review process are placed in an item pool. Preliminary forms of the Explore tests are constructed by selecting from this pool items that match both the content and statistical specifications for the tests.

For each test in a battery form, items are selected to match the content distribution for the test shown in Tables 2.1 through 2.4. Items are also selected to comply with the desired statistical specifications as discussed in an earlier section. The distributions of item difficulty levels obtained on recent forms of the four tests are displayed in Table 2.5. The data in the table are taken from random samples of approximately 2,000 students from the operational administrations of the tests in 1999, 2003, and 2005. In addition to the item difficulty distributions, item discrimination indices in the form of observed mean biserial correlations are reported.

Table 2.5
Difficulty^a Distributions and Mean Discrimination^b Indices
from the 1999, 2003, and 2005 ACT Explore Operational Administrations

Difficulty range       English    Mathematics    Reading    Science
.00–.09                    0            0            0          0
.10–.19                    0            1            0          0
.20–.29                    0            3            1          1
.30–.39                    7            8            1          4
.40–.49                   16           13           16         15
.50–.59                   15           16           20         20
.60–.69                   26           18           24         22
.70–.79                   30            9           16         10
.80–.89                   21           14           10          7
.90–.99                    5            8            2          5
Number of items^c        120           90           90         84
Mean difficulty^a        .66          .62          .62        .61
Mean discrimination^b   0.58         0.59         0.61       0.59

^a Difficulty is the proportion of examinees correctly answering the item.
^b Discrimination is the item-total biserial correlation coefficient.
^c Three forms of 40, 30, 30, and 28 items each for the English, Mathematics, Reading, and Science Tests, respectively.


Content and Fairness Review of Test Forms

The preliminary versions of the test forms are subjected to several reviews to ensure that the items are accurate and that the overall test forms are fair and conform to good test construction practice. The first review is performed by ACT staff. Items are checked for content accuracy and conformity to ACT style. The items are also reviewed to ensure that they are free of clues that could allow test-wise students to answer the item correctly even though they lack knowledge in the subject area or the required skills.

The assembled forms are then submitted to content and fairness experts for external review prior to the operational administration of the forms. These consultants are not the same individuals used for the content and fairness reviews of tryout units.

The content consultants are eighth-grade teachers, curriculum specialists, and college and university faculty members. The content consultants review the forms for content accuracy, educational importance, and grade-level appropriateness. The fairness consultants are diversity experts in education who represent both genders and a variety of racial and ethnic backgrounds. The fairness consultants review the forms to help ensure fairness to all examinees.

After the external content and fairness reviews, ACT summarizes the results from the reviews. Comments from the consultants are then reviewed by ACT staff members, and any necessary changes are made to the test forms. Whenever significant changes are made, the revised components are again reviewed by the appropriate consultants and by ACT staff. If no further corrections are needed, the test forms are prepared for printing.

In all, at least sixteen independent reviews are made of each test item before it appears on a national form of Explore. The many reviews are performed to help ensure that each student’s level of achievement is accurately and fairly evaluated.

Review Following Operational Administration

After each operational administration, item analysis results are reviewed for any abnormality, such as substantial changes in item difficulty and discrimination indices between tryout and national administrations. Only after all anomalies have been thoroughly checked and the final scoring key approved are score reports produced. Examinees are encouraged to challenge any items whose correctness they believe is questionable. Once a challenge to an item is raised and reported, the item is reviewed by experts in the content area the item is assessing. In the event that a problem is found with an item, necessary actions are taken to eliminate or minimize the influence of the problem item. In all cases, the person who challenges an item is sent a letter indicating the results of the review.

Also, after each operational administration, differential item functioning (DIF) analysis is conducted on the test data. DIF can be described as a statistical difference between the probability of a specific population group (the “focal” group) getting the item right and the probability of a comparison population group (the “base” group) getting the item right, given that both groups have the same level of expertise with respect to the content being tested. The procedures currently used for the analysis include the standardized difference in proportion-correct (STD) procedure and the Mantel-Haenszel common odds-ratio (MH) procedure.

In ACT’s experience, the STD and MH procedures are useful techniques in detecting DIF. Both techniques are designed for use with multiple-choice items, and both require data from significant numbers of examinees to provide reliable results. For a description of these statistics and their overall performance in detecting DIF, see the ACT Research Report entitled Performance of Three Conditional DIF Statistics in Detecting Differential Item Functioning on Simulated Tests (Spray, 1989). In the analysis of items in an Explore form, large samples representing examinee groups of interest (e.g., males and females) are selected from the total number of examinees taking the test. The examinees’ responses to each item on the test are analyzed using the STD and MH procedures. Compared with preestablished criteria, the items with STD and/or MH values exceeding the tolerance level are flagged. The flagged items are then further reviewed by content specialists for possible explanations of the unusual STD and/or MH results. In the event that a problem is found with an item, necessary actions are taken to eliminate or minimize the influence of the problem item.

ACT Explore Scoring Procedures

For each of the four tests in Explore (English, Mathematics, Reading, Science), the raw scores (number of correct responses) are converted to scale scores ranging from 1 to 25. The score scale is discussed further on pages 22–23 of this manual.

The Composite score is the average of the four scale scores, rounded to the nearest whole number (0.5 rounds up). The minimum Composite score is 1; the maximum is 25.
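Because the "half rounds up" rule differs from the round-half-to-even behavior of some software, the computation is worth making explicit. A minimal sketch (the function name is hypothetical):

```python
import math

def composite_score(english, mathematics, reading, science):
    """Average the four scale scores and round to the nearest whole
    number, with halves rounding up. (Python's built-in round() rounds
    half to even, so the rule is implemented explicitly.)"""
    mean = (english + mathematics + reading + science) / 4
    return math.floor(mean + 0.5)

# Examples: (13+14+15+16)/4 = 14.5 -> 15; (14+15+16+16)/4 = 15.25 -> 15
print(composite_score(13, 14, 15, 16), composite_score(14, 15, 16, 16))
```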

In addition to the four Explore test scores and the Composite score, two subscores are reported for the English Test. As for each of the four tests, the raw scores for the subscore items are converted to scale scores. These subscores are reported on a score scale ranging from 1 to 12.

National norms are reported as cumulative percents for the four Explore test scores, the two subscores, and the Composite score. Explore norms are intended to be representative of the performance of all eighth graders (in the fall and in the spring) and ninth graders (in the fall), respectively, in the nation, and are based on administrations of Explore to classes of eighth and ninth graders in public and private schools throughout the United States. Norming procedures are discussed further on pages 24–36 of this manual.

Chapter 3
ACT’s College Readiness Standards and College Readiness Benchmarks

ACT’s College Readiness Standards

Description of the College Readiness Standards

In 1997, ACT began an effort to make Explore, Plan, and ACT test results more informative and useful. This effort yielded College Readiness Standards for each of the programs. ACT’s College Readiness Standards are statements that describe what students who score in various score ranges on the tests are likely to know and to be able to do. For example, students who score in the 16–19 range on the Plan English Test typically are able “to select the most logical place to add a sentence in a paragraph,” while students who score in the 28–32 score range are able “to add a sentence to introduce or conclude a fairly complex paragraph.” The Standards reflect a progression of skills in each of the five tests: English, Reading, Mathematics, Science, and Writing. ACT has organized the standards by strands—related areas of knowledge and skill within each test—for ease of use by teachers and curriculum specialists. The complete College Readiness Standards are posted on ACT’s website at www.act.org. They also are available in poster format from ACT Educational Services at 319.337.1040.

College Readiness Standards for Explore, Plan, and the ACT are provided for six score ranges (13–15, 16–19, 20–23, 24–27, 28–32, and 33–36) along a score scale that is common to Explore (1–25), Plan (1–32), and the ACT (1–36). Students who score in the 1–12 range are most likely beginning to develop the skills and knowledge described in the 13–15 score range. The Standards are cumulative, which means that if students score, for example, in the 20–23 range on the English Test, they are likely able to demonstrate most or all of the skills and understandings in the 13–15, 16–19, and 20–23 score ranges.

College Readiness Standards for Writing, which ACT developed in 2005, are available only for the ACT and are provided for five score ranges (3–4, 5–6, 7–8, 9–10, and 11–12) based on ACT Writing Test scores obtained (the sum of two readers’ ratings using the six-point holistic scoring rubric for the ACT Writing Test). Scores below 3 do not permit useful generalizations about students’ writing abilities.

Since Explore, Plan, and the ACT are designed to measure students’ progressive development of knowledge and skills in the same four academic areas through Grades 8–12, the Standards are correlated across programs as much as possible. The Standards in the 13–15, 16–19, 20–23, and 24–27 score ranges apply to scores for all three programs. The Standards in the 28–32 score range are specific to Plan and the ACT, and those in the 33–36 score range are specific to the ACT. Figure 3.1 illustrates the score-range overlap among the three programs.

Determining the Score Ranges for the College Readiness Standards (1997)

When ACT began work on the College Readiness Standards in 1997, the first step was to determine the number of score ranges and the width of each score range. To do this, ACT staff reviewed normative data and considered the relationships among Explore, Plan, and the ACT. This information was considered within the context of how the test scores are used—for example, the use of ACT scores in college admissions and course-placement decisions.

In reviewing the normative data, ACT staff analyzed the distribution of student scores across the three score scales (Explore 1–25, Plan 1–32, and ACT 1–36). The staff also considered course placement research that ACT has conducted over the last forty-five years. ACT’s Course Placement Service provides colleges and universities with cutoff scores that are used for placement into appropriate entry-level college courses. Cutoff scores based on admissions and course placement criteria were used to help define the score ranges of all three testing programs.


Figure 3.1. Score ranges for ACT Explore, ACT Plan, and the ACT.

[Figure 3.1 shows the score ranges 13–15, 16–19, 20–23, and 24–27 as common to all three programs, with the 28–32 range extending only to Plan and the ACT, and the 33–36 range only to the ACT.]


After analyzing all the data and reviewing different possible score ranges, ACT staff concluded that the score ranges 1–12, 13–15, 16–19, 20–23, 24–27, 28–32, and 33–36 would best distinguish students’ levels of achievement so as to assist teachers, administrators, and others in relating the test scores to students’ skills and understandings.

Developing the College Readiness Standards

After reviewing the normative data, college admissions criteria, and information obtained through ACT’s Course Placement Service, content area test specialists wrote the College Readiness Standards based on their analysis of the skills and knowledge students need in order to respond successfully to test items that were answered correctly by 80% or more of the examinees who scored within each score range. Content specialists analyzed test items taken from dozens of test forms. The 80% criterion was chosen because it offers those who use the College Readiness Standards a high degree of confidence that students scoring in a given score range will most likely be able to demonstrate the skills and knowledge described in that range.

The Process. Four ACT content teams were identified, one for each of the tests (English, Mathematics, Reading, and Science) included in the three testing programs. Each content team was provided with numerous test forms along with tables that showed the percentages of students in each score range who answered each test item correctly (the item difficulties). Item difficulties were computed separately based on groups of students whose scores fell within each of the defined score ranges.

The College Readiness Standards were identified by test, by program, beginning with the ACT. Each content team was provided with 10 forms of the ACT and the item difficulties computed separately for each score range for each of the items on the forms. For example, the mathematics content team reviewed 10 forms of the ACT Mathematics Test. There are 60 items in each ACT Mathematics Test form, so 600 ACT Mathematics items were reviewed in all. An illustrative table displaying the information provided to the mathematics content team for one ACT Mathematics Test form is shown in Table 3.1.

The shaded areas in Table 3.1 show the items that met the 0.80-or-above item difficulty criterion for each of the score ranges. As illustrated in Table 3.1, a cumulative effect can be noted: the items that are correctly answered by 80% of the students in Score Range 16–19 also appear in Score Range 20–23; the items that are correctly answered by 80% of the students in Score Range 20–23 also appear in Score Range 24–27; and so on. By using this information, the content teams were able to isolate and review the items by score ranges across test forms.

Table 3.1
Illustrative Listing of Mathematics Items by Score Range

Item    Item difficulties for students scoring in the score range of:
no.     13–15   16–19   20–23   24–27   28–32   33–36
 1       .62     .89     .98     .99    1.00    1.00
 2               .87     .98     .99     .99    1.00
 6       .60     .86     .94     .97     .99     .99
 7       .65     .92     .98     .99     .99    1.00
20               .84     .94     .97     .98     .99
27               .85     .97     .99     .99     .99
 4                       .92     .97     .99    1.00
 5                       .94     .97     .99     .99
 .                        .       .       .       .
 8                       .82     .95     .98     .99
 9                       .80     .89     .96     .99
21                       .82     .92     .97     .99
13                               .90     .97     .99
15                               .90     .97     .99
17                               .87     .98    1.00
18                               .83     .93     .98
22                               .81     .91     .98
24                               .83     .96     .98
29                               .87     .98    1.00
34                               .86     .95     .99
36                               .82     .93     .99
39                               .85     .96     .99
44                               .84     .96     .99
25                                       .95     .99
28                                       .97    1.00
 .                                        .       .
35                                       .86     .96
47                                       .86     .97
32                                               .95
33                                               .92
46                                               .90
49                                               .95
51                                               .98
52                                               .98
53                                               .92
56                                               .98
57                                               .86
58                                               .95
59                                               .86
60                                               .96
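The selection of items meeting the criterion can be sketched as follows (toy data echoing the first rows of Table 3.1; the helper function is hypothetical and only illustrates the screening step):

```python
def items_meeting_criterion(difficulty_by_range, threshold=0.80):
    """Flag, for each score range, the items whose difficulty (proportion
    correct among students scoring in that range) meets the 0.80 criterion.
    `difficulty_by_range` maps a score-range label to {item_number: difficulty}.
    The cumulative pattern described in the text follows because difficulties
    rise with the score range."""
    return {
        label: sorted(item for item, p in items.items() if p >= threshold)
        for label, items in difficulty_by_range.items()
    }

# Toy data in the spirit of Table 3.1
data = {
    "13-15": {1: 0.62, 6: 0.60, 7: 0.65},
    "16-19": {1: 0.89, 2: 0.87, 6: 0.86, 7: 0.92, 20: 0.84},
}
print(items_meeting_criterion(data))
# {'13-15': [], '16-19': [1, 2, 6, 7, 20]}
```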


Table 3.2
Number of Items Reviewed During 1997 National Review

                                       Number of items for each testing program
Content area                           Explore      Plan      ACT
English                                   40          50        75
Mathematics                               30          40        60
Reading                                   30          25        40
Science                                   28          30        40
Number of items per form                 128         145       215
Total number of test forms reviewed        4           9        10
Total number of items reviewed           512       1,305     2,150

The procedures described allowed the content teams to conceptualize what is measured by each of the academic tests. Each content team followed the same basic process as they reviewed the test items in each academic test in the three assessment programs, Explore, Plan, and the ACT:

1. Multiple forms of each academic test were distributed.
2. The knowledge, skills, and understandings that are necessary to answer the test items in each score range were identified.
3. The additional knowledge, skills, and understandings that are necessary to answer the test items in the next score range were identified. This process was repeated for all the score ranges.
4. All the lists of statements identified by each content specialist were merged into a composite list. The composite list was distributed to a larger group of content specialists.
5. The composite list was reviewed by each content specialist, and ways to generalize and to consolidate the various skills and understandings were identified.
6. The content specialists met as a group to discuss the individual, consolidated lists and prepared a master list of skills and understandings, organized by score ranges.
7. The master list was used to review at least three additional test forms, and adjustments and refinements were made as necessary.
8. The adjustments were reviewed by the content specialists and “final” revisions were made.
9. The “final” list of skills and understandings was used to review additional test forms. The purpose of this review was to determine whether the College Readiness Standards adequately and accurately described the skills and understandings measured by the items, by score range.
10. The College Readiness Standards were once again refined.

These steps were used to review test items for all four multiple-choice academic tests in all three testing programs. As work began on the Plan and Explore test items, the College Readiness Standards developed for the ACT were used as a baseline, and modifications or revisions were made as necessary.

Table 3.2 reports the total number of test items reviewed for each content area and for each testing program.


Conducting an Independent Review of the College Readiness Standards. As a means of gathering content validity evidence, ACT invited nationally recognized scholars from high school and university English, mathematics, reading, science, and education departments to review the College Readiness Standards. These teachers and researchers were asked to provide ACT with independent, authoritative reviews of the College Readiness Standards.

The content area experts were selected from among candidates having experience with and an understanding of the academic tests on Explore, Plan, and the ACT. The selection process sought and achieved a diverse representation by gender, ethnic background, and geographic location. Each participant had extensive and current knowledge of his or her field, and many had acquired national recognition for their professional accomplishments.

The reviewers were asked to evaluate whether the College Readiness Standards (a) accurately reflected the skills and knowledge needed to correctly respond to test items (in specific score ranges) in Explore, Plan, and the ACT and (b) represented a continuum of increasingly sophisticated skills and understandings across the score ranges. Each national content area team consisted of three college faculty members currently teaching courses in curriculum and instruction, and three classroom teachers, one each from Grades 8, 10, and 12. The reviewers were provided with the complete set of College Readiness Standards and a sample of test items falling in each of the score ranges, by academic test and program.

The samples of items to be reviewed by the consultants were randomly selected for each score range in all four academic tests for all three assessment programs. ACT believed that a random selection of items would ensure a more objective outcome than would preselected items. Ultimately, 17 items for each score range were selected (85 items per testing program, or a total of 255 items for all three programs). Before identifying the number of items that would comprise each set of items in each score range, it was first necessary to determine the target criterion for the level of agreement among the consultants. ACT decided upon a target criterion of 70%. It was deemed most desirable for the percentage of matches to be estimated with an accuracy of plus or minus 0.05; that is, the standard error of the estimated percentage of matches to the Standards should be no greater than 0.05. To estimate a percentage around 70% with that level of accuracy, 85 observations were needed. Since there were five score ranges, the number of items per score range to be reviewed was 17 (85 ÷ 5 = 17).
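The figure of 85 observations is consistent with the usual standard error of a sample proportion; as an illustrative check (assuming independent ratings):

$$\mathrm{SE}(\hat{p}) = \sqrt{\frac{p(1-p)}{n}} = \sqrt{\frac{0.70 \times 0.30}{85}} \approx 0.0497 \le 0.05.$$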

The consultants had two weeks to review the College Readiness Standards. Each reviewer received a packet of materials that contained the College Readiness Standards, sets of randomly selected items (17 per score range), introductory material about the College Readiness Standards, a detailed set of instructions, and two evaluation forms.

The sets of materials submitted for the experts’ review were drawn from 13 ACT forms, 8 Plan forms, and 4 Explore forms. The consultants were asked to perform two main tasks in their area of expertise: Task 1—judge the consistency between the Standards and the corresponding sample items provided for each score range; Task 2—judge the degree to which the Standards represent a cumulative progression of increasingly sophisticated skills and understandings from the lowest score range to the highest score range. The reviewers were asked to record their ratings using a five-point Likert scale that ranged from Strongly Agree to Strongly Disagree. They were also asked to suggest revisions to the language of the Standards that would help the Standards better reflect the skills and knowledge measured by the sample items.

ACT collated the consultants’ ratings and comments as they were received. The consultants’ reviews in all but two cases reached ACT’s target criterion, as shown in Table 3.3.

Table 3.3
Percentage of Agreement of 1997 National Expert Review

                       Explore                      Plan/ACT
               Task 1    Task 2    Task 1 (Plan)    Task 1 (ACT)    Task 2
English          65%       80%          75%              75%          86%
Mathematics      80%      100%          70%              95%         100%
Reading          75%       75%          75%              60%         100%
Science          95%      100%         100%              70%          80%


That is, 70% or more of the consultants’ ratings were Agree or Strongly Agree when judging whether the Standards adequately described the skills required by the test items and whether the Standards adequately represented the cumulative progression of increasingly sophisticated skills from the lowest to the highest score ranges. The two exceptions were the Explore English Test and the ACT Reading Test, where the degree of agreement was 65% and 60%, respectively. Each ACT staff content area team met to review all comments made by the national consultants. The teams reviewed all suggestions and adopted a number of helpful clarifications in the language of the Standards, particularly in the language of the Explore English Test Standards and the ACT Reading Test Standards—the two cases in which the original language had failed to meet the target criterion.

Refining the College Readiness Standards for ACT Explore and ACT Plan (2001)

In 2001, the score scale for Explore and Plan was refined. This required that the College Readiness Standards for Explore and Plan be reexamined.

The approach used in 1997 to develop the Standards was used to reexamine the Standards for Explore and Plan in 2000. Staff reviewed items, at each Explore and Plan score interval, that were answered correctly by 80% or more of the Explore and Plan examinees. Using the Plan College Readiness Standards as a baseline, Explore test items were reviewed to ensure that the Plan College Readiness Standards adequately described the skills and understandings students were being asked to demonstrate in each score range.

As in the 1997 study, a national independent panel of content experts was convened for each of the four multiple-choice academic tests to ensure that the refined Explore/Plan Standards (a) accurately reflected the skills and knowledge needed to correctly respond to test items in the common score ranges and (b) represented a continuum of increasingly sophisticated skills and understandings across the entire score range. As was the case in 1997, content area experts were identified in the areas of English, mathematics, reading, and science. Each content area team consisted of three reviewers, one each from middle school/junior high, high school, and college/university.

For each academic test, the consultants were asked to review sets of test items, arranged by score range, and the corresponding College Readiness Standards. The Plan reviewers received two sets of test items, an Explore set and a Plan set, along with the corresponding Standards. A criterion of 17 items per score range was chosen.

As was the case in 1997, the reviewers were asked to record their ratings using a five-point Likert scale that ranged from Strongly Agree to Strongly Disagree. They were also asked to suggest revisions to the language of the Standards that would help the Standards better reflect the skills and knowledge measured by the sample items. A target criterion of 70% agreement was again identified. The consultants’ reviews in all cases reached ACT’s target criterion, as shown in Table 3.4.

Periodic Review of the College Readiness Standards

In addition to the regularly scheduled independent reviews conducted by national panels of subject matter experts, ACT also periodically conducts internal reviews of the College Readiness Standards. ACT identifies three to four new forms of the ACT, Plan, and Explore (for Explore, fewer forms are available) and then analyzes the data and the corresponding test items, by score range. The purposes of these reviews are to ensure that (a) the Standards reflect the knowledge and skills being measured by the items in each score range and (b) the Standards reflect a cumulative progression of increasingly sophisticated skills and understandings from the lowest score range to the highest. Minor refinements intended to clarify the language of the Standards have resulted from these reviews.

Table 3.4
Percentage of Agreement of 2000 National Expert Review

                     Explore               Plan
               Task 1    Task 2    Task 1    Task 2
English          90%      100%       73%      100%
Mathematics      75%      100%      100%      100%
Reading         100%      100%       87%      100%
Science          75%      100%       90%      100%


Interpreting and Using the College Readiness Standards

Because new test forms for Explore, Plan, and the ACT are developed at regular intervals and because no one test form measures all of the skills and knowledge included in any particular Standard, the College Readiness Standards must be interpreted as skills and knowledge that most students who score in a particular score range are likely to be able to demonstrate. Since there were relatively few test items that were answered correctly by 80% or more of the students who scored in the lower score ranges, the Standards in these ranges should be interpreted cautiously.

It is important to recognize that the tests neither measure everything students have learned nor measure everything necessary for students to know to be successful in their next level of learning. The tests include questions from a large domain of skills and from areas of knowledge that have been judged important for success in high school, college, and beyond. Thus, the College Readiness Standards should be interpreted in a responsible way that will help students, parents, teachers, and administrators to:

• identify skill areas in which students might benefit from further instruction;

• monitor student progress and modify instruction to accommodate learners’ needs;

• encourage discussion among principals, curriculum coordinators, and classroom teachers as they evaluate their academic programs;

• enhance discussions between educators and parents to ensure that students’ course selections are appropriate and consistent with their post–high school plans;

• enhance the communication between secondary and postsecondary institutions;

• identify the knowledge and skills students entering their first year of postsecondary education should know and be able to do in the academic areas of language arts, mathematics, and science; and

• assist students as they identify skill areas they need to master in preparation for college-level coursework.

ACT’s College Readiness Benchmarks

Description of the College Readiness Benchmarks

The ACT College Readiness Benchmarks (see Table 3.5) are the minimum ACT test scores required for students to have a high probability of success in first-year, credit-bearing college courses—English Composition I, social science courses, College Algebra, or Biology. In addition to the Benchmarks for the ACT, there are corresponding Explore and Plan Benchmarks for use by students who take these programs to gauge their progress in becoming college ready in Grades 8 through 10. Students who meet a Benchmark on the ACT have approximately a 50% chance of earning a B or better and approximately a 75% chance or better of earning a C or better in the corresponding college course or courses. Students who meet a Benchmark on Explore or Plan have approximately a 50% chance of meeting the ACT Benchmark in the same subject, and are likely to have approximately this same chance of earning a B or better grade in the corresponding college course(s) by the time they graduate from high school.

Data Used to Establish the Benchmarks for the ACT

The ACT College Readiness Benchmarks are empirically derived based on the actual performance of students in college. As part of its research services, ACT provides reports to colleges to help them place students in entry-level courses as accurately as possible. In providing these research services, ACT has built an extensive database consisting of course grade and test score data from a large number of first-year students across a wide range of postsecondary institutions. These data provide an overall measure of what it takes to be successful in selected first-year college courses. Data from 214 institutions and over 230,000 students were used to establish the Benchmarks. The numbers and types of colleges varied by course. Because the sample of colleges in this study is a “convenience” sample (that is, based on data from colleges that chose to participate in ACT’s research services), there is no guarantee that it is representative of all colleges in the United States. Therefore, ACT weighted the sample so that it would be representative of all ACT-tested college students in terms of college type (2-year and 4-year) and selectivity.

Table 3.5
ACT Explore, ACT Plan, and ACT College Readiness Benchmarks

                                             Assessment and grade level
                                         Explore          Plan      ACT
Subject test    College course           Gr. 8   Gr. 9    Gr. 10    Gr. 11/12
English         English Composition I      13      14       15         18
Mathematics     College Algebra            17      18       19         22
Reading         Social Sciences            16      17       18         22
Science         Biology                    18      19       20         23


Procedures Used to Establish the Benchmarks for ACT Explore and ACT Plan

The College Readiness Benchmarks for Explore and Plan were developed using records of students who had taken Explore or Plan, followed by the ACT in Grade 11 or 12. Separate Benchmarks were developed for Explore for Grade 8 and Grade 9, and for Plan for Grade 10. The sample sizes used to develop the Explore and Plan Benchmarks ranged from 210,000 for the Explore Grade 9 Benchmarks to approximately 1.5 million for the Plan Grade 10 Benchmarks. To establish the Benchmarks, the probability of meeting the appropriate ACT Benchmark was estimated at each Explore and Plan test score point. Next, the Explore and Plan test scores were identified in English, Reading, Mathematics, and Science that corresponded most closely to a 50% probability of meeting each of the four Benchmarks established for the ACT.
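The final selection step can be sketched as follows. The manual does not specify how the probability at each score point was estimated, so the logistic regression below is only an assumed stand-in; the function name and data layout are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def benchmark_score(explore_scores, met_act_benchmark, scale=range(1, 26)):
    """Find the Explore score corresponding most closely to a 50%
    probability of later meeting the ACT Benchmark in the same subject.
    The probability at each score point is estimated here with a simple
    logistic regression over longitudinal records; only the 'closest to
    50%' selection step is taken from the text."""
    X = np.asarray(explore_scores, dtype=float).reshape(-1, 1)
    y = np.asarray(met_act_benchmark, dtype=int)
    model = LogisticRegression().fit(X, y)
    points = np.array(scale, dtype=float).reshape(-1, 1)
    probs = model.predict_proba(points)[:, 1]   # P(meet ACT Benchmark)
    return int(points[np.argmin(np.abs(probs - 0.50))][0])
```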

Intended Uses of the Benchmarks for Students, Schools, Districts, and States

ACT, Plan, and Explore results give students an indication of how likely they are to be ready for college-level work. The results let students know if they have developed or are developing the foundation for the skills they will need by the time they finish high school. Plan and Explore results provide an early indication of college readiness. Students who score at or above the College Readiness Benchmarks in English, mathematics, and science are likely to be on track to do well in entry-level college courses in these subjects. Students scoring at or above the Reading Benchmark are likely to be developing the level of reading skills they will need in all of their college courses. For students taking Explore and Plan, this assumes that these students will continue to work hard and take challenging courses throughout high school.

Researchers and policymakers can use the Benchmarks to monitor the educational progress of schools, districts, and states. Middle and high school personnel can use the Benchmarks for Explore and Plan as a means of evaluating students’ early progress toward college readiness so that timely interventions can be made when necessary, or as an educational counseling or career planning tool.

Interpreting ACT’s Test Scores with Respect to Both ACT’s College Readiness Standards and ACT’s College Readiness Benchmarks

The performance levels on ACT’s tests necessary for students to be ready to succeed in college-level work are defined by ACT’s College Readiness Benchmarks. Meanwhile, the skills and knowledge a student currently has (and areas for improvement) can be identified by examining the student’s test scores with respect to ACT’s College Readiness Standards. These two empirically derived tools are designed to help a student translate test scores into a clear indicator of the student’s current level of college readiness and to help identify key knowledge and skill areas needed to improve the likelihood of achieving college success.

Alignment of ACT’s College Readiness Standards with the Common Core State Standards

The Common Core State Standards, developed by the National Governors Association and the Council of Chief State School Officers in partnership with ACT, the College Board, and Achieve, align well with ACT’s College Readiness Standards in both of the large areas in which Common Core State Standards have been published to date: namely, the Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects, and the Common Core State Standards for Mathematics. The alignment of ACT’s College Readiness Standards in English, Reading, and Mathematics is sufficient to encourage us to believe that scores earned in ACT’s testing programs can provide evidence of current student performance relative to the Common Core State Standards. Given ACT’s research base, the percentage of students meeting ACT’s College Readiness Benchmarks in English, Reading, and Mathematics can serve as a measure of what percentage of students could potentially meet or exceed the Common Core’s English Language Arts and Mathematics standards (ACT, 2010).

Chapter 4
Technical Characteristics of the ACT Explore Tests of Educational Development

This chapter discusses the technical characteristics—the score scale, norms, equating, and reliability—of the Explore tests of educational development.

Scaling and Norming

The information on the scaling presented here was collected in a special study to rescale the tests. Data for this study were collected in the fall of 1999. Data for the norms come from a special study done in the fall of 2010. These studies are briefly described below, with fuller discussions following.

Scaling Study (1999). Data to scale the Explore tests were obtained from a sample of eighth- and tenth-grade students in the fall of 1999. The eighth graders were administered the Explore battery and a scaling test in one of the four subject areas. The tenth graders were administered the Plan battery and a scaling test in one of the four subject areas. The scaling tests taken by the eighth and tenth graders were identical and were used to link Explore to the Plan score scales in each of the four subject areas. Approximately 11,500 students participated in the scaling study, and about 80% of the student records were retained for analysis.

Norming Study (2010). Data to establish the norms for the Explore tests were collected in a special study done in the fall of 2010. Both eighth and ninth graders participated in the study. Each student took the Explore battery. The total sample consisted of approximately 20,000 students across both grades. To create the norms, the sample data were combined with the data from schools that use Explore operationally. (The 2010 norming study was preceded by a similar norming study in 2005 [see ACT, 2007].)

Sampling for the 1999 Study

Data for the 1999 Explore scaling/norming study were obtained from samples of eighth-, ninth-, and tenth-grade students, which were intended to be nationally representative. The eighth graders were each administered the Explore battery and a scaling test in one of the four subject areas. Similarly, the tenth-grade examinees took the Plan battery and a scaling test in one of the four subject areas. The scaling tests taken by both eighth- and tenth-grade examinees were identical and were used to link Explore to the Plan score scales in each of the four subject areas. The Explore battery scores from the eighth-grade and ninth-grade examinees were used to obtain nationally representative norms for Fall Grade 8, Fall Grade 9, and (interpolated) Spring Grade 8. Plan battery scores from the sample of fall-tested tenth-grade examinees were used to obtain nationally representative Fall Grade 10 norms.

Sample Design and Data Collection. The Fall 1999 study had multiple goals, and the sampling design was developed to take those goals into account. In particular, the goals were to:

1. rescale the Explore tests;
2. develop nationally representative norms for Explore Grade 8 Fall;
3. develop nationally representative norms for Explore Grade 8 Spring;
4. develop nationally representative norms for Explore Grade 9 Fall;
5. develop nationally representative norms for Plan Grade 10 Fall; and
6. develop norms for the college-bound students in the Plan Grade 10 Fall national sample.

The results of the 1999 scaling are reported in this section. The results of the 2010 norming study, which supersede the 1999 and 2005 norming results, are reported on pages 24–36.

The eighth-grade Explore norms for spring-tested students were to be interpolated from the norms for the fall-tested eighth graders and the norms for the fall-tested ninth graders. Because of this, it was decided to sample both eighth- and ninth-grade students from the same schools. However, many schools do not have both an eighth and a ninth grade. To overcome this, school contact was made at the district level, with the intention of getting both an eighth and a ninth grade from a district to participate in the study.

The target population consisted of students enrolled in the eighth, ninth, and tenth grades in schools in the United States. The overall sampling design was two-stage, with stratification of the first-stage units (schools). The explicit stratification variable was school size. In order to obtain a nationally representative sample, schools were sorted on geographic region within school size strata. A systematic sample was then selected from each school size stratum.

The goal of the sampling was to estimate any proportion to within .05 with probability .95, equivalent to a standard error of 0.025 for the estimate, assuming the variable of interest is normally distributed. That is, suppose the value of interest is the proportion of students in Grade 8 who will score at or below an 18 on the Explore English Test. The sample size is chosen so that 95 times out of 100, the sample will produce an estimate of the proportion at or below 18 that is within .05 of the true proportion. This should be true for any proportion calculated.
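A proportion near .5 is estimated least precisely, so the effective sample size implied by a standard error of 0.025 in that worst case would be (an illustrative calculation; the figure is implied rather than quoted in the manual):

$$n = \frac{p(1-p)}{\mathrm{SE}^2} = \frac{0.5 \times 0.5}{0.025^2} = 400.$$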

In anticipation that some schools would not participate in the study, many more schools were invited to participate than were required to achieve the targeted precision. During the recruitment, the number of participating schools in each stratum was carefully monitored to maintain the representativeness of the sample with respect to the stratification variables.



Data Editing. Data were edited at the school, classroom, and student levels. The following editing rules were used to exclude data from the scaling data set:

1. If an inappropriate grade (a blank was okay) was gridded.
2. If there was a score of zero on any of the four main tests (a zero on a subscore was okay).
3. If too many items on any test were omitted. The definition of “too many items” depended on the test length.
4. If the examinee, classroom, or school was identified on an irregularity report.
5. If the examinee marked special testing or extended time on the answer sheet.
6. If the administrator mistimed the test.
7. If the battery was administered over two or more days, rather than on one day.
8. If the scaling test was not administered.
9. If the scaling test was administered before the battery.
10. If the scaling test was administered on the same day as the battery.
11. If the scaling test was not spiraled within classroom.
12. If the examinee’s score was voided by the administrator.
13. If the examinee volunteered to be in the study, rather than being in a selected classroom.

The following editing rules were used with the norming data:

1. If an inappropriate grade (a blank was okay) was gridded.
2. If there was a score of zero on any of the four main tests (a zero on a subscore was okay).
3. If too many items on any test were omitted. The definition of “too many items” depended on the test length.
4. If the examinee, classroom, or school was identified on an irregularity report.
5. If the examinee marked special testing or extended time on the answer sheet.
6. If the administrator mistimed the test.
7. If the battery was administered over two or more days, rather than on one day.
8. If the examinee’s score was voided by the administrator.
9. If the school district was represented in either the eighth-grade or ninth-grade samples, but not both.

These rules were applied separately for each test, to form four scaling groups and four norming groups. Extensive data cleanup measures were taken to ensure that the data available for analysis were the best possible, given the data that were collected. About 16,500 students participated in the norming; approximately 64% of the student records collected for the norming were retained for analysis. About 11,500 students participated in the scaling; approximately 80% of the eighth- and tenth-grade student records collected for the scaling were retained for analysis. The final sample sizes are given in Table 4.1.

Weighting. For the scaling and norming process, individual examinee records were multiplied by weights. The weighting was an attempt to adjust for some of the nonresponse seen in both schools and strata, and to thus better represent the populations. The weight for a student in school i within stratum j is given by

    Weight(i, j) = (Nij / nij) · (Mj / mj) · K,

where
    Nij = the total number of students at school i in stratum j,
    nij = the number of students at school i in stratum j in the sample,
    Mj = the total number of schools in stratum j, and
    mj = the number of schools in stratum j in the sample.

K is a constant chosen to scale the weights so that the total weighted sample size is equal to the size of a random sample that would give the same precision. Weights were calculated for Grades 8, 9, and 10 separately.

Response Rates. One type of nonresponse in this study was among schools: not every school invited to participate did so. Attempts were made to choose replacement schools from the same strata as the schools they were replacing so that the obtained sample would be representative with respect to the stratification variable.
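The weight formula transcribes directly into code. A minimal sketch (the counts in the example are hypothetical, and K is left at 1.0 for simplicity):

```python
def sampling_weight(N_ij, n_ij, M_j, m_j, K=1.0):
    """Weight for a student at school i in stratum j, following the
    formula above: the inverse of the student's within-school sampling
    fraction times the inverse of the school's within-stratum sampling
    fraction, rescaled by the constant K."""
    return (N_ij / n_ij) * (M_j / m_j) * K

# Hypothetical example: a school of 300 students of whom 60 tested,
# in a stratum where 8 of 120 schools participated.
print(sampling_weight(N_ij=300, n_ij=60, M_j=120, m_j=8))  # 75.0
```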

Table 4.1
Sample Sizes

Grade    Number of schools    Sample size for norms (students)    Sample size for scaling (students)
8                47                       4,879                               7,065
9                41                       6,660                                 —
10               49                       5,004                               4,504


Nevertheless, it is conceivable that schools’ willingness to participate in this study could be related to their students’ academic development, independent of school size. If this were true, then the nonresponse among schools would introduce a bias in the results. In fact, with school-level response rates as low as those shown in Table 4.2, the error due to this nonresponse is potentially the largest of all sources of error.

A second type of nonresponse was among students within a participating school. Within-school participation rates are provided in Table 4.3. The reported rates are the proportion of students actually tested relative to the number of enrolled students reported. In general, the participation rate is quite high, typically about 80%. Any school that tested fewer than 50% of its students was called and asked why the participation rate was low. If the low value was due to random subsampling, the school was included in the study. If no reason could be given, or the reason was judged to affect the quality of the results, the records from that school were deleted. Four schools in the tenth-grade sample were deleted for this reason.


Table 4.2
Response Rates by Grade

Grade   Number of districts/schools contacted   Number agreeing to participate   Number in the final sample
 8      925                                     72                               47
 9      925                                     55                               41
10      505                                     67                               49

Table 4.3
Summary of Response Rates within Schools

Grade   Minimum   Median   Maximum
 8      52%       76%      100%
 9      21%       88%      100%
10      39%       82%      100%


Obtained Precision. Sampling theory allows the estimation of precision and effective sample sizes. The targeted level of precision was to estimate any proportion, P(x), to within 0.05 with probability .95. The obtained levels of precision are the estimated standard errors of P(x) in Table 4.4. The effective sample sizes, which are based on precision, were smaller than the actual sample sizes. Because separate data sets were constructed for each of the four subject areas, the precision of the samples varies.

Scaling

Scale scores are reported for the Explore English, Mathematics, Reading, and Science Tests, and for the Usage/Mechanics and Rhetorical Skills subscores. A Composite score, calculated by averaging over the four test scores and rounding to an integer, is also reported. Because subscores and test scores were scaled separately, there is no arithmetic relationship between subscores and the test scores. The Usage/Mechanics and Rhetorical Skills subscores will not necessarily sum to the English Test score.

The Score Scale. Scale scores for the four tests and the Composite range from a low of 1 to a high of 25. Scale scores for the subscores range from 1 to 12.
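A minimal sketch of the Composite calculation described above (Python; the treatment of exact .5 averages is an assumption here, since the text states only that the average is rounded to an integer):

```python
# Composite = unweighted average of the four test scale scores,
# rounded to an integer. Rounding halves upward is an assumption
# for this sketch; the manual says only "rounding to an integer."
import math

def composite(english, mathematics, reading, science):
    avg = (english + mathematics + reading + science) / 4
    return math.floor(avg + 0.5)

print(composite(15, 16, 14, 17))  # average 15.5 -> 16
```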

The scores reported for the four Explore tests are approximately on the same scale as the scores on the corresponding tests of the Plan battery (ACT, 1999). Explore is intended to be a shorter and less difficult version of Plan, and these testing programs have similar, although not identical, content specifications. Explore was designed primarily for eighth graders (although it is also used with ninth graders), whereas Plan is primarily for tenth graders. To facilitate longitudinal comparisons between Explore scores for eighth graders and Plan scores for tenth graders, the score scales for Explore were constructed with the consideration that they be approximately on the same scale as Plan. Being “on the same scale” means that the Explore test score obtained by an examinee can be interpreted as approximately the Plan test score the examinee would obtain if that examinee had taken Plan at the time of the Explore testing. If this property were exactly achieved, the mean Explore and Plan scale scores would be the same for any given group of examinees.

The rationale for making the maximum Explore scale score lower than the maximum Plan score was to leave room at the top of the scale for educational development that occurs between the eighth and tenth grades. The Plan tests are intended to assess skills typically achieved by the tenth grade. Some of these skills are not expected in the eighth grade and are not assessed by Explore. Making 25 the maximum scale score on the Explore tests, rather than 32 as for the Plan tests, ensures that Explore and Plan are not on the same scale at high values of the Plan scale.

The Plan scale was constructed to be approximately on the same scale as the ACT. Even though Explore and Plan are approximately on the same scale, and Plan and the ACT are approximately on the same scale, it cannot be stated that Explore and the ACT are approximately on the same scale. The approximate equality of the scales holds for adjacent batteries, but Explore and the ACT are too disparate in subject matter and difficulty (eighth grade versus twelfth grade) for the same-scale property to extend from Explore directly to the ACT.

Table 4.4
Precision of the Samples

Grade   Analysis              Estimated standard error of P(x)   Coefficient of variation   Effective sample size
 8      English Scaling       0.031                                                         262
        Mathematics Scaling   0.031                              0.06                       264
        Reading Scaling       0.032                                                         252
        Science Scaling       0.032                                                         238
 9                            0.038                              0.05                       172
10      English Scaling       0.019                                                         679
        Mathematics Scaling   0.021                              0.04                       549
        Reading Scaling       0.027                                                         331
        Science Scaling       0.029                                                         295


The Scaling Process. For each of the four subject areas, the scaling test taken by both eighth- and tenth-grade examinees was used to link the Explore scale to the Plan scale. A strong true score model was fit to the Plan and Explore number-correct score distributions for each test, and to the number-correct score distributions for the four scaling tests. Estimates of the true score distributions were used to compute scales (one for each Explore test) that were close to the corresponding Plan scales.

The score scales constructed for the four tests based on the Fall 1999 data were first used operationally in the fall of 2001. The score scales introduced in the fall of 2001 were different from the score scales used prior to Fall 2001. Scores reported for the four Explore test scores from administrations prior to Fall 2001 are not interchangeable with scores reported for administrations during or after the fall of 2001. Table 4.5 is a concordance relating scores on the original scale (used in administrations prior to Fall 2001) to the current scale (used in Fall 2001 and later administrations).

Explore was first administered operationally in 1992 using scales that were developed in a special study conducted in that year. The scales for the four tests were replaced in 2001, whereas the scales for the two subscores remain those introduced in 1992. The subscore scales were constructed to have means of approximately 7 and standard errors of measurement of approximately 1 for examinees in the 1992 sample. A procedure described by Kolen (1988) was used to construct the scales. More complete descriptions of the 1992 scaling can be found in two previous ACT Explore Technical Manuals (1994 and 1997).

Table 4.5
Original to New Scale Concordance

Original                New scale scores                              Original
scale score   English   Mathematics   Reading   Science   Composite   scale score
 1             1         1             7         2          1          1
 2             3         1             8         4          3          2
 3             5         2             8         6          5          3
 4             7         3             9         8          6          4
 5             8         3             9         9          8          5
 6             9         5            10        11          9          6
 7             9         7            10        12         10          7
 8            10         9            11        12         11          8
 9            10        10            11        13         11          9
10            11        11            12        14         12         10
11            12        12            12        15         13         11
12            12        13            13        15         13         12
13            13        14            13        15         14         13
14            14        15            14        16         15         14
15            15        15            14        17         15         15
16            16        16            15        17         16         16
17            17        17            15        18         17         17
18            18        17            16        18         18         18
19            18        18            17        19         18         19
20            20        18            18        19         19         20
21            22        19            19        20         20         21
22            23        21            21        21         21         22
23            23        23            22        22         22         23
24            25        24            23        23         24         24
25            25        25            25        25         25         25



Norming

Sampling for the 2010 Norming Study

The norms for Explore tests are intended to represent national populations of eighth- and ninth-grade students. Data for the Explore norming study were obtained from three places: (a) the group of schools that used the Explore test during the fall of 2010, (b) nonuser schools that participated in the 2010 Explore equating study, and (c) a sample of schools from among the nonusers of Explore drawn specifically for the norming. A sample was taken for both the eighth and ninth grades. For the schools in the sample, the students were given a complete Explore battery. The data from the samples of nonuser schools were combined with the information from the user schools to create the norms. This was done for both eighth-grade fall norms and ninth-grade fall norms. The norms for the spring of Grade 8 were then calculated by interpolation.

Because of the interpolation, it was advantageous to use eighth- and ninth-grade samples from the same schools. Some schools do not have both an eighth and a ninth grade, so efforts were made to include schools with Grade 9 from the same districts as selected schools with Grade 8. Note that this means that some schools with a Grade 9 were more likely to be chosen than others, depending on the number of eighth-grade schools that feed into that ninth-grade school. It is unlikely that the chance of selection for ninth-grade schools varied enough to affect the norms.

Sample Design. The sample design for the special norming study schools was a stratified cluster sample. The stratification variables are size of school and region of country. The cluster consists of all students at a particular school. The list of schools used was obtained from the Market Data Retrieval Educational Database in 2009. All schools that had ordered Explore, either for the eighth or ninth grade, were deleted. The remaining schools were separated into the strata, and random samples were proportionally selected from different strata for invitation to the study. Because it was anticipated that many schools would decline the invitation, many more schools were invited than were needed for the study. The actual numbers that were invited, that agreed to participate, that did participate, and that were used in the final calculation are included in Table 4.6. (Note that schools invited for the equating study also belonged to the nonuser group.)

Data Editing. Data from the participating nonuser

schools and from user schools were edited using the following rules.

A student record was excluded under any of the conditions listed below:
1. The reporting grade did not match the grade level under study.
2. The Explore Composite score was not in the valid range.
3. The student had special accommodations.
4. The student was not fall tested.
5. The test form code was not valid.

A school was excluded under any of the conditions listed below:
1. The school is not a public, private, or Catholic school.
2. The school is an adult school or a career/ed tech school.
3. The number of tested students in the school was less than 10, and the ratio between the expected enrollment and the number of tested students was higher than 10 for a nonuser school or higher than 20 for a user school.


Table 4.6
Response Information by School

Invited   Grade   Number agreeing to participate   Number participated   Number in the final sample (including equating sample)
12,669    8       93                               69                    248
          9       108                              87                    86

Table 4.7
Response Rates within Schools

Grade   Minimum   Median   Maximum
8       52%       100%     100%
9       48%       100%     100%


Under these rules, the eighth-grade norms are based on the records from 10,429 students from 248 nonuser schools and 731,458 students from 6,453 user schools. The ninth-grade norms are based on records from 9,450 students from 86 nonuser schools and 230,898 students from 1,574 user schools.

Weighting. For the norming process, each individual student record is weighted according to its proportion in the population relative to the proportion in the sample. The goal of the weighting is to make the sample representative of the population, and to adjust for nonresponse. In both grades, the weight was the product of two terms:

Weight(i, j) = (M_j / m_j) · (N_ij / n_ij),

where
M_j = the total number of schools in the population in stratum j,
m_j = the total number of schools in the sample in stratum j,
N_ij = the total number of students in school i in stratum j, and
n_ij = the sample number of students in school i in stratum j.

Note that the first component of the weight is equal to 1 for user schools, as there is no sample to adjust for.

Nonresponse and Bias. There are two basic types of nonresponse in this study: nonresponse by schools and nonresponse by students. Nonresponse by schools occurs because not every school contacted agrees to participate. Nonresponse by students occurs because not every student at the given grade level in the school takes the test. Bias can occur due to nonresponse if the schools or students who do not respond are different from the schools or students who do, with regard to test scores. Table 4.6 gives the response information for schools, and the response rate, as is typical, is quite low. Table 4.7 gives the response rates within schools. These are quite high, typically 100%. Therefore, bias due to nonresponse within schools is unlikely to be a problem. Bias due to nonresponse at the school level was potentially more serious. To allow for this, various demographic variables were checked to see if the sample appeared representative. The demographic summary is given in Tables 4.8 and 4.9.

Precision Estimates. Based on the sample value, standard errors for proportions can be calculated, and those for the eighth- and ninth-grade median scores are given in Table 4.10. Standard errors for scores other than the median would be smaller. The standard error was based on the Composite score; standard errors for the subject area tests would be similar.

Norming Results. Scale score summary statistics for examinees in the 2010 nationally representative norming sample of fall-tested eighth graders and ninth graders are given in Table 4.11. Examinee scores were weighted using the weighting procedure described in the weighting section, and the distributions were smoothed using the log-linear method.

No data were collected on spring-tested eighth graders. Spring eighth-grade norms were interpolated based on the assumption that academic growth between fall of the eighth grade and fall of the ninth grade is linear, and that the three months of summer count as only one month.
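A sketch of this interpolation logic (Python). The placement of the spring test date within the school year is an assumption here; the manual specifies only linear growth with the three summer months counted as one:

```python
# Interpolate a spring Grade 8 value between the fall Grade 8 and
# fall Grade 9 values. Growth is assumed linear in "effective
# months": the three summer months count as one, so the fall-to-fall
# interval spans 9 school months + 1 = 10 effective months. Placing
# spring testing 6 school months after fall testing is an assumption
# for this sketch, as are the input values.
def spring_grade8_value(fall_g8, fall_g9, months_elapsed=6,
                        effective_year=10):
    t = months_elapsed / effective_year
    return fall_g8 + t * (fall_g9 - fall_g8)

print(spring_grade8_value(40.0, 50.0))  # hypothetical percents -> 46.0
```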

Data from the national sample were used to develop percents-at-or-below for the four Explore test scores, the Composite score, and the two subscores. The percent-at-or-below corresponding to a scale score is defined as the percent of examinees with scores equal to or less than the scale score. Tables 4.12, 4.13, and 4.14 contain percents-at-or-below for the Explore test scores, subscores, and Composite score. Table 4.12 contains the norms (i.e., the percents-at-or-below) for fall eighth-grade students based on the 2010 national sample of fall eighth graders. Table 4.13 presents the norms for spring eighth-grade students. The spring eighth-grade norms are interpolated values obtained from the 2010 national samples of fall eighth-grade students and fall ninth-grade students. Table 4.14 contains the norms for fall ninth-grade students based on the 2010 national sample of fall ninth-grade students.
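The percent-at-or-below is simply a cumulative percentage over the weighted score distribution, as this short sketch illustrates (Python; the frequencies are hypothetical, not norming data):

```python
# Cumulative percent of (weighted) examinees at or below each scale
# score. The weighted frequencies here are hypothetical.
weighted_freq = {1: 2, 2: 5, 3: 10, 4: 20, 5: 30}  # score -> weight sum

total = sum(weighted_freq.values())
cum = 0.0
for score in sorted(weighted_freq):
    cum += weighted_freq[score]
    print(score, round(100 * cum / total))
# prints: 1 3, 2 10, 3 25, 4 55, 5 100
```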

An examinee’s standing on different tests should be compared by using the percents-at-or-below shown in the norms tables and not by using scale scores. The reason for preferring percents-at-or-below for such comparisons is that the scales were not constructed to ensure that, for example, a scale score of 6 on the English Test is comparable to a 6 on the Mathematics, Reading, or Science Tests. In contrast, examinee percents-at-or-below on different tests indicate standings relative to the same comparison group.

Even comparison of percents-at-or-below does not permit comparison of standing in different skill areas in any absolute sense. The question of whether a particular examinee is stronger in science reasoning than in mathematics, assessed by the corresponding tests, can be answered only in relation to reference groups of other examinees. Whether the answer is “yes” or “no” can depend on the group.

Estimated Plan Composite Score Ranges. For every Explore Composite score, ACT constructs an expected Plan Composite score range. These expected score ranges are computed for each of four different Explore testing season/grades (i.e., Fall Grade 8, Spring Grade 8, Fall Grade 9, and Spring Grade 9) combined with Fall Grade 10 Plan test dates, such that there were four groups of students, each with a different interval between Explore and Plan testing. These groups are outlined in Table 4.15. For a given Explore season/grade (e.g., Fall Grade 8), scores from three years of data were matched to Plan Fall Grade 10 scores from three years of data. Then, for each of the four Explore season/grade groups, a two-way (Explore by Plan) frequency table was created. Within each table, for every Explore score, the approximate middle 75% of the Plan score distribution was selected as the expected Plan Composite score range. The Plan Composite score range estimates the score that an examinee would obtain in the fall of his or her sophomore year.
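The construction of the middle-75% range for one Explore score can be sketched as follows (Python; trimming roughly 12.5% from each tail is an assumed reading of "approximate middle 75%", since the manual does not give the exact selection rule, and the counts are hypothetical):

```python
# For one Explore Composite score: given that row's distribution of
# Plan Composite scores, take an interval around the middle holding
# roughly 75% of examinees. Counts are hypothetical.
import numpy as np

plan_scores = np.repeat(np.arange(13, 21),
                        [3, 10, 25, 30, 20, 8, 3, 1])  # 100 examinees
low, high = np.percentile(plan_scores, [12.5, 87.5])
low, high = int(np.floor(low)), int(np.ceil(high))
hits = np.sum((plan_scores >= low) & (plan_scores <= high))
print(low, high, hits / plan_scores.size)  # 14 17 0.85 for these counts
```

With real row counts, the interval endpoints would be tuned so that the coverage lands as close to 0.75 as the discrete score distribution allows.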



Table 4.8
Demographic Characteristics for Norming Sample and Population,a Grade 8

Category                                    Sample percentage   Population percentage
Gender
  Male                                      51                  50
  Female                                    49                  50
Racial/Ethnic Origin
  African American/Black                    12                  15
  American Indian/Alaska Native              1                   1
  Caucasian American/White                  63                  59
  Hispanic                                  13                  19
  Asian American                             3                   5
  Other/Multiracial/Prefer Not to Respond    8                  —
School Affiliation
  Private                                   38                  38
  Public                                    62                  62
Geographic Region
  East                                      39                  39
  Midwest                                   24                  24
  Southwest                                 12                  12
  West                                      25                  25

aPopulation percentages for gender and race come from Keigher (2009, pp. 10–11). Population percentages for school affiliation and geographic region come from the Market Data Retrieval Educational Database, 2010. Note: Due to rounding, percentages may not sum to 100.

Table 4.16 is for students who took Explore in fall of Grade 8. The rows contain the Explore Composite scores, which range from 1 to 25, and the columns contain the Plan Composite scores, which range from 1 to 32. The cell corresponding to Explore Composite score 15 and Plan Composite score 17 contains 26,629 examinees. This means that of the 960,460 examinees, 26,629 examinees received an Explore Composite score of 15 during the fall of Grade 8 and a Plan Composite score of 17 during the fall of Grade 10. Consider all examinees in Table 4.16 who received an Explore Composite score of 15, regardless of their Plan Composite score. There are 116,051 such examinees, and their Plan scores range from 2 to 31. Notice that the number 116,051 is listed in the column labeled Row Total. This column lists all of the examinees that have a particular Explore Composite score regardless of their Plan Composite score.

If consideration is just given to the middle of the Plan score range containing approximately 75% of the 116,051 scores, then that reduced range of Plan Composite scores is 15 to 18. This means that of the 116,051 examinees who received a Composite score of 15 on Explore, 86,793 (75%) had Plan Composite scores between 15 and 18, inclusive. The number 86,793 is listed in the column labeled “Row Hits,” and 0.75 is listed in the column labeled “Prob. Cov.” “Row Hits” is the number of examinees in the middle range, and “Prob. Cov.” is the proportion of examinees in the middle range. This range of scores is shaded. All of these 75% prediction intervals are conveniently listed in Table 4.20 along with their widths.

The probabilities of coverage values are fairly constant across the Explore Composite score range, but less so at the extremes of the Explore Composite score range (i.e., beyond 5 and 25). As can be seen in the very bottom entry in the probability of coverage column, the proportion of all 960,460 examinees whose estimated Plan Composite score prediction interval contained their obtained Plan Composite score was 0.75. This is the overall probability of coverage.


Table 4.9
Demographic Characteristics for Norming Sample and Population,a Grade 9

Category                                    Sample percentage   Population percentage
Gender
  Male                                      50                  50
  Female                                    50                  50
Racial/Ethnic Origin
  African American/Black                     9                  15
  American Indian/Alaska Native              1                   1
  Caucasian American/White                  58                  59
  Hispanic                                  23                  19
  Asian American                             2                   5
  Other/Multiracial/Prefer Not to Respond    9                  —
School Affiliation
  Private                                   45                  28
  Public                                    55                  72
Geographic Region
  East                                      39                  39
  Midwest                                   22                  22
  Southwest                                 14                  14
  West                                      25                  25

aPopulation percentages for gender and race come from Keigher (2009, pp. 10–11). Population percentages for school affiliation and geographic region come from the Market Data Retrieval Educational Database, 2010. Note: Due to rounding, percentages may not sum to 100.

Table 4.10
Estimated Standard Errors for Proportions

Grade   Estimated standard error
8       0.021
9       0.032


Table 4.11
Scale Score Summary Statistics for ACT Explore Tests and Subscores

Grade 8 national sample

Statistic   English   Mathematics   Reading   Science   Composite   Usage/Mechanics   Rhetorical Skills
Mean        14.7      15.5          14.6      16.6      15.5        7.6               7.3
SD           4.2       3.5           3.9       3.3       3.3        2.4               2.2
Skewness     0.2       0.0           0.6      −0.1       0.3       −0.4               0.0
Kurtosis     2.8       4.1           3.0       3.9       2.8        2.6               2.6

Grade 9 national sample

Statistic   English   Mathematics   Reading   Science   Composite   Usage/Mechanics   Rhetorical Skills
Mean        15.7      16.3          15.4      17.1      16.2        8.0               7.8
SD           4.3       3.6           4.2       3.5       3.4        2.3               2.2
Skewness     0.1       0.0           0.4      −0.1       0.2       −0.5              −0.1
Kurtosis     2.4       3.7           2.6       3.7       2.6        2.8               2.5


Table 4.12
ACT Explore 2010 National Norms for Fall Grade 8
Percent at or below

Scale score   English   Mathematics   Reading   Science   Usage/Mechanics   Rhetorical Skills   Composite   Scale score
25            100       100           100       100                                             100         25
24             99        98            98        98                                              99         24
23             97        97            97        97                                              99         23
22             95        96            95        95                                              98         22
21             93        95            93        93                                              96         21
20             89        93            91        90                                              92         20
19             85        90            88        84                                              88         19
18             81        84            84        75                                              82         18
17             75        75            79        63                                              74         17
16             69        64            72        49                                              64         16
15             61        50            64        35                                              53         15
14             52        36            54        24                                              41         14
13             42        25            44        15                                              29         13
12             32        16            33         9        100               100                 18         12
11             23        10            23         5         97                97                 10         11
10             15        16            14         3         91                92                  5         10
 9              9         4             7         2         78                83                  2          9
 8              5         3             3         1         61                70                  1          8
 7              3         2             1         1         44                54                  1          7
 6              1         1             1         1         30                37                  1          6
 5              1         1             1         1         19                21                  1          5
 4              1         1             1         1         12                10                  1          4
 3              1         1             1         1          6                 4                  1          3
 2              1         1             1         1          2                 1                  1          2
 1              1         1             1         1          1                 1                  1          1

Mean           14.7      15.5          14.6      16.6       7.6               7.3                15.5        Mean
SD              4.2       3.5           3.9       3.3       2.4               2.2                 3.3        SD


Table 4.13
ACT Explore 2010 National Norms for Spring Grade 8
Percent at or below

Scale score   English   Mathematics   Reading   Science   Usage/Mechanics   Rhetorical Skills   Composite   Scale score
25            100       100           100       100                                             100         25
24             99        98            98        97                                              99         24
23             96        96            96        96                                              98         23
22             94        95            94        94                                              97         22
21             91        94            92        91                                              94         21
20             87        91            89        88                                              90         20
19             82        87            85        81                                              85         19
18             77        81            81        72                                              78         18
17             71        71            75        60                                              70         17
16             64        59            68        46                                              60         16
15             56        45            60        33                                              49         15
14             47        33            50        22                                              37         14
13             38        22            40        14                                              26         13
12             28        14            30         9        100               100                 16         12
11             20         9            20         5         96                96                  9         11
10             13         6            12         3         88                90                  4         10
 9              8         4             6         2         75                79                  2          9
 8              4         2             2         1         57                65                  1          8
 7              2         2             1         1         40                49                  1          7
 6              1         1             1         1         27                32                  1          6
 5              1         1             1         1         17                18                  1          5
 4              1         1             1         1         11                 8                  1          4
 3              1         1             1         1          5                 3                  1          3
 2              1         1             1         1          2                 1                  1          2
 1              1         1             1         1          1                 1                  1          1

Mean           15.2      15.9          15.0      16.8       7.8               7.6                15.9        Mean
SD              4.3       3.6           4.1       3.4       2.4               2.2                 3.4        SD


Table 4.14
ACT Explore 2010 National Norms for Fall Grade 9
Percent at or below

Scale score   English   Mathematics   Reading   Science   Usage/Mechanics   Rhetorical Skills   Composite   Scale score
25            100       100           100       100                                             100         25
24             98        97            97        97                                              99         24
23             95        96            95        95                                              98         23
22             92        94            93        93                                              96         22
21             88        92            90        90                                              92         21
20             84        89            87        85                                              88         20
19             79        85            83        79                                              82         19
18             73        77            78        69                                              74         18
17             67        67            71        57                                              65         17
16             59        54            64        43                                              55         16
15             51        41            55        31                                              44         15
14             42        29            46        20                                              33         14
13             33        19            37        13                                              23         13
12             25        12            27         8        100               100                 14         12
11             17         8            18         5         95                96                  8         11
10             11         5            11         3         86                88                  4         10
 9              6         3             5         2         71                76                  1          9
 8              3         2             2         1         53                61                  1          8
 7              2         1             1         1         36                44                  1          7
 6              1         1             1         1         23                28                  1          6
 5              1         1             1         1         15                15                  1          5
 4              1         1             1         1          9                 7                  1          4
 3              1         1             1         1          5                 2                  1          3
 2              1         1             1         1          1                 1                  1          2
 1              1         1             1         1          1                 1                  1          1

Mean           15.7      16.3          15.4      17.1       8.0               7.8                16.2        Mean
SD              4.3       3.6           4.2       3.5       2.3               2.2                 3.4        SD

Table 4.15
Student Data Used in Constructing the Expected Fall Grade 10 Plan Score Ranges

Explore testing                                             Years of test data matched
season/grade     Matched group   Sample size   Explore                             Plan
Fall Grade 8     Group 1         960,460       2005–2006, 2006–2007, 2007–2008     2007–2008, 2008–2009, 2009–2010
Spring Grade 8   Group 2         69,747        2005–2006, 2006–2007, 2007–2008     2007–2008, 2008–2009, 2009–2010
Fall Grade 9     Group 3         314,310       2006–2007, 2007–2008, 2008–2009     2007–2008, 2008–2009, 2009–2010
Spring Grade 9   Group 4         20,809        2006–2007, 2007–2008, 2008–2009     2007–2008, 2008–2009, 2009–2010

Table 4.16
Observed Counts and Probabilities of Coverage for Composite Scores: ACT Explore Fall Grade 8 to ACT Plan Fall Grade 10, Using ACT Explore Data from Fall 2005/06, 2006/07, and 2007/08 and ACT Plan Data from Fall 2007/08, 2008/09, and 2009/10

[The body of this table, a 25 × 32 cross-tabulation of ACT Explore Composite scores (rows, 1–25) by ACT Plan Composite scores (columns, 1–32) with Row Total, Row Hits, and Prob. Cov. columns, is not reproducible from this extraction. Overall: 960,460 examinees, 721,312 row hits, probability of coverage 0.75.]

Table 4.17
Observed Counts and Probabilities of Coverage for Composite Scores: ACT Explore Spring Grade 8 to ACT Plan Fall Grade 10, Using ACT Explore Data from Spring 2005/06, 2006/07, and 2007/08 and ACT Plan Data from Fall 2007/08, 2008/09, and 2009/10

[Table body not reproducible from this extraction; same layout as Table 4.16. Overall: 69,747 examinees, 52,181 row hits, probability of coverage 0.75.]

Table 4.18
Observed Counts and Probabilities of Coverage for Composite Scores: ACT Explore Fall Grade 9 to ACT Plan Fall Grade 10, Using ACT Explore Data from Fall 2006/07, 2007/08, and 2008/09 and ACT Plan Data from Fall 2007/08, 2008/09, and 2009/10

[Table body not reproducible from this extraction; same layout as Table 4.16. Overall: 314,310 examinees, 237,817 row hits, probability of coverage 0.76.]

Table 4.19
Observed Counts and Probabilities of Coverage for Composite Scores: ACT Explore Spring Grade 9 to ACT Plan Fall Grade 10, Using ACT Explore Data from Spring 2006/07, 2007/08, and 2008/09 and ACT Plan Data from Fall 2007/08, 2008/09, and 2009/10

[Table body not reproducible from this extraction; same layout as Table 4.16. Overall: 20,809 examinees, 16,031 row hits, probability of coverage 0.77.]


The 75% prediction intervals in Table 4.20 also vary in width, from a low of 4 Plan Composite score points to a high of 6 Plan Composite score points. The longer intervals tend to fall near the bottom and near the top of the Explore Composite score range, where there tend to be fewer examinees, especially near the bottom. In the middle of the Explore Composite score range, between score points 8 and 16, where there tend to be more examinees, the widths of the 75% prediction intervals are uniformly equal to 4 Plan Composite score points.

It is worth emphasizing that 25% of examinees fall outside the estimated 75% prediction intervals. In practice, not every examinee who takes Explore in the fall of the eighth grade and then takes Plan in the fall of the tenth grade will achieve a Plan score within the estimated 75% prediction interval. Because both Explore and Plan are designed to be curriculum-based testing programs, some students will fall outside their estimated Plan Composite score range. If examinees do not maintain good academic work in school between eighth and tenth grade, then their Plan Composite score may fall short of their estimated score range. Conversely, examinees who improve their academic performance may earn Plan Composite scores higher than their estimated score range.

Tables 4.17 and 4.21 are derived from the data for 69,747 examinees. These two tables are analogous to Tables 4.16 and 4.20 but contain the results for examinees who took Explore during the spring of their eighth grade and then took Plan one and one-half years later in the fall of their tenth grade. The prediction interval results for examinees who took Explore during the fall of their ninth grade and took Plan one year later in the fall of their tenth grade are presented in Tables 4.18 and 4.22. These two tables are based on the scores of 314,310 examinees. Finally, Tables 4.19 and 4.23 contain the 75% prediction intervals for examinees who took Explore during the spring of their ninth grade and Plan during the fall of their tenth grade. These two tables are derived from the scores of 20,809 examinees.

Equating

Even though each form of the Explore test is constructed to adhere to the same content and statistical specifications, the forms may be slightly different in difficulty. To control for these differences, subsequent forms are equated to earlier forms, and the scores reported to examinees are scale scores that have the same meaning regardless of the particular form administered to examinees. Thus, scale scores are comparable across test forms and test dates.

Equating is conducted using a study in which each form is administered to approximately 2,000 examinees. The examinees in this sample are administered a spiraled set of forms—the new forms (“n – 1” of them) and one anchor form that has already been equated to previous forms. (Initially, of course, the anchor form is the form used to establish the score scale.) This spiraling technique, in which every nth examinee receives the same form of the test, results in randomly equivalent groups taking the forms. The use of randomly equivalent groups is an important feature of the equating procedure and provides a basis for the large degree of confidence in the continuity of scores.
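The spiraling assignment itself is simple to express (a minimal sketch; the form labels and count are illustrative, not the operational packaging scheme):

```python
# Spiral n forms through a room: examinee k receives form k mod n,
# so consecutive examinees get different forms and the groups taking
# each form are randomly equivalent. Form labels are illustrative.
forms = ["new form 1", "new form 2", "anchor form"]

def assign_form(seating_position):
    return forms[seating_position % len(forms)]

print([assign_form(k) for k in range(6)])
# ['new form 1', 'new form 2', 'anchor form',
#  'new form 1', 'new form 2', 'anchor form']
```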

Scores on the alternate forms are equated to the score scale using equipercentile equating methodology. In equipercentile equating, a score on Form X of a test and a score on Form Y are considered to be equivalent if they have the same percentile rank in a given group of examinees. The equipercentile equating results are subsequently smoothed using an analytic method described by Kolen (1984) to establish a smooth curve, and the equivalents are rounded to integers. The conversion tables that result from this process are used to transform raw scores on the new forms to scale scores.
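A bare-bones sketch of the equipercentile idea (Python; the smoothing step described above is omitted, and the score data are synthetic rather than operational):

```python
# Equipercentile equating sketch: a raw score on Form X maps to the
# Form Y score with the same percentile rank in randomly equivalent
# groups. Smoothing (Kolen, 1984) and the raw-to-scale conversion
# are omitted; the score distributions below are synthetic.
import numpy as np

def equate_x_to_y(x_score, form_x_scores, form_y_scores):
    # percentile rank of x_score within the Form X group
    pr = np.mean(np.asarray(form_x_scores) <= x_score)
    # Form Y score with the same percentile rank
    return np.quantile(form_y_scores, pr)

rng = np.random.default_rng(0)
form_x = rng.binomial(40, 0.55, size=2000)  # synthetic raw scores
form_y = rng.binomial(40, 0.50, size=2000)  # slightly harder form
print(equate_x_to_y(22, form_x, form_y))    # Form Y equivalent of 22
```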

The equipercentile equating technique is applied to the raw scores of each of the four tests for each form separately. The Composite score is not directly equated across forms. Instead, the Composite is calculated by rounding the unweighted average of the scale scores for the four equated tests. The subscores are also separately equated using the equipercentile method. Note, in particular, that the equating process does not lead to the English Test score being a sum of the two subscores.

Through the equipercentile equating method, it is possible for an examinee answering all of the items correctly on a particular test to receive a scale score less than the maximum score of 25. Similarly, an examinee who answers all of the items correctly on a subscore might receive a scale score less than the maximum score of 12 for that subscore. By allowing an all-correct raw score to convert to a scale score less than the maximum, scale scores from different Explore forms are kept more comparable. Explore test scores must be as comparable as possible from year to year so that they accurately reflect how school, school district, or statewide student achievement levels vary from year to year.

Reliability

Reliability, Measurement Error, and Effective Weights

Some degree of inconsistency or error is potentially contained in the measurement of any cognitive characteristic. An examinee administered one form of a test on one occasion and a second, parallel form on another occasion likely would earn somewhat different scores on the two administrations. These differences might be due to the examinee or the testing situation, such as differential motivation or differential levels of distractions on the two testings. Alternatively, these differences might result from attempting to infer the examinee’s level of skill from a relatively small sample of items.

Reliability coefficients are estimates of the consistency of test scores. They typically range from zero to one, with values near one indicating greater consistency and those near zero indicating little or no consistency.

The standard error of measurement (SEM) is closely related to test reliability. The standard error of measurement summarizes the amount of error or inconsistency in scores on a test. The original score scales for Explore were


Table 4.20
Estimated ACT Plan Fall Grade 10 Composite Score Prediction Intervals for ACT Explore Fall Grade 8 Composite Scores

Explore            Plan intervals
score    Low score   High score   Width
 1        8          12           5
 2        8          12           5
 3        8          12           5
 4        8          13           6
 5        8          13           6
 6        8          13           6
 7        9          13           5
 8       10          13           4
 9       11          14           4
10       11          14           4
11       12          15           4
12       12          15           4
13       13          16           4
14       14          17           4
15       15          18           4
16       16          19           4
17       16          20           5
18       18          22           5
19       19          23           5
20       20          24           5
21       21          25           5
22       22          26           5
23       23          27           5
24       25          29           5
25       26          30           5

Table 4.21
Estimated ACT Plan Fall Grade 10 Composite Score Prediction Intervals for ACT Explore Spring Grade 8 Composite Scores

Explore            Plan intervals
score    Low score   High score   Width
 1       10          13           4
 2       10          13           4
 3       10          13           4
 4       10          13           4
 5       10          13           4
 6       10          13           4
 7       10          14           5
 8       10          14           5
 9       11          14           4
10       11          14           4
11       11          14           4
12       12          15           4
13       13          16           4
14       14          17           4
15       15          18           4
16       16          19           4
17       17          20           4
18       17          21           5
19       18          22           5
20       19          23           5
21       21          25           5
22       22          26           5
23       22          27           6
24       23          28           6
25       25          29           5


Table 4.22
Estimated ACT Plan Fall Grade 10 Composite Score Prediction Intervals for ACT Explore Fall Grade 9 Composite Scores

Explore            Plan intervals
score    Low score   High score   Width
 1        9          12           4
 2        9          12           4
 3        9          12           4
 4        9          12           4
 5       10          13           4
 6       10          13           4
 7       10          13           4
 8       10          13           4
 9       10          13           4
10       10          13           4
11       11          14           4
12       11          14           4
13       12          15           4
14       13          16           4
15       14          17           4
16       15          18           4
17       16          19           4
18       18          21           4
19       19          23           5
20       20          24           5
21       20          24           5
22       21          25           5
23       21          26           6
24       24          28           5
25       25          29           5

Table 4.23
Estimated ACT Plan Fall Grade 10 Composite Score Prediction Intervals for ACT Explore Spring Grade 9 Composite Scores

Explore            Plan intervals
score    Low score   High score   Width
 1       10          13           4
 2       10          13           4
 3       10          13           4
 4       10          13           4
 5       10          13           4
 6       10          13           4
 7       10          13           4
 8       10          13           4
 9       10          13           4
10       10          13           4
11       11          14           4
12       11          14           4
13       13          16           4
14       14          17           4
15       14          17           4
16       15          18           4
17       16          19           4
18       17          20           4
19       18          21           4
20       19          22           4
21       20          24           5
22       21          25           5
23       22          26           5
24       22          27           6
25       24          28           5


developed to have approximately constant standard errors of measurement for all true scale scores (i.e., the conditional standard error of measurement as a function of true scale score was constant). For the new scales, the SEM may vary across the range of Explore scale scores. The average SEM is the average of the conditional standard errors of measurement. If the distribution of measurement error is approximated by a normal distribution, about two-thirds of the examinees can be expected to be mismeasured by less than 1 standard error of measurement.

Figures 4.1 and 4.2 present the conditional standard errors of measurement for the four tests and two subscores as a function of true scale score for fall-tested eighth and ninth graders. The minimum true scale score plotted is around 5 for each test and around 2 for the two subscores. These are the lowest true scale scores plotted because only a very small proportion of examinees would be expected to have a true scale score lower than this for each test and subscore. Some plots begin at higher scores because stable estimates could not be obtained at lower scores. Lines are plotted for combinations of form and grade—multiple forms (A, B, and C) for Grade 8 and a single form (A) for Grade 9. As can be seen, the estimated conditional standard errors of measurement vary across the true score range. Values are generally between one and two score points in the middle of the range. See Kolen, Hanson, and Brennan (1992) for details on the method used to produce these plots.

Explore reliability coefficients and average standard errors of measurement for the tests, subscores, and Composite are shown in Table 4.24 for the forms and grades. In all cases the data used to estimate the reliabilities and standard errors of measurement are weighted frequency distributions. Kuder-Richardson 20 (KR-20) internal consistency reliability coefficients of raw scores are listed first. Scale score reliability coefficients and standard errors of measurement are reported next. Scale score average standard errors of measurement were estimated using a four-parameter beta compound binomial model as described in Kolen, Hanson, and Brennan (1992). The estimated scale score reliability for test i (REL_i) was calculated as

REL_i = 1 − SEM_i² / S_i²,

where SEM_i is the estimated scale score average standard error of measurement and S_i² is the observed scale score variance for test i.

The estimated standard error of measurement for the Composite (SEM_c) was calculated as

SEM_c = √(Σ_i SEM_i²) / 4,

where the summation is over the four tests. The estimated reliability of the Composite (REL_c) was calculated as

REL_c = 1 − SEM_c² / S_c²,

where S_c² is the observed variance of scores on the Composite.

Assuming the measurement errors on the four tests are uncorrelated, the conditional standard error of measurement of the unrounded Composite scale score is

s_c(τ_e, τ_m, τ_r, τ_s) = √(Σ_i s_i²(τ_i)) / 4,

where s_i(τ_i) is the conditional standard error of measurement for test i at true scale score τ_i, and i = e, m, r, and s for English, Mathematics, Reading, and Science, respectively. The functions s_i(τ_i) are plotted in Figure 4.1. The conditional standard error of measurement for the Composite, instead of depending on a single variable as the conditional standard errors of measurement for the four tests do, depends on four variables—the true scale scores for the four tests. To facilitate presentation of the conditional standard errors of measurement for the Composite, the conditional standard errors will be plotted as a function of the average of the true scale scores for the four tests. In other words, s_c(τ_e, τ_m, τ_r, τ_s) will be plotted as a function of (Σ_i τ_i) / 4.
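These formulas can be checked against the reported values (a short verification sketch in Python, using the Form A Grade 8 test SEMs from Table 4.24 and the Composite SD from Table 4.11):

```python
# Verify SEM_c = sqrt(sum of SEM_i^2) / 4 and REL_c = 1 - SEM_c^2/S_c^2
# with Form A Grade 8 values: test SEMs (English, Mathematics,
# Reading, Science) from Table 4.24, Composite SD from Table 4.11.
import math

sems = [1.66, 1.71, 1.44, 1.53]           # scale score SEMs, Table 4.24
sem_c = math.sqrt(sum(s**2 for s in sems)) / 4
print(round(sem_c, 2))                    # 0.79, as reported

s_c = 3.3                                 # Composite SD, Table 4.11
rel_c = 1 - sem_c**2 / s_c**2
print(round(rel_c, 2))                    # 0.94, as reported
```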

A particular true Composite score value can be obtained in a variety of ways (i.e., different combinations of true scale scores on the individual tests could produce the same true Composite score). Consequently, each true Composite score value may correspond to several different values of the conditional standard error of measurement, depending on the combination of true scores on the four tests that produced the true Composite score value.

To produce plots of the conditional standard errors of measurement of the Composite, the observed proportion-correct scores (the number of items correct divided by the total number of items) of the examinees on the four tests were treated as the true proportion-correct scores at which the conditional standard errors were calculated. For each test, the conditional standard error of measurement was computed for each examinee using the observed proportion-correct score as the true proportion-correct in the formula for the conditional standard error of measurement (Equation 8 in Kolen, Hanson, & Brennan, 1992). In addition, for each test the true scale score corresponding to the observed proportion-correct score (treated as a true proportion-correct score) was computed (Equation 7 in Kolen, Hanson, & Brennan, 1992). The resulting conditional standard errors of measurement for the four tests were substituted in the equation given above to compute a value of the conditional standard error of measurement of the Composite. This is plotted as a function of the average of the true scale scores across the four tests. This procedure was repeated for examinees in the 2010 norming study. Figure 4.3 presents a plot of these calculated conditional Composite standard errors of measurement versus the averages of the true scale scores over the four tests for each form and grade.

The conditional standard errors of measurement, as presented in Figure 4.3, vary not only across average scale scores but also within each average scale score. Different standard errors of measurement are possible for each particular value of the average scale score because more than


one combination of the four test scores can produce the same average score. The general trend in the plots is for the conditional standard errors in the middle of the scale to be lower than the conditional standard errors for moderately low and moderately high scores. This trend is similar to the trend in Figure 4.1 for the conditional standard errors of measurement for the Mathematics and Science Tests. The degree to which the conditional standard errors of measurement vary with true scale score is greater for the Mathematics and Science Tests than it is for the Composite. As with the four test scores, it is concluded that the conditional standard error of measurement of the Composite is, for practical purposes, reasonably constant across the score scale.

A limitation of the approach used in producing estimates of the conditional standard error of measurement of the Composite is that standard errors of measurement of the unrounded average of the four test scores are computed, rather than the standard errors of measurement of the rounded average of the four test scores (the rounded average is the score reported to examinees).

It is not a problem that the observed scores of the examinees are used in producing the plots, because it is standard errors conditional on average true scale score that are being plotted, and the observed scores for the examinees are only used to determine the specific average true scale scores at which to plot the standard errors. One effect of using observed scores as the true score values at which to plot the conditional standard errors of measurement is that many points at the extremes of the scale may not represent realistically obtainable average true scale scores (the probability of observing examinees with these values of average true scale score is extremely small).

Scale scores from the four tests are summed and divided by 4 in the process of calculating the Composite score. This process suggests that, in a sense, each test is contributing equally to the Composite. The weights used (.25, in this case) are often referred to as nominal weights.

Other definitions of the contribution of a test to a composite may be more useful. Wang and Stanley (1970) described effective weights as an index of the contribution of a test to a composite. Specifically, the effective weights are defined as the covariance between a test score and the score on a composite. These covariances can be summed over tests and then each covariance divided by their sum (i.e., the composite variance) to arrive at proportional effective weights. Proportional effective weights are referred to as effective weights in the remainder of this discussion.

The covariances and effective weights are shown in Tables 4.25 through 4.28. The values in the diagonals that are not in brackets are the observed scale score variances (the diagonal values in brackets are the true scale score variances). With nominal weights of .25 for each test, the effective weight for a test can be calculated by summing the values in the appropriate row that are not in brackets and dividing the resulting value by the sum of all covariances, using the formula

(effective weight)_i = Σ_j cov_ij / Σ_i Σ_j cov_ij,

where cov_ij is the observed covariance of test scores corresponding to row i and column j in the table. Effective weights for true scores, shown in brackets, are calculated similarly, with the true score variance [S_i² · REL_i] used in place of the observed score variance.

The effective weights for the eighth and ninth graders are very similar. The effective weights for English and Reading are the largest of the effective weights. They are relatively high because they had the largest scale score variances and because their covariances with the other tests tended to be the highest. These effective weights imply that the English and Reading Tests are somewhat more heavily weighted (relative to composite variance) in forming the Composite than the Mathematics and Science Tests. Note that these effective weights are for the nationally representative samples and that the weights might differ considerably from those for other examinee groups.
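The effective weights can be reproduced directly from the covariance matrix in Table 4.25 (a short verification sketch in Python):

```python
# Reproduce the Form A Grade 8 effective weights from the observed
# scale score covariance matrix in Table 4.25 (order: English,
# Mathematics, Reading, Science). Each weight is a row sum divided
# by the sum of all covariances (the composite variance, up to the
# nominal weighting constant).
import numpy as np

cov = np.array([
    [17.44, 10.05, 12.32,  9.83],
    [10.05, 12.30,  8.87,  8.29],
    [12.32,  8.87, 15.24,  9.54],
    [ 9.83,  8.29,  9.54, 11.36],
])
effective = cov.sum(axis=1) / cov.sum()
print(effective.round(2))  # [0.29 0.23 0.26 0.22], matching Table 4.25
```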

Validity

Overview

According to the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999), “validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed use of tests” (p. 9). Arguments for the validity of an intended inference made from a test may contain logical, empirical, and theoretical components. A distinct validity argument is needed for each intended use of a test. In this section, validity issues are discussed for one of the most common interpretations and uses of Explore: measuring eighth-grade students’ educational achievement in particular subject areas.

Measuring Educational Achievement

Content Validity for ACT Explore Scores. The Explore tests are designed to measure students’ problem-solving skills and knowledge in particular subject areas. The usefulness of Explore scores for this purpose provides the foundation for validity arguments for more specific uses (e.g., program evaluation).

The fundamental idea underlying the development and use of Explore tests is that the best way to determine student preparedness for further education and careers is to measure as directly as possible the knowledge and skills students will need in those settings. Tasks presented in the tests must therefore be representative of scholastic tasks. They must be intricate in structure, comprehensive in scope, and significant in their own right, rather than narrow or artificial tasks that can


Figure 4.1. Conditional standard error of measurement for ACT Explore test scores.

[Four panels (English, Mathematics, Reading, and Science), each plotting the conditional standard error of measurement (vertical axis, 0.0 to 3.0) against true scale score (horizontal axis, 5 to 25), with separate lines for Form A Grade 8, Form B Grade 8, Form C Grade 8, and Form A Grade 9. The plots themselves are not reproducible from this extraction.]


Figure 4.2. Conditional standard error of measurement for ACT Explore subscores.

[Figure: two panels (Usage/Mechanics, Rhetorical Skills) plotting conditional standard error of measurement (0.0 to 1.4) against true scale score (2 to 12), with curves for Form A Grade 8, Form B Grade 8, Form C Grade 8, and Form A Grade 9.]


Table 4.24
Estimated Reliabilities and Standard Errors of Measurement for ACT Explore Tests and Subscores

                                       Usage/     Rhetorical
Statistic                   English    Mechanics  Skills      Mathematics  Reading  Science  Composite

Form A Grade 8
  Raw score reliability     0.87       0.78       0.75        0.83         0.90     0.86     —
  Scale score reliability   0.84       0.76       0.74        0.76         0.86     0.79     0.94
  Scale score SEM           1.66       1.16       1.11        1.71         1.44     1.53     0.79

Form A Grade 9
  Raw score reliability     0.88       0.80       0.78        0.85         0.91     0.88     —
  Scale score reliability   0.86       0.79       0.76        0.80         0.86     0.82     0.95
  Scale score SEM           1.69       1.11       1.10        1.69         1.60     1.52     0.81

Form B Grade 8
  Raw score reliability     0.88       0.82       0.75        0.80         0.90     0.82     —
  Scale score reliability   0.85       0.81       0.73        0.72         0.82     0.76     0.93
  Scale score SEM           1.55       1.01       1.20        1.74         1.63     1.58     0.81

Form C Grade 8
  Raw score reliability     0.87       0.80       0.73        0.82         0.88     0.84     —
  Scale score reliability   0.84       0.79       0.72        0.74         0.84     0.77     0.94
  Scale score SEM           1.69       1.06       1.19        1.70         1.52     1.55     0.81


Figure 4.3. Conditional standard error of measurement for ACT Explore Composite scores.

[Figure: four panels (Form A Grade 8, Form A Grade 9, Form B Grade 8, Form C Grade 8) plotting standard error of measurement (0.0 to 1.2) against average true scale score (5 to 25).]


Table 4.25
Scale Score Covariances and Effective Weights for Form A Grade 8
(Numbers in brackets relate to true scores.)

                                    English         Mathematics    Reading         Science
Number of items                     40              30             30              28
Proportion of total Explore items   0.31            0.23           0.23            0.22
English                             17.44 [14.70]   10.05          12.32            9.83
Mathematics                         10.05           12.30 [9.38]    8.87            8.29
Reading                             12.32            8.87          15.24 [13.18]    9.54
Science                              9.83            8.29           9.54           11.36 [9.02]
Effective weight                     0.29 [0.29]     0.23 [0.22]    0.26 [0.27]     0.22 [0.22]
Reliability                          0.84            0.76           0.86            0.79

Table 4.26
Scale Score Covariances and Effective Weights for Form A Grade 9
(Numbers in brackets relate to true scores.)

                                    English         Mathematics     Reading         Science
Number of items                     40              30              30              28
Proportion of total Explore items   0.31            0.23            0.23            0.22
English                             20.04 [17.20]   11.78           14.66           11.70
Mathematics                         11.78           14.00 [11.16]   10.51            9.87
Reading                             14.66           10.51           17.96 [15.40]   11.34
Science                             11.70            9.87           11.34           12.77 [10.46]
Effective weight                     0.28 [0.29]     0.23 [0.22]     0.27 [0.27]     0.22 [0.22]
Reliability                          0.86            0.80            0.86            0.82


Table 4.27
Scale Score Covariances and Effective Weights for Form B Grade 8
(Numbers in brackets relate to true scores.)

                                    English         Mathematics    Reading         Science
Number of items                     40              30             30              28
Proportion of total Explore items   0.31            0.23           0.23            0.22
English                             16.40 [13.98]    8.95          11.34            9.05
Mathematics                          8.95           10.82 [7.81]    7.64            7.04
Reading                             11.34            7.64          14.69 [12.02]    8.47
Science                              9.05            7.04           8.47           10.50 [8.02]
Effective weight                     0.29 [0.30]     0.22 [0.21]    0.27 [0.27]     0.22 [0.22]
Reliability                          0.85            0.72           0.82            0.76

Table 4.28
Scale Score Covariances and Effective Weights for Form C Grade 8
(Numbers in brackets relate to true scores.)

                                    English         Mathematics    Reading         Science
Number of items                     40              30             30              28
Proportion of total Explore items   0.31            0.23           0.23            0.22
English                             17.51 [14.66]    8.95          12.04            8.97
Mathematics                          8.95           11.23 [8.35]    7.84            7.25
Reading                             12.04            7.84          14.34 [12.02]    8.66
Science                              8.97            7.25           8.66           10.25 [7.86]
Effective weight                     0.30 [0.30]     0.22 [0.22]    0.27 [0.27]     0.22 [0.22]
Reliability                          0.84            0.74           0.84            0.77



The Explore tests contain a proportionately large number of complex problem-solving tasks. The tests are oriented toward major areas of middle school and junior high school instructional programs, rather than toward a factorial definition of various aspects of intelligence. Thus, Explore scores, subscores, and skill statements based on the ACT College Readiness Standards are directly related to student educational progress and can be readily understood and interpreted by instructional staff, parents, and students.

As described earlier in this chapter, the specific knowledge and skills selected for inclusion in Explore were determined through a detailed analysis of three sources of information. First, the objectives for instruction for Grades 6 through 9 were obtained for all states in the United States that had published such objectives. Second, textbooks on state-approved lists for courses in Grades 6 through 8 were reviewed. Third, educators at the secondary (Grades 7 through 12) and postsecondary levels were consulted to determine the knowledge and skills taught in Grades 6 through 8 prerequisite to successful performance in high school. These three sources of information were analyzed to define a scope and sequence (i.e., test content specifications) for each of the areas measured by Explore. These detailed test content specifications have been developed to ensure that the test content is representative of current middle school and junior high curricula. All forms are reviewed to ensure that they match these specifications. Throughout the item development process there is an ongoing assessment of the content validity of the tests.

ACT Explore Test Scores. This section provides evidence that the Explore tests and subtests measure separate and distinct skills. The data included all fall eighth-, spring eighth-, and fall ninth-grade 2005–2006 Explore test takers. Correlations were computed for all possible pairs of tests. As shown in Table 4.29, the scale scores on the four tests have correlations in the range of .65 to .75 for fall-tested eighth-grade students, .65 to .73 for spring-tested eighth-grade students, and .69 to .77 for ninth-grade students, indicating that examinees who score well on one test also tend to score well on another.


Table 4.29
Correlations Between ACT Explore Test Scores and Subscores

                     Usage/     Rhetorical
Test score           Mechanics  Skills      Mathematics  Reading  Science  Composite

Fall Eighth Grade (N = 427,117)
English              .91        .90         .70          .75      .71      .90
Usage/Mechanics                 .72         .67          .67      .65      .83
Rhetorical Skills                           .64          .70      .66      .83
Mathematics                                              .66      .69      .86
Reading                                                           .73      .88
Science                                                                    .87

Spring Eighth Grade (N = 43,779)
English              .90        .90         .69          .73      .71      .90
Usage/Mechanics                 .71         .65          .65      .65      .82
Rhetorical Skills                           .64          .69      .66      .83
Mathematics                                              .65      .70      .86
Reading                                                           .73      .88
Science                                                                    .87

Fall Ninth Grade (N = 145,060)
English              .92        .91         .72          .77      .74      .91
Usage/Mechanics                 .74         .69          .69      .68      .84
Rhetorical Skills                           .67          .73      .69      .84
Mathematics                                              .69      .71      .87
Reading                                                           .75      .90
Science                                                                    .88


Statistical Relationships Between ACT Explore, ACT Plan, and ACT Scores. The Explore, Plan, and ACT tests all measure student educational development in the same curricular areas of English, mathematics, reading, and science. The Explore scale ranges from 1 (lowest) to 25, the Plan scale from 1 to 32, and the ACT scale from 1 to 36. Each battery includes a computed Composite score equal to the average of the four test scores in the four curriculum areas (English, Mathematics, Reading, and Science). The three programs focus on knowledge and skills typically attained within these curriculum areas at different times in students' secondary-school experience. Thus, performance on Explore should be directly related to performance on Plan and the ACT.

Table 4.30 shows the correlations between Explore, Plan, and ACT scale scores for 352,405 students who took Explore, Plan, and the ACT in Grades 8, 10, and 11–12, respectively. The table shows observed correlations for all test scores and disattenuated correlations (shown in parentheses) for corresponding test scores across Explore, Plan, and the ACT. The observed correlations among the four subject area tests are in the range of .53 to .75, and disattenuated correlations are in the range of .77 to .88. The observed correlations between tests suggest that performance on the three test batteries is related.
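The parenthesized values follow the standard correction for attenuation. A minimal sketch, assuming a Plan English reliability of about .90 (the Explore English value of .84 is taken from Table 4.24; the Plan value is an assumption for illustration, not a figure from this manual):

    import math

    def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
        """Correct an observed correlation for unreliability in both measures."""
        return r_xy / math.sqrt(rel_x * rel_y)

    # Observed Explore-Plan English correlation of .74 (Table 4.30), with the
    # Explore English scale score reliability of .84 (Table 4.24) and an
    # assumed Plan English reliability of about .90.
    print(round(disattenuate(0.74, 0.84, 0.90), 2))  # 0.85, matching the table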

Table 4.30
Correlations, Observed and (Disattenuated), Between ACT Explore, ACT Plan, and ACT Test Scale Scores

                        Plan
Explore      English    Mathematics  Reading    Science    Composite
English      .74 (.85)  .60          .63        .58        .75
Mathematics  .60        .73 (.88)    .53        .60        .72
Reading      .67        .56          .63 (.77)  .58        .71
Science      .64        .63          .59        .62 (.78)  .72
Composite    .77        .73          .69        .69        .84 (.89)

                        ACT
Explore      English    Mathematics  Reading    Science    Composite
English      .75 (.85)  .60          .67        .61        .74
Mathematics  .60        .73 (.85)    .57        .66        .72
Reading      .68        .56          .68 (.80)  .60        .71
Science      .65        .64          .63        .65 (.80)  .72
Composite    .79        .73          .74        .72        .83 (.87)


ACT Explore Scores and Course Grades. The results shown in Table 4.31 are based on the Explore scores and self-reported course grades of students who took Explore in Grade 8 in 2003–2004 and also took Plan in Grade 10 in 2005–2006. A total of 210,470 eighth-grade students had valid Explore Composite scores and took the test under standard conditions.

Across individual courses, correlations between subject area Explore scores and associated course grades ranged from .28 (Geography) to .42 (Algebra 1). Correlations with subject area GPAs were somewhat higher, ranging from .37 to .42, and correlations with overall high school GPA were higher still, ranging from .47 to .55.

In general, correlations between test scores and course grades are smaller than those between test scores because of the lower reliabilities of course grades. For example, the intercorrelations among course grades could be considered an estimate of the reliabilities of individual course grades. For these courses, the median correlation among pairs of grades is .57. Using this value as a reliability estimate for individual course grades, disattenuated correlations among test scores and individual course grades ranged from .40 (Geography) to .61 (Algebra 1).
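In symbols, and assuming the correction was applied to both the grade and the test score (the manual does not state this explicitly), the Algebra 1 value can be approximated using the Form A Grade 8 Mathematics raw score reliability of .83 from Table 4.24:

$$r_{\text{disattenuated}} = \frac{r_{\text{observed}}}{\sqrt{r_{\text{grade}}\, r_{\text{test}}}} = \frac{.42}{\sqrt{(.57)(.83)}} \approx .61 .$$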

Table 4.31
Correlations Between ACT Explore Scores and Course Grades

                                      No. of              Correlation for Explore score
Grade/grade average                   students   Mean     English  Mathematics  Reading  Science  Composite

English
  English 9                           175,464    3.00     .41                                     .44
  English 10                           80,801    3.06     .38                                     .41
  English GPA                         178,539    2.99     .42                                     .46

Mathematics
  Algebra 1                           158,716    2.97              .42                            .43
  Geometry                             80,776    3.13              .40                            .42
  Algebra 2                            34,801    3.21              .39                            .40
  Mathematics GPA                     166,355    2.92              .42                            .44

Social Studies
  U.S. History                         61,780    3.14                           .38               .42
  World History                        70,044    3.16                           .37               .42
  Government/Civics                    42,408    3.17                           .39               .44
  World Cultures/Global Studies        13,539    3.21                           .39               .44
  Geography                            74,309    3.31                           .28               .32
  Economics                            11,367    3.16                           .38               .43
  Social Studies GPA                  172,668    3.18                           .37               .42

Science
  Physical/Earth/General Science      123,078    3.03                                    .38      .43
  Biology 1                            95,581    3.06                                    .39      .44
  Chemistry 1                          18,383    3.24                                    .34      .40
  Science GPA                         168,395    3.02                                    .40      .45

Overall
  Overall GPA                         148,640    3.08     .49      .48          .47      .48      .55


All students benefit from taking upper-level high school courses and working hard in them, regardless of their level of prior achievement. As students take more upper-level courses in high school, the increases in their proficiency are reflected in higher Plan scores (see Figures 4.4 and 4.5).

Of all three EPAS assessments, Explore is the earliest measure of academic achievement. Additional ACT research examined the relationship between high school grade point average (HSGPA) at the time students took the ACT and their Explore test scores at Grade 8. The data included high school students who graduated in 2002, 2003, or 2004 and took Explore, Plan, and the ACT. Self-reported high school course grades from the ACT CGIS were used to calculate an overall HSGPA. Explore means, HSGPA means, and correlations for each of the four Explore tests and the Composite are presented in Table 4.32.

Figure 4.4. Average increase in ACT Plan Mathematics scores from taking rigorous mathematics courses, regardless of prior achievement.

Figure 4.5. Average increase in ACT Plan Science scores from taking rigorous science courses, regardless of prior achievement.

[Figure 4.4: bar chart of average Plan Mathematics score gains over a baseline of no college-prep mathematics, by increasingly rigorous course pattern (Algebra 1 & Algebra 2 or Algebra 1 & Geometry; Algebra 1, Geometry, & Algebra 2; Algebra 1, Geometry, Algebra 2, & other mathematics beyond Algebra 2); recoverable bar values are 0.2, 1.2, 2.8, and 4.6 score points.]

[Figure 4.5: bar chart of average Plan Science score gains over a baseline of General Science: 0.5 for General Science & Biology, and 1.4 for General Science, Biology, & other science beyond Biology.]



The results showed a moderate relationship between HSGPA and Explore test scores, even though the time span between Explore testing and HSGPA was about four years. The largest correlation was between the Explore Composite and HSGPA. Corresponding correlation coefficients for the four subject area tests ranged from .40 (Reading) to .45 (Mathematics).

ACT also investigated whether taking or planning to take high school college-preparatory core coursework is related to ACT Composite scores for students with identical Explore Composite scores at Grade 8. Data for the study consisted of 2002 ACT-tested graduates from public middle and high schools who took Explore, Plan, and the ACT. Fifty-seven percent of the sample was female. Racial/ethnic affiliations were 80% Caucasian, 8% African American, and 12% other ethnic backgrounds. Explore, Plan, and ACT Composite means were disaggregated by the time of ACT testing (junior or senior year) and whether or not the student had taken or planned to take core or less than core. Core courses are defined as four years of English and three years each of mathematics, science, and social studies.

The Explore Composite means in Table 4.33 for students who took or planned to take the core curriculum were higher than those for students who did not take core, given their Explore scores. Thus, students who take Explore and are encouraged to take the college-preparatory core curriculum in high school are likely to achieve higher test scores on Plan and the ACT than students who take less than core, regardless of whether the ACT is taken as a junior or senior.

Table 4.32
Means and Correlations for ACT Explore and High School Grade Point Average

Explore test  N        Explore mean  HSGPA mean  Correlation
English       221,805  16.5          3.30        .44
Mathematics   210,651  16.6          3.12        .45
Reading       210,666  16.4          3.40        .40
Science       210,493  17.8          3.25        .41
Composite     211,603  17.0          3.27        .54

Table 4.33
Composite Score Means for the Three Test Batteries, by Grade and Core Coursework, at Time of ACT Testing

Year took ACT  Took core  N       Explore  Plan  ACT
Junior         Yes         7,948  18.4     20.5  23.3
Junior         No          3,848  16.5     18.7  20.7
Senior         Yes        21,236  17.0     19.1  21.9
Senior         No         10,534  15.2     17.4  19.4


Growth from Grade 8 to Grade 12. Results of ACT studies show that Explore, Plan, and the ACT can be used to measure growth in educational achievement across Grades 8, 10, and 12.

Roberts and Bassiri (2005) investigated the relationship between the initial academic achievement status of students on Explore at Grade 8 and their rate of change in academic achievement at Grade 10 on Plan and Grades 11/12 on the ACT. The longitudinal achievement data for this study consisted of 34,500 students from 621 high schools who had taken Explore (1998–1999), Plan (1999–2000), and the ACT (2001–2002). Linear growth over time for students within schools was measured using Explore, Plan, and ACT Composite scores. A multilevel growth model was used to test the extent of the relationship between the initial achievement of students (measured by Explore) and their rate of educational growth (measured by Plan and the ACT, respectively). The multilevel growth model for the study was specified so that the time of measurement was nested within students and students were nested within high schools, to account for variation at both levels.

The unconditional model showed an expected between-school grand mean equal to 16.35 for the Explore Composite and a rate of growth in achievement equal to 1.16 units per year. A strong correlation (r = .90) was found between where students start on Explore and their rate of academic achievement growth through high school, as measured by Plan and the ACT. Within-school rates of change regressed on students' Explore scores explained 79% of the variation in student-level growth trajectories. These results showed that, on average, students' initial level of academic achievement is an important predictor of change in academic growth for students within schools. Although variation between schools was observed (Figure 4.6), a student might be expected to increase his or her rate of change in academic growth by 0.19 scale units, on average, for each unit increase on the Explore Composite.
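A minimal sketch of a growth model of this kind, fit with the statsmodels MixedLM routine on synthetic data; the variable names, the simulated values, and the two-level simplification (random intercepts and slopes for students only) are illustrative assumptions, not ACT's specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Simulate Composite scores at three time points (Grades 8, 10, ~11/12)
    # for students nested in schools; values are illustrative, not ACT data.
    rows = []
    for school in range(50):
        school_mean = rng.normal(16.35, 1.0)          # school mean at Grade 8
        for student in range(30):
            start = school_mean + rng.normal(0, 1.5)  # student's Grade 8 level
            slope = 1.16 + 0.3 * (start - 16.35)      # growth tied to start
            for years in (0.0, 2.0, 3.5):
                score = start + slope * years + rng.normal(0, 1.0)
                rows.append((f"s{school}-{student}", years, score))

    data = pd.DataFrame(rows, columns=["student", "years", "score"])

    # Random intercepts and slopes for students (the study also modeled a
    # school level, omitted here to keep the sketch short).
    model = smf.mixedlm("score ~ years", data, groups=data["student"],
                        re_formula="~years")
    print(model.fit().summary())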

The Roberts and Bassiri study was extended by Bassiri and Roberts (2005) using the same sample and employing the same growth model statistical methods to examine the relationship between high school core courses taken (four years of English and three years each of mathematics, science, and social studies) and student growth over time measured with the Explore, Plan, and ACT subject tests in English, Mathematics, Reading, and Science. Statistically significant initial status and rate of change variances showed that students attending different schools do not start at the same level on Explore or change at the same rate of academic achievement over time. Yet, on average, students within a school who take Explore can be expected to move to higher Mathematics, English, Reading, and Science scores over time as assessed by Plan and the ACT. After controlling for school-level characteristics (i.e., metropolitan area, proportion of ACT-tested students in a school, degree of integration of race/ethnicity in a school, and initial achievement status of the school) and student-level variables (i.e., gender and race/ethnicity), Explore-tested students who took the high school core curriculum showed statistically significantly faster rates of change to higher achievement scores on English, Mathematics, Reading, and Science, compared to Explore-tested students who did not take core (regression coefficients for each of the four tests equal 1.56, 1.37, 1.32, and 0.96, respectively). Based on these results, regardless of where students start on Explore, the rate of change in EPAS achievement is fastest for students who have taken the high school core curriculum.

[Figure: within-school test means (1 to 36) plotted against grade (8, 10, 11, 12) for the expected within-school average E(Yij) and for high schools A and B.]

Figure 4.6. Growth trajectories calculated at Grades 8 through 12 for expected within-school average E(Yij), high school A, and high school B.


ACT Explore and ACT Plan College Readiness Benchmarks. As described in chapter 3, ACT has identified College Readiness Benchmarks for the ACT. These Benchmarks (English = 18, Mathematics = 22, Reading = 22, and Science = 23) reflect a 50% chance of a B or higher grade, or an approximate 75% chance of a C or higher grade, in entry-level, credit-bearing college English Composition I, College Algebra, Social Science, and Biology courses. Subsequently, corresponding College Readiness Benchmarks were developed for Explore and Plan to reflect a student's probable readiness for college-level work in these same courses by the time he or she graduates from high school.

The Explore and Plan College Readiness Benchmarks were developed using records of students who had taken Explore and the ACT, or Plan and the ACT. Benchmarks were developed for Grade 8 (Explore), Grade 9 (Explore), and Grade 10 (Plan). Each Explore (1–25) or Plan (1–32) score was associated with an estimated probability of meeting or exceeding the relevant ACT Benchmark (see Figure 4.7 for English). We then identified the Explore and Plan scores that came the closest to a .50 probability of meeting or exceeding the ACT Benchmark, by subject area. These scores were selected as the Explore and Plan Benchmarks.
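The manual does not name the estimation method behind these probabilities; one common choice consistent with the curves in Figure 4.7 is logistic regression, sketched below on synthetic data (all names and simulated values are assumptions for illustration).

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # Synthetic pairs: an Explore English score (1-25) and whether the same
    # student later met the ACT English Benchmark of 18 (illustrative data).
    explore = rng.integers(1, 26, size=5000).astype(float)
    p_true = 1.0 / (1.0 + np.exp(-0.5 * (explore - 13.0)))
    met = (rng.random(5000) < p_true).astype(float)

    # Estimate P(meet ACT Benchmark | Explore score) by logistic regression.
    fit = sm.Logit(met, sm.add_constant(explore)).fit(disp=0)

    # A Benchmark candidate is the score with fitted probability closest to .50.
    grid = np.arange(1.0, 26.0)
    probs = fit.predict(sm.add_constant(grid))
    print(int(grid[np.argmin(np.abs(probs - 0.5))]))  # about 13 here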

The resulting Explore and Plan Benchmarks, with the corresponding ACT Benchmarks, are given in Table 3.5. Figure 4.8 shows the percentages of 2011–2012 Grade 8 Explore-tested students on target to be ready for college-level work at high school graduation.

[Figure: probability (0.00 to 1.00) plotted against scale score (1 to 31), with curves for ACT Explore Grade 8, ACT Explore Grade 9, and ACT Plan Grade 10.]

Figure 4.7. Conditional probability of meeting/exceeding an ACT English score = 18, given students’ ACT Explore or ACT Plan English scores.


Figure 4.8. 2011–2012 national ACT Explore-tested 8th-grade students likely to be ready for college-level work (in percent).

[Figure: bar chart of percent ready (0 to 100) for college English Composition I (Explore English Benchmark = 13), College Algebra (Explore Mathematics Benchmark = 17), Social Sciences (Explore Reading Benchmark = 16), and Biology (Explore Science Benchmark = 18), plus the percent of students meeting all four Explore Benchmarks; recoverable bar values include 65, 32, and 16 percent.]

Chapter 5
The ACT Explore Interest Inventory and Other Program Components

Interest Inventory

Overview

The Unisex Edition of the ACT Interest Inventory (UNIACT) helps students explore personally relevant career (occupational and educational) options. Using their UNIACT results, students can explore occupations and academic courses in line with their preferences for common, everyday activities involving data, ideas, people, and things. UNIACT provides scores on six scales paralleling Holland's (1997) six types of interests and occupations (see also Holland, Whitney, Cole, & Richards, 1969). Scale names (and corresponding Holland types) are Science & Technology (Investigative), Arts (Artistic), Social Service (Social), Administration & Sales (Enterprising), Business Operations (Conventional), and Technical (Realistic). Each scale consists of work-relevant activities (e.g., fix a toy, help settle an argument between friends, sketch and draw pictures) that are familiar to students, either through participation or observation. The activities have been carefully chosen to assess basic interests while minimizing the effects of sex-role connotations. Since males and females obtain similar distributions of scores on the UNIACT scales, combined-sex norms are used to obtain sex-balanced scores.

Score Reporting Procedure

The Explore student score report suggests 2–3 regions on the World-of-Work Map (Figure 5.1), the primary procedure used to link UNIACT scores to career options (Prediger & Swaney, 1995). Holland's hexagonal model of interests and occupations (Holland, 1997; Holland et al., 1969) and the underlying Data/Ideas and Things/People Work Task Dimensions (Prediger, 1982) form the core of the map. Holland's types and ACT career clusters appear on the periphery. The map is populated by 26 career areas (groups of occupations). Each area consists of many occupations sharing similar work tasks.

The student guide Using Your ACT Explore Results describes how the World-of-Work Map is used for career exploration. Students are encouraged to explore occupations in career areas suggested by their UNIACT results and their self-reported career plans. Students are also encouraged to visit www.act.org/explorestudent/ to gather occupational information, such as salary, growth, and entry requirements. More information about career exploration in Explore is found in the Interpretive Guide for Student and School Reports.

World-of-Work Map

The World-of-Work Map (WWM) is an empirically based occupational exploration tool. Career area content and locations on the Map were determined from three databases: (a) expert ratings (Rounds, Smith, Hubert, Lewis, & Rivkin, 1998) on Holland's (1997) six work environments for occupations in the US Department of Labor's (DOL's) O*NET Occupational Information Network (Peterson, Mumford, Borman, Jeanneret, & Fleishman, 1999); (b) job analysis (JA) data for 1,573 recency-screened occupations in the Dictionary of Occupational Titles database update (Dictionary of Occupational Titles, 1999); and (c) Holland-type mean interest scores (four interest inventories, six samples) for persons pursuing 640 occupations. These databases provided three diverse perspectives for the WWM update: (a) general nature of work (expert ratings); (b) detailed nature of work (JA data); and (c) interests of workers (mean interest scores).

The three databases were used to obtain Data/Ideas and Things/People scores for O*NET occupations. For many of these occupations, scores for all three databases were available. For the Data/Ideas scores, correlations for database pairs were as follows: rating-JA (.78), rating-interest (.78), and JA-interest (.75). For the Things/People scores, the correlations were .81, .77, and .74, respectively. These correlations, which are unusually high for scores based on diverse assessment procedures, provide good support for the work task dimensions underlying the WWM and Holland's (1997) hexagon. As expected, correlations between the Data/Ideas and Things/People scores were near zero.

The work task dimension scores were used to plot the O*NET occupations in each of the previous Map's career areas. The assignments of occupations to career areas were then revised in order to increase career area homogeneity with respect to basic work tasks. In addition, some career areas were combined and new career areas were created. After a second set of plots was obtained, occupational assignments were again revised. This process continued until career area homogeneity stabilized. Purpose of work and work setting were also considered (Prediger & Swaney, 2004).

The 3rd Edition WWM has 26 career areas. Of the 26 career areas, 21 have content similar to career areas on the previous edition of the WWM. The 26 career areas are listed at www.act.org/explorestudent/, where students can learn more about occupations in each area. The occupational information on this site is updated every two years.



Norms

Data for the Grade 8 UNIACT norms were obtained from Explore program files. The target population consisted of students enrolled in Grade 8 in the United States. Since the Explore program tests a sizable percentage of U.S. high school students, some sample bias is inevitable. To improve the national representativeness of the sample, individual records were weighted to more closely match the characteristics of the target populations with respect to gender, ethnicity, school enrollment, school affiliation (public/private), and region of the country.

Sampling. Development of the norming sample began with schools that participated in Explore testing during the 2003–2004 academic year. Based on Market Data Retrieval (MDR; 2003) information, all schools in the United States with public, private, Catholic, or Bureau of Indian Affairs affiliation were retained. In addition, schools that contained an eighth grade and had at least ten Explore-tested students during the 2003–2004 academic year were retained. Within retained schools, all students with a valid career choice and a complete set of valid interest inventory responses were retained. The sample consisted of 273,964 students from 2,739 schools. In general, schools use Explore to test all Grade 8 students. The median proportion of students tested was .81 for the group of 2,739 schools.

Weighting. As noted above, the sample was weighted to make it more representative of the population of eighth graders in the U.S. The proportion of eighth graders in the U.S. in each gender/ethnicity category was approximated using population counts for the 10–14 age group from the 2000 Census (United States Census Bureau, 2001). The proportion of U.S. eighth graders in each enrollment size/affiliation/region category was calculated using MDR (2003) data. Each student was assigned a weight as

$$\text{Weight} = \frac{N_1}{n_1} \cdot \frac{N_2}{n_2},$$

where

N1 = the number of students, in the population, from the gender/ethnicity category to which the student belongs,
n1 = the number of students, in the sample, from the gender/ethnicity category to which the student belongs,
N2 = the number of students, in the population, from the enrollment size/affiliation/region category to which the student belongs, and
n2 = the number of students, in the sample, from the enrollment size/affiliation/region category to which the student belongs.
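A compact way to implement this two-factor weighting with pandas; the column names, category labels, and population counts below are toy assumptions for illustration, not the categories ACT used.

    import pandas as pd

    # Toy sample: one row per student, with the two stratification keys.
    sample = pd.DataFrame({
        "gender_eth":   ["F-White", "F-White", "M-Black", "M-Black", "M-Black"],
        "size_aff_reg": ["small-public-East"] * 3 + ["large-private-West"] * 2,
    })

    # Population counts for each category (illustrative values).
    pop_gender_eth = {"F-White": 900, "M-Black": 600}
    pop_size_aff_reg = {"small-public-East": 1000, "large-private-West": 500}

    # N1/n1 and N2/n2 for each student, multiplied together.
    n1 = sample.groupby("gender_eth")["gender_eth"].transform("size")
    n2 = sample.groupby("size_aff_reg")["size_aff_reg"].transform("size")
    sample["weight"] = (
        sample["gender_eth"].map(pop_gender_eth) / n1
        * sample["size_aff_reg"].map(pop_size_aff_reg) / n2
    )
    print(sample)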

Precision

By obtaining data from Explore program files, we were able to make the norming sample quite large so as to allow a precise estimate of percentile ranks. For a simple random sample of 16,587 student scores, there would be a 99% chance that the 50th percentile of the scores in the sample was within one percentile rank of the 50th percentile of the scores in the target population. Although not a simple random sample, the norming sample consisted of more than 250,000 students, permitting precise estimation of percentiles.
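The quoted sample size is consistent with a normal approximation to the sampling distribution of the median's percentile rank; the check below is a sketch under that assumption (the manual does not state the method used).

    from scipy.stats import norm

    # The population proportion falling below the sample median is roughly
    # Normal(0.5, sqrt(0.25/n)); in percentile-rank units the SD is
    # 100 * sqrt(0.25/n). Solve for n so a 99% interval spans +/- 1 rank.
    z = norm.ppf(0.995)          # two-sided 99% critical value, about 2.576
    n = (z * 100 * 0.5) ** 2     # from z * 100 * sqrt(0.25/n) = 1
    print(round(n))              # about 16,587, matching the text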



About the Map
• The World-of-Work Map arranges 26 career areas (groups of similar jobs) into 12 regions. Together, the career areas cover all U.S. jobs. Most jobs in a career area are located near the point shown. However, some may be in adjacent Map regions.
• A career area's location is based on its primary work tasks. The four primary work tasks are working with—
  DATA: Facts, numbers, files, accounts, business procedures.
  IDEAS: Insights, theories, new ways of saying or doing something—for example, with words, equations, or music.
  PEOPLE: People you help, serve, inform, care for, or sell things to.
  THINGS: Machines, tools, living things, and materials such as food, wood, or metal.
• Six general types of work ("career clusters") and related Holland types (RIASEC) are shown around the edge of the Map. The overlapping career cluster arrows indicate overlap in the occupational content of adjacent career clusters.
• Because of their People rather than Things orientation, the following two career areas in the Science & Technology Cluster are located toward the left side of the Map (Region 10): Medical Diagnosis & Treatment and Social Science.

[Map graphic: 26 career areas (A. Employment-Related Services; B. Marketing & Sales; C. Management; D. Regulation & Protection; E. Communications & Records; F. Financial Transactions; G. Distribution & Dispatching; H. Transport Operation & Related; I. Ag/Forestry & Related; J. Computer/Info Specialties; K. Construction & Maintenance; L. Crafts & Related; M. Manufacturing & Processing; N. Mechanical & Electrical Specialties; O. Engineering & Technologies; P. Natural Science & Technologies; Q. Medical Technologies; R. Medical Diagnosis & Treatment; S. Social Science; T. Applied Arts (Visual); U. Creative & Performing Arts; V. Applied Arts (Written & Spoken); W. Health Care; X. Education; Y. Community Services; Z. Personal Services) arranged in 12 numbered regions around the DATA/IDEAS and THINGS/PEOPLE work task dimensions, with the six career clusters and related Holland types (RIASEC) shown on the periphery.]

World-of-Work Map (Third Edition — COUNSELOR Version)

Figure 5.1. World-of-Work Map.

© 2000 by ACT, Inc. All rights reserved.


Chapter 1 Overview of the ACT Interest Inventory
  Description of UNIACT: Basic Interest Scales; Item Content; Gender-Balanced Scales; The Data/Ideas and People/Things Work Task Dimensions
  ACT Occupational Classification System: Career Clusters and Career Areas; The World-of-Work Map (Third Edition)

Chapter 2 UNIACT Development
  Summary of UNIACT Development
  UNIACT-S Development: Performance and Content Guidelines; Item Selection and Revision
  Comparison of Item/Scale Functioning, UNIACT-R and UNIACT-S: Gender Balance; Scale Intercorrelations; Reliability

Chapter 3 Norms
  Norming Samples and Weighting: Grade 8; Grade 10; Grade 12; College/Adult
  Precision: Grades 8, 10, and 12; College/Adult
  Representativeness of Norms; Norm Distributions

Chapter 4 Theory-Based Evidence of Validity
  Scale Structure: Scale Structure and Underlying Dimensions; Response Style and Scale Structure; Age-Related Structural Stability; Item Structure
  Evidence of Convergent and Discriminant Validity
  Evidence that UNIACT Identifies Personally Relevant Career Options
  Validity Evidence for Demographic Groups: Gender; Racial/Ethnic Groups

Chapter 5 More Validity Evidence: Outcome Prediction and the Use of UNIACT with Other Measures
  Prediction of Environments and Outcomes: Environments; Congruence; Interest-Major Congruence and Stability Outcomes; Congruence and Success Outcomes
  Using UNIACT with Other Measures: UNIACT in Tandem with Work-Relevant Abilities; UNIACT in Combination with Work-Relevant Values

Chapter 6 Reliability
  Internal Consistency; Test-Retest Stability

References

Appendices: Non-ACT research on UNIACT; UNIACT-S directions and items; UNIACT-S scoring procedures and norms

Figure 5.2. The ACT Interest Inventory Technical Manual table of contents.


Table 5.1
Selected Demographic and Educational Characteristics of Grade 8 UNIACT Norm Group Students

                                        Weighted sample
Characteristic                          proportion       U.S. proportion(a)  U.S. category

Gender
  Female                                .48              .49                 Female
  Male                                  .52              .51                 Male

Racial/Ethnic
  African American/Black                .11              .13                 African American/Black
  American Indian, Alaska Native        .01              .01                 American Indian/Alaska Native
  Asian American, Pacific Islander      .04              .03                 Asian/Native Hawaiian/Other Pacific Islander
  Caucasian American/White              .56              .60                 White
  Hispanic(b)                           .14              .13                 Hispanic/Latino Ethnicity
  Other, Prefer Not to Respond, Blank   .12              (c)                 —
  Multiracial                           .03              .03                 Two or more races

Estimated Enrollment
  <126                                  .25              .25
  126–254                               .24              .25
  255–370                               .25              .25
  >370                                  .26              .25

School Affiliation
  Public                                .90              .90                 Public
  Private                               .10              .10                 Private

Geographic Region
  East                                  .40              .42                 East
  Midwest                               .21              .21                 Midwest
  Southwest                             .13              .13                 Southwest
  West                                  .26              .24                 West

(a) U.S. proportions for gender and ethnicity estimated from the 2000 Census (2001) age 10–14 group. U.S. proportions for enrollment and region obtained from the Market Data Retrieval Educational Database (2003).
(b) Combination of two racial/ethnic categories: "Mexican American/Chicano" and "Puerto Rican, Cuban, Other Hispanic Origin."
(c) U.S. proportion not available.

Representativeness

One way to determine the type and extent of sample bias is to compare demographic characteristics of the norming sample with national statistics for various educational and demographic variables. The sample weights described above were used to obtain the weighted sample proportions in Table 5.1. This table compares demographic characteristics of the norming sample to national statistics, permitting a general examination of the representativeness of the norming sample. As can be seen, the norming sample appears to be reasonably representative of the national population. For example, the weighted sample is very similar to the national population with respect to geographic region—within 2 percentage points in each region.

Psychometric Support for UNIACT

The ACT Interest Inventory Technical Manual (ACT, 2009b) describes UNIACT's rationale, interpretive aids, development, norms, reliability, and validity. To provide readers with an overview of the information available, the table of contents of this manual is listed in Figure 5.2. Internal consistency reliability coefficients for the six 12-item scales, based on a nationally representative Grade 8 sample, ranged from .82 to .89 (median = .84). ACT invites readers to examine the full scope of information available on UNIACT at www.act.org/research-policy/research-reports.


Chapter 6
Reporting ACT Explore Results

The Explore program provides reports useful to students, parents, teachers, counselors, principals, and district-level personnel. Explore student reports provide valuable information to help students and their parents begin to consider their future plans at an early age. Explore school reports provide information that principals, teachers, counselors, and district staff need to monitor and evaluate the academic achievement of their students.

Standard Reporting Services

Purchase prices of Explore Student Assessment Sets include scoring services, a package of reports, and a data CD. Schools/districts will receive aggregate reports for any grades with 25 or more students. If no grade contains this many, aggregate reports will be issued for the grade with the largest number of students. Reports include:
• Student Score Reports—Schools receive two copies of each Student Score Report: one for the school and one for the student and parents. The report includes scale scores and norms (cumulative percentiles) for the Explore tests and Composite, as well as feedback on students' skills, ways to improve their skills, and information regarding being on track for college. The report also provides feedback on students' expressed needs for help with a variety of skills and tasks, and results of the UNIACT Interest Inventory. Reports can be sorted by classroom or other group identified at the time of scoring.
• Student List Report—Schools receive an alphabetical listing of all students tested, showing test scores, national and local percentile ranks, career and educational plans, and course work plans in selected subject areas. Rosters can be prepared by classroom or other group identified at the time of scoring.
• School Profile Summary Report—Schools receive a multi-table summary of aggregated test results and self-reported student information. Tables include frequency distributions of scores and subscores, and mean scores for the total group and by gender and racial/ethnic group.
• Early Intervention Rosters—Schools receive a set of intervention rosters, identifying students who qualify under three categories that warrant possible intervention strategies to assist them to reach their academic and career goals.
• Presentation Packet—Schools receive a graphic summary of Explore results, including three-year trends in average Explore scores.
• Item-Response Summary Report—Schools receive a set of tables that contain student response percentages for the foils of each test item. Also, reference group results are provided for comparison.
• Student Score Labels—Schools receive self-adhesive labels that report students' scores and percent at or below on Explore.
• Local Norms for Schools or District—School norms are reported on the Student Report and Student Score Labels in addition to the Student List Report. District norms are provided in the Research Data File.
• Research Data File—Schools receive Explore student records on CD in both ASCII and CSV formats for use in local data analyses and research or for merging into a student master database.
• District Profile Summary Report—Districts receive an aggregated summary across two or more schools (same table format as School Profile Summary Report).

The information presented in the reports can help school personnel determine whether students are planning to take the course work necessary to be prepared for and successful in their post–high school plans; what courses may be most appropriate to their educational development; and how their current academic development, expressed needs, and course plans work together.

Supplemental Reporting Services

Schools administering Explore may purchase additional reporting services from ACT. These services include:
• Customized Student List Report—Explore roster prepared in other than alpha order, such as rank order by Composite score or by Math score, sorted by educational or career plans, etc.
• Custom Profile Summary Report—aggregated summary of a specified subgroup based on selected criteria available in the Explore student record, such as gender, racial/ethnic group, postsecondary plans, etc., in the same table format as the School Summary Report.

Schools or districts interested in specialized data analysis and reporting to meet local needs can contact ACT at 877.789.2925 to discuss those needs and possible responses.



References

(Please note that in 1996 the corporate name "The American College Testing Program" was changed to "ACT.")

American College Testing Program. (1988). Interim psychometric handbook for the 3rd edition ACT Career Planning Program (1988 update). Iowa City, IA: Author.

American College Testing Program. (1994). Explore technical manual. Iowa City, IA: Author.

ACT. (1997). Explore technical manual. Iowa City, IA: Author.

ACT. (1999). Plan technical manual. Iowa City, IA: Author.

ACT. (2001). Explore technical manual. Iowa City, IA: Author.

ACT. (2007). Explore technical manual. Iowa City, IA: Author.

ACT. (2009a). ACT National Curriculum Survey® 2009. Iowa City, IA: Author.

ACT. (2009b). The ACT Interest Inventory technical manual. Available from http://www.act.org/research-policy/research-reports

ACT. (2010, June). The alignment of Common Core and ACT's College and Career Readiness System. Iowa City, IA: Author.

ACT. (2011). Directions for testing. Iowa City, IA: Author.

ACT. (2011). Explore supervisor's manual. Iowa City, IA: Author.

ACT. (2011). Guide for interpreting your Explore Item-Response Summary Report. Iowa City, IA: Author.

ACT. (2011). Using your Explore results. Iowa City, IA: Author.

ACT. (2011). Why take Explore? Iowa City, IA: Author.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Bassiri, D., & Roberts, W. L. (2005). Using latent growth analysis to investigate relationships between EPAS academic development and student/school characteristics. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Quebec, Canada.

Dictionary of occupational titles [Data file: DOTFILE.DOC]. (1999). Des Moines, IA: NOICC Crosswalk and Data Center [producer and distributor].

Hanson, B. A. (1991). A comparison of bivariate smoothing methods in common-item equipercentile equating. Applied Psychological Measurement, 15, 391–408.

Harris, D. J. (Ed.). (2001). Methodology used in the 1999 Explore scaling. Iowa City, IA: ACT.

Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Odessa, FL: Psychological Assessment Resources.

Holland, J. L., Whitney, D. R., Cole, N. S., & Richards, J. M. (1969). An empirical occupational classification derived from a theory of personality and intended for practice and research (ACT Research Report No. 29). Iowa City, IA: ACT.

Joint Committee on Testing Practices. (2004). Code of fair testing practices in education. Washington, DC: Author.

Keigher, A. (2009). Table 3. Percentage distribution of students by sex, race/ethnicity, school type, and selected school characteristics: 2007–08. In Characteristics of public, private, and Bureau of Indian Education elementary and secondary schools in the United States: Results from the 2007–08 Schools and Staffing Survey (NCES 2009-321). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://nces.ed.gov/pubs2009/2009321.pdf

Kolen, M. J. (1984). Effectiveness of analytic smoothing in equipercentile equating. Journal of Educational Statistics, 9, 25–44.

Kolen, M. J. (1988). Defining score scales in relation to measurement error. Journal of Educational Measurement, 25, 97–110.

Kolen, M. J., & Hanson, B. A. (1989). Scaling the ACT Assessment. In R. L. Brennan (Ed.), Methodology used in scaling the ACT Assessment and P-ACT+. Iowa City, IA: American College Testing Program.

Kolen, M. J., Hanson, B. A., & Brennan, R. L. (1992). Conditional standard errors of measurement for scale scores. Journal of Educational Measurement, 29, 285–307.

Market Data Retrieval Educational Database [Electronic data type]. (2003). Chicago, IL: Market Data Retrieval.

Market Data Retrieval Educational Database [Electronic data type]. (2009). Chicago, IL: Market Data Retrieval.

Market Data Retrieval Educational Database [Electronic data type]. (2010). Chicago, IL: Market Data Retrieval.

National Council on Measurement in Education Ad Hoc Committee on the Development of a Code of Ethics. (1995). Code of professional responsibilities in educational measurement. Washington, DC: National Council on Measurement in Education.

Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A. (1999). An occupational information system for the 21st century: The development of O*NET. Washington, DC: American Psychological Association.

Prediger, D. J. (1982). Dimensions underlying Holland's hexagon: Missing link between interests and occupations? Journal of Vocational Behavior, 21, 259–287.

Prediger, D. J., & Swaney, K. B. (1995). Using UNIACT in a comprehensive approach to assessment for career planning. Journal of Career Assessment, 3, 429–451.

Prediger, D. J., & Swaney, K. B. (2004). Work task dimensions underlying the world of work: Research results for diverse occupational databases. Journal of Career Assessment, 12, 440–459.

Roberts, W. L., & Bassiri, D. (2005). Assessing the relationship between students' initial achievement level and rate of change in academic growth within and between high schools. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Quebec, Canada.

Rosenbaum, P. R., & Thayer, D. T. (1987). Smoothing the joint and marginal distributions of scored two-way contingency tables in test equating. British Journal of Mathematical and Statistical Psychology, 40, 43–49.

Rounds, J., Smith, T., Hubert, L., Lewis, P., & Rivkin, D. (1998). Development of occupational interest profiles (OIPs) for the O*NET. Raleigh, NC: Southern Assessment Research and Development Center, Employment Security Commission of North Carolina.

Spray, J. A. (1989). Performance of three conditional DIF statistics in detecting differential item functioning on simulated tests (ACT Research Report No. 89-7). Iowa City, IA: American College Testing Program.

United States Census Bureau. (2001). Census 2000 Summary File 1. Retrieved using 2000 American FactFinder, July 15, 2004, from http://factfinder.census.gov

Wang, M. W., & Stanley, J. C. (1970). Differential weighting: A review of methods and empirical studies. Review of Educational Research, 40, 663–705.