
Posted on 15-Jan-2022


In K-12 education, the driving force behind every aspect of our work is student growth, and data and technology are coming together in unique ways to empower student achievement like never before.

We’ve seen this through learning innovations in the classroom, and at the administrative level in data-based tools such as teacher candidate assessments. Educator assessment tools have a unique ability to provide valuable insights on a candidate’s likely influence on student achievement, and identify supports that can help good teachers become great — but not all tools are created equal.

Education leaders can identify which assessment tools actually deliver accurate, effective and actionable hiring insights that support student growth by determining how each assessment tool performs in four key areas:

1. Research base
2. Predictive validity
3. Continuous learning
4. Reporting functionality

The research methodology behind most teacher candidate assessment tools focuses on traditional approaches from the field of industrial psychology with a heavy emphasis on content validity.

Explained in a very simplistic fashion, content validity is essentially asking principals to identify “good” teachers, then asking questions about the qualities that make a good teacher, identifying themes, and building an assessment around those themes. These assessments produce “results” based on perception — as opposed to a candidate’s measurable influence on actual student growth.

A meta-analysis of 40 years of research studies focused on these content validity-based tools concludes that there is a weak relationship between scores on these instruments and student achievement. Instead, these tools “reflect a teacher’s ability to be liked by their administrator” (p. 931) more so than their teaching ability (Metzger & Wu, 2008).

Contrast this with the research base used by tools such as TalentEd’s Educators Professional Inventory (EPI). As the only existing teacher candidate assessment solution that defines teacher effectiveness in a scientific, inductive manner, the EPI measures and predicts teacher effectiveness based on quantifiable data, rather than highly subjective notions or a priori conceptions of what constitutes an effective teacher.

To build the EPI based on meaningful data, a research consortium — made up of individuals and organizations with expertise in research, education, psychometrics, and predictive analytics — began with an extensive review of the literature on teacher effectiveness over the last several decades. Thousands of studies were considered when building a working definition of teacher effectiveness as the combination of experiences, dispositions, understandings, general abilities, skills and performances that generate relatively greater gains in student achievement.

So, rather than assessing teacher candidates based on their perceived influence on student growth, the EPI measures qualities and prerequisites known to influence student achievement, and uses those measurements to predict a prospective teacher’s effectiveness.

1. Research Base

Here’s how it works: Based on an extensive review of predictors and outcomes that empirical research, educational theory, and professional standards have deemed significant, the consortium identified four primary domains that predict teacher effectiveness:

1) Grasp of professional practices, instructional style, and teaching skills
2) General cognitive ability
3) Attitudinal dispositions
4) Attributes or qualifications

The EPI contains items grouped into the first three domains listed above — general cognitive ability, attitudinal dispositions, and grasp of professional practices, instructional style and teaching skills — while an applicant tracking system identifies qualifications.

Candidates answer questions on the assessment; the EPI then measures the correlation between the responses to assessment items and the likelihood that the teacher will generate a particular degree of student achievement, measured against what we expect as “normal” student achievement in a given academic year.
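As a rough illustration of this idea (the EPI’s actual model is proprietary; the function, weights, and scale below are invented for the sketch), a predictive score of this kind can be thought of as a weighted combination of scored item responses, where the weights are estimated from historical teacher and student-growth data:

```python
# Hypothetical sketch only: maps scored item responses to a predicted
# student-growth figure. All names, weights, and units are illustrative
# assumptions, not the EPI's actual (proprietary) model.

def predict_growth(responses, weights, baseline=0.0):
    """Predict a candidate's expected student growth (0.0 = growth of
    the average teacher) as a weighted sum of scored item responses.
    `weights` stand in for per-item coefficients estimated from
    historical teacher/student-growth data."""
    return baseline + sum(w * r for w, r in zip(weights, responses))

# Example: three items scored 0-1, with made-up weights.
weights = [0.4, 0.25, 0.35]   # hypothetical learned coefficients
candidate = [1.0, 0.5, 0.0]   # one candidate's scored responses
print(round(predict_growth(candidate, weights), 3))  # 0.525
```

The point of the sketch is the shape of the computation, not the numbers: item responses go in, a single growth-referenced prediction comes out.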

In any hiring assessment tool, the predictive ability of the instrument is its most important feature, and one where many tools fall short. In fact, the EPI is the only tool available in the K-12 education market that uses a predictability model based on which teachers will help students learn more. It’s also the only tool to be validated using predictive analytics based on student growth in districts across the nation.

In speaking with K-12 leaders who have used various candidate assessment tools, I commonly hear phrases such as, “We have a robust screening process but no idea if they actually help students learn more. We’ve never tied it back to student learning before.”

Rather than speculate why other tools fall short in predictive validity, let’s look at the research and technology that deliver accurate EPI predictions.

The EPI was designed to use item responses to predict individual teachers’ value-added gains on high-quality student assessments, including the computer-adaptive NWEA Measures of Academic Progress (MAP) test. The primary function of the EPI is to predict the future success of the teacher in this regard, so K-12 leaders have the necessary insights to make strategic hiring decisions based on quantifiable growth in student achievement.

The first phase of EPI research involved 700 assessment items of varying types — true/false, multiple choice, rank order, etc. — that were tagged to four domains and various sub-domains created by a group of 24 subject matter experts from the field of education.

Assessment items were then deployed in varying combinations to thousands of teachers from more than a dozen states. We chose a national sample that was deliberately representative of the nation across multiple indicators.
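A sketch of how a 700-item pool might be deployed “in varying combinations” across a pilot sample — the form size, form count, and random-sampling scheme below are illustrative assumptions, not the study’s actual design:

```python
# Hypothetical sketch: spiral a large item pool across overlapping
# pilot forms, so each teacher answers a manageable subset while every
# item still gets enough responses. Form size/count are assumptions.
import random

def build_forms(item_ids, form_size, n_forms, seed=0):
    """Randomly sample overlapping pilot forms from the item pool."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [rng.sample(item_ids, form_size) for _ in range(n_forms)]

pool = list(range(700))            # 700 pilot items, as in the text
forms = build_forms(pool, form_size=60, n_forms=40)
print(len(forms), len(forms[0]))   # 40 60
```

In practice a calibration study would balance item exposure deliberately (e.g., a balanced incomplete block design) rather than sampling purely at random; the sketch shows only the basic idea of varying combinations.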

2. Predictive Validity

The graphic below shows how an individual candidate’s performance on the EPI directly relates to student growth. For example, an EPI score of 50 indicates a candidate who, over an academic year, will deliver student growth equal to that of the average teacher. As teachers’ EPI scores increase, the amount of student growth also increases: a positive, statistically significant correlation.

However, it is also important to look at the big picture of how the EPI can positively influence achievement for every student at a school or district.

Analysis from the research consortium behind the EPI — which includes the Northwest Evaluation Association, the University of Chicago and other experts in the fields of research, education, psychometrics and predictive analytics — indicates that school district characteristics account for 20 percent of the variation in value-added model (VAM) scores, while teachers account for the remaining 80 percent. The individual teacher characteristics and behaviors measured by the EPI can therefore matter more than all of a district’s effects combined, such as curriculum and district socio-economic status.

With an effect size of up to 0.36, the EPI can be a powerful tool for predicting successful teachers. Over time, if a district consistently hires teachers with high EPI scores, it can expect student outcomes to outpace its current level of student achievement.

[Figure: EPI score (x-axis: 25, 50, 75, 100) plotted against student academic growth (y-axis: −2.5 to +2.5), showing the positive relationship between EPI score and growth.]
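One way to read the effect-size claim above (units and scaling here are simplifying assumptions for illustration, not EPI documentation): if a district’s hires average some number of standard deviations above the mean on the predictor, expected student growth shifts by roughly 0.36 times that amount, in standard-deviation units.

```python
# Back-of-the-envelope reading of an effect size of 0.36.
# Illustrative assumption: both predictor and outcome are expressed
# in standard-deviation (SD) units.

def expected_growth_shift(effect_size, selection_sd):
    """Expected shift in student growth (SD units) when hired teachers
    average `selection_sd` SDs above the mean on the predictor."""
    return effect_size * selection_sd

# Hiring teachers who average 1 SD above the mean EPI score:
print(expected_growth_shift(0.36, 1.0))   # 0.36
# A more modest +0.5 SD hiring shift:
print(expected_growth_shift(0.36, 0.5))   # 0.18
```

This is why the text frames the benefit as cumulative: even a modest selection shift, applied consistently across hiring cohorts, compounds into district-level gains.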

3. Continuous Learning

Static, narrowly constructed pre-employment assessments fail to provide education leaders with the up-to-date information and timely intelligence required to make increasingly insightful certifying and hiring decisions over time. In other words, the predictive effectiveness of most assessment tools doesn’t improve with use, and may even decline.

But the continuous learning methods built into the EPI link student achievement data (and eventually supervisor evaluations and student ratings) to post-employment results, establishing a tightly knit, contemporary feedback loop that allows the tool to become more effective over time.

Our research consortium automates the real-time improvement of the EPI’s predictive ability using machine learning techniques that analyze context on the fly, enabling the EPI to learn more the longer it is used in a setting.

These real-time analytics can generate scores customized to a particular county, zip code, district or school; they also learn from the performance of those who were hired using the EPI.

This is accomplished using data loops that connect pre-employment EPI responses to post-employment performance, as represented by value-added measures of test growth plus supervisor evaluations and student ratings, if desired. This provides regular access to a rich field of criterion data that enables the research consortium and TalentEd to revise and refine the EPI over time, optimizing its strongest items and incrementally increasing its predictive validity.
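A minimal sketch of such a feedback loop, assuming a simple gradient-descent refit in place of the consortium’s actual (unpublished) machine-learning pipeline; all names, data, and hyperparameters are hypothetical:

```python
# Hypothetical sketch of the pre-hire -> post-hire data loop: as
# value-added outcomes arrive for hired teachers, refit the item
# weights so future predictions improve. The gradient-descent refit
# stands in for whatever the consortium actually uses.

def refit_weights(responses, outcomes, lr=0.1, epochs=500):
    """Refit per-item weights by stochastic gradient descent on the
    squared error between predicted and observed value-added growth."""
    w = [0.0] * len(responses[0])
    for _ in range(epochs):
        for x, y in zip(responses, outcomes):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# Two hired teachers' scored responses and their observed growth:
X = [[1.0, 0.0], [0.0, 1.0]]
y = [0.4, 0.2]
w = refit_weights(X, y)
print([round(wi, 2) for wi in w])   # [0.4, 0.2]
```

The design point is the loop itself: each hiring cycle adds criterion data, and each refit folds that data back into the predictor, which is what lets the tool improve with use rather than decay.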

4. Reporting Functionality

Just as data-based assessments are only as accurate as the research they’re based on, education leaders’ decision-making abilities can only improve when they have access to meaningful, actionable and easy-to-understand assessment data.

The EPI captures thousands of data points in an ongoing manner. As a quintessential big data provider, TalentEd works with informatics partners and employs best practices in big data analytics to create unique EPI user dashboards and report functionality.

The EPI’s custom reports and dashboards deliver powerful analytics for senior leadership teams, human capital departments, principals and teacher candidates … all customized for the data and analyses that matter most for each stakeholder’s decision-making.

TalentEd’s research consortium is the only one to bring together student data, teacher candidate assessment data, and state, district, and school context/demographic data in one place, demonstrating our commitment to ensuring our reports serve each user in a manner that maximizes their effectiveness within their unique role and area of responsibility. That means more than just facts and figures.

In addition to quick-view dashboards with drill-down functionality for a deeper look at key metrics, the EPI provides:

• The Candidate Grid, which identifies top candidates for interviewing.

• The Interview Guide, which delivers custom interview questions — based on a candidate’s assessment answers — to help principals objectively interview candidates and better understand their unique strengths and weaknesses. The EPI is one of the first assessment tools to deliver this feature.

• The Professional Development Profile, a custom report for teacher candidates designed to help them improve skills and become more effective educators.

• Specialized analytics dashboards, including a Superintendent dashboard that makes it easy to monitor macro-trends in the teacher workforce, and a Professional Development dashboard that helps education leaders identify strengths and weaknesses across their hiring cohorts.

By providing ever-deepening insights into teacher effectiveness and incorporating data-driven refinements into assessment technology, TalentEd’s EPI delivers a unique, cost-effective, research-based instrument that can improve hiring practices, enhance talent acquisition, and increase student achievement.

To learn more about how the EPI makes data-based decision-making efficient and powerful, visit www.talentedk12.com/epi/

www.talentedk12.com

by PeopleAdmin

Nick Montgomery is the Chief Research Officer for PeopleAdmin. With over 15 years of experience across education research and computer science, Nick brings a wide breadth of knowledge and insight to solving education problems with data. His background includes building organizations and providing research-based school improvement data tools to over 4,000 schools nationwide. Nick holds a Master of Arts in education research from the University of Michigan and a Bachelor of Arts in computer science from Brown University.