
One

OVERVIEW

The Cross-Battery Assessment (XBA) approach (hereafter referred to as the XBA approach) was introduced by Flanagan and her colleagues in the late 1990s (Flanagan & McGrew, 1997; Flanagan, McGrew, & Ortiz, 2000; McGrew & Flanagan, 1998). The XBA approach provides practitioners with the means to make systematic, valid, and up-to-date interpretations of intelligence batteries and to augment them with other tests (e.g., academic ability tests) in a way that is consistent with the empirically supported Cattell-Horn-Carroll (CHC) theory of cognitive abilities. Moving beyond the boundaries of a single intelligence test kit by adopting the psychometrically and theoretically defensible XBA principles and procedures represents a significantly improved method of measuring cognitive abilities (Carroll, 1998; Kaufman, 2000).

According to Carroll (1997), the CHC taxonomy of human cognitive abilities "appears to prescribe that individuals should be assessed with respect to the total range of abilities the theory specifies" (p. 129, emphasis added). However, because Carroll recognized that "any such prescription would of course create enormous problems," he indicated that "[r]esearch is needed to spell out how the assessor can select what abilities need to be tested in particular cases" (p. 129). Flanagan and colleagues' XBA approach was developed specifically to "spell out" how practitioners can conduct assessments that approximate the total range of broad and narrow cognitive abilities more adequately than what is possible with a single intelligence battery. In a review of the XBA approach, Carroll (1998) stated that it "can be used to develop the most appropriate information about an individual in a given testing situation" (p. xi). In Kaufman's (2000) review of the XBA approach, he stated that the approach is based on sound assessment principles, adds theory to psychometrics, and improves the quality of the assessment and interpretation of cognitive abilities and processes.

Noteworthy is the fact that the "crossing" of batteries is not a new method of intellectual assessment. Neuropsychological assessment has long adopted the practice of crossing various standardized tests in an attempt to measure a broader range of brain functions than that offered by any single instrument (Lezak, 1976, 1995). Nevertheless, several problems with crossing batteries have plagued assessment-related fields for years. Many of these problems have been circumvented by Flanagan and colleagues' XBA approach (see Rapid Reference 1.1 for examples).

Unlike the XBA model, the various so-called "cross-battery" techniques applied within the field of neuropsychological assessment, for example, are not grounded in a systematic approach that is both psychometrically and theoretically defensible. Thus, as Wilson (1992) cogently pointed out, the field of neuropsychological assessment is in need of an approach that would guide practitioners through the selection of measures that would result in more specific and delineated patterns of function and dysfunction—an approach that provides more clinically useful information than one that is "wedded to the utilization of subscale scores and IQs" (p. 382). Indeed, all fields involved in the assessment of cognitive functioning have some need for an approach that would aid practitioners in their attempt to "touch all of the major cognitive areas, with emphasis on those most suspect on the basis of history, observation, and on-going test findings" (Wilson, 1992, p. 382). The XBA approach represents a quantum leap in this direction. Recently, other researchers appear to be offering recommendations similar to those inherent in the XBA approach (e.g., Dehn, 2006; Fiorello & Hale, 2006). The definition of XBA as well as the rationale and foundations for and applications of this approach are depicted in Figure 1.1 and are described briefly in the following sections.

DEFINITION

The XBA approach is a time-efficient method of cognitive assessment that is grounded in CHC theory and research. It allows practitioners to reliably measure a wider range (or a more in-depth but selective range) of cognitive abilities / processes than that represented by a single intelligence battery. The XBA approach is based on three foundational sources of information, or three pillars.


Rapid Reference 1.1

Parallel Needs in Cognitive Assessment-Related Fields Addressed by the XBA Approach

Need within assessment-related fields(a): School Psychology, Clinical Psychology, and Neuropsychology have lagged in the development of conceptual models of the assessment of individuals. There is a need for the development of contemporary models.
Need addressed by the XBA approach: The XBA approach provides a contemporary conceptual model for measurement and interpretation of human cognitive abilities.

Need within assessment-related fields: It is likely that there is a need for events external to a field of endeavor to give impetus to new developments and real advances in that field.
Need addressed by the XBA approach: Carroll and Horn's Fluid-Crystallized theoretical models and systematic programs of research in cognitive psychology provided the impetus for the XBA approach and led to the development of better assessment instruments and procedures.

Need within assessment-related fields: There is a need for truly unidimensional assessment instruments for children and adults. Without them, valid interpretations of test scores are problematic at best. Many scale and composite measures on intelligence batteries are mixed, containing excess reliable variance associated with a construct irrelevant to the one intended for measurement and interpretation.
Need addressed by the XBA approach: The XBA approach ensures that assessments include composites or clusters that are relatively pure representations of CHC broad and narrow abilities, allowing for valid measurement and interpretation of multiple unidimensional constructs.

Need within assessment-related fields: There is a need to utilize a conceptual framework to direct any approach to assessment. This would aid in both the selection of instruments and methods, and in the interpretation of test findings.
Need addressed by the XBA approach: The XBA approach to assessment is based on CHC theory. Since this approach links all the major intelligence batteries (and a variety of other cognitive ability / processing and academic tests) to this theory, both selection of tests and interpretation of test findings are made easy.

Need within assessment-related fields: It is necessary that the conceptual framework or model underlying assessment incorporates various aspects of neuropsychological and cognitive ability function that can be described in terms of constructs that are recognized in the neuropsychological and cognitive psychology literature.
Need addressed by the XBA approach: The XBA approach incorporates various aspects of neuropsychological and cognitive ability functions that are described in terms of constructs that are recognized in the related literature.

Need within assessment-related fields: There is a need to adopt a conceptual framework that allows for the measurement of the full range of behavioral functions subserved by the brain. Unfortunately, in neuropsychological assessment there is no inclusive set of measures that is standardized on a single normative population.
Need addressed by the XBA approach: The XBA approach allows for the measurement of a wide range of broad and narrow cognitive abilities / processes specified in contemporary CHC theory. Although an XBA norm group does not exist, the characteristics of the normal probability curve and sound psychometric principles are used to interpret XBA data effectively.

Need within assessment-related fields: Because there are no truly unidimensional measures in psychological assessment, there is a need to select subtests from standardized instruments that appear to reflect the neurocognitive function of interest. In neuropsychological assessment, the aim, therefore, is to select those measures that, on the basis of careful task analysis, appear mainly to tap a given construct.
Need addressed by the XBA approach: The XBA approach is defined by a CHC classification system. Subtests from the major intelligence batteries (and various other instruments) were classified empirically as measures of broad and narrow CHC constructs. Use of these classifications allows practitioners to be reasonably confident that a given test taps a given construct.

Need within assessment-related fields: It is clear that an eclectic approach is needed in the selection of measures, preferably subtests rather than the omnibus IQs, in order to gain more specificity in the delineation of patterns of function and dysfunction.
Need addressed by the XBA approach: The XBA approach ensures that two or more relatively pure, but qualitatively different, indicators of each broad cognitive ability / process are represented in a complete assessment of that construct. Two or more qualitatively similar indicators are necessary to make inferences about specific or narrow CHC abilities. This process is eclectic in its selection of measures, but attempts to represent all broad and narrow abilities / processes by using a subset of measures from only two batteries (that were normed within a few years of one another).

Need within assessment-related fields: There is a need to solve the potential problems that can arise from crossing normative groups as well as sets of measures that vary in reliability.
Need addressed by the XBA approach: In the XBA approach, one can typically achieve baseline data in cognitive functioning across seven or eight CHC broad abilities / processes through the use of only two well-standardized batteries, which minimizes the effects of error due to norming differences. Also, since interpretation of both broad and narrow CHC abilities / processes is made at the cluster (rather than subtest) level, issues related to low reliability are less problematic in this approach. Finally, because confidence intervals are used for all broad and narrow ability / processing clusters, the effects of measurement error are reduced further.

(a) Information obtained, in part, from Wilson (1992).

[Figure 1.1 is a flow diagram summarizing the XBA approach. Its rationale includes bridging the theory-practice gap, providing standard nomenclature, identifying cognitive processing strengths and weaknesses, improving the validity of intelligence tests, improving understanding of relations between cognitive and academic constructs, and offering a blueprint for improving upon the substantive and structural validity of tests. Its foundations are Pillar 1 (CHC Theory), Pillar 2 (Broad, Stratum II, Test Classifications), and Pillar 3 (Narrow, Stratum I, Test Classifications). Its applications span practice, research, and test development through guiding principles, a step-by-step process (e.g., select the intelligence battery that best addresses referral concerns; identify the broad and narrow CHC abilities measured by the battery; select tests, classified through an acceptable method, to measure CHC abilities not measured by the battery, drawing on the smallest number of batteries and on tests developed and normed within a few years of one another; administer the entire collection of tests in a standardized way; use clusters based on actual norms when possible; enter data into the XBA DMIA; and follow XBA interpretive guidelines), and procedures for CLD populations (e.g., review the C-LTC to select tests likely to be most fair and use the C-LIM to compare results to the expected pattern of performance; if a pattern is evident, results are invalid and cannot be interpreted further, whereas if no pattern is evident, results are valid and are interpreted via XBA guidelines).]

Figure 1.1 Overview of the XBA Approach

Note: CHC = Cattell-Horn-Carroll; C-LTC = Culture-Language Test Classifications; C-LIM = Culture-Language Interpretive Matrix; CLD = Culturally and Linguistically Diverse; XBA DMIA = Cross-Battery Assessment Data Management and Interpretive Assistant.

Page 7: OVERVIEW T COPYRIGHTED MATERIAL · 2020. 2. 20. · practitioners with the means to make systematic, valid, and up- to- date in- ... 2 ESSENTIALS OF CROSS-BATTERY ASSESSMENT ... solve

Together, these pillars (described later in this chapter) provide the knowledge base necessary to organize theory-driven, comprehensive, reliable, and valid assessment of cognitive abilities / processes.
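To make the pillar idea concrete, the following sketch (hypothetical Python written as an illustration; it is not part of the published XBA materials or the XBA DMIA software) represents broad-ability test classifications as a simple lookup from (battery, subtest) to the broad CHC ability measured, and reports a battery's broad-ability coverage. The few entries shown are taken from the WISC-III column assignments in Rapid Reference 1.2; the actual XBA classification system covers hundreds of subtests.

# Hypothetical sketch of an XBA-style classification lookup (illustrative only).
# Subtest -> broad CHC ability, using WISC-III classifications from Rapid Reference 1.2.
BROAD_CLASSIFICATIONS = {
    ("WISC-III", "Vocabulary"): "Gc",
    ("WISC-III", "Information"): "Gc",
    ("WISC-III", "Block Design"): "Gv",
    ("WISC-III", "Object Assembly"): "Gv",
    ("WISC-III", "Digit Span"): "Gsm",
    ("WISC-III", "Symbol Search"): "Gs",
    ("WISC-III", "Coding"): "Gs",
}

CHC_BROAD_ABILITIES = ["Gf", "Gc", "Gv", "Gsm", "Glr", "Ga", "Gs"]

def broad_coverage(battery: str) -> dict:
    """Return each broad CHC ability with the subtests of `battery` classified under it."""
    coverage = {ability: [] for ability in CHC_BROAD_ABILITIES}
    for (bat, subtest), ability in BROAD_CLASSIFICATIONS.items():
        if bat == battery:
            coverage[ability].append(subtest)
    return coverage

if __name__ == "__main__":
    for ability, subtests in broad_coverage("WISC-III").items():
        print(f"{ability}: {', '.join(subtests) if subtests else '—'}")
    # Gf, Glr, and Ga print as '—', mirroring the dashes in Rapid Reference 1.2.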

RATIONALE FOR THE XBA APPROACH

The XBA approach has significant implications for practice, research, and test development. A brief discussion of these implications follows.

Practice

The XBA approach provides "a much needed and updated bridge between current intellectual theory and research and practice" (Flanagan & McGrew, 1997, p. 322). The results of several joint factor analyses conducted over the past 10+ years demonstrated that none of our intelligence batteries contained measures that sufficiently approximated the full range of broad abilities / processes that define the structure of intelligence specified in contemporary psychometric theory (e.g., Carroll, 1993; Flanagan & McGrew, 1998; Horn, 1991; Keith, Kranzler, & Flanagan, 2001; McGrew, 1997; Phelps, McGrew, Knopik, & Ford, 2005; Woodcock, 1990). Indeed, the joint factor analyses conducted by Woodcock suggested that it may be necessary to cross batteries to measure a broader range of cognitive abilities than that provided by a single intelligence battery.

A summary of the findings of the joint factor analytic studies of intelligence batteries that were published before 2000 is presented in Rapid Reference 1.2. As may be seen in this table, most batteries fell far short of measuring all seven of the broad cognitive abilities / processes listed. Of the major intelligence batteries in use prior to 2000, most failed to measure three or more broad CHC abilities (viz., Ga, Glr, Gf, and Gs) that were (and are) considered important in understanding and predicting school achievement.

DON’T FORGET

The XBA approach allows practitioners to reliably measure a wider range (or a more in-depth but selective range) of cognitive abilities / processes than that represented by a single intelligence battery.


Rapid Reference 1.2

Representation of Broad CHC Abilities / Processes on Nine Intelligence Batteries Published Prior to 2000

WISC-III
Gf: —
Gc: Vocabulary, Information, Similarities, Comprehension
Gv: Block Design, Object Assembly, Picture Arrangement, Picture Completion, Mazes
Gsm: Digit Span
Glr: —
Ga: —
Gs: Symbol Search, Coding

WAIS-R
Gf: —
Gc: Vocabulary, Information, Similarities, Comprehension
Gv: Block Design, Object Assembly, Picture Completion, Picture Arrangement
Gsm: Digit Span
Glr: —
Ga: —
Gs: Digit-Symbol

WPPSI-R
Gf: —
Gc: Vocabulary, Information, Similarities, Comprehension
Gv: Block Design, Object Assembly, Picture Completion, Mazes, Geometric Design
Gsm: Sentences
Glr: —
Ga: —
Gs: Animal Pegs

KAIT
Gf: Mystery Codes, Logical Steps
Gc: Definitions, Famous Faces, Auditory Comprehension, Double Meanings
Gv: Memory for Block Designs
Gsm: —
Glr: Rebus Learning, Rebus Delayed Recall, Auditory Delayed Recall
Ga: —
Gs: —

K-ABC
Gf: Matrix Analogies
Gc: —
Gv: Triangles, Face Recognition, Gestalt Closure, Magic Window, Hand Movements, Spatial Memory, Photo Series
Gsm: Number Recall, Word Order
Glr: —
Ga: —
Gs: —

CAS
Gf: —
Gc: —
Gv: Figure Memory, Verbal Spatial Relations, Nonverbal Matrices
Gsm: Word Series, Sentence Repetition, Sentence Questions
Glr: —
Ga: —
Gs: Matching Numbers, Receptive Attention, Planned Codes, Number Detection, Planned Connections, Expressive Attention

DAS
Gf: Matrices, Picture Similarities, Sequential and Quantitative Reasoning
Gc: Similarities, Verbal Comprehension, Word Definitions, Naming Vocabulary
Gv: Pattern Construction, Block Building, Copying, Matching Letter-Like Forms, Recall of Designs, Recognition of Pictures
Gsm: Recall of Digits
Glr: Recall of Objects
Ga: —
Gs: Speed of Information Processing

WJ-R
Gf: Concept Formation, Analysis-Synthesis
Gc: Oral Vocabulary, Picture Vocabulary, Listening Comprehension, Verbal Analogies
Gv: Spatial Relations, Picture Recognition, Visual Closure
Gsm: Memory for Words, Memory for Sentences, Numbers Reversed
Glr: Memory for Names, Visual-Auditory Learning, Delayed Recall: Memory for Names, Delayed Recall: Visual-Auditory Learning
Ga: Incomplete Words, Sound Blending, Sound Patterns
Gs: Visual Matching, Cross Out

SB:FE
Gf: Matrices, Equation Building, Number Series
Gc: Verbal Relations, Comprehension, Absurdities, Vocabulary
Gv: Pattern Analysis, Bead Memory, Copying, Memory for Objects, Paper Folding & Cutting
Gsm: Memory for Sentences, Memory for Digits
Glr: —
Ga: —
Gs: —

Note: Classifications of tests in this table are based on a summary of factor analytic research conducted by Carroll (1993); Flanagan and McGrew (1998); Horn (1991); Keith, Kranzler, and Flanagan (2001); McGrew (1997); and Woodcock (1990). WISC-III = Wechsler Intelligence Scale for Children–Third Edition (Wechsler, 1991); WAIS-R = Wechsler Adult Intelligence Scale–Revised (Wechsler, 1981); WPPSI-R = Wechsler Preschool and Primary Scale of Intelligence–Revised (Wechsler, 1989); KAIT = Kaufman Adolescent and Adult Intelligence Test (Kaufman & Kaufman, 1993); K-ABC = Kaufman Assessment Battery for Children (Kaufman & Kaufman, 1983); CAS = Cognitive Assessment System (Das & Naglieri, 1997); DAS = Differential Ability Scales (Elliott, 1990); WJ-R = Woodcock-Johnson Psycho-Educational Battery–Revised (Woodcock & Johnson, 1989); SB:FE = Stanford-Binet Intelligence Scale–Fourth Edition (Thorndike, Hagen, & Sattler, 1986). Reproduced with permission from Guilford.


In fact, Gf, often considered to be the essence of intelligence, was either not measured or not measured adequately by most of the intelligence batteries included in Rapid Reference 1.2 (i.e., WISC-III, WAIS-R, WPPSI-R, K-ABC, and CAS; Alfonso, Flanagan, & Radwan, 2005).

The finding that the abilities not measured by the intelligence batteries listed in Rapid Reference 1.2 are important in understanding children's learning difficulties provided the impetus for developing the XBA approach (Flanagan & McGrew, 1997). In effect, the XBA approach was developed to systematically replace the dashes in Rapid Reference 1.2 with tests from another battery. As such, this approach guides practitioners in the selection of tests, both core and supplemental, that together provide measurement of abilities / processes that is considered sufficient in both breadth and depth for the purpose of addressing referral concerns.
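A minimal sketch of this gap-filling logic appears below (hypothetical Python; it is an illustration, not the XBA DMIA). Given the broad abilities a core battery already measures and broad-ability classifications for candidate supplemental subtests, it chooses supplements for the unmeasured abilities while drawing on as few additional batteries as possible, consistent with the XBA emphasis on crossing only a small number of batteries. The example coverage and classifications are taken from Rapid Reference 1.2.

# Hypothetical illustration of filling broad-ability gaps from another battery (not the XBA DMIA).
CHC_BROAD = ["Gf", "Gc", "Gv", "Gsm", "Glr", "Ga", "Gs"]

# Broad abilities measured by a core battery, per Rapid Reference 1.2 (WISC-III).
CORE = {"Gc", "Gv", "Gsm", "Gs"}

# Candidate supplemental subtests, keyed by (battery, broad ability); entries are illustrative,
# drawn from the WJ-R column assignments in Rapid Reference 1.2.
SUPPLEMENTS = {
    ("WJ-R", "Gf"): ["Concept Formation", "Analysis-Synthesis"],
    ("WJ-R", "Glr"): ["Memory for Names", "Visual-Auditory Learning"],
    ("WJ-R", "Ga"): ["Incomplete Words", "Sound Blending"],
}

def fill_gaps(core_abilities, supplements):
    """Choose supplemental subtests for broad abilities the core battery leaves unmeasured.

    Prefers batteries that cover many gaps so that as few batteries as possible are crossed.
    """
    missing = [a for a in CHC_BROAD if a not in core_abilities]
    plan = {}
    while missing:
        # Count how many remaining gaps each candidate battery can fill.
        per_battery = {}
        for (battery, ability) in supplements:
            if ability in missing:
                per_battery.setdefault(battery, []).append(ability)
        if not per_battery:
            break  # some gaps cannot be filled from the available pool
        best = max(per_battery, key=lambda b: len(per_battery[b]))
        for ability in per_battery[best]:
            plan[ability] = (best, supplements[(best, ability)])
            missing.remove(ability)
    return plan

if __name__ == "__main__":
    for ability, (battery, subtests) in fill_gaps(CORE, SUPPLEMENTS).items():
        print(f"{ability}: supplement with {', '.join(subtests)} from {battery}")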

Another benefit of the XBA approach is that it facilitates communication among professionals. Most scientific disciplines have a standard nomenclature (i.e., a common set of terms and definitions) that facilitates communication and guards against misinterpretation. For example, the standard nomenclature in chemistry is reflected in the Periodic Table; in biology, it is reflected in the classification of animals according to phyla; in psychology and psychiatry, it is reflected in the Diagnostic and Statistical Manual of Mental Disorders; and in medicine, it is reflected in the International Classification of Diseases. Underlying the XBA approach is a standard nomenclature, or Table of Human Cognitive Abilities, that includes classifications of over 500 tests according to the broad and narrow CHC abilities / processes they measure (see also Alfonso et al., 2005; Flanagan & Ortiz, 2001; Flanagan, McGrew, & Ortiz, 2000; Flanagan, Ortiz, Alfonso, & Mascolo, 2002, 2006). The XBA classification system has had a positive impact on communication among practitioners, has improved research on the relations between cognitive and academic constructs, and has resulted in substantial improvements in the measurement of cognitive constructs, as may be seen in the design and structure of current intelligence batteries (e.g., WJ III, KABC-II, DAS-II, SB5).

DON'T FORGET

The XBA approach guides practitioners in the selection of tests, both core and supplemental, that together provide measurement of abilities / processes that is considered sufficient in both breadth and depth for the purpose of addressing referral concerns.

Finally, the XBA approach offers practitioners a psychometrically defensible means to identifying population-relative (or normative) strengths and weaknesses in cognitive abilities / processes. According to Brackett and McPherson (1996), "the limited capacity of standardized instruments to assess isolated cognitive processes creates a major weakness in intracognitive discrepancy models. Although analysis of [Wechsler] subtests typically report measures of distinct cognitive abilities, such abilities may not emerge by individual subtests but rather in combination with other subtests" (p. 79). The XBA approach addresses this limitation. By focusing interpretations on cognitive ability clusters (i.e., via combinations of construct-relevant subtests) that contain qualitatively different indicators of each broad CHC cognitive ability / process, the identification of normative processing strengths and weaknesses via XBA procedures is both psychometrically defensible and theoretically sound. In sum, the XBA approach addresses the longstanding need within the entire field of assessment, from learning disabilities to neuropsychological assessment, for methods that "provide a greater range of information about the ways individuals learn—the ways individuals receive, store, integrate, and express information" (Brackett & McPherson, p. 80). Because current intelligence tests provide a broader range of information than their predecessors, it is not surprising that results of recent studies demonstrated that specific cognitive abilities / processes explain significant variance in academic outcomes (e.g., reading achievement) above and beyond the variance accounted for by g (e.g., Floyd, Keith, Taub, & McGrew, 2006; Vanderwood, McGrew, Flanagan, & Keith, 2002).
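As a rough illustration of cluster-level interpretation, the sketch below (hypothetical Python) averages two qualitatively different indicators of a broad ability into a cluster score on the standard-score metric (mean 100, SD 15), attaches an approximate 95% confidence interval using the standard error of measurement, and labels clusters that fall outside the average range as normative weaknesses or strengths. The composite reliability of .90 and the 85–115 average band are assumptions made for the example, not values prescribed by the XBA approach.

import math

def cluster_interval(subtest_scores, reliability=0.90, sd=15.0, z=1.96):
    """Average standard scores into a cluster and attach an approximate 95% confidence interval.

    `reliability` is an assumed composite reliability; SEM = SD * sqrt(1 - reliability).
    """
    cluster = sum(subtest_scores) / len(subtest_scores)
    sem = sd * math.sqrt(1.0 - reliability)
    return cluster, (cluster - z * sem, cluster + z * sem)

def normative_label(cluster, average_band=(85.0, 115.0)):
    """Label a cluster relative to an assumed normative average range of 85-115."""
    low, high = average_band
    if cluster < low:
        return "normative weakness"
    if cluster > high:
        return "normative strength"
    return "within the average range"

if __name__ == "__main__":
    # Two qualitatively different indicators of one broad ability (standard scores).
    gf_cluster, ci = cluster_interval([82, 86])
    print(f"Gf cluster = {gf_cluster:.0f}, 95% CI = {ci[0]:.0f}-{ci[1]:.0f}: {normative_label(gf_cluster)}")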

Research

DON'T FORGET

The XBA approach offers practitioners a psychometrically defensible means to identifying population-relative (or normative) strengths and weaknesses in cognitive abilities / processes.

The XBA approach was also developed to promote a greater understanding of the relationship between cognitive abilities and important outcome criteria. Because XBAs are based on the empirically supported CHC theory and constructed in a psychometrically defensible manner, they represent a valid means of measuring cognitive constructs (Flanagan, 2000; Phelps et al., 2005). It is noteworthy that when second-order constructs are composed of (moderately) correlated but qualitatively distinct measures, they will tend to have higher correlations with complex criteria (e.g., academic achievement), as compared to lower-order constructs, because they are broader in what they measure (Comrey, 1988). Predictive statements about different achievements (i.e., criterion-related inferences) that are made from XBA clusters are based on a more solid foundation than individual subtests (and perhaps some global scores from single intelligence batteries) because the predictor constructs are represented by relatively pure and qualitatively distinct measures of broad CHC abilities / processes. Thus, improving the validity of CHC ability measures (i.e., intelligence batteries) has further elucidated the relations between CHC cognitive abilities / processes and different achievement and vocational / occupational outcomes (e.g., Flanagan, 2000; Floyd, Bergeron, & Alfonso, 2006; Floyd, Keith, Taub, & McGrew, in press; McGrew, 1997; Vanderwood, McGrew, Flanagan, & Keith, 2002).

Test Development

Although there was substantial evidence of at least eight or nine broad cognitive CHC abilities / processes by the late 1980s, the tests of the time did not reflect this diversity in measurement. For example, Rapid Reference 1.2 shows that the WISC-III, WPPSI-R, K-ABC, KAIT, WAIS-R, and CAS batteries only measured two or three broad CHC abilities / processes adequately. The Wechslers primarily measured Gv and Gc. The K-ABC primarily measured Gv and Gsm, and to a much lesser extent Gf, while the KAIT primarily measured Gc, Gf, and Glr, and to a much lesser extent Gv. The CAS measured Gs, Gsm, and Gv. Finally, while the DAS and SB:FE did not provide sufficient coverage of abilities to narrow the gap between contemporary theory and practice, their comprehensive measurement of approximately four CHC abilities was nonetheless an improvement over the previously mentioned batteries. Rapid Reference 1.2 shows that only the WJ-R included measures of all broad cognitive abilities as compared to the other batteries available at that time. Nevertheless, most of the broad abilities were not measured adequately by the WJ-R (Alfonso et al., 2005; McGrew & Flanagan, 1998).

In general, Rapid Reference 1.2 shows that Gf, Gsm, Glr, Ga, and Gs were not measured well by the majority of intelligence batteries published prior to 2000. Therefore, it is clear that most test authors did not use contemporary psychometric theories of the structure of cognitive abilities to guide the development of their intelligence batteries. As such, a substantial theory-practice gap existed—that is, theories of the structure of cognitive abilities were far in advance of commonly used intelligence batteries. In fact, prior to the mid-1980s, theory seldom played a role in intelligence test development. The numerous dashes in Rapid Reference 1.2 exemplify the "theory-practice gap" that existed in the field of intellectual assessment at that time (Alfonso et al., 2005).

In the past decade, Gf-Gc theory, and more recently CHC theory, has had a significant impact on the revision of old and the development of new intelligence batteries. For example, a wider range of broad and narrow abilities / processes is represented on current intelligence batteries than that which was represented on previous editions of these tests. Rapid Reference 1.3 provides several salient examples of the impact that CHC theory and XBA CHC test classifications have had on intelligence test development in recent years. This rapid reference lists the major intelligence tests in the order in which they were revised, beginning with those tests with the greatest number of years between revisions (i.e., K-ABC) and ending with newly developed tests (i.e., RIAS and WRIT) and tests that have yet to be revised (e.g., CAS). As is obvious from a review of Rapid Reference 1.3, CHC theory and XBA CHC test classifications have had a significant impact on recent test development (Alfonso et al., 2005).

Of the seven intelligence batteries (including both comprehensive and brief measures) that were published since 2000, the test authors of four clearly used CHC theory and XBA CHC test classifications as a blueprint for test development (i.e., WJ III, SB5, KABC-II, and DAS-II), and the test authors of two were obviously influenced by CHC theory (i.e., RIAS and WRIT). Only the authors of the Wechsler Scales (i.e., WPPSI-III, WISC-IV, WAIS-III) and CAS did not state explicitly that CHC theory was used as a guide for revision.1 Nevertheless, the authors of the Wechsler Scales acknowledged the research of Cattell, Horn, and Carroll in their most recent manuals (Wechsler, 2002, 2003).

1 Das and Naglieri developed the CAS from PASS theory; therefore, their test is based on an information-processing theory, rather than any specific theory within the psychometric tradition.


Rapid Reference 1.3

Impact of CHC Theory and XBA CHC Test Classifications on Intelligence Test Development

Test (Year of Publication) / Revision (Year of Publication)

K-ABC (1983): No obvious impact.
KABC-II (2004): Provided a second global score that included fluid and crystallized abilities; included several new subtests measuring reasoning; interpretation of test performance may be based on CHC theory or Luria's theory; provided assessment of five CHC broad abilities / processes. Three additional CHC broad abilities / processes are measured by its counterpart, the KTEA-II, representing 8 broad CHC abilities / processes across these co-normed instruments.

SB:FE (1986): Used a three-level hierarchical model of the structure of cognitive abilities to guide construction of the test: the top level included a general reasoning factor, or g; the middle level included three broad factors called crystallized abilities, fluid-analytic abilities, and short-term memory; the third level included more specific factors including verbal reasoning, quantitative reasoning, and abstract / visual reasoning.
SB5 (2003): Used CHC theory to guide test development; increased the number of broad factors from 4 to 6; included a Working Memory Factor based on research indicating its importance for academic success.

WAIS-R (1981): No obvious impact.
WAIS-III (1997): Enhanced the measurement of fluid reasoning by adding the Matrix Reasoning subtest; included four indexes that measure specific abilities / processes more purely than the traditional IQs provided in the various Wechsler Scales; included a Working Memory Index based on recent research indicating its importance for academic success.

WPPSI-R (1989): No obvious impact.
WPPSI-III (2002): Incorporated measures of Processing Speed that yielded a Processing Speed Quotient based on recent research indicating the importance of Processing Speed for early academic success; enhanced the measurement of fluid reasoning by adding the Matrix Reasoning and Picture Concepts subtests.

WJ-R (1989): Used modern Gf-Gc theory as the cognitive model for test development; included two measures of each of eight broad abilities / processes.
WJ III (2001): Used CHC theory and XBA CHC test classifications as a "blueprint" for test development; included two or three qualitatively different narrow abilities for each broad ability; the combined cognitive and achievement batteries of the WJ III include 9 of the 10 broad abilities subsumed in CHC theory.

WISC-III (1991): No obvious impact.
WISC-IV (2003): Eliminated Verbal and Performance IQs; replaced the Freedom from Distractibility Index with the Working Memory Index; replaced the Perceptual Organization Index with the Perceptual Reasoning Index; enhanced the measurement of fluid reasoning by adding the Matrix Reasoning and Picture Concepts subtests; enhanced the measurement of Processing Speed with the addition of the Cancellation subtest.

DAS (1990): No obvious impact.
DAS-II (2006): CHC theory was used as a guide for the revision of this battery. As a result, the DAS-II measures aspects of seven broad cognitive abilities / processes.

RIAS (2003): Included indicators of fluid and crystallized abilities.

WRIT (2002): Developed to be consistent with current theories of intelligence; provided Fluid and Crystallized IQs based on the Cattell-Horn Gf-Gc theory.

CAS (1997): No obvious impact; based on PASS theory.

KAIT (1993): Included subtests organized according to the work of Horn and Cattell; provided Fluid and Crystallized IQs.

Note: K-ABC = Kaufman Assessment Battery for Children (Kaufman & Kaufman, 1983); KABC-II = Kaufman Assessment Battery for Children–Second Edition (Kaufman & Kaufman, 2004); SB:FE = Stanford-Binet Intelligence Scale–Fourth Edition (Thorndike, Hagen, & Sattler, 1986); SB5 = Stanford-Binet Intelligence Scales–Fifth Edition (Roid, 2003); WAIS-R = Wechsler Adult Intelligence Scale–Revised (Wechsler, 1981); WAIS-III = Wechsler Adult Intelligence Scale–Third Edition (Wechsler, 1997); WPPSI-R = Wechsler Preschool and Primary Scale of Intelligence–Revised (Wechsler, 1989); WPPSI-III = Wechsler Preschool and Primary Scale of Intelligence–Third Edition (Wechsler, 2002); WJ-R = Woodcock-Johnson Psycho-Educational Battery–Revised (Woodcock & Johnson, 1989); WJ III = Woodcock-Johnson III Tests of Cognitive Abilities (Woodcock, McGrew, & Mather, 2001); WISC-III = Wechsler Intelligence Scale for Children–Third Edition (Wechsler, 1991); WISC-IV = Wechsler Intelligence Scale for Children–Fourth Edition (Wechsler, 2003); RIAS = Reynolds Intellectual Assessment Scales (Reynolds & Kamphaus, 2003); WRIT = Wide Range Intelligence Test (Glutting, Adams, & Sheslow, 2002); CAS = Cognitive Assessment System (Das & Naglieri, 1997); KAIT = Kaufman Adolescent and Adult Intelligence Test (Kaufman & Kaufman, 1993); DAS = Differential Ability Scales (Elliott, 1990); DAS-II = Differential Ability Scales–Second Edition (Elliott, 2007). Reproduced with permission from Guilford.


Presently, as Rapid Reference 1.3 shows, nearly all comprehensive, individually administered intelligence batteries that are used with some regularity subscribe either explicitly or implicitly to CHC theory (Alfonso et al., 2005; Flanagan et al., 2006).

Convergence toward the incorporation of CHC theory is also seen clearly in Rapid Reference 1.4. This table is identical to Rapid Reference 1.2 except it includes all intelligence batteries that were published after 2000, including recent revisions of many of the tests from Rapid Reference 1.2. A comparison of Rapid Reference 1.2 and Rapid Reference 1.4 shows that many of the gaps in measurement of broad cognitive abilities have been filled. Specifically, the majority of tests published after 2000 now measure four or five broad cognitive abilities adequately (see Rapid Reference 1.4) as compared to two or three (see Rapid Reference 1.2). For example, Rapid Reference 1.4 shows that the WISC-IV, WAIS-III, WPPSI-III, KABC-II, SB5, and DAS-II measure four or five CHC broad abilities. The WISC-IV measures Gf, Gc, Gv, Gsm, and Gs while the KABC-II measures Gf, Gc, Gv, and Glr adequately, and to a lesser extent Gsm. The WAIS-III measures Gc, Gv, Gsm, and Gs adequately, and to a lesser extent Gf, while the WPPSI-III measures Gf, Gc, Gv, and Gs adequately. Finally, the SB5 measures four CHC broad abilities adequately (i.e., Gf, Gc, Gv, Gsm; Alfonso et al., 2005) and the DAS-II measures five CHC broad abilities adequately (i.e., Gf, Gc, Gv, Gsm, and Glr) and to a lesser extent, Ga and Gs.

Rapid Reference 1.4 shows that the WJ III continues to include measures of all the major broad cognitive abilities / processes and now measures them well, particularly when it is used in conjunction with the Diagnostic Supplement (DS; Woodcock, McGrew, Mather, & Schrank, 2003). Third, a comparison of Rapid References 1.2 and 1.4 indicates that two broad abilities / processes not measured by many intelligence batteries prior to 2000 are now measured by the majority of intelligence batteries available today; that is, Gf and Gsm. These broad abilities / processes may be better represented on revised and new intelligence batteries because of the accumulating research evidence regarding their importance in overall academic success (see Chapter 2). Finally, Rapid Reference 1.4 reveals that intelligence batteries continue to fall short in their measurement of three CHC broad abilities / processes; specifically, Glr, Ga, and Gs. In addition, current intelligence batteries do not provide adequate measurement of most specific or narrow CHC abilities / processes, many of which are important in predicting academic achievement.


Rapid Reference 1.4

Representation of Broad CHC Abilities / Processes on Nine Intelligence Batteries Published After 2000

WISC-IV
Gf: Matrix Reasoning, Picture Concepts
Gc: Vocabulary, Information, Similarities, Comprehension, Word Reasoning
Gv: Block Design, Picture Completion
Gsm: Digit Span, Letter-Number Sequencing
Glr: —
Ga: —
Gs: Symbol Search, Coding, Cancellation

WAIS-III(a)
Gf: Matrix Reasoning
Gc: Vocabulary, Information, Similarities, Comprehension
Gv: Block Design, Object Assembly, Picture Arrangement, Picture Completion
Gsm: Digit Span, Letter-Number Sequencing
Glr: —
Ga: —
Gs: Symbol Search, Digit-Symbol Coding

WPPSI-III
Gf: Matrix Reasoning, Picture Concepts
Gc: Vocabulary, Information, Similarities, Comprehension, Receptive Vocabulary, Picture Naming, Word Reasoning
Gv: Block Design, Object Assembly, Picture Completion
Gsm: —
Glr: —
Ga: —
Gs: Coding, Symbol Search

KABC-II
Gf: Pattern Reasoning, Story Completion
Gc: Expressive Vocabulary, Verbal Knowledge, Riddles
Gv: Face Recognition, Triangles, Gestalt Closure, Rover, Block Counting, Conceptual Thinking
Gsm: Number Recall, Word Order, Hand Movements
Glr: Atlantis, Rebus, Atlantis Delayed, Rebus Delayed
Ga: —
Gs: —

WJ III
Gf: Concept Formation, Analysis-Synthesis
Gc: Verbal Comprehension, General Information
Gv: Spatial Relations, Picture Recognition, Planning
Gsm: Memory for Words, Numbers Reversed, Auditory Working Memory
Glr: Visual-Auditory Learning, Retrieval Fluency, Visual-Auditory Learning Delayed, Rapid Picture Naming
Ga: Sound Blending, Auditory Attention, Incomplete Words
Gs: Visual Matching, Decision Speed, Pair Cancellation

SB5
Gf: Nonverbal Fluid Reasoning, Verbal Fluid Reasoning, Nonverbal Quantitative Reasoning, Verbal Quantitative Reasoning
Gc: Nonverbal Knowledge, Verbal Knowledge
Gv: Nonverbal Visual-Spatial Processing, Verbal Visual-Spatial Processing
Gsm: Nonverbal Working Memory, Verbal Working Memory
Glr: —
Ga: —
Gs: —

DAS-II
Gf: Matrices, Picture Similarities, Quantitative Reasoning
Gc: Early Number Concepts, Naming Vocabulary, Word Definitions, Verbal Comprehension, Verbal Similarities
Gv: Pattern Construction, Recall of Designs, Recognition of Pictures, Copying, Matching Letter-Like Forms
Gsm: Recall of Digits-Forward, Recall of Digits-Backward, Recall of Sequential Order
Glr: Verbal Memory, Rapid Naming, Recall of Objects-Immediate, Recall of Objects-Delayed
Ga: —
Gs: Speed of Information Processing

RIAS
Gf: —
Gc: Guess What, Verbal Reasoning
Gv: What's Missing, Odd-Item Out
Gsm: Nonverbal Memory, Verbal Memory
Glr: —
Ga: —
Gs: —

WRIT
Gf: Matrices
Gc: Verbal Analogies, Vocabulary
Gv: Diamonds
Gsm: —
Glr: —
Ga: —
Gs: —

Note: CHC classifications are based on the literature and primary sources such as Carroll (1993), Flanagan and Ortiz (2001), Flanagan, Ortiz, Alfonso, and Mascolo (2006), Horn (1991), Keith, Fine, Taub, Reynolds, and Kranzler (2006), McGrew (1997), and McGrew and Flanagan (1998). WISC-IV = Wechsler Intelligence Scale for Children–Fourth Edition (Wechsler, 2003); WAIS-III = Wechsler Adult Intelligence Scale–Third Edition (Wechsler, 1997); WPPSI-III = Wechsler Preschool and Primary Scale of Intelligence–Third Edition (Wechsler, 2002); KABC-II = Kaufman Assessment Battery for Children–Second Edition (Kaufman & Kaufman, 2004); WJ III = Woodcock-Johnson III Tests of Cognitive Abilities (Woodcock, McGrew, & Mather, 2001); SB5 = Stanford-Binet Intelligence Scales–Fifth Edition (Roid, 2003); DAS-II = Differential Ability Scales–Second Edition (Elliott, 2007); RIAS = Reynolds Intellectual Assessment Scales (Reynolds & Kamphaus, 2003); WRIT = Wide Range Intelligence Test (Glutting, Adams, & Sheslow, 2002).
(a) Although the WAIS-III was published in 1997, it is included in this table because its predecessor, the Wechsler Adult Intelligence Scale–Revised, was included in Rapid Reference 1.2 and in order to present all revised Wechsler Scales in one table.
Reproduced with permission from Guilford.


Thus, although there is greater coverage of CHC broad abilities / processes now than there was just a few years ago, the need for the XBA approach to assessment remains (Alfonso et al., 2005).

THE THREE PILLARS OF THE XBA APPROACH

The three pillars of the XBA approach include contemporary CHC theory and the broad and narrow CHC ability classifications of all subtests that comprise current cognitive and achievement batteries as well as numerous special purpose tests. Each pillar is defined briefly in the following sections and in Rapid Reference 1.5.

The First Pillar of the XBA Approach: CHC Theory

The CHC theory was selected to guide assessment and interpretation because it is based on a more thorough network of validity evidence than any other contemporary multidimensional model of intelligence within the psychometric tradition (see McGrew, 2005; Messick, 1992; Sternberg & Kaufman, 1998). According to Daniel (1997), the strength of the multiple (CHC) cognitive abilities model is that it was arrived at "by synthesizing hundreds of factor analyses conducted over decades by independent researchers using many different collections of tests. Never before has a psychometric ability model been so firmly grounded in data" (pp. 1042–1043). Because nearly all current intelligence batteries are based on CHC theory, it will not be described in detail in this chapter.

DON’T FORGET

Nearly all comprehensive, individually administered intelligence batteries that are used with some regularity subscribe either explicitly or implicitly to CHC theory.

Rapid Reference 1.5

Three Pillars of the XBA Approach

• The first pillar of the approach is a relatively complete taxonomic framework for describing the structure and nature of cognitive abilities. This taxonomy is the Cattell-Horn-Carroll theory of cognitive abilities (CHC theory).

• The second pillar of the approach is the CHC broad (stratum II) classifications of cognitive and achievement tests.

• The third pillar of the approach is the CHC narrow (stratum I) classifications of cognitive and achievement tests.


For a detailed presentation of CHC theory and comprehensive definitions of all broad and narrow CHC abilities / processes, see Appendix A of this book.

The Second Pillar of the XBA Approach: CHC Broad (Stratum II) Classifications of Cognitive and Achievement Tests

Based on the results of a series of cross-battery confirmatory factor analysis studies of the major intelligence batteries and the task analyses of many intelligence test experts, Flanagan and colleagues classified all the subtests of the major intelligence and achievement batteries according to the particular CHC broad abilities / processes they measured (e.g., Flanagan et al., 2006). To date, well over 500 CHC broad ability classifications have been made based on the results of these studies. These classifications of cognitive and achievement tests assist practitioners in identifying measures that assess the various broad and narrow abilities / processes represented in CHC theory. Classification of tests at the broad ability / processing level is necessary to improve upon the validity of cognitive assessment and interpretation. Specifically, broad ability classifications ensure that the CHC constructs that underlie assessments are minimally affected by construct-irrelevant variance (Messick, 1989, 1995). In other words, knowing what tests measure what abilities / processes enables clinicians to organize tests into construct-relevant clusters—clusters that contain only measures that are relevant to the construct of interest.

To clarify, construct-irrelevant variance is present when an "assessment is too broad, containing excess reliable variance associated with other distinct constructs . . . that affects responses in a manner irrelevant to the interpreted constructs" (Messick, 1995, p. 742). For example, the WAIS-III Verbal IQ (VIQ) has construct-irrelevant variance because, in addition to its four indicators of Gc (i.e., Information, Similarities, Vocabulary, Comprehension), it has one indicator of Gq (i.e., Arithmetic) and one indicator of Gsm (i.e., Digit Span). Therefore, the VIQ is a mixed measure of three distinct, broad CHC abilities/processes (Gc, Gq, and Gsm); it contains reliable variance (associated with Gq and Gsm) that is irrelevant to the construct intended to be interpreted (i.e., Gc; McGrew & Flanagan, 1998). The Wechsler VIQ represents a grouping together of subtests on the basis of face validity (e.g., grouping tests together that appear to measure the same common concept), an inappropriate aggregation of subtests


that can actually decrease reliability and validity (Epstein, 1983). The purest Gc composite on the WAIS-III is the Verbal Comprehension Index, because it contains only construct-relevant variance.

Construct-irrelevant variance can also operate at the subtest (as opposed to composite) level. For example, a Verbal Analogies test (e.g., Sun is to day as moon is to ____) measures both Gc and Gf. That is, in theory-driven factor-analytic studies, Verbal Analogies tests have significant loadings on both the Gc and Gf factors (e.g., Woodcock, 1990). Therefore, this test is considered factorially complex, a condition that complicates interpretation (e.g., Is poor performance due to low vocabulary knowledge [Gc] or poor reasoning ability [Gf], or both?).

In short, "[A]ny test that measures more than one common factor to a substantial degree yields scores that are psychologically ambiguous and very difficult to interpret" (Guilford, 1954, p. 356; cited in Briggs & Cheek, 1986). Interpretation is far less complicated when composites are derived from relatively pure measures of the underlying construct. Therefore, XBAs are typically designed using only empirically strong or moderate

DON’T FORGET

Invalidity in Assessment

Construct-irrelevant variance: excess reliable variance associated with other distinct constructs that affects responses in a manner irrelevant to the interpreted construct.

The XBA approach guards against this major source of invalidity in assessment by ensuring that only validated measures of a cognitive construct are included in an XBA designed to measure that construct.

The XBA DMIA organizes the subtests of the major intelligence batteries according to the broad abilities/processes they measure to assist practitioners in designing assessments that measure constructs validly.

CAUTION

Clusters that contain construct-irrelevant variance are psychologically ambiguous and difficult to interpret. For example, the traditional Wechsler VIQ contained variance (Gq) that was irrelevant to the construct intended to be interpreted (Gc). The Verbal Comprehension Index eliminated the irrelevant Gq variance and, therefore, represented a purer measure of Gc as compared to the VIQ.


(but not factorially complex or mixed) measures of CHC abilities/processes, following the information presented in Appendix B.²

The Third Pillar of the XBA Approach: CHC Narrow (Stratum I) Classifications of Cognitive and Achievement Tests

Narrow ability/processing classifications were originally reported in McGrew (1997) and later, following minor modifications, in McGrew and Flanagan (1998) and Flanagan et al. (2000). Flanagan and her colleagues continued to gather content validity data on cognitive tests and recently expanded their analyses to include tests of academic achievement (Flanagan et al., 2002, 2006). Classifications of cognitive tests according to content, format, and task demand at the narrow (stratum I) ability/processing level were necessary to improve further upon the validity of intellectual assessment and interpretation (see Messick, 1989). Specifically, these narrow ability classifications were necessary to ensure that the CHC constructs that underlie assessments are well represented. According to Messick (1995), construct underrepresentation is present when an "assessment is too narrow and fails to include important dimensions or facets of the construct" (p. 742).

Interpreting the WJ III Concept Formation (CF) subtest as a measure of Fluid Intelligence (i.e., the broad Gf ability/process) is an example of construct underrepresentation. This is because CF measures one narrow aspect of Gf (viz., Inductive Reasoning). At least one other Gf measure (i.e., subtest) that is qualitatively different from Inductive Reasoning must be included in an assessment to ensure adequate representation of the Gf construct (e.g., a measure of General Sequential [or Deductive] Reasoning).

² Classifications of cognitive ability tests as strong, moderate, or mixed measures of CHC abilities were based on the following criteria: A classification of strong was given to a test that had a substantial factor loading (> .50) on a primary factor and a secondary factor loading (if present) that was equal to or less than ½ of its loading on the primary factor. A classification of moderate was given to a test that had a primary factor loading of < .50 and a secondary factor loading (if present) that was less than ½ of the primary loading, or to any test with a secondary loading between ½ and 7⁄10 of the primary loading. A classification of mixed was given to a test that had a loading on a secondary factor that was greater than 7⁄10 of its loading on the primary factor. These criteria were derived from Woodcock (1990).
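To make this decision rule concrete, the following minimal sketch (in Python) applies the strong/moderate/mixed criteria described in the footnote to a pair of factor loadings. The function name, the treatment of tests with no secondary loading, and the handling of boundary values are illustrative assumptions rather than part of the published criteria.

def classify_loading(primary, secondary=None):
    # Classify a subtest as a strong, moderate, or mixed indicator of its
    # primary CHC factor, following the criteria in footnote 2 (derived from
    # Woodcock, 1990). Boundary cases are resolved here by assumption.
    if secondary is None:
        # No secondary loading reported: classification rests on the size of
        # the primary loading alone.
        return "strong" if primary > 0.50 else "moderate"
    ratio = secondary / primary
    if ratio > 0.70:
        # Secondary loading exceeds 7/10 of the primary loading: mixed measure.
        return "mixed"
    if ratio >= 0.50:
        # Secondary loading between 1/2 and 7/10 of the primary loading:
        # moderate, regardless of the size of the primary loading.
        return "moderate"
    # Secondary loading is less than half the primary loading.
    return "strong" if primary > 0.50 else "moderate"

# Examples with hypothetical loadings:
print(classify_loading(0.62, 0.21))   # strong
print(classify_loading(0.45, 0.15))   # moderate
print(classify_loading(0.55, 0.48))   # mixed (.48 / .55 is about .87, which exceeds .70)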


Two or more qualitatively different indicators (i.e., measures of two or more narrow abilities/processes subsumed by the broad ability/process) are needed for appropriate construct representation (see Comrey, 1988; Messick, 1989, 1995). The aggregate of CF (a measure of Inductive Reasoning at the narrow ability level) and the WJ III Analysis-Synthesis test (a measure of General Sequential Reasoning at the narrow ability level), for example, would provide an adequate estimate of the broad Gf construct because these tests are strong measures of Gf and represent qualitatively different aspects of Gf (see Appendix B).

The Verbal Comprehension Index (VCI) of the WAIS-III is an example of good construct representation. This is because the VCI includes Vocabulary (VL), Similarities (LD/VL), Comprehension (LD), and Information (K0), all of which represent qualitatively different aspects of Gc. Most intelligence batteries yield construct-relevant composites, although some of these composites underrepresent the broad ability intended to be measured. This is because construct underrepresentation can also occur when the composite consists of two or more measures of the same narrow (stratum I) ability/process. For example, the Number Recall and Word Order subtests of the KABC-II were intended to be interpreted as a representation of the broad Gsm ability/process (Kaufman & Kaufman, 2004). However, these subtests primarily measure Memory Span, a narrow ability/process subsumed by Gsm. Thus, the Gsm cluster of the KABC-II is most appropriately interpreted as an estimate of Memory Span (a narrow ability/process) rather than of the broad Gsm ability/process.
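The logic of these two examples can be summarized in a short sketch. The subtest-to-ability mapping below is limited to the tests discussed in this section, and the dictionary format and helper function are illustrative assumptions; the authoritative classifications are those in Appendix B.

# Broad and narrow CHC classifications for the subtests discussed above
# (illustrative subset only; see Appendix B for the full classifications).
CLASSIFICATIONS = {
    "WJ III Concept Formation":  ("Gf", "Inductive Reasoning"),
    "WJ III Analysis-Synthesis": ("Gf", "General Sequential Reasoning"),
    "KABC-II Number Recall":     ("Gsm", "Memory Span"),
    "KABC-II Word Order":        ("Gsm", "Memory Span"),
}

def cluster_interpretation(subtests, classifications=CLASSIFICATIONS):
    # Decide at what level a cluster of subtests may be interpreted: at the
    # broad ability level only if it samples two or more qualitatively
    # different narrow abilities, otherwise at the shared narrow ability level.
    broad = {classifications[name][0] for name in subtests}
    narrow = {classifications[name][1] for name in subtests}
    if len(broad) > 1:
        return "mixed cluster (construct-irrelevant variance)"
    if len(narrow) >= 2:
        return "broad " + broad.pop() + " cluster (construct adequately represented)"
    return "narrow " + narrow.pop() + " cluster (broad ability underrepresented)"

print(cluster_interpretation(["WJ III Concept Formation", "WJ III Analysis-Synthesis"]))
# broad Gf cluster (construct adequately represented)
print(cluster_interpretation(["KABC-II Number Recall", "KABC-II Word Order"]))
# narrow Memory Span cluster (broad ability underrepresented)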


DON’T FORGET

Invalidity in Assessment

Construct underrepresentation: present when an assessment is too narrow and fails to include important dimensions or facets of a construct.

The XBA approach guards against this major source of invalidity in assessment by ensuring that at least two different components of a cognitive construct are included in an XBA cluster designed to measure that construct.

The XBA DMIA organizes the subtests of the major intelligence batteries according to the narrow abilities/processes they measure to assist practitioners in designing assessments that measure constructs validly.


"A scale [or broad CHC ability cluster] will yield far more information—and, hence, be a more valid measure of a construct—if it contains more differentiated items [or tests]" (Clark & Watson, 1995, p. 316). Cross-battery assessments circumvent the misinterpretations that can result from underrepresented constructs by specifying the use of two or more qualitatively different indicators to represent each broad CHC ability/process. In order to ensure that qualitatively different aspects of broad abilities/processes are represented in assessment, classification of cognitive and achievement tests at the narrow (stratum I) ability/processing level was necessary. The subtests of current intelligence batteries, special purpose tests, and comprehensive achievement batteries are classified at both the broad and narrow ability/processing levels throughout this book (see Appendix B for a summary).

In sum, the latter two XBA pillars guard against two ubiquitous sources of invalidity in assessment: construct-irrelevant variance and construct underrepresentation. Taken together, the three pillars underlying the XBA approach provide the necessary foundation from which to organize assessments of cognitive and achievement constructs that are more theoretically driven, comprehensive, and valid.

Prior to discussing the applications of the XBA approach, it is necessary to highlight the various ways in which the approach has evolved since the first edition of this book. As noted earlier, nearly all frequently used intelligence batteries have been revised in recent years, and these revisions are among the most substantial in the history of intellectual assessment. As a result, nearly all intelligence batteries now measure a broader range of cognitive constructs and, indeed, constructs from a single psychometric theory, namely CHC theory. Because current intelligence batteries are substantially better than their predecessors from both a psychometric and a theoretical standpoint, the application of XBA methods is less involved. Specifically, the mechanics of the approach are simpler and may be carried out effortlessly using the automated program included on the CD-ROM accompanying this book (hereafter referred to as the "XBA DMIA," which stands for Cross-Battery Assessment Data Management and Interpretive Assistant). Additionally, the interpretation of test performance is enhanced by the XBA DMIA as well as by the interpretive statements that correspond to this program's output. That is, we provide interpretive statements (which may be used verbatim in a psychoeducational report) for every possible outcome of a broad or narrow XBA cluster calculated by the XBA DMIA (see Chapter 3). Rapid Reference 1.6 lists the major changes that have taken place in the XBA approach since the publication of the first edition of this book.


Rapid Reference 1.6

New Features of the XBA Approach

1. More easily incorporates and integrates all current intelligence batteries (i.e., WISC-IV, WAIS-III, WPPSI-III, KABC-II, WJ III, SB5, and DAS-II), numerous special purpose tests, and tests of academic achievement.

2. Uses core tests (and supplemental tests as may be necessary) from a single battery, rather than selected components of a battery, as part of the assessment because (a) current intelligence tests have better representation of the broad CHC abilities/processes and use only two or three subtests to represent them; and (b) the broad abilities/processes measured by current intelligence batteries are typically represented by qualitatively different indicators that are relevant only to the broad abilities/processes intended to be measured.

3. Uses actual norms provided by the test’s publisher for CHC broad ability clusters when available.

4. Places greater emphasis on narrow CHC abilities/processes as supported by research linking them to the acquisition and development of specific academic skills.

5. Includes an automated program called the Cross-Battery Assessment Data Management and Interpretive Assistant (XBA DMIA) (on the CD-ROM that accompanies this book) that incorporates and integrates all features of the XBA approach. For example, the XBA DMIA

• Incorporates and integrates components of prevailing interpretive systems of the major intelligence batteries, including optional clinical clusters unique to the WISC-IV, WAIS-III, and SB5.

• Calculates CHC broad and narrow ability/processing clusters that are generated from either two or three individual subtests.

• Graphs data to provide a pictorial representation of all interpretable broad and narrow ability/processing clusters and the subtests that comprise them.

6. Includes interpretive statements for all possible outcomes regarding data from two- or three-subtest combinations for broad and narrow ability/processing areas.

7. Expands coverage of CHC theory to include abilities typically measured on achievement tests (e.g., Broad Reading and Writing [Grw], Quantitative Knowledge [Gq], and extended components of Auditory Processing [Ga]), providing additional information useful in the identification of specific learning disability (SLD).

8. Incorporates the identification of disorders in basic psychological processes in the interpretive system in a manner consistent with the definition of SLD in IDEA 2004 and includes an automated program called SLD Assistant.

9. Includes advancements to the interpretive system of the Culture-Language Interpretive Matrix used with culturally and linguistically diverse individuals.

10. Includes an automated program called the Culture-Language Interpretive Matrix (C-LIM), which calculates and graphs results to facilitate decision making as it pertains to differentiating difference from disability with individuals from culturally and linguistically diverse backgrounds.




APPLICATION OF THE XBA APPROACH

Guiding Principles

In order to ensure that XBA procedures are psychometrically and theoretically sound, it is recommended that practitioners adhere to several guiding principles. These principles were listed previously in Figure 1.1 and are defined briefly in the following section.

First, select an intelligence battery that best addresses the referral concerns. It is expected that the battery of choice will be the one deemed most responsive to the referral concerns. These batteries may include, but are certainly not limited to, the Wechsler Scales, WJ III, SB5, KABC-II, and DAS-II. It is important to note that the use of conormed tests, such as the WJ III tests of cognitive ability and tests of achievement, or the KABC-II and KTEA-II, may allow for the widest coverage of broad and narrow CHC abilities/processes.

Second, use subtests and clusters/composites from a single battery whenever possible to represent broad CHC abilities/processes. In other words, best practices involve using actual norms whenever they are available in lieu of arithmetic averages of scaled scores from different batteries. In the past, it was necessary to convert subtest scaled scores from different batteries to a common metric (using the table in Appendix E, for example) and then average them (after determining that there was a nonsignificant difference between the scores) in order to build construct-relevant broad CHC ability/processing clusters. Because the development of current intelligence batteries benefited greatly from current CHC theory and research, this practice is seldom necessary at the broad ability/processing level. It continues to be necessary when testing hypotheses about aberrant performance within broad ability/processing domains and when measurement of narrow abilities/processes is deemed necessary (see Chapters 2 and 3).
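For readers who want to see the arithmetic behind this principle, the following minimal sketch (in Python) converts a Wechsler-style scaled score to the standard score metric and averages two scores into a cluster. The linear z-score conversion stands in for the table in Appendix E, and the 15-point (1 SD) cutoff used to decide whether the scores may be averaged is an illustrative assumption, not the book's criterion; in practice, the XBA DMIA performs these calculations.

def to_standard(score, mean, sd):
    # Rescale a score to the standard score metric (M = 100, SD = 15) with a
    # simple z-score transformation (the Appendix E tables serve this purpose
    # in the XBA approach proper).
    z = (score - mean) / sd
    return 100 + 15 * z

def broad_cluster(score_a, score_b, max_difference=15):
    # Average two standard scores into a broad CHC cluster only when they do
    # not differ by more than an illustrative cutoff (here, 1 SD); otherwise
    # return None to signal that the scores should not be aggregated.
    if abs(score_a - score_b) > max_difference:
        return None
    return (score_a + score_b) / 2

# Example: one subtest reported as a scaled score (M = 10, SD = 3) and one
# already reported as a standard score (M = 100, SD = 15).
gf_inductive = to_standard(12, mean=10, sd=3)      # 110.0
gf_deductive = 104.0
print(broad_cluster(gf_inductive, gf_deductive))   # 107.0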

Third, when constructing CHC broad and narrow ability clusters, select tests that have been classified through an acceptable method, such as CHC theory-driven factor analyses or expert consensus content-validity studies.


All test classifications included in the works of Flanagan and colleagues were made through these acceptable methods (Flanagan & Ortiz, 2001; Flanagan et al., 2006). For example, when constructing broad (stratum II) ability/processing clusters, relatively pure CHC indicators should be included (i.e., tests that had either strong or moderate [but not mixed] loadings on their respective factors in theory-driven factor analyses). Furthermore, to ensure appropriate construct representation when constructing broad (stratum II) ability/processing clusters, two or more qualitatively different narrow (stratum I) ability/processing indicators should be included to represent each domain. Without empirical classifications of tests, constructs may not be adequately represented and, therefore, inferences about an individual's broad (stratum II) ability/process cannot be made. Of course, the more broadly a construct is represented (i.e., through the derivation of a cluster based on multiple qualitatively different narrow ability/processing indicators), the more confidence one has in drawing inferences about the ability/process presumed to underlie it. A minimum of two qualitatively different indicators per CHC cluster is recommended in the XBA approach for practical reasons (viz., time-efficient assessment).

Fourth, when at least two qualitatively different indicators of a broad ability/process of interest are not available on the core battery, supplement the core battery with at least two qualitatively different indicators of that broad ability from another battery. In other words, if an evaluator is interested in measuring Auditory Processing (Ga) and the core battery includes either one or no Ga subtests, then select a Ga cluster from another battery to supplement the core battery.

Fifth, when crossing batteries (e.g., augmenting a core battery with relevant CHC clusters from another battery) or when constructing CHC broad or narrow ability/processing clusters using tests from different batteries (e.g., averaging scores when the construct of interest is not available on a single battery), select tests that were developed and normed within a few years of one another to minimize the effect of spurious differences between test scores that may be attributable to the "Flynn effect" (Flynn, 1984). The subtests listed in the XBA DMIA are from batteries and tests that were normed within 10 years of one another.

Sixth, select tests from the smallest number of batteries to minimize the effect of spurious differences between test scores that may be attributable to


differences in the characteristics of independent norm samples (McGrew, 1994). In most cases, using select tests from a single battery to augment the constructs measured by any other major intelligence battery is sufficient to represent the breadth of broad cognitive abilities/processes adequately as well as to allow for at least three qualitatively different narrow ability/processing indicators of most CHC cognitive constructs.

Noteworthy is the fact that when the XBA guiding principles are implemented systematically and the recommendations for development, use, and interpretation of clusters are adhered to, the potential error introduced through the crossing of norm groups is negligible (Flanagan & Ortiz, 2001; McGrew & Flanagan, 1998). Furthermore, although there are other limitations to crossing batteries, this systematic approach to the assessment and interpretation of cognitive abilities/processes has far fewer implications with regard to the potential for error than those associated with the improper use and interpretation of cognitive performance inherent in many currently used assessment approaches (e.g., subtest analysis, discrepancy analysis, atheoretical approaches to assessment and interpretation, and so forth).

IMPLEMENTATION OF THE XBA APPROACH STEP-BY-STEP

The XBA approach may be carried out following a straightforward set of steps. These steps are outlined in Rapid Reference 1.7 and described in further detail in Chapter 2.


Rapid Reference 1.7

XBA Step-by-Step

1. Select the primary intelligence battery for assessment.

2. Identify adequately represented CHC abilities/processes.

3. Select tests to measure CHC abilities/processes not measured by the primary battery.

4. Administer the primary battery and any supplemental tests as necessary.

5. Enter data into the XBA DMIA.

6. Follow the XBA guidelines presented in Chapter 3 to interpret the XBA DMIA output.



USE OF THE XBA APPROACH WITH CULTURALLY AND LINGUISTICALLY DIVERSE POPULATIONS

Application of the XBA approach with diverse individuals rests on the premise that an empirically based selection of tests, known to represent particular constructs, coupled with a consideration of the relevant cultural and linguistic dimensions of such tests, can provide more reliable, valid, and interpretable data than that ordinarily obtained using traditional methods. Careful and deliberate selection of tests, based on factors relevant to the background of the individual being assessed, creates a unique battery of tests that is responsive to the particular referral questions. Using the XBA approach, practitioners can develop custom batteries for individuals from culturally and linguistically diverse backgrounds that differ as a function of both the specific language competencies and the cultural experiences of the individual, as well as the specific nature of the referral concerns. With respect to issues of bias related to test selection, the basic goal in constructing XBAs for use with diverse individuals is to ensure a balance between empirical issues and considerations related to cultural and linguistic factors. The construction of an appropriate XBA for use with diverse individuals is presented in Chapter 5, along with a detailed explanation of how to interpret their test performances.

CONCLUSIONS

Recent refinements to the XBA approach, including automating the process, have made this method of assessment both practical and easy to implement. Its continued popularity revolves around its use in the identification of students with specific learning disability (Chapter 4) and in assisting in the process of determining difference from disability in students from culturally and linguistically diverse backgrounds (Chapter 5). This is because the XBA approach (a) allows for flexibility in designing assessment batteries to meet the unique needs of the individual; (b) provides a defensible interpretive method for identifying cognitive ability/processing strengths and weaknesses (important in the evaluation of learning disabilities); and (c) is systematic, specifying steps for evaluating the cognitive capabilities of individuals with learning needs, including those from diverse cultural and linguistic backgrounds.

REFERENCES

Alfonso, V. C., Flanagan, D. P., & Radwan, S. (2005). The impact of the Cattell-Horn-Carroll theory on test development and interpretation of cognitive and academic abilities. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 185–202). New York: Guilford.
Brackett, J., & McPherson, A. (1996). Learning disabilities diagnosis in postsecondary students: A comparison of discrepancy-based diagnostic models. In N. Gregg, C. Hoy, & A. F. Gay (Eds.), Adults with learning disabilities: Theoretical and practical perspectives (pp. 68–84). New York: Guilford.
Briggs, S. R., & Cheek, J. M. (1986). The role of factor analysis in the development and evaluation of personality scales [Special issue: Methodological developments in personality research]. Journal of Personality, 54(1), 106–148.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge, UK: Cambridge University Press.
Carroll, J. B. (1997). The three-stratum theory of cognitive abilities. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 122–130). New York: Guilford.
Carroll, J. B. (1998). Foreword. In K. S. McGrew & D. P. Flanagan, The intelligence test desk reference (ITDR): Gf-Gc cross-battery assessment (pp. xi–xii). Boston: Allyn & Bacon.
Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7, 309–319.
Comrey, A. L. (1988). Factor-analytic methods of scale development in personality and clinical psychology. Journal of Consulting and Clinical Psychology, 56(5), 754–761.
Daniel, M. H. (1997). Intelligence testing: Status and trends. American Psychologist, 52, 1038–1045.
Das, J. P., & Naglieri, J. A. (1997). Das-Naglieri Cognitive Assessment System. Itasca, IL: Riverside Publishing.
Dehn, M. J. (2006). Essentials of processing assessment. New York: Wiley.
Elliott, C. (1990). Differential Ability Scales (DAS). San Antonio, TX: The Psychological Corporation.
Elliott, C. (2007). Differential Ability Scales–Second Edition (DAS-II). San Antonio, TX: PsychCorp.
Epstein, S. (1983). Aggregation and beyond: Some basic issues on the prediction of behavior. Journal of Personality, 51, 360–392.
Fiorello, C. A., & Hale, J. B. (2006). Cognitive hypothesis testing and response to interventions for children with reading problems. Psychology in the Schools, 43, 835–853.
Flanagan, D. P. (2000). Wechsler-based CHC cross-battery assessment and reading achievement: Strengthening the validity of interpretations drawn from Wechsler test scores. School Psychology Quarterly, 15(3), 295–329.
Flanagan, D. P., & McGrew, K. S. (1997). A cross-battery approach to assessing and interpreting cognitive abilities: Narrowing the gap between practice and cognitive science. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 314–325). New York: Guilford.
Flanagan, D. P., & McGrew, K. S. (1998). Interpreting intelligence tests from contemporary Gf-Gc theory: Joint confirmatory factor analyses of the WJ-R and KAIT in a non-white sample. Journal of School Psychology, 36, 151–182.
Flanagan, D. P., McGrew, K. S., & Ortiz, S. O. (2000). The Wechsler intelligence scales and CHC theory: A contemporary approach to interpretation. Boston: Allyn & Bacon.
Flanagan, D. P., & Ortiz, S. O. (2001). Essentials of cross-battery assessment. New York: Wiley.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C., & Mascolo, J. T. (2002). Achievement test desk reference: Comprehensive assessment of learning disabilities. Boston: Allyn & Bacon.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C., & Mascolo, J. T. (2006). Achievement test desk reference: A guide to learning disability identification (2nd ed.). New York: Wiley.
Floyd, R. G., Bergeron, R., & Alfonso, V. C. (2006). Cattell-Horn-Carroll cognitive ability profiles of poor comprehenders. Reading and Writing, 19(5), 427–456.
Floyd, R. G., Keith, T. Z., Taub, G. E., & McGrew, K. S. (in press). Cattell-Horn-Carroll cognitive abilities and their effects on reading decoding skills: g has indirect effects, more specific abilities have direct effects. School Psychology Quarterly.
Flynn, J. R. (1984). The mean IQ of Americans: Massive gains 1932 to 1978. Psychological Bulletin, 95, 29–51.
Glutting, J. J., Adams, W., & Sheslow, D. (2002). Wide Range Intelligence Test. Wilmington, DE: Wide Range, Inc.
Guilford, J. P. (1954). Psychometric methods (2nd ed.). New York: McGraw-Hill.
Horn, J. L. (1991). Measurement of intellectual capabilities: A review of theory. In K. S. McGrew, J. K. Werder, & R. W. Woodcock (Eds.), Woodcock-Johnson technical manual (pp. 197–232). Chicago: Riverside Publishing.
Kaufman, A. S. (2000). Foreword. In D. P. Flanagan, K. S. McGrew, & S. O. Ortiz (Eds.), The Wechsler intelligence scales and Gf-Gc theory: A contemporary approach to interpretation (pp. xiii–xv). Boston: Allyn & Bacon.
Kaufman, A. S., & Kaufman, N. L. (1983). Kaufman Assessment Battery for Children. Circle Pines, MN: American Guidance Service.
Kaufman, A. S., & Kaufman, N. L. (1993). Kaufman Adolescent and Adult Intelligence Test. Circle Pines, MN: American Guidance Service.
Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman Assessment Battery for Children–Second Edition. Circle Pines, MN: AGS Publishing.
Keith, T., Fine, J., Taub, G., Reynolds, M., & Kranzler, J. (2006). Higher order, multiple-sample, confirmatory factor analysis of the Wechsler Intelligence Scale for Children–Fourth Edition: What does it measure? School Psychology Review, 35, 108–127.
Keith, T. Z., Kranzler, J., & Flanagan, D. P. (2001). What does the Cognitive Assessment System (CAS) measure? Conjoint confirmatory factor analysis of the Cognitive Assessment System (CAS) and the Woodcock-Johnson tests (3rd ed.). School Psychology Review, 30(1), 89–119.
Lezak, M. D. (1976). Neuropsychological assessment. New York: Oxford University Press.
Lezak, M. D. (1995). Neuropsychological assessment (3rd ed.). New York: Oxford University Press.
McGrew, K. S. (1994). Clinical interpretation of the Woodcock-Johnson Tests of Cognitive Ability–Revised. Boston: Allyn & Bacon.
McGrew, K. S. (1997). Analysis of the major intelligence batteries according to a proposed comprehensive Gf-Gc framework. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 151–180). New York: Guilford.
McGrew, K. S. (2005). The Cattell-Horn-Carroll theory of cognitive abilities: Past, present, and future. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 136–182). New York: Guilford.
McGrew, K. S., & Flanagan, D. P. (1998). The intelligence test desk reference (ITDR): Gf-Gc cross-battery assessment. Boston: Allyn & Bacon.
Messick, S. (1989). Validity. In R. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). Washington, DC: American Council on Education.
Messick, S. (1992). Multiple intelligences or multilevel intelligence? Selective emphasis on distinctive properties of hierarchy: On Gardner's Frames of Mind and Sternberg's Beyond IQ in the context of theory and research on the structure of human abilities. Psychological Inquiry, 3(4), 365–384.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.
Ortiz, S. O., & Flanagan, D. P. (2002). Best practices in working with culturally and linguistically diverse children and families. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 1351–1372). Washington, DC: National Association of School Psychologists.
Phelps, L., McGrew, K. S., Knopik, S. N., & Ford, L. (2005). The general (g), broad, and narrow CHC stratum characteristics of the WJ III and WISC-III tests: A confirmatory cross-battery investigation. School Psychology Quarterly, 20(1), 51–65.
Reynolds, C. R., & Kamphaus, R. W. (2003). Reynolds Intellectual Assessment Scales. Lutz, FL: Psychological Assessment Resources.
Roid, G. H. (2003). Stanford-Binet Intelligence Scales–Fifth Edition. Itasca, IL: Riverside Publishing.
Sternberg, R. J., & Kaufman, J. C. (1998). Human abilities. Annual Review of Psychology, 49, 479–502.
Thorndike, R. L., Hagen, E. P., & Sattler, J. M. (1986). Stanford-Binet Intelligence Scale–Fourth Edition. Chicago: Riverside Publishing.
Vanderwood, M. L., McGrew, K. S., Flanagan, D. P., & Keith, T. Z. (2002). The contribution of general and specific cognitive abilities to reading achievement. Learning and Individual Differences, 13, 159–188.
Wechsler, D. (1981). Wechsler Adult Intelligence Scale–Revised. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1989). Wechsler Preschool and Primary Scale of Intelligence–Revised. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1991). Wechsler Intelligence Scale for Children–Third Edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale–Third Edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2002). Wechsler Preschool and Primary Scale of Intelligence–Third Edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2003). Wechsler Intelligence Scale for Children–Fourth Edition. San Antonio, TX: The Psychological Corporation.
Wilson, B. C. (1992). The neuropsychological assessment of the preschool child: A branching model. In I. Rapin & S. I. Segalowitz (Eds.), Handbook of neuropsychology: Child neuropsychology (Vol. 6, pp. 377–394). Amsterdam: Elsevier.
Woodcock, R. W. (1990). Theoretical foundations of the WJ-R measures of cognitive ability. Journal of Psychoeducational Assessment, 8, 231–258.
Woodcock, R. W., & Johnson, M. B. (1989). Woodcock-Johnson Psycho-Educational Battery–Revised. Chicago: Riverside Publishing.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of Cognitive Abilities. Itasca, IL: Riverside Publishing.
Woodcock, R. W., McGrew, K. S., Mather, N., & Schrank, F. A. (2003). Diagnostic supplement to the Woodcock-Johnson III Tests of Cognitive Abilities. Itasca, IL: Riverside Publishing.

TEST YOURSELF

1. The XBA classification system has had a positive impact on communication among practitioners, has improved research on the relations between cognitive and academic abilities, and has resulted in substantial improvements in the measurement of cognitive constructs, as seen in the design and structure of current intelligence batteries. True or False?

2. Fluid Intelligence (Gf), Crystallized Intelligence (Gc), and Visual Processing (Gv) are examples of

(a) general (stratum III) ability.
(b) broad (stratum II) abilities.
(c) narrow (stratum I) abilities.
(d) none of the above.

3. Two broad abilities not measured by many intelligence batteries published prior to 2000 that are now measured by the majority of intelligence batteries available today are

(a) Gc and Gv.
(b) Gf and Ga.
(c) Gf and Gsm.
(d) Gsm and Gt.

4. The three pillars of the XBA approach are CHC theory, CHC broad (stratum II) classifications of cognitive and achievement tests, and

(a) CHC narrow (stratum I) classifications of cognitive and achievement tests.
(b) CHC general (stratum III) classifications of cognitive and achievement tests.
(c) a and b.
(d) neither a nor b.



5. The second guiding principle of the XBA approach is to

(a) use as many intelligence batteries as necessary to answer the referral concerns.
(b) use subtests and clusters from a single battery whenever possible to represent broad CHC abilities/processes.
(c) select tests that have been classified through an acceptable method, such as through CHC theory-driven factor analyses or expert consensus content-validity studies.
(d) create broad CHC clusters instead of narrow CHC clusters when possible.

6. An example of a cluster that contains construct-irrelevant variance is the

(a) WISC-IV VCI.
(b) WJ III Comprehension-Knowledge Factor.
(c) WAIS-III VIQ.
(d) KABC-II Simultaneous/Gv Scale.

7. Most clusters that are found in today's comprehensive intelligence batteries are both relatively pure (i.e., containing only construct-relevant tests) and well represented (i.e., containing qualitatively different measures of the broad ability/process represented by the cluster). True or False?

8. Which of the following is not a good descriptor of the XBA approach?

(a) Time-efficient
(b) Theory-focused
(c) Test kit–focused
(d) Empirically supported

9. All of the following narrow abilities/processes fall under Gc except

(a) Listening Ability (LS).
(b) Language Development (LD).
(c) Lexical Knowledge (VL).
(d) English Usage Knowledge (EU).

10. When conducting XBA, it is important to select tests from a limited number of batteries. True or False?

Answers: 1. True; 2. b; 3. c; 4. a; 5. b; 6. c; 7. True; 8. c; 9. d; 10. True