This may be the author's version of a work that was submitted/accepted for publication in the following source:

Woods, Annette (2007) What's wrong with benchmarks? Answering the wrong questions with the wrong answers. Curriculum Perspectives, 27 (3), pp. 1-10.

This file was downloaded from: https://eprints.qut.edu.au/19600/

© Copyright 2007 Australian Curriculum Studies Association

This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to [email protected]

Notice: Please note that this document may not be the Version of Record (i.e. published version) of the work. Author manuscript versions (as Submitted for peer review or as Accepted for publication after peer review) can be identified by an absence of publisher branding and/or typeset appearance. If there is any doubt, please refer to the published source.

http://www.acsa.edu.au/pages/page97.asp



WHAT’S WRONG WITH BENCHMARKS?:

ANSWERING THE WRONG QUESTIONS WITH THE WRONG ANSWERS

Dr Annette Woods

School of Education and Professional Studies; and

Centre for Applied Linguistics, Language and Communication Studies

Griffith University

Contact:

Dr Annette Woods

School of Education and Professional Studies

Griffith University, Gold Coast Campus

PMB 50 Gold Coast Mail Centre

Gold Coast 9726

Ph: +61 7 55529043

Fax: +61 7 55528599

Email: [email protected]


AUTHOR'S BIONOTE

Annette Woods works at the School of Education and Professional Studies at Griffith University, Gold Coast Campus. She can be contacted at EPS, Griffith University, PMB 50 Gold Coast Mail Centre, Queensland, Australia, 9726, or via email on [email protected]. Annette teaches in the areas of literacy, multiliteracies and research methods. Her research has focused on diversity and social justice, classroom pedagogy, constructions of literacy failure, discourse analysis and community partnerships.


WHAT’S WRONG WITH BENCHMARKS?:

ANSWERING THE WRONG QUESTIONS WITH THE WRONG ANSWERS

Abstract

Calling on Foucault's notion of the formation of objects, this paper sets out to unpack the ambiguities evident in Australia's accountability-through-testing approach. The work speaks equally to other contexts where accountability, benchmarking and standardised testing are being used to 'fix' education systems. The analysis suggests that the authority of accountability-through-testing initiatives can be critiqued on at least four levels of ambiguity. These levels of ambiguity concern issues of the formation of benchmarks as entities, the unproblematic acceptance of essentiality, the identified subgroups used as categories to disaggregate and report data, and psychometric disparities in the testing and the public reporting of these benchmarks.


Introduction

Recent attempts to call upon popularised but mistaken discourses to represent a 'crisis' in literacy have resulted in those involved (journalists, politicians, education systems, school personnel and community members, for example) seeking solutions to the wrong questions. If we have the best interests of those children in our public schools in mind, the questions being asked should not be about which one method of teaching literacy is best. Instead, based on assumptions about the plural and discursive nature of literacy, literacy pedagogy must be discussed as context-bound suites of local practices. In many classrooms and schools individuals still struggle with decisions related to the best way to teach literacy, the best program to buy, and the best intervention program to supply to those children who 'fail' to learn within the classroom. Similarly, while some teachers grapple with attempts to place the responsibility for students' literacy learning into the social domain of classroom practice and pedagogy, and approach literacy pedagogy from a balanced perspective, system-based literacy accountability and enhancement funding initiatives continue to call upon a very different set of discourses.

In Australia, State-based systems have resisted general moves to establish a national curriculum. However, the past decade has seen the introduction of common standardised benchmarks for the assessment of achievement in literacy and numeracy at Years 3, 5 and 7 across all eight State and Territory systems. Recent policy announcements suggest that Federal policy will soon push this testing into the early secondary years also. At present each State system designs and manages its own testing procedures; however, Federal initiatives have recently mandated five-point scales for all school-based reporting, and calls for national tests to report to these benchmark levels continue to surface as proposed and desired Federal policy.

Such an approach within other Western nations has been problematic, to say the least. For example, a reliance on high-stakes testing and 'scientific' evidence to justify the mandating of some teaching 'methods' and the outlawing of others has arguably placed education in the United States on a road toward reductionist destruction. The privileging of high-stakes testing is evident in the following comment made by a US Secretary of Education in 2001:

Anyone who opposes high stakes testing is an apologist for a broken education system.

(Paige, R. (2001), Washington Post, May 13, p. 87, cited in Afflerbach, 2002, p. 13)

Such naïve statements ignore that the testing movements themselves may in fact be major players in the 'brokenness' of an education system, and also fail to scrutinise the claims of scientific foundations and essentialism made by the testing and accountability regimes and that which they claim to test. The literacy teaching field is increasingly being challenged to reach scientific standards and to justify its worth. So, as Afflerbach suggests, "shouldn't the same demand be placed on high stakes reading tests?" (Afflerbach, 2002, p. 12).

Reporting of accountability and standards across Australian Federal, State and Territory initiatives continues to support the proliferation of representations of literacy which do not engage with the plural nature of literacy and literacy teaching and learning. These autonomous representations are popularised in the press and in policy and funding initiatives (Brock, 1998). At a time when classrooms are filled with diverse compilations of students, and when the futures that these diverse students are headed toward are more uncertain than ever before, a dominant and powerful set of representations ill suited to our current context continues to thrive within these accountability explanations.

While it is possible to engage with the benchmark debate in a variety of ways, in this paper I opt to call into question the process itself on four levels of ambiguity. The paper begins with my explanation of Foucault's notion of objectification, which frames the process of forming literacy as an object of knowledge. I then move to contextualising, in a historical sense, the move toward the dominance of the accountability movement within Australian educational policy. The process of publicly reporting outcomes is then critiqued according to four levels of ambiguity in the authority of the process: issues related to the formation of the benchmarks as entities; the unproblematic acceptance of essentiality; the identified subgroups used for reporting; and psychometric disparities in the testing and the public reporting of these benchmarks. The investigation is presented as a way to move toward answering questions about the consequences of the accountability-through-benchmarking movement in Australia, although the issues raised have broader relevance to other systems.

Forming literacy as an object of study

As described by Foucault (1972) in his discussions of the formation of objects, to discuss what has ruled the existence of 'things' as objects of discourse it is first important to map the surfaces of such objects' emergence. In this paper I lay out accountability as one such surface of emergence. The discourses of policy are one of a set of discourses and ensembles of regulated ways of doing things that are available as possibilities to create literacy as an object of study. The accountability movement at this time is a key example of how policy can impact how we represent concepts such as literacy. So I investigate how such policy can play a role in limiting the domain of literacy, "of defining what it is talking about, of giving it the status of an object – and therefore of making it manifest, nameable and describable" (Foucault, 1972, p. 41).

As the recent media frenzy on literacy in Australia demonstrates so aptly, statements about literacy cannot come from anybody; their value, efficacy, even their educational powers, and, generally speaking, their existence as statements about literacy cannot be "dissociated from the statutorily defined person who has the right to make them" (Foucault, 1972, p. 51). The accountability movement is not the only authority of delimitation in literacy education, but its simplistic commodification of complex notions does give it a powerful base from which to limit and form. Through political and media support it has become what Foucault (1972) labels an "authority recognized by public opinion, the law and government" (p. 42), and as such is influential in how literacy is talked about today.

The systems used to divide, contrast, relate, regroup, classify and derive what will constitute literacy are discussed by Foucault as "grids of specification" (Foucault, 1972, p. 42). These grids also elaborate the institutional practices used to specify social roles and to accord authority to the movement and those who work within it. Once literacy is named and constituted, these grids then work toward forming literate students and those students who are represented as 'failing': they help to decide who can read and who can't, and then present the 'failing' students to the specialists for further analysis, judgement and treatment. Of course, despite individual differences, being literate or failing to be literate are not 'true' states of being, but refer instead to how these subjects speak, how they are spoken about, and the responses of others in the discourse community of education.


Reconstitution of literacy through statistics: 'Nationally agreed minimum acceptable standards'

A climate for a move to outcomes-based education

Recently debate has again raged about the focus of education that will serve our students most effectively. In the most recent attacks on public schooling, Donnelly and others like him (Macnamara, 2005) have laid blame for their own imagined 'crisis' at the feet of outcomes-based education. What has been forgotten in these arguments, of course, is that outcomes-based education has in fact been the Trojan horse that has carried such measures as minimalist benchmarks, simplistic testing, the dumbing down of teacher professionalism and the commodification of literacy into our education systems. These are all things that the conservatives would now have more of in our schools, but they have recognised the need to discredit their own ally in order to proceed with the disruption to public education in a quest to move toward more control for less responsibility.

So it is important to detail the history of outcomes-based education and its intent, if only to remind ourselves of what initiatives it has brought with it over the past decade. As stated, Australia has not been immune to the trend toward outcomes-based education that has impacted Western nations over the past two decades. During this time there has been an observable shift in thinking in regard to the most efficient ways to ensure high standards in literacy. One of the first reports to signal this change in thinking within the Australian context was the Quality of Education Review Committee Report (1985). The most enduring recommendation within this report was the suggestion that schools and systems needed to shift their emphasis from educational inputs to educational outcomes. The report also highlighted the need for a focus on ensuring that all students reached minimum standards in literacy and numeracy at an early age. This call for greater proficiency in literacy and numeracy, and for a focus on the early years of schooling, continued as a policy direction throughout the early 1990s (Department of Employment, Education and Training, 1991; House of Representatives Standing Committee on Employment, Education and Training, 1993).

In fact, these trends continue to be evident in Australian Federal Government policy. Attempts to develop a national school curriculum framework for eight Key Learning Areas in the early and mid 1990s, the more recent agreement on the National Goals for Education and the National Literacy and Numeracy Plan (Ministerial Council on Education, Employment, Training and Youth Affairs, 1999a), and the development of the National Literacy and Numeracy Benchmarks (Commonwealth Department of Education, Training and Youth Affairs, 1997) are cases in point. It is not possible to critique outcomes-based education and then call for more of what it supports, although this is what we are presently seeing in the media within Australia (The Editor, 2005).

In May 1999, the then Minister for Education, Training and Youth Affairs, Kemp, stated that a focus on achieving students' democratic right to have access to an education system that met their fundamental needs, and one that was equitable and socially just, "inevitably (led) to a focus on outcomes" (Kemp, 1999). Indeed, Kemp left little doubt as to the importance that he placed on outcomes in education when he stated:

Australia's education system for the next millennium must be focussed on outcomes if it is to achieve educational equity.

(Kemp, 1999)

Further, when discussing the place of accountability and assessment in this process of providing students with their "democratic right", he stated:

If we are to have a school system for the next millennium, which meets the expectations and has the confidence of the Australian community, then we must have mechanisms in place which allow us to measure the key outcomes of all Australian schools and report these outcomes to the Australian community.

(Kemp, 1999)

This push toward outcomes has remained a consistent foundation of the Australian Federal Government's approach to education beyond Kemp's ministerial influence, as evidenced by the focus of Nelson, the Minister responsible for Education from the 2001 Federal election until the latest cabinet reshuffle in 2006:

This reporting framework of national goals – incorporating agreed targets and benchmarks of student attainment – provides us with a way of monitoring the key outcomes of Australian schooling. I do not pretend that these processes will provide us with a picture of the total social, intellectual or emotional outcomes of Australia's schools but they allow us to keep a finger on the pulse of what is essential.

(Nelson, 2001)

Reforms resulting from Kemp's initiatives, under the umbrella of a push toward achieving the National Goals for Education (Ministerial Council on Education, Employment, Training and Youth Affairs, 1999a), had a major influence on literacy teaching and learning in schools in Queensland. The States' endorsement of the National Literacy and Numeracy Plan (Ministerial Council on Education, Employment, Training and Youth Affairs, 1999b) required the development of state-based assessment procedures. According to numerous Department of Education, Science and Training (DEST) publications, these tests are rigorous state-based assessment programs (Commonwealth Department of Education, Training and Youth Affairs, 1997). However, the rigour of these tests has been called into question by myself and others elsewhere (Luke, Woods, Land, Bahr, & McFarland, 2002). Although the testing program seems external to the day-to-day work of classroom literacy events for many teachers and students, the public reporting of literacy standards as measured against benchmarks has had a profound influence on how literacy is represented in public, system and school domains.

Benchmark reporting at an Australian Federal level: What are benchmarks?

In discussing the original assessment measure used by supporters of the accountability-through-benchmarks movement in Australia, Alloway and Gilbert protest the assessment measure's ability to support and sustain crisis rhetoric and to resist critique at so many levels:

The national survey has been criticised on a number of accounts, including its determination of 'cut-off' scores, its location of literacy benchmarks, and the construction of literacy implicitly endorsed. Notwithstanding this, however, results from such surveys have been widely used as evidence that 'a problem' exists with literacy; a problem with how literacy is taught; a problem with literacy practices in homes and communities; and a problem with the lack of literacy skills that students from some cultural backgrounds bring with them to schools.

(Alloway & Gilbert, 1998, p. 249)

While the data collected as part of Federal, State and Territory accountability measures is open to critique on many levels, it is not the purpose of this paper to critique this data as such. Instead, I will present a critique of the benchmarking process based on the arbitrariness of four basic assumptions of the accountability-as-benchmarks movement. My critique will thus contest the benchmarks on the basis of assumptions made about their very existence, essential nature, reporting categories, and psychometric disparities.

It is also the case that at a State and Territory level the actual tests used to collect the information can be critiqued as less than rigorous, culturally biased and as generally containing 'bad' test items (Luke et al., 2002). In a study on the issues involved in inclusive assessment, monitoring and reporting of achievement for Indigenous students in Queensland, Luke et al. (2002, pp. 37-49) found that, within the Queensland 2000 Year 5 state-wide tests, there was evidence to suggest that the tests themselves were a source of inequitable assessment for particular groups of students. However, the reporting of data from these measures is presented in such a way that it is rarely problematised and is instead taken as a universal truth.

According to the Commonwealth, the benchmarks are part of the agreed moves to improve the educational outcomes of all Australian children. They are:

a set of indicators or descriptors which represent nationally agreed minimum acceptable standards for literacy and numeracy at a particular year level.

(Commonwealth Department of Education, Training and Youth Affairs, 1997, no page available)


Benchmarks do not claim to represent the full range of the curriculum, but instead only the essential elements of literacy and numeracy. This claim of essentiality is perhaps the first problematic issue related to the national benchmarks (Christie, 1998). Benchmarks are not tests, but are measured for Commonwealth reporting through distinct testing regimes designed and implemented at a State or Territory level.

The results of these testing processes are reported annually at a State and Territory level, as well as within the National Report on Schooling in Australia (Ministerial Council on Education, Employment, Training and Youth Affairs, 1999c, 2000, 2001, 2002). In 1999 (although actually only available in 2002), nationally comparable data concerning the reading performance of Year 3 and 5 students, as measured against national benchmarks, was reported within the National Report on Schooling in Australia (Ministerial Council on Education, Employment, Training and Youth Affairs, 1999c) for the first time.

What's wrong with the representations presented?

The 'literacy standards' of Australian school children are publicly reported on an individual system and combined basis within the annual report on schools (Ministerial Council on Education, Employment, Training and Youth Affairs, 2003). This public reporting is based on assumptions made about the benchmarks' existence and essential nature. The following critique suggests that there are at least four levels of ambiguity that can be identified in the authority of the benchmarking process. As highlighted earlier in this paper, these levels of ambiguity concern issues of the formation of benchmarks as entities, the unproblematic acceptance of essentiality, the identified subgroups used as categories to disaggregate and report data, and psychometric disparities in the testing and the public reporting of these benchmarks.


The formation of benchmarks as entities

In The Archaeology of Knowledge, Foucault (1972) introduces the notion of rules of formation of an object. Foucault's claim is that nothing can pre-exist its own naming and circumscription. So a concept can only be understood by those required to analyse, redefine and challenge it in praxis and through language once it is named. The benchmarks were not discovered in 1996 by Kemp, but instead invented through language and discourse. This is evident in the fact that a search for the word 'benchmark' through newspapers and press releases produces nothing linked to education, much less literacy, until 1996, when Kemp reopened the debate on a national school curriculum and revived discussions on nationally common, or at least comparable, assessment standards.

Table 1 lists the descriptors of benchmarks as a concept or object, as used within the original publications of the National Literacy and Numeracy Benchmarks (Commonwealth Department of Education, Training and Youth Affairs, 1997).

TABLE 1 ABOUT HERE

Language is used within and around the benchmarks to denote them as objects that are concrete and able to represent the intent of other objects such as 'standards' and elements of 'literacy' and 'numeracy'. And yet they have in fact been shifting entities, used by and between Federal and State Ministers of Education as political leverage to achieve funding and policy goals. The benchmark standard was originally set high, but was then lowered when the number of children unable to achieve it embarrassed Kemp after the 1996 Year 3 literacy survey. The Federal authority deferred introducing its redesigned benchmarks in December 1997 as a result of a protracted argument with State and Territory Education Ministers over issues related to the content of the benchmarks and accusations that the standards were being set very low in a deliberate and politically motivated move by Kemp. By April 1998 the benchmarks had been redrafted again, and compromise had been reached around issues of the collection of data, such that the State, Territory and Federal Ministers of Education signed an agreement in Hobart that paved the way for the introduction of the benchmarks as minimum literacy standards against which the achievement of students would be compared across States and Territories.

In a press statement about the signing of this agreement, Kemp stated:

They are high standards, but reasonable standards, and will contribute enormously to achieving quality of opportunity for Australian school children. Literacy is the foundation of opportunity in the information age. We now have a commitment that every Australian child will reach a standard of literacy that will allow them to continue successfully with their schooling.

(Quoted in Jones, 1998)

Whether a political agreement on what the benchmarks should look like, and at what level they should be set, has anything at all to do with Australian children actually reaching a standard of literacy that will allow them to continue successfully with their schooling is in itself arguable. However, the trail of genre chains and intertextuality that would be necessary to show how language and discourse have made this link an essential truth is not the subject of this paper. It is worth noting that any agreement between key educational decision makers on whether there is a link between testing and improving standards has been precarious over past years, and remains so. This is evidenced, for example, by a statement released by a spokesman for the Minister of Education in Victoria in July 2001. The spokesperson stated that:

Victoria was resourcing schools to raise standards rather than regulating those standards.

(Quoted in Rindfleisch, 2001)

The benchmarks went through another adjustment, with changes made to the "method of calculating the national benchmark figures" (Ministerial Council on Education, Employment, Training and Youth Affairs, 2000, p. 4) between their public reporting in 1999 and 2000 (actually released in 2002 and 2003 respectively). This resulted in a revised version of the percentage of Year 3 students achieving the reading benchmark in 1999 being published as part of the 2000 results. In all cases the percentage of students achieving the benchmark increased under this new method of calculation. The Commonwealth claimed within the report that this new method of calculation was introduced to provide the most accurate picture of change available, but it is not evident why the changes only applied to the Year 3 data. The Commonwealth also claimed that the change impacted the results from all States and Territories similarly. However, the actual changes to the reported percentages within each system's data are uneven, ranging from an increase of 0.4% to 23.3%. The impact of the change seems to have been larger on subgroups that originally had the lowest scores. For example, in the original data 67.2% of Indigenous students in the ACT were reported as achieving the reading benchmark; this figure was revised to 90.5%. These continual changes to the benchmarks, and to the standards that they represent, attack their credibility as representing anything other than a political chimera, shape-shifted to serve the needs of respective governments and the political credentials of certain ministers.


What has been created as the entity of the ‘benchmarks’ is not a set of true statements about literacy.

They do not refer to the literacy standards of Australian students in an absolute sense. The

authority of these ways of knowing about literacy has produced regimes of truth that tell us

about what can count as truth here and now. Foucault (1983) reminds us that investigating

truth is about the activity of ‘truth-telling’ and not an uncovering of whether this or that is true

– for there is no one truth about literacy. The point of interest is why benchmarks are

unproblematised as truth statements here and now.

To summarise then, benchmarks are constituted as concrete entities or objects, when they are

in fact shifting mock-ups of a subjective literacy standard. This constitution of the

benchmarks as essentialist notions has allowed for statements of improvements of standards

and comparisons of State and Territory achievement levels to go unquestioned. Most importantly, however, it has also been the basis for the notion that informative and useful data on literacy standards can be collected through comparison of performance against a descriptor such as the benchmarks supply.

Essential elements of complex concepts

Supporters of the benchmarking process seem to have accepted that it is, in the first instance,

possible to locate the essential elements of complex concepts like literacy for all Australian

students in years 3, 5 and 7 within records of current levels of achievement and professional

judgement about appropriate and necessary standards. That these essential elements might be

context dependent, or that they may be constantly changing because of the shifting future life

worlds that these students will take their places in, seem not to have become issues within the

benchmark literature. Much of the current curriculum innovation within Australian education


systems is based on the recognition that students need an understanding of, and ability to use,

appropriate skills in context, and yet the benchmarks seem to be based on the assumption that

a narrow set of universal skills regardless of context will prepare students for their future lives

(Luke & van Kraayenoord, 1998, p. 60).

Similarly, there seems to be a level of acceptance that, if it is possible to document these essential elements, the benchmarks as they are presented in Commonwealth publications represent them. With no indication of who these benchmarks are assumed to be essential for, for what purpose, or on whose authority, the benchmarks in their present form depict a

monocultural representation of literacy and numeracy competence (Christie, 1998, p. 44) that

opposes the recognition that our schools and classrooms are servicing an increasingly

culturally and linguistically diverse student population.

Subgroups: who gets identified?

The public reporting of literacy achievement across the Commonwealth identifies several categories of students as subgroups. Currently, these subgroups allow for the comparison of boys' results with those of girls, and for the performance of Indigenous students and of students whose background language is other than English (LBOTE) to be compared with the results of all students. These raw categories act to simplify the issues

surrounding the school-based achievement of particular students. By drawing on just one characteristic of a student for categorisation purposes - their gender or their language background, for example - without allowing the opportunity to disaggregate achievement data further, to take account of the complex interplay of poverty, rurality, isolation and Indigeneity with gender and language in the achievement of school-based goals and outcomes, this reporting fails students on several counts. Reporting the data in this way allows for the perpetuation of stereotypes of students and their families, often couched in deficit terms.

This choice of subgroups to report upon provides only coarse categories that suggest an imagined homogeneity of student populations. To cut data according to gender or Indigeneity

with no recognition of - or means to further investigate - the effect that location, isolation,

social class or poverty may have on those categories, or the intersecting influence of one subgroup on another, is naive and misleading. The representation of the literate subject presented

is a one-dimensional snapshot. The subject becomes Indigenous or female but not both. The

reporting cuts the subject as a literate representation on a particular psychometric or

demographic grid. This is a classic Foucauldian grid of specification, but its simplicity

flattens out notions of subjectivity, creating one-dimensional students, homogeneous in character, with all others cut according to the same grid. From an educational perspective this is

less useful than a more complex grid of specification in framing, defining and reshaping

events and practices implicated in students’ achievement in literacy and numeracy. There are

then obviously several problematic issues around the identification of the categories used

within the public reporting of benchmark data in Australia. However I wish to focus on just

one of these problematics in more depth.

Who does this reporting mark for othering? Indigenous students and migrants – both prime

candidates for unproblematised othering within the current context of Australia – are set up as

other than the rest of the population. This binary division (Foucault, 1977) of those who

achieve and those who do not along such simplistic grids marks out the other in Australian

society more generally. As part of the hegemonic discourse this works to highlight individual


characteristics of students in an autonomous sense, and hide and naturalise that which remains

unmarked. The complex and profound effects of poverty that are known to affect equitable

access to education are unmarked and thus invisible within this reporting.

What we are seeing in fact is another instance of the construction of an entity through

language. The very act of naming a category as an identified subgroup creates that group as

an object of study. By reporting the achievement of LBOTE students for example, students

from ESL backgrounds become an equity group, despite the fact that the data demonstrates little difference between the percentage of students who identify as LBOTE achieving the reading benchmark and the percentage of all students who do so.

For Indigenous students, for whom the data does demonstrate unequal patterns of

performance in comparison to all students, these results perpetuate the expectation that

Indigenous students will have performance patterns which are lower than those of the general

population. My argument is not intended to deny the atrocity of the inequitable achievement levels of Indigenous students in Australia, but it does suggest that a more complex disaggregation

and analysis of the data is required to allow for the complex interweaving of equity issues to

become more transparent.

The false sense of homogeneity that a category like Indigenous portrays is the real issue that

requires further investigation. Just as all boys are not failing literacy – in fact the data

demonstrates that there is very little difference between the performance of boys and girls

(Ministerial Council on Education, Employment, Training and Youth Affairs, 1999c) - neither

are all Indigenous students. The public reporting of data both conceals the successes and veils


the true extent of education systems' failure to provide equitable education outcomes for all.

In a recent study investigating one Australian State's approach to state-wide testing as part of

the Commonwealth benchmark process, Luke et al. (2002) found evidence that rurality of

location had a significant effect on the performance of Indigenous year 5 students. This

finding suggests that reporting results of the identified subgroups provides only a partial map

of achievement, shrouding complex issues of how poverty, gender, language, locality, race

and ethnicity come together with pedagogy in institutions.

From benchmarks to tests: Psychometric disparities

The existence of benchmarks is based on assumptions of a relationship between

accountability and improved outcomes. First, it is assumed that the articulation of minimum national standards will lead to increased system and school accountability to key stakeholders. Second, it is assumed that the translation of these minimum standards into rigorous state-based assessment processes will lead to improved outcomes for student achievement and improvements in teaching quality (Luke & van Kraayenoord, 1998). There would seem to be little evidence that system-based testing regimes have the capacity to lead to improvements in these areas, or indeed that they are rigorous. As demonstrated recently

within a large-scale longitudinal study in Queensland (Education Queensland, 2001), teaching practice, or pedagogy, is a fundamental factor in improving student

outcomes. Literacy and numeracy outcomes will improve with the delivery of high quality

teaching programs, and whether the development of minimum competency statements will aid

in the delivery of this is doubtful at best.


Willis (1998) supports these concerns when she states that rather than enhancing student

outcomes, the benchmarks in numeracy are more likely to undermine improvements to

student outcomes. Willis believes that this is most likely to be the case for those most at risk

in relation to numeracy learning, despite claims that the benchmarks specifically support this

group of students. Generally, the benchmarks are not based on an adequate conceptualisation of what factors might put children at risk of not learning, and because of their minimum competency approach to performance description and their narrow back-to-the-basics focus, they

may well be more likely to “undermine good teaching practice than enhance it” (Willis, 1998,

p. 71).

It is possible to critique the accountability-as-benchmarks process at some very basic levels of psychometric inconsistency. To begin with, the very nature of the performance testing regime, and the necessity to test large cohorts, will always constrain test development and the development of the essential criteria used to define complex concepts. Paper-and-pencil tests, able

– for the most part - to be machine marked, are limited in how and what they are able to

assess.

As the notion of benchmarks is investigated, the arbitrariness of their existence becomes more

evident. However, nowhere is this arbitrariness so evident as in the assumption that

comparable, informative data can be collected through seven separate testing regimes. All

seven Australian States and Territories develop, administer and mark their own tests. Parts of

these tests are then taken to refer specifically to a student’s performance against the

benchmarks. Only this part of the student’s performance is identified and then used to

represent the results of student performance against a particular benchmark in the public

reporting. Of course each of these state-based testing regimes has its own bias, purposes,


breadth of coverage, political requirements, systemic constraints, and contexts. The process

requires that student results on these disparate tests be statistically processed and reported as a

singular entity. The process of equating the data is only loosely described in the benchmark

publications:

Comparability of results obtained through different state-based assessment programs is

being achieved using an equating process developed by an expert committee of the

MCEETYA Benchmarking Taskforce. This committee comprises independent

measurement experts as well as representatives of the Commonwealth, State and

Territory education departments, the National Council of Independent Schools'

Associations, the National Catholic Education Commission and assessment agencies.

(Commonwealth Department of Education Training and Youth Affairs, 1997, p. 2)

This statement gives more detail about the composition of the MCEETYA committee than about the equating process. As part of a lengthier description in the national reporting of the actual

benchmark results – which does not provide details that are any clearer - the reader is told that

the equating process is a calculation and that it is a three-stage process involving a common

achievement scale for reading and a process for determining the location of benchmarks on

the scale. Equivalent locations on State and Territory achievement scales are then calculated

to inform the test design process at State and Territory level.

So the State and Territory education systems are charged with the task of testing students'

achievement against these benchmarks through their own state-wide testing regimes. The

testing points used for this process were contested by State and Territory Ministers of

Education. However eventually it was agreed that the testing should take place in years 3, 5


and 7 during August of each year. This decision was taken despite the legitimate argument of disadvantage, put forward by States such as Queensland, that it failed to take into account the differences in the naming of year levels across systems. Students in

Queensland for instance have been attending full time compulsory schooling for a full year

less than students in some other States when they sit for the tests. Claims of improvements of

standards through increased accountability of systems would seem to be difficult to defend

when years of schooling are not kept consistent in data collection.

Importantly though, regardless of the implausibility of this process equating to anything like a

psychometrically accurate singular benchmark from these disparate tests, the whole process

also rests on the assumption that state-wide testing regimes involve the use of rigorous, valid

and well-designed tests and processes. There is evidence that this assumption would not withstand close scrutiny of the tests (Luke et al., 2002; Woods, in preparation).

The Consequences

I have detailed how the processes involved within the accountability-as-benchmarks movement are flawed. Once these processes are problematised at these levels, it must also become problematic to allow them to act as one of the filters used to construct literacy within education. If there are institutional defects in the process, then it is acceptable neither to use the process to describe literacy nor to allow it to influence the teaching and learning of literacy.

The scope of this paper has not been to detail all of the side effects that this menacing field might have on literacy teaching and learning within schools. Effects such as the constriction of curriculum; the time wasted in practice, administration and reporting of the data; the construction of a general community mistrust of schools and education systems; the deskilling of teachers; and a foregrounding of individualistic and competitive discourses within education are detailed elsewhere (Alloway & Gilbert, 1998; Christie, 1998; Gee, 1999; Leung, 1998; Luke & van Kraayenoord, 1998; Willis, 1998).

The disadvantages of raising one instance of literacy assessment above others are evident: regardless of the intensity of authority with which assessments are presented, or their collusion with notions of objectivity, all are developed through human enterprise and prone to human preferences and fallibility (Afflerbach, 2002). To justify the channelling of resources, and the risks to broader constructions of literacy more in tune with the needs of today's students, surely the accountability-through-benchmarking movement must give a suitable answer to the following questions. How will teaching and learning get better because

of these processes? What is the value-addedness of benchmarking? Until those questions are asked and answered, we will be asking the wrong questions. The responses to improving outcomes on the basis of the data collected through measures such as those reported within this paper have been narrowly focused, add-on - often single-hit - intervention programs aimed at topping up individual children and providing them with inoculations or boosters against 'failure'.

A foregrounding of intervention - especially early intervention - will not produce an

education system that provides equitable access to success for all students. The benchmark

results themselves provide this evidence. Despite considerable resources focused on the

provision of early intervention programs such as Reading Recovery, students are more likely

to reach the reading benchmark in year 3 than in year 5. One-shot intervention within the first

three years of school will not immunise against literacy failure. Because of this, using the identification of 'at-risk' students for recruitment into intervention as a justification for the accountability-as-benchmarks movement is flawed and unjustifiable.


References

Afflerbach, P. (2002). The road to folly and redemption: Perspectives on the legitimacy of

high-stakes testing. Reading Research Quarterly, 37(3), 348-360.

Alloway, N., & Gilbert, P. (1998). Reading literacy test data. Benchmarking success? The

Australian Journal of Language and Literacy, 21(3), 249-261.

Brock, P. (1998). The great literacy debate - again. Australian Journal of Language and

Literacy, 21(1), 8-26.

Christie, F. (1998). Point and counterpoint: Benchmarking. Curriculum Perspectives, 18(3),

43-45.

Commonwealth Department of Education Training and Youth Affairs. (1997). National

literacy and numeracy benchmarks. Canberra: Department of Education, Training and

Youth Affairs.

Department of Employment Education and Training. (1991). The Australian language and

literacy policy. Canberra, Australia: Australian Government Printing Service.

Education Queensland. (2001). School reform longitudinal study: Final report. Brisbane,

Australia: School of Education, University of Queensland.

Foucault, M. (1972). The archaeology of knowledge (A. M. Sheridan Smith, Trans.). New

York: Pantheon Books.

Foucault, M. (1977). Discipline and punish: The birth of the prison (A. Sheridan, Trans.).

London, UK: Penguin.

Foucault, M. (1983). Discourse and truth: The problematization of parrhesia. Six lectures given by Michel Foucault at the University of California at Berkeley, October-November 1983. Retrieved 7 May 2004, from http://foucault.info/documents/parrhesia/


Gee, J. (1999). Reading and the New Literacy Studies: Reframing the National Academy of

Sciences report on reading. Journal of Literacy Research, 31(3), 355-374.

House of Representatives Standing Committee on Employment Education and Training.

(1993). The literacy challenge. Canberra, Australia: Australian Government Printing

Service.

Jones, C. (1998, April 24). Deal clinched on literacy benchmark. The Age, p. 3.

Kemp, D. (1999). Outcomes reporting and accountable schooling. Paper presented at the

Curriculum Corporation 6th National Conference, 6-7 May, 1999.

Leung, C. (1998). Benchmarking literacy attainment: A case of curriculum displacement?

Curriculum Perspectives, 18(3), 50-55.

Luke, A., & van Kraayenoord, C. (1998). Babies, bathwaters and benchmarks: Literacy

assessment and curriculum reform. Curriculum Perspectives, 18(3), 55-62.

Luke, A., Woods, A., Land, R., Bahr, M., & McFarland, M. (2002). Accountability: Inclusive

assessment, monitoring and reporting. Brisbane, Queensland, Australia: Queensland

Indigenous Education Consultative Body.

Macnamara, L. (2005, September 30). Educators concede a role for the old ways. The

Australian, p. 2.

Ministerial Council on Education Employment Training and Youth Affairs. (1999a, July 29). The Adelaide declaration on national goals for schooling for the twenty-first

century. Retrieved 18th July, 2002, from

http://www.curriculum.edu.au/mceetya/nationalgoals/index.htm

Ministerial Council on Education Employment Training and Youth Affairs. (1999b). The

national literacy and numeracy plan. Canberra, Australia: Author.


Ministerial Council on Education Employment Training and Youth Affairs. (1999c). National

report on schooling in Australia. Canberra, ACT, Australia: Commonwealth of

Australia.

Ministerial Council on Education Employment Training and Youth Affairs. (2000). National

report on schooling in Australia preliminary paper. National benchmark results

reading and numeracy years 3 and 5. Canberra, Australia: Commonwealth of

Australia.

Ministerial Council on Education Employment Training and Youth Affairs. (2001). National

report on schooling in Australia preliminary paper. National benchmark results

reading and numeracy years 3 and 5. Canberra, Australia: Commonwealth of

Australia.

Ministerial Council on Education Employment Training and Youth Affairs. (2002). National

report on schooling in Australia preliminary paper. National benchmark results

reading and numeracy years 3 and 5. Canberra, Australia: Commonwealth of

Australia.

Ministerial Council on Education Employment Training and Youth Affairs. (2003). National

report on schooling in Australia 2003. Canberra, ACT, Australia: Commonwealth of

Australia.

Nelson, B. (2001). Quality teaching a national priority. Media release: 4th April, 2002 min

42/02.

Quality of Education Review Committee chaired by Peter Karmel. (1985). Quality of

education review. Canberra, ACT, Australia: Australian Government Printing Service.

Rindfleisch, T. (2001, 29th July). Kemp: Vic tests fail. Sunday Herald Sun, p. 36.

The Editor. (2005, September 30). Educated consumers. The Australian, p. 15.


Willis, S. (1998). First do no harm: Accountability and the numeracy benchmarks.

Curriculum Perspectives, 18(3), 70-76.

Woods, A. (in preparation). Queensland statewide tests in aspects of literacy and numeracy: Forming the literate student through testing.


Table 1: Descriptors used for 'benchmarks' within the original National Literacy and

Numeracy Benchmark publications (Commonwealth Department of Education Training and

Youth Affairs, 1997)

Benchmarks:
- are part of an agreement adopted by all Australian Education Ministers
- adopted … to improve educational outcomes of all Australian children
- are a set of indicators or descriptors
- represent nationally agreed minimum acceptable standards for literacy and numeracy at a particular level
- represent only the essential elements of literacy and numeracy
- (are) not the full range of the curriculum
- are distinct from progress maps and Profiles
- ask whether a particular level of achievement is likely to be adequate for making satisfactory progress at school
- are not tests