This article was downloaded by: [McGill University Library]. On: 16 December 2014, at: 17:46. Publisher: Routledge. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

International Journal of Early Years Education. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/ciey20

Using quality rating scales for professional development: experiences from the UK

Sandra Mathers (a), Faye Linskey (a), Judith Seddon (b) & Kathy Sylva (a)

(a) University of Oxford, UK; (b) Bradford Early Years and Childcare Service, Bradford Metropolitan District Council, UK. Published online: 13 Aug 2007.

To cite this article: Sandra Mathers, Faye Linskey, Judith Seddon & Kathy Sylva (2007) Using quality rating scales for professional development: experiences from the UK, International Journal of Early Years Education, 15:3, 261-274, DOI: 10.1080/09669760701516959

To link to this article: http://dx.doi.org/10.1080/09669760701516959



International Journal of Early Years Education, Vol. 15, No. 3, October 2007, pp. 261–274

ISSN 0966-9760 (print)/ISSN 1469-8463 (online)/07/030261–14. © 2007 Taylor & Francis. DOI: 10.1080/09669760701516959

Using quality rating scales for professional development: experiences from the UK

Sandra Mathers (a)*, Faye Linskey (a), Judith Seddon (b) and Kathy Sylva (a)

(a) University of Oxford, UK; (b) Bradford Early Years and Childcare Service, Bradford Metropolitan District Council, UK

The ECERS-R and ITERS-R are two of the most widely used observational measures for describing the characteristics of early childhood education and care. This paper describes a professional development programme currently taking place in seven regions across England, designed to train local government staff in the application of the scales as tools for improving practice. While the scales offer a transparent and measurable means of assessing and improving quality, a number of differences have been identified between the criteria presented by the scales and those set out in national regulations, curricular guidelines and notions of quality.

Background to the study

The use of quality rating scales

A number of standardized observational rating scales exist to assess the quality of early years provision for young children. Among the best known are the Environment Rating Scales: a family of scales developed in the US by Thelma Harms, Richard Clifford and Debby Cryer. There are four scales in the set, each appropriate for evaluating a different type of care, ranging from centre-based to home-based provision: the Early Childhood Environment Rating Scale (ECERS-R), the Infant Toddler Environment Rating Scale (ITERS-R), the Family Child Care Environment Rating Scale (FCCERS-R) and the School-Age Care Environment Rating Scale (SACERS). A UK extension to the ECERS-R (the ECERS-E) provides a more in-depth curricular focus.

*Corresponding author. Department of Educational Studies, University of Oxford, 15 Norham Gardens, Oxford OX2 6PY, UK. Email: [email protected]


The Environment Rating Scales were designed—and have been applied—for three broad purposes (Lambert, 2003): research on global classroom quality, formative evaluation for self-improvement of teaching and early years quality, and accreditation. In the USA, where the original scales were developed, all three applications are commonplace. The ECERS and the ITERS were used to assess quality in large-scale studies such as the National Child Care Staffing Study (Whitebook et al., 1990), the Cost, Quality and Child Outcomes Study (Phillipsen et al., 1997) and the evaluations of Head Start and Smart Start (Bryant et al., 2003; Zill et al., 2003). Many US states also use the scales as tools to improve practice, through self-assessment or as part of accreditation or voluntary improvement programmes. To take just one example from the state of Arkansas, trained staff conduct assessments using the scales and then offer training and support for centres to improve the quality of their provision. State funding is allocated according to measurable improvement on the scales. The scales have also been used extensively in other countries, both as research instruments and as tools to improve practice. In Sweden, for example, the Swedish ECERS has been used as part of a self-improvement programme with pre-school teachers, leading to quantifiable improvements in the quality of provision offered (Andersson, 1999).

In contrast, although the scales are relatively well known in England, they have been used almost exclusively for research purposes. However, early work in applying the scales for centre improvement (e.g. Iram Siraj-Blatchford’s work as part of the Early Excellence Centres Evaluation, 2002a, b) is now gaining momentum, and use of the scales by practitioners and government authorities is increasing. Best known in the UK are the ECERS-R and ITERS-R, both of which have been validated on large UK samples and shown to predict child outcomes (e.g. Sylva et al., 2004; Mathers & Sylva, 2007).

This paper describes a professional development programme in seven regions across England, designed to train local government staff in the application of the ECERS-R and ITERS-R as tools for improving practice.

English policy context

There has been a growing interest in recent years amongst English local government authorities in using quality rating scales for evaluation, audit and improvement in the early years sector. This interest is evolving in a policy context in which early years education and care is, in comparison with other countries, fairly regulated and becoming more so. Providers have access to curriculum guidance in the form of Birth to Three Matters and the Foundation Stage, now combined into a single framework: the Early Years Foundation Stage is due to come into force in 2008 and will offer guidance from birth to five. The regulatory body Ofsted (Office for Standards in Education) is responsible for inspecting early years centres and uses a set of published ‘National Standards’ as the criteria for its inspections. The Childcare Bill introduced to parliament in 2005 means that local authorities are legally required to provide high quality and affordable childcare for children up to the age of five (Childcare Act 2006).


Alongside the requirement for local authorities to provide high quality care is a growing focus on accountability and measurable progress. The following is a quote from draft statutory guidance produced by the UK government (currently under consultation and due to come into force in April 2008):

Local Authorities need to set clear expectations about what high quality provision looks like, and settings need to know what support is available to help them improve … There should be clear benchmarking and transparent levers and incentives to improve quality—every setting should strive to push quality ever higher above minimum Ofsted standards. (HM Government, 2007, p. 26)

While Ofsted fulfils the purpose of monitoring at a broad level, assessing centres against the National Standards and providing a report of strengths and areas which require improvement, the reports are generally not in sufficient detail to:

● act as an audit tool for local authorities to rigorously assess quality standards, identify areas for improvement, or monitor change in quality over time; or

● offer centres (and the professionals who support them) specific and practical guidance on how to improve.

It is in these two areas that the Environment Rating Scales are carving a niche in the UK early years sector. Many local authorities develop and operate their own quality assurance schemes. Others use existing tools, for example the Effective Early Learning programme (EEL) developed by Christine Pascal and Tony Bertram at University College Worcester (Pascal et al., 1996). However, the Environment Rating Scales are growing in reputation as a viable, rigorous and manageable alternative.

Aims and methodology

Aims

This paper describes research on an ongoing professional development programme involving seven UK local government regions. Early years professionals are being trained in the use of the ITERS-R, ECERS-R and ECERS-E, with a view to applying the scales within each region for quality improvement and/or audit purposes. Alongside the aim of each individual region to explore possibilities for centre improvement, the overarching aim of this research paper is to examine the use of the scales across the seven regions for practical (i.e. improvement) purposes and to answer the following specific questions:

1. What are the strengths and limitations of the ECERS-R and ITERS-R for professional development and centre improvement within the English context?

2. Which aspects of the scales are appropriate for assessing English practice, and which are less appropriate?

3. What training and support is required to enable tools such as the ECERS-R and ITERS-R to be successfully applied for practical purposes?

This paper sets out some early themes in relation to the three questions outlined. An additional aim of the ongoing work is to add to the growing body of literature offering a critique of the Environment Rating Scales as a method of assessing early years provision quality (e.g. Douglas, 2004). The ECERS and ITERS scales have undergone considerable revisions in the light of feedback received from researchers and practitioners—for example concerning clarity of definition and areas not adequately addressed by the scales—and are still evolving in the light of their continued use. This paper adds a UK voice to the debate, and considers the use and applicability of the ECERS and ITERS scales within the UK early years policy context.

The Environment Rating Scales

The ECERS and ITERS are two of the most widely used observational measures for describing the characteristics of early childhood education and care, and have been used worldwide to assess quality. The Infant Toddler Environment Rating Scale (ITERS-R; Harms, Cryer & Clifford, 2006) considers provision for children from birth to two and a half years, while the Early Childhood Environment Rating Scale (ECERS-R; Harms et al., 2005) considers provision for children aged two and a half to five years. Broadly speaking, they provide a ‘snapshot’ of the environment as experienced by the children attending. Both scales address a range of quality dimensions, including the physical space and furnishings, resources and activities available to children, interactions (adult–child and peer interactions) and aspects of health, safety and personal care routines. An additional subscale considers the provision for staff members and for parents. Each subscale comprises a number of items, rated on a 7-point scale, from 1 (inadequate) through to 7 (excellent). Both scales have proven reliability and validity (Harms et al., 2005, 2006), and many research studies have found that they accurately predict children’s development (e.g. Burchinal et al., 1996; Peisner-Feinberg et al., 1999; Sylva et al., 2004; Whitebook et al., 1990).
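To make the scoring structure concrete, the following is a minimal sketch (not part of the published scales themselves; the averaging convention, function name and example ratings are illustrative assumptions) of how observers commonly summarise a subscale: as the mean of its item ratings on the 1–7 scale, with items scored ‘not applicable’ excluded from the average.

```python
def subscale_mean(item_scores):
    """Mean of item ratings on the 1 (inadequate) to 7 (excellent) scale.

    None represents an item scored 'not applicable', which is excluded
    from the average rather than being counted as a low score.
    """
    valid = [score for score in item_scores if score is not None]
    if not valid:
        return None  # every item in the subscale was 'not applicable'
    return sum(valid) / len(valid)

# Hypothetical ratings for a five-item subscale, one item marked N/A.
ratings = [5, 4, None, 6, 3]
print(subscale_mean(ratings))  # 4.5
```

Excluding ‘not applicable’ items matters: treating them as zeros would penalise centres for provision the item simply does not apply to.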

A new UK extension to the ECERS-R (the ECERS-E) is also being applied as part of the quality improvement programme. The ECERS-E, developed and validated as part of a major UK research study (the EPPE Project: Sylva et al., 2004), focuses in greater depth on the quality of curricular provision. Unlike the ECERS and ITERS, which have a history as tools for professional development, the ECERS-E is only just beginning to undergo the move from research to practice. This presents an interesting opportunity to study the application of an instrument which is validated as a research tool, but still in the early stages of application as a tool for improving practice (e.g. Siraj-Blatchford, 2002a, b). However, the ECERS-E is not the focus of this paper, and findings related to its use will be reported separately.

The professional development programme

To date, seven UK local government regions have participated in the Environment Rating Scales professional development programme. The regions are diverse in their nature, ranging from urban to rural, and differing also in their relative sizes and population characteristics. Advisory and support teams in each of the regions are undergoing intensive training on the scales. Part of the training programme involves input and discussion on the possibilities for applying the scales for audit and quality improvement purposes. The participating UK regions have diverse plans for implementing the scales. One region has conducted a large-scale audit of all the private and voluntary centres within the region (over 700 in total), with the aim of identifying priorities for training, resource allocation and centre support. Another region is using the scales, in conjunction with its existing Quality Assurance scheme, as an audit tool to measure quality and the impact of support and training provided to centres. A third hopes to use the ECERS and ITERS to develop a framework for decision making regarding registration and funding, while a fourth plans to use the scales with individual centres, to set a baseline for performance and action planning.

Data collection

Data have been collected in each of the seven government regions using semi-structured interviews, discussion sessions and written evaluation questionnaires following the training sessions. All early years professionals taking part in the development programme are asked for their views on:

● the strengths and limitations of the scales for professional development and centre improvement within the English context;

● the specific aspects of the scales which they consider to be appropriate for assessing English practice, and any which are less appropriate;

● the training and support required to enable tools such as the ECERS-R and ITERS-R to be successfully applied for practical purposes.

Case study: Bradford Metropolitan District Council

Bradford Metropolitan District Council (MDC) is a local government authority region in the north of England. It covers five constituency areas, which range from high levels of deprivation to more affluent rural areas. The Environment Rating Scales project within Bradford has involved a number of stages:

1. A pilot study in four early years centres to develop an understanding of the scales and their potential as tools for improvement.

2. Intensive training for early years professionals within the advisory and support team on the ECERS-R, ECERS-E and ITERS-R.

3. A baseline audit of early years funded centres in the private, voluntary and independent sector. Assessments were carried out by a fully trained team and subject to appropriate reliability checks.

4. Follow-up support: as a starting point, advice, support and resources are being provided in a specific area (provision for books and reading). This will develop into an ongoing programme of targeted support for centres, using the ECERS-R and ITERS-R scales (and the outcome of the audit) as a framework to guide improvement. Bradford MDC is keen to ensure that the scales are used with centres to encourage self-evaluation processes, and not simply as a monitoring tool.

Next steps

● Extension to schools (a pilot is already underway) and possible extension to childminders and out-of-school care using the SACERS and FCCERS-R.

● Links with a series of guidance documents produced by the local authority to offer early years centres practical ideas and support appropriate practice. ‘Hands On Learning’ will be linked with the ECERS principles, as well as with the UK curriculum guidance documentation, and a video is also underway.

● The development of a ‘demonstration’ centre, which practitioners can visit for ideas and advice on good practice.

● Use of the data provided by the audit for improvement at the centre and at the authority level.

● Further training, for example for partner organizations, and training of ‘gold standard’ ECERS-R/ITERS-R practitioners who will take on responsibility for ensuring consistency in the use of the scales throughout the region.

Strengths of the ECERS-R and ITERS-R for professional development and centre improvement within the English context

Semi-structured interviews and responses to written evaluation questionnaires showed that, on the whole, the reaction to the Environment Rating Scales in the UK has been positive. The perceived benefits can be summarized as follows:

● Bringing teams of early years professionals together and providing a common language for discussion and development. Many UK local government authorities are undergoing restructuring and reorganization, and several of those training to use the scales for centre support have found the experience to be a unifying one.

● Monitoring of change and accountability. Local government authorities are increasingly being asked to provide evidence of their success in raising standards and improving the quality of early years provision. Using the scales to create a rigorous and detailed baseline from which future progress can be measured is seen as immensely beneficial.

● Transparency in terms of the criteria by which early years centres are being asked to improve. The majority of local authorities are planning to use the scales in a very transparent way, working with centres to improve quality rather than imposing an additional set of standards to which they must adhere. Many are buying copies of the scales for early years centres, and recognize their potential for reflective self-evaluation as well as monitoring. The fact that the scales clearly set out the progression in quality within each area means not only that centres can see the level at which they are currently functioning but also the next steps.


● Identification of strengths as well as areas for development.
● Minimal paperwork: the scales set out the progression in quality using a series of specific statements, which can either be marked off on a score sheet or recorded in the scale itself.

The following quotes are from local government advisors and other professionals taking part in the training programme:

A clear, concise document which will help all the [centres] to see where they are, and what the next steps are for them individually without putting undue stress on them.

I think this could be a really useful tool for both a quality audit and as a means to support centres to move forward and raise quality in a positive way.

This gives our teams a cohesive tool/language to use in multi-agency meetings—we will have a shared understanding and so be able to support more effectively when trying to raise standards in centres.

Issues for consideration when using the ECERS-R and ITERS-R for professional development and centre improvement within the English context

Can quality be captured using rating scales?

As always, alongside the identified advantages of the ECERS-R and ITERS-R there are also issues which need to be addressed: not least the overarching questions of whether a rating scale approach is appropriate for describing early childhood provision, and whether existing scales can adequately capture ‘quality provision’. Some argue that reducing quality to a series of defined statements or ‘boxes’ is inappropriate and leads to a lack of conscious thought about the process of improving quality: ‘measurement without description and conceptual understanding’ (Athey, 1990). Certainly, any such tool needs to be used in an intelligent and reflective manner, and as part of a wide range of methods, approaches and philosophies, rather than being presented as a ‘complete solution’.

If the rating scale approach per se is accepted, the second question we must address relates to our notions of quality. Statham and Brophy (1992) point out that the provision of ‘an objective rating scale for measuring quality has to assume that there is an explicit model of what constitutes good provision’. The ECERS-R and ITERS-R scales were developed following the NAEYC guidance on ‘developmentally appropriate practice’ (Bredekamp, 1987) and cover a broad range of quality dimensions. However, it has been argued that the ideas that underpin these notions are no longer fully up to date. For example, Dickinson (2002) suggests that while ‘ideas of what is developmentally appropriate practice have undergone significant change with respect to literacy and cognitive learning more generally’ there have been ‘relatively few changes in research tools and accreditation guidelines’. Specific gaps in ‘coverage’ of the ECERS-R and ITERS-R identified by the UK practitioners taking part in this study are outlined in more detail later in this section.


Cultural and regulatory contexts

A further issue which must be considered is the applicability of the scales to different cultural contexts. Tobin (2005) argues that ‘quality standards should reflect local values and concerns and not be imposed across cultural divides’. Similarly, Douglas (2004) points out that ‘rating scales such as ECERS are generally validated by reference to the values of one particular group in one country … in the case of ECERS, most of the experts were drawn from the field of child development in North America’. An emerging theme for the UK is therefore the ‘goodness of fit’ between the ECERS-R and ITERS-R scales and the existing national regulations and curricular guidance.

Overall, the fit between the two scales and the National Standards applied by the UK regulatory body (Ofsted) is good. UK local authorities have been positive about the rigour of ECERS when compared to the Ofsted guidelines and, in particular, the fact that the scales focus in greater depth on ‘good’ and ‘excellent’ quality. However, there are areas of difference and, as the use of the scales in the UK increases, these will need to be addressed. A number of the differences relate to the guidelines for routine care. The ECERS and ITERS approach to personal care and hygiene is often more demanding of early years centres than the UK Ofsted criteria. For example, item 8 in the ITERS-R (Nap), which considers provision for young children to sleep during the day, requires that cots or mats are placed at least 36 inches apart for reasons of hygiene. While the UK National Standards do cover the health and safety of sleeping children (e.g. the avoidance of cross-contamination through use of clean bedding), there is no specific guidance relating to the spacing of cots and mats, and the majority of UK early years centres would not meet this stringent ITERS requirement. Differences in relation to the health and safety requirements in the USA and the UK were also identified as an issue in relation to the original version of the FCCERS-R (the FDCRS; Harms & Clifford, 1989) by Rowland et al. (1996). For example, on the ‘health’ item, family day care providers in the study were not able to score above ‘one’ (inadequate) unless they had undertaken a health examination within the last year. However, since this was not part of the regulatory requirements of registration in the UK, few providers received high scores.

For local authorities and centres using the ECERS-R and ITERS-R, differences between the scales and English inspection guidelines raise the question of which set of criteria to use. Where the ECERS/ITERS criteria are more stringent, some regions have chosen to use these over and above the criteria set by Ofsted. Others have chosen to adapt the scales, dropping or amending those elements which do not fit with UK guidance. While helpful in terms of dovetailing with local contexts, this raises some issues in terms of comparability. If the scales are altered—that is, users adhere to certain elements of the scales and discard others—then they are no longer the reliable and valid instruments proven by research. In addition, if different users retain and discard different parts of the scale/s, they are no longer valuable as comparative tools. This could be addressed through the development of an ‘English’ version which, for example, did not include such stringent criteria for personal care routines. Although this would require independent testing before being confirmed as a reliable and valid research tool, it would offer a standardized instrument for development and comparison of quality within the UK.

Identified gaps

It is clear that the ECERS-R and ITERS-R are more demanding in some areas than is accepted practice in the UK. However, perhaps more important are those areas where the scales are not seen to go far enough. The ECERS and ITERS scales have been criticized in the past for not adequately addressing important areas of provision: for example, Dickinson (2002) notes that ‘the revised ECERS … almost completely ignores the place of print in the classroom. Of the 43 scales, none deal directly with literacy.’ Although perceived by many of the participating regions as more rigorous than the Ofsted guidelines, a number of other important gaps have been noted by the early years professionals taking part in the training programme. These include:

● A general focus on the provision of resources, rather than on the opportunities staff create for play and interaction. On the ECERS-R item 24 (Dramatic Play), for example, a score of ‘excellent’ can be achieved simply by offering appropriate materials: no specific interaction is required on the part of the staff.

● A light touch in terms of mathematical and scientific development, again with a limited focus on the role of the adult in developing children’s thinking (although this is covered to some extent by the item on ‘using language to develop reasoning skills’), and on the extent to which children are encouraged to develop understanding through their own play—for example by leaving blank labels in a role-play shop for children to price their own goods.

● A lack of rigour in assessing provision for information and communications technology. ICT plays an important role in the UK Early Years Foundation Stage guidance, and all early years centres are expected to offer children a wide range of experiences including equipment such as digital cameras, programmable toys and everyday appliances. In contrast, the ICT item in the ECERS-R can simply be scored ‘not applicable’ if none is offered.

Another important area identified by the UK practitioners is whether the scales adequately assess diversity and inclusive provision—something for which the original ECERS received criticism. Although revisions to the original scales have included additional items and examples to address these concerns, the UK experience to date supports Douglas (2004) in his conclusion that ‘even where revisions have been made, they would not be perceived by many practitioners as going far enough’. The item ‘Provisions for Children with Disabilities’ is completed only ‘if there is a child in the group with an identified and diagnosed disability, with a completed assessment’ (Harms et al., 2005). What this fails to assess is how adequately centres identify children who may have special needs but have not yet been diagnosed, or who have additional needs which do not fall under the ‘disability’ umbrella (e.g. children who speak English as an additional language). Since early identification and intervention are crucial for these children, this was perceived by several of the participating authorities as a serious failing. Throughout the scales, many of the individual statements relating to children with additional needs can be marked as ‘not applicable’ if there are no children with a diagnosed need attending at that particular time. In the UK, many special educational needs coordinators would argue that, to be truly inclusive, centres should have an ‘anticipatory duty’ towards children with additional needs.

Where the ECERS-R is not perceived as going far enough, the recommendation is that the scales are used in conjunction with other tools, i.e. as one tool in a ‘toolkit’ of approaches. The UK ECERS-E (Sylva et al., 2006) was developed to provide a more rigorous assessment of provision for diversity and planning for individual learning needs, as well as assessing curricular provision for literacy, mathematics and science/environment. Ongoing research at the University of Oxford includes the development of a further extension to the ECERS-R, designed specifically to address issues of access and inclusion for children with additional needs (Soucacou, n.d.). Other tools, for example the Programme Administration Scale (Talan & Bloom, 2004), assess aspects of management and leadership. Many researchers and practitioners successfully supplement the ‘environmental’ focus of ECERS- and ITERS-style scales with tools which consider the experiences of individual (‘target’) children, for example the Leuven Involvement Scale (Laevers, 1994).

Putting ‘labels’ on quality

One concern evident in responses to the questionnaires and semi-structured interviews was the use of the quality labels ‘inadequate, minimal, good and excellent’. A particular problem perceived by many professionals taking part in the training programme was the use of the label ‘excellent’ at the top end of the quality statements, particularly where it was felt that the statements did not go far enough in their expectations of a high quality environment. Participants were concerned that the label ‘excellent’ to describe provision which met all the requirements for an item would give centres the impression that there were no further improvements to be made, thus potentially ‘closing down’ opportunities for professionals supporting early years settings to suggest any further changes not listed by the scales. One solution planned by a participating region is to use the statements themselves, but to remove the quality labels. Thus, the scale can be seen as a tool to work through, without the implication that reaching the end (i.e. achieving all the statements) means ‘perfection’ has been attained.

Training and support required to enable research tools to be successfully applied for practical purposes

Again, a number of themes are emerging here, although with a unifying thread of ‘appropriate application’. Application of the ECERS and ITERS in the USA has been overseen by the authors of the scales—Thelma Harms, Richard Clifford and Debby Cryer. Since the scales are often used in the USA for regulatory and monitoring purposes (as well as for research), there has been an emphasis on ‘reliable and consistent use’. This is now the challenge for the UK: to ensure that increasing use of the scales happens in an appropriate and sensitive manner.

There are, in essence, two issues here, depending on the context in which the scales are being used. The first is an issue of reliability, where the scales are being used for ‘measurement’. To date, the use of the scales in UK research has, on the whole, been systematic and rigorous. Evaluations of national government programmes such as the Neighbourhood Nurseries Initiative (Mathers & Sylva, 2007) have used the scales to assess quality to ‘research standards’, i.e. appropriate checks have been carried out to ensure that the data are collected in a reliable way. The emerging use of the scales by local government authorities raises the question of how to ensure rigorous use outside the research arena. Bradford MDC is an example of an authority that has tackled this issue. The initial ‘baseline’ audit of provision within the authority was conducted by a team of fieldworkers trained by an experienced external body, and reliability was checked through a programme of paired visits with a ‘gold standard’ rater. Plans are also being made for the future to ensure reliable and consistent use of the scales: a group of practitioners and advisers will receive additional training and reliability checks, and will be responsible for ensuring consistency in use throughout the authority.

The second issue relates to ‘appropriate and sensitive use’: ensuring that centres are involved in the process rather than having it ‘done to them’. A regulatory framework already exists in the UK, and the ECERS-R and ITERS-R scales have the potential to offer much more than simply monitoring and regulation. Ensuring centres are fully informed, involved and on board will be essential. Ideas arising from use of the scales in the UK include: supported self-evaluation; initial observations by an external observer followed by target setting with centre staff; and joint observations, where centre staff observe alongside a supportive adviser to assess the current situation and then set targets for development. Several authorities have opted to use the indicators/statements of the ECERS-R and ITERS-R for quality assessment and target setting within centres, without making reference to ‘scores’.

Overall, we recommend that:

● users of the scales are trained thoroughly, including practical guided visits applying the scales in centres;

● procedures are developed to ensure reliable and consistent use; this will become increasingly important as use of the Environment Rating Scales increases in the UK;

● the scales are used ‘with’ centres to encourage self-evaluation processes, and not simply applied ‘to them’ as a monitoring tool.

Conclusions

The quality debate will obviously continue for many years to come, and use of the Environment Rating Scales in England is also likely to develop further. Professional development and training programmes such as the one described in this paper will contribute to this growing use, and add to our understanding of how research tools can be used to improve practice.

This paper has set out some early themes arising from the experiences of UK authorities in using the Environment Rating Scales. Overall, the response to the ECERS family of scales by local government authorities has been positive: they offer a common language and tool for different professionals working to support early years centres, provide a transparent, detailed and practical set of criteria which allows celebration of strengths as well as identification of weaknesses, and offer a means of measuring quality and documenting change.

However, as with any tool, there are limitations. The Environment Rating Scales present and promote a specific notion of quality, which has been challenged by some. Where there are differences between the criteria presented by the scales and by national regulations, curricular guidelines and notions of quality, decisions will need to be made regarding which criteria to use. The possibility of a UK adaptation could be considered: if properly tested, this would offer a standardized tool for the development and comparison of quality in the UK. Where the ECERS-R and ITERS-R are perceived as not going far enough in their assessment of quality (e.g. in relation to issues of access and inclusion), we recommend that the scales are used in conjunction with other tools. Any single tool—however comprehensive—will only ever cover a limited spectrum. The UK ECERS-E (Sylva et al., 2006) offers greater depth in terms of curricular provision, provision for diversity and planning for individual learning needs. Other tools, for example those which consider the experiences of individual children, offer a different perspective. The quantitative approach of the scales might be appropriately complemented by a more qualitative method, such as one using portfolios.

Finally, increasing use of the Environment Rating Scales in the UK raises challenges in terms of ensuring rigorous yet appropriate application. We recommend that potential users of the scales undergo thorough training, including practical guided visits applying the scales in centres. In addition, procedures should be put in place to ensure reliable and consistent use, particularly where the scales are to be used for comparative purposes. Most importantly, it is essential that these scales do not simply become a ‘stick’ with which to beat early years centres in England: sensitive and appropriate application, involving centres in a process of reflective self-evaluation in addition to monitoring, is recommended.

The next stage of work will be to develop these themes in greater depth, and to conduct several in-depth reviews, for example cross-referencing the scales with the new curricular guidance for the early years. This work will be conducted with a view to developing guidance for other UK regions planning to use the Environment Rating Scales.

Acknowledgements

The authors are grateful to the local authority staff in all seven regions who have provided helpful criticism and reflection on the use of the Environment Rating Scales.


References

Andersson, M. (1999) The Early Childhood Environment Rating Scale (ECERS) as a tool in evaluating and improving quality in preschools, Studies in Educational Sciences, 19.

Athey, C. (1990) Extending thought in young children: a parent–teacher partnership (London, Paul Chapman).

Bredekamp, S. (Ed.) (1987) Developmentally appropriate practice in early childhood programs serving children from birth through age 8 (expanded edn) (Washington, DC, NAEYC).

Bryant, D., Maxwell, K., Taylor, K., Poe, M., Peisner-Feinberg, E. & Bernier, K. (2003) Smart Start and preschool child care quality in North Carolina: change over time and relation to children’s readiness (Chapel Hill, NC, Frank Porter Graham Child Development Institute).

Burchinal, M., Roberts, J., Nabors, L. & Bryant, D. (1996) Quality of center child care and infant cognitive and language development, Child Development, 67, 606–620.

Childcare Act 2006, introduced 8 November 2005, passed into law 11 July 2006. Available online at: http://www.surestart.gov.uk/resources/general/childcareact/

Dickinson, D. K. (2002) Shifting images of developmentally appropriate practice as seen through different lenses, Educational Researcher, 31(1), 26–32.

Douglas, F. (2004) A critique of ECERS as a measure of quality in early childhood education and care, paper presented at the Questions of Quality: CECDE International Conference, Dublin, 23–25 September 2004.

Harms, T. & Clifford, R. M. (1989) Family Day Care Rating Scale (FDCRS) (New York, Teachers College Press).

Harms, T., Clifford, R. M. & Cryer, D. (2005) Early Childhood Environment Rating Scale, Revised Edition (ECERS-R) (New York, Teachers College Press).

Harms, T., Cryer, D. & Clifford, R. M. (2006) Infant/Toddler Environment Rating Scale, Revised Edition (ITERS-R) (New York, Teachers College Press).

Harms, T., Cryer, D. & Clifford, R. M. (2007) Family Child Care Environment Rating Scale, Revised Edition (FCCERS-R) (New York, Teachers College Press).

Harms, T., Jacobs, E. V. & White, D. R. (1996) School Age Care Environment Rating Scale (SACERS) (New York, Teachers College Press).

HM Government (2007) Raising Standards—Improving Outcomes: Statutory Guidance on the Early Years Outcomes Duty (London, DfES, DWP, DH).

Laevers, F. (1994) The Leuven Involvement Scale for Young Children. LIS-YC Manual and Video Tape. Experiential Education Series no. 1 (Leuven, Centre for Experiential Education).

Lambert, R. G. (2003) Considering purpose and intended use when making evaluations of assessments: a response to Dickinson, Educational Researcher, 32(4), 23–26.

Mathers, S. & Sylva, K. (2007) National evaluation of the Neighbourhood Nurseries Initiative: childcare quality and children’s behaviour study final report (London, Department for Education and Skills).

Pascal, C., Bertram, A. D., Ramsden, F., Georgeson, J., Saunders, M. & Mould, C. (1996) Evaluating and developing quality in early childhood centres: a professional development programme (Worcester, Amber Publications).

Peisner-Feinberg, E. S., Burchinal, M. R., Clifford, R. M., Culkin, M. L., Howes, C., Kagan, S. L. et al. (1999) The children of the cost, quality and outcomes study go to school: technical report (Chapel Hill, Frank Porter Graham Child Development Center, University of North Carolina at Chapel Hill).

Phillipsen, L., Burchinal, M., Howes, C. & Cryer, D. (1997) The prediction of process quality from structural features of child care, Early Childhood Research Quarterly, 12, 281–303.

Rowland, L., Munton, A. G. & Mooney, A. (1996) Can quality of family day care provision in England be assessed accurately using the Family Day Care Rating Scale? A report on reliability, Educational Psychology, 16(3), 329–334.


Siraj-Blatchford, I. (2002a) Final annual evaluation report of the Gamesley Early Excellence Centre. Unpublished report, Institute of Education, London.

Siraj-Blatchford, I. (2002b) Final annual evaluation report of the Thomas Coram Early Excellence Centre. Unpublished report, Institute of Education, London.

Soucacou, E. (n.d.) The Inclusive Classroom Profile (ICP). Unpublished manuscript, Department of Education Studies, Oxford.

Statham, J. & Brophy, J. (1992) Using the Early Childhood Environment Rating Scale in playgroups, Educational Research, 34(2), 141–148.

Sylva, K., Melhuish, E., Sammons, P., Siraj-Blatchford, I. & Taggart, B. (2004) Effective Provision of Pre-school Education (EPPE) project: final report (London, DfES).

Sylva, K., Siraj-Blatchford, I. & Taggart, B. (2006) Assessing quality in the early years: Early Childhood Environment Rating Scale Extension (ECERS-E): four curricular subscales—revised edition (Stoke on Trent, Trentham Books).

Talan, T. N. & Bloom, P. J. (2004) Program Administration Scale: measuring early childhood leadership and management (New York, Teachers College Press).

Tobin, J. (2005) Quality in early childhood education: an anthropologist’s perspective, Early Education & Development, 16(4), 421–434.

Whitebook, M., Howes, C. & Phillips, D. (1990) Who cares? Child care teachers and the quality of child care in America. National child care staffing study (Oakland, CA, Child Care Employee Project).

Zill, N., Resnick, G., Kim, K., O’Donnell, K., Sorongon, A., McKey, R. et al. (2003) Head Start FACES 2000: a whole-child perspective on program performance, fourth progress report (Washington, DC, Administration on Children, Youth and Families, US Department of Health and Human Services).
