
Page 1: Teaching and Learning Quality Indicators in Australian Universities

Outcomes of higher education:Quality relevance and impact

8-10 September 2008 Paris, France

Enseignement supérieur : qualité, pertinence et impact

8-10 septembre 2008 Paris, France

Teaching and Learning Quality Indicators in Australian Universities

Denise Chalmers, University of Western Australia, Australia

A selection of papers for the IMHE 2008 General Conference will be published in the OECD’s Higher Education Management and Policy Journal. The opinions expressed and arguments employed herein are those of the author and do not necessarily reflect the official views of the Organisation or of the governments of its member countries. © OECD 2008. You can copy, download or print OECD content for your own use, and you can include excerpts from OECD publications, databases and multimedia products in your own documents, presentations and teaching materials, provided that suitable acknowledgment of OECD as source and copyright owner is given. All requests for public or commercial use and translation rights should be submitted to: [email protected].


TEACHING AND LEARNING QUALITY INDICATORS IN AUSTRALIAN UNIVERSITIES

Denise Chalmers, University of Western Australia, Australia

This national project to identify and implement teaching and learning quality indicators in universities grew from the recognition that an agreed approach was needed to recognise and reward quality teaching and teachers in higher education. A key aspect of recognising quality teaching is the development and implementation of agreed indicators and metrics across the Australian university sector. The first stage of the project involved an extensive international literature review and a survey of practice in Australian universities on the use of teaching and learning indicators at the national and institutional levels. This paper provides a brief overview of the context in which performance indicators are used in higher education and describes the types of indicators as input, output, process and outcome. It is argued that all types of indicators are needed if a comprehensive picture of a university is to be obtained. A framework of teaching quality dimensions for use at the institutional level has been developed, drawing on empirical research and the literature to identify indicators that enhance student learning and learning experiences. This framework is now being trialled in eight Australian universities to assess its usefulness in providing an approach to implement and embed teaching and learning indicators at multiple levels within a university, with a view to developing common indicators that can be used across institutions and disciplines.

Performance cultures in higher education

Higher education systems and institutions worldwide have undergone extensive reform and change over the past twenty-five years with the agenda of improving quality. A significant feature of this has been the drive to produce systematic evidence of effectiveness and efficiency (Doyle, 2006; Guthrie & Neumann, 2007; Hayford, 2003).

Higher education institutions have progressively implemented more systematic, formalised quality assurance processes, recognising this as a way to achieve greater efficiency and accountability within their organisation (Burke & Minassians, 2001). The development of university quality assurance processes has occurred in concert with the establishment by governments of quality models and organisations designed to audit and review university performance across state and national boundaries. Institutional and national quality models and performance indicators are considered vital components in raising the standard of higher education, with organisations such as the World Trade Organisation (WTO) assisting developing countries to introduce performance indicators and quality assurance at institutional and national levels (Marginson & van der Wende, 2007). At the international level, OECD/UNESCO has progressively sourced quantitative performance indicators to provide international comparisons of higher education systems (OECD, 2007).

The rationale behind performance models and indicators in higher education is to ensure the education provided to students equips them for employment and provides the nation with a highly skilled workforce that supports economic growth. However, it is not focused solely on economic value; educational, social and political values also influence the development and use of performance models and indicators (Reindl & Brower, 2001; Trowler, Fanghanel & Wareham, 2005; Ward, 2007). Higher education institutions and national level governments use performance indicators for different purposes.


Higher education institutions use performance indicators for four primary reasons:

• To monitor their own performance for comparative purposes
• To facilitate the assessment and evaluation of institutional operations
• To provide information for external quality assurance audits, and
• To provide information to the government for accountability and reporting purposes (Rowe, 2004).

Performance indicators used at the national level are designed to:

• Ensure accountability for public funds
• Improve the quality of higher education provision
• Stimulate competition within and between institutions
• Verify the quality of new institutions
• Assign institutional status
• Underwrite the transfer of authority between the state and institutions, and
• Facilitate international comparisons (Fisher et al, 2000).

Due to these differences in purpose at the national and institutional levels, there will necessarily be diverse perspectives on the appropriate type and use of performance indicators and measures (Harvey, 1998). There will also continue to be concerns about whether these indicators can adequately and directly measure the quality of teaching and learning that takes place within institutions and between students and their teachers (Harvey, 1998).

Defining performance indicators

Three kinds of indicators have been noted by Cave, Hanney, Henkel and Kogan (1997).

• Simple indicators are usually expressed in the form of absolute figures and are intended to provide a relatively unbiased description of a situation or process.
• Performance indicators differ from simple indicators in that they imply a point of reference, for example a standard, objective, assessment or comparator, and are therefore relative rather than absolute in character. Although a simple indicator is the more neutral of the two, it may become a performance indicator if a value judgement is involved.
• General indicators are commonly externally driven and are not indicators in the strict sense; they are frequently opinions, survey findings or general statistics.

There is sometimes confusion between the first and second kind of indicators, the distinction being that performance indicators always involve judgement.
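The distinction can be sketched with a small, purely hypothetical example: the same raw completion count becomes a performance indicator only once a point of reference (here an assumed sector benchmark, not a figure from this paper) introduces a value judgement.

```python
# Illustrative sketch only (not from the paper): contrasts a simple
# indicator (an absolute figure) with a performance indicator (the same
# figure set against a point of reference). All figures are hypothetical.

def simple_indicator(completions: int) -> int:
    """An absolute figure: describes a situation without judgement."""
    return completions

def performance_indicator(completions: int, enrolments: int,
                          sector_benchmark: float) -> dict:
    """The same figure made relative by a comparator (a benchmark)."""
    rate = completions / enrolments
    return {
        "completion_rate": round(rate, 3),
        "benchmark": sector_benchmark,
        "difference": round(rate - sector_benchmark, 3),
    }

# A count of 820 completions says little on its own; against 1 000
# enrolments and a hypothetical sector benchmark of 0.78 it supports
# a relative, and therefore evaluative, reading.
result = performance_indicator(completions=820, enrolments=1000,
                               sector_benchmark=0.78)
print(result)  # {'completion_rate': 0.82, 'benchmark': 0.78, 'difference': 0.04}
```

The point of the sketch is that the judgement lives in the comparator, not in the number itself; change the benchmark and the same count reads differently.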

Currently there is no common definition of performance indicators; however, it is agreed that performance indicators cannot be considered ‘facts’ but are goal-, value- and context-laden, and are utilised in different ways depending on the performance model being employed.

The following definition has been synthesised from the literature (Bruwer, 1998; Burke & Minassians, 2002; Burke, Minassians & Yang, 2002; DEST, 2002; Romainville, 1999; Rowe & Lievesley, 2002) and is the definition used for the Teaching Quality Indicators project.

Page 4: Teaching and Learning Quality Indicators in Australian Universities

4

Performance indicators are defined as measures that give context to information and statistics, permitting comparisons between fields, over time and with commonly accepted standards. They provide information about the degree to which teaching and learning quality objectives are being met within the higher education sector and its institutions.

Australian higher education institutions use performance indicators to monitor their own performance for comparative purposes, to facilitate the assessment of institutional operations, and to provide evidence for quality assurance audits, typically external, of institutional teaching and learning quality (Bruwer, 1998; Burke & Minassians, 2002; Burke et al, 2002; DEST, 2002; Chalmers, Lee & Walker, 2008; Romainville, 1999; Rowe & Lievesley, 2002).

Types of performance indicators

There is general agreement on four types of performance indicators: input, process, output and outcome (Borden & Bottrill, 1994; Carter, Klein & Day, 1992; Cave, Hanney & Kogan, 1991; Richardson, 1994). These can be more broadly categorised as quantitative and qualitative indicators.

Quantitative Indicators

Quantitative indicators are defined as those associated with the measurement of quantity or amount, and are expressed as numerical values: something to which meaning or value is given by assigning it a number. Input and output performance indicators fall into this category.

Input indicators

Input indicators reflect the human, financial and physical resources involved in supporting institutional programmes, activities and services. Their key limitation is that they cannot indicate the quality of teaching and learning without extensive interpretation. For example, an indicator such as resource allocation should be interpreted together with enrolment data (to determine the resource-to-student ratio), resource quality (i.e. condition) and conceptual range (e.g. the topics covered by library books) before drawing conclusions about teaching and learning quality.

Output indicators

Output indicators are subject to similar limitations. Output data reflects the quantity of outcomes produced, including immediate measurable results, and direct consequences of activities implemented to produce such results (Burke, 1998). The defining feature is quantity or numerical amount, and the quality of these numbers is almost entirely disregarded. Input and output measures are inherently constrained by their data-driven “quantitative” nature, which prohibits the investigation of instructional, interactive and learning processes crucial to the quality of an institution, its educational programmes and its graduates. As such, quantitative performance indicators do not demonstrate quality of education, but rather quantities of its outcomes (Burke et al, 2002).

There is limited empirical support for quantitative indicators as enhancers of teaching and learning quality. However, qualitative measures have received significant support because they focus on quality aspects and allow measurement of the deep and complex issues of which the higher education system is invariably composed. The use of qualitative as opposed to quantitative indicators provides information that allows a deeper understanding of the variable measured.

Qualitative indicators

Qualitative indicators are associated with observation-based descriptions, rather than an exact numerical measurement or value. They relate to or involve comparisons based on qualities or non-numerical data, such as the policies and processes for assessing student learning, the experience of a learning community, or the content of a mission statement. Outcome and process indicators lie within the classification of qualitative measures. These performance indicators typically do not involve generating the quantity of outcomes in the form of numerical data, but measure complex processes and results in terms of their quality and impact.

Outcome indicators

Outcome measures focus on the quality of the benefits that educational programmes, activities and services provide for all stakeholders. These key stakeholders include students, parents, the community, employers and industry (Burke, 1998; Warglien & Savoia, 2006). The distinction between output and outcome measures is that, while both measure the effects of higher education, output performance indicators do so quantitatively, whereas outcome measures do so qualitatively, in terms of quality and impact.

An outcomes-based accountability system focuses on the ‘value’ added to students by their higher education experience, in terms of satisfaction with the quality of that experience and the quality of the skills they have developed. This approach is aligned with the ‘student as customer’ culture prevalent in higher education, where ‘learning’ is described in terms of identifiable skills and products. The nature of outcome indicators, encompassing values of ‘quality’, ‘satisfaction’ and ‘learning outcomes’, means that outcomes are more difficult to measure than numerical outputs. As a consequence, they are not utilised as often as their quantitative counterparts (Bormans, Brouwer, Int’Veld & Mertens, 1987; Bruwer, 1998; Romainville, 1999). However, outcome indicators are considered more insightful, meaningful and accurate in measuring the methods and quality of teaching and learning as they relate to the objectives of higher education. They are also more useful in providing information that can be used to enhance teaching and learning. For example, collecting information on student satisfaction and skills is more instructive to the institution, teacher and prospective students than retention rate data; an indicator such as retention rate is useful from a social and economic perspective precisely because it simplifies the complexity of the higher education experience. For this reason, qualitative indicators are considered to better account for the complexity associated with higher education.

Process indicators

Process indicators are those which include the means used to deliver educational programmes, activities and services within the institutional environment (Burke, 1998). These measurements look at how the system operates within its particular context, accounting for institutional diversity, a common confounding factor in inter- and intra-institutional comparison.


Process indicators allow the collection of qualitative information on aspects of teaching and learning quality, such as policies and practices related to learning and teaching, performance management and professional development of staff, quality of curriculum and the assessment of student learning, and quality of facilities, services and technology. Process indicators have been identified by empirical research as the most practical, useful and appropriate measures of quality teaching and learning within higher education institutions (Chalmers, Cunningham & Thomson, forthcoming) and are the indicators commonly reviewed through institutional audit. Process indicators provide an understanding of current practice and the quality of that practice, which has been shown to be effective in informing further initiatives and policy decisions (Kuh, Pace & Vesper, 1997), leading to quality enhancement. They are an invaluable source of information on teaching and learning quality because they investigate the core of the student learning experience (e.g. quality of teaching, curriculum, assessment, services and facilities).

Process indicators provide information and context that facilitate interpretation of output and outcome indicators. When combined with valid and reliable input measures to account for contextual diversity, and with output and outcome indicators to indicate the results of teaching and learning, these measures provide a comprehensive perspective from which institutional strengths and weaknesses can be identified so that further improvement and enhancement can be undertaken.

Process indicators are subject to the methodological challenges of qualitative measures. However, process measures are generally considered by institutions and their staff and students to provide better measures of the quality of teaching and learning, as they are contextualised in the institution.

A summary of teaching and learning process indicators in use in Australia has been compiled, with institutional tables provided to the 37 participating institutions (Chalmers & Thomson, 2008). This summary extended the teaching and learning indicators identified in an earlier review of performance indicators (AVCC, 2004). While the 2004 review found that the existence and use of performance indicators were variable in Australian universities, the 2008 review found evidence of widespread use of process indicators. However, to identify the quality of those practices and indicators, the additional steps of review, audit and/or benchmarking would need to be undertaken.

The thirteen categories of process indicators identified in the national survey of Australian practice (Chalmers & Thomson, 2008) are:

1. Mission, Vision and Objectives

2. Teaching and Learning Plans and Policies (summary of those present)

3. Teaching and Learning Indicators (as measured and used at each institution)

4. Internal and External Performance Funds for Teaching and Learning (such as the LTPF, or opportunities for faculties to be allocated grants)

5. Organisational Unit Review (includes Disciplines, Divisions, Faculties, Schools, Centres)

6. Curriculum Review (includes units, unit sets and programs)

7. Assessment and Feedback Policies


8. Graduate Attribute Statement (and how this is embedded and assessed in courses)

9. Student Experience (explores the provision of resources, particularly those for target groups such as international and first year students)

10. Professional Development (outlines support provided for staff, such as workshops and peer review and more formal programs such as the Graduate Certificate in Higher Education)

11. Appointment and Promotion Criteria (details teaching, research and service requirements at each level)

12. Review of Academic Staff (summary of measures, frequency, and implications of performance reviews)

13. Recognition of Excellence in Teaching and Enhancing the Student Learning Experience (details eligibility, remuneration and requirements for recipients of Awards, Grants, Citations and Fellowships)

While national level process indicators can be difficult to identify as they are primarily institutionally based, they may be reported nationally, as has occurred with the Snapshot of Practice report (Chalmers & Thomson, 2008) and the Australian Universities Quality Agency (AUQA) institutional and synthesis reports.

Although qualitative outcome and process indicators are more insightful and accurate in measuring the methods and quality of teaching and learning, they are utilised less often because quantitative input and output indicators are more easily measured (Bormans et al, 1987; Bruwer, 1998; Romainville, 1999). This has resulted in an inappropriate dependence on less informative quantitative input and output performance indicators. Consistent with the findings of this report and others, the more frequent use of these quantitative indicators (particularly input measures) aligns with a system overly removed from the objectives of higher education (Pascarella, 2001; Pascarella, Palmer, Moye & Pierson, 2001). For example, a common indicator at the national level is retention rate, which is important from national and institutional perspectives as it indicates efficiency and social and economic benefit. However, from the student perspective, the primary objective of attending university is not to ‘pass courses’ or ‘avoid dropping out’, but to gain knowledge, skills and experiences in a supportive social and academic environment that provides equal opportunities (Romainville, 1999). Given the complexity of this variable, quantitative indicators are unlikely, perhaps even unable, to effectively and accurately measure its quality content; hence the importance of using them in conjunction with qualitative process and outcome measures.

Successful indicator systems, whether at national, institutional or campus levels, incorporate all four types of indicators to inform their decision-making and quality assessments. What is often less well understood is the importance of having a balance of all four types. Emphasis on output or outcome indicators over input and process indicators is likely to result in an unbalanced system with unintended and negative consequences (Borden & Bottrill, 1994; Burke & Minassians, 2001). A balance of the four types is particularly important at the national level, where the emphasis tends to fall on output or outcome indicators. It is also important to understand that indicators are interrelated and linked; for example, input indicators such as level of funding, or student and staff numbers, can have a confounding effect on output, outcome and process indicators (Guthrie & Neumann, 2006).

Although indicators can depict trends and uncover interesting questions about the state of higher education, they do not objectively provide explanations that reflect the complexity of higher education or permit conclusions to be drawn. Multiple sources of information and indicators are required to diagnose differences and suggest solutions (Canadian Education Statistics Council, 2006; Rojo, Seco, Martinez & Malo, 2001). Without multiple sources of both quantitative and qualitative information, interpretation may be erroneous. Indicators should therefore be interpreted only in light of contextual information about institutional operation, with the assumptions and purposes for which the information is being used made explicit.

In summary, the measurement of quality teaching and learning within the higher education sector should involve indicators that are significant in informing individual and institutional performance and, where feasible, also significant on a common national or sector-wide scale. A useful performance indicator is one that informs strategic decision-making, resulting in measurable improvements to desired educational outcomes following implementation (Rowe & Lievesley, 2002).

As the data resulting from performance indicator measurement are diverse in their intended purpose, the process of determining ‘fitness for purpose’ is extremely important and requires extensive consultation. Accordingly, the literature suggests that the use of performance indicators depends on at least three necessary conditions:

1. Data is meaningful when defined by the user (i.e. the data should inform the user in a way that can improve decisions)

2. Performance indicators are most reliable and valid when used as a group (i.e. the information should provide a comprehensive picture of a strategic area)

3. Data should provide information concerning the input and processes associated with a particular outcome or function, such as enrolment management, learning, teaching, outreach and community services (Cabrera, Colbeck & Terenzini, 2001).

The Teaching Quality Indicators Project

This project grew from the recognition that an agreed approach to recognising and rewarding quality teaching and teachers in higher education was needed. A key aspect of recognising quality teaching is the development and implementation of agreed indicators and metrics in universities and across the Australian university sector. The project provided the Australian higher education sector with the opportunity to proactively engage with the issue of recognising and rewarding quality teaching and teachers, and to lead institutions and the sector more broadly in defining and developing indicators and outcomes of quality teaching. The intent was that the project would contribute to enhancing the quality of teaching and teachers in institutions of higher education by providing tools and metrics to measure performance and enable institutions to respond to issues identified by the evidence.

The focus of the project is on students and teaching and learning at the institutional level. It is recognised that there are many agendas and priorities in universities. Indeed, the Global Value of the project acknowledges this by recognising research and teaching as the heartland of all university activities (Table 1).

Table 1: Values underpinning the teaching quality indicators project

Values of the Teaching and Learning Quality Indicators Project

Global value
• Values research and teaching as the heartland of all university activities

Institutional values
• Values diversity over standardisation and comparability
• Values the expectation that learning in higher education should be active, cooperative and intellectually challenging
• Values trust and openness at all levels
• Values equity principles and practices for both staff and students
• Values creative renewal opportunities for staff and students

Measures of performance values
• Values a teaching quality framework as an opportunity for development and enhancement
• Values measures and indicators that contribute to the development of good/effective teaching and learning practice
• Values good judgement that is aided by the use of performance indicators
• Values an evidence-based approach to decision making
• Values performance indicators that can contribute to, or be generalised to, the wider sector

Stage 1: Research and Framework Development

The initial stage of the project involved a comprehensive review of the national and international literature and current practice in relation to teaching and learning quality. This research has been released as a series of reports and forms the basis for subsequent stages of the project. These can be found at the project website www.catl.uwa.edu.au/tqi. These reports have contributed to the development of the framework that provides universities with a tool by which they can review systems and implement changes.

The Framework

The framework draws on the extensive work carried out on student retention and progression, together with the key dimensions of quality teaching identified in this project. The Geometric model of student persistence and attainment reflects the US higher education system; therefore, some aspects do not have the same significance or relevance in the Australian context. The initial model was developed to address the retention and progression of minority students in the USA, and has subsequently been extended to encompass all students. Manuals and tools to promote institutional attention have been developed through the Institutional Student Retention Assessment (ISRA), part of the Educational Policy Institute (isra-online.com).

The Student Retention framework that grew from this model focuses on institutional factors: recruitment and admissions; financial aid; student services; academic services; curriculum and instruction; and leadership and commitment. While these institutional factors are important and need to be incorporated into any framework on teaching and learning quality, they do not explicitly address social and cognitive factors, and they do not adequately address issues related to staff, particularly their engagement, reward and recognition, and career development and progression. This is not unreasonable, given the model’s focus on students. However, not addressing the legitimate concerns and needs of staff from their own perspective, and contextualising these in the wider institution, risks diminishing the usefulness and success of any strategy.

The Teaching Quality Framework endeavours to encompass not only a number of important institutional factors but some of the social and cognitive factors related to students. In addition, it accounts for the importance of staff, their engagement and career development in the context of the institutional mission and aspirations. The background on the development of these dimensions for the framework is developed in detail in Chalmers (2007) and Chalmers, Cunningham and Thomson (forthcoming).

The Framework for Teaching and Learning Quality Indicators is described under four key dimensions of teaching practice.

• Institutional climate and systems
• Diversity
• Assessment
• Engagement and learning community

The relationship between these dimensions is illustrated in Figure 1:

Figure 1: The dimensions of quality practice in a performance culture of teaching.

These dimensions and indicators can be further broken down by level within the institution. There are four levels developed in the tables of the framework:

• Institution-wide
• Faculty
• Department/program
• Teacher/Individual

It may be appropriate for an institution to also include the campus level (offshore, regional, multiple sites, distance, etc.) where multiple campuses are involved.

Institutional climate and systems


An institutional climate is characterised by a commitment to the enhancement, transformation and innovation of learning. Institutional climate is a key dimension of quality teaching and learning, referring to the evaluation of satisfaction and experience at the institution, staff and student levels. The measurement of student experience and satisfaction is currently a common indicator of quality teaching and learning; however, these data contribute only a limited amount of information about the institution.

Diversity

Diversity in higher education relates to ethnic, cultural and socioeconomic diversity, as well as diversity regarding students’ abilities, talents and learning approaches. Diversity is an indicator that is theoretically and empirically supported by the research literature and is frequently employed as a measure of quality teaching.

Assessment

The most direct measures of student learning are the assessment tasks students complete during their enrolled program of study. Research has repeatedly shown that assessment does not merely serve to inform students about their achievements, but is a necessary condition for quality learning. In other words, assessment drives learning (Greer, 2001; Harris & James, 2006).

The literature on good practice in assessment is extensive and well developed, and many universities have adopted a range of effective approaches. A great variety of methods exists, ideally aligned with specific learning goals, student learning approaches and the particular subject. This diversity is desirable and essential, yet it is not an end in itself: it should also be used to drive institutional improvement (Mowl, McDowell & Brown, 1996; Peterson & Augustine, 2000). Universities should therefore direct their attention to the design, delivery and administration of assessment, the provision of feedback, and the moderation and review of assessment tasks. Indicators of quality assessment include the development and implementation of systems and reviews with an "enhancement-led" approach.

Engagement and learning community

The academic environment is the primary means by which students further their learning, abilities and interests, making it a central dimension of student success (Smart, Feldman & Ethington, 2000). Student engagement describes students' commitment to, and involvement in, their own education. It is also important to include staff engagement with their students and their institution.

Each dimension of quality teaching is developed into a table which outlines indicative indicators for that dimension at the institutional, faculty, program and teacher levels, together with indicative measurements for those indicators. Each table outlines the indicators, which are then expanded into detailed checklists for each level so that the institution can assess its practices and establish processes and measurements where it needs to do so. While the indicative measures are shown with the particular dimension under scrutiny, a number will also apply to the other dimensions.

Table 2. Indicative teaching and learning indicators for four dimensions of teaching practice

Institutional climate & systems
• Adoption of a student-centred learning perspective
• Possession of desirable teacher characteristics
• Relevant and appropriate teaching experience, qualifications and development
• Use of current research findings in informing teaching and curriculum / course content
• Community engagement / partnership
• Funding model in support of teaching and learning

Diversity
• Valuing and accommodating student and staff diversity
• Provision of adequate support services
• Active recruitment and admissions
• Provision of transition and academic support
• Active staff recruitment
• Multiple pathways for reward and recognition of staff

Assessment
• Assessment policies address issues of pedagogy
• Adopting an evidence-based approach to assessment policies
• Alignment between institutional policy for best practice and faculty/departmental activities
• Commitment to formative assessment
• Provision of specific, continuous and timely feedback
• Explicit learning outcomes
• Monitoring and review of standards and assessment tasks

Engagement & learning community
• Student engagement
• Fostering and facilitating (academic) learning communities
• Engaging and identifying with a learning community
• Staff engagement

The levels and indicators outlined may not be appropriate for all institutions. Some indicators may not apply in some universities, and some may apply at different levels in the institution from those suggested in the framework. Each university is asked to treat the indicators and levels as indicative rather than as requirements. In addition, each institution needs to be very clear about the students it is attracting, enrolling and retaining, and those it wishes to attract, enrol and retain (whether the very able, first-in-family, culturally and ethnically diverse, etc.), considered in the context of its mission and strategic plans. This will then shape the indicators selected and the way in which they are measured and reported.

Stage 2: Pilot implementation of the Framework and tools

The initial stages of the pilot process involve an institutional, multilevel audit in order to clarify the mission and goals of the university in relation to its students, uncover policy and process barriers, identify needs for new policies, bring stakeholders together to establish broad levels of agreement, design implementation strategies, assign key tasks to appropriate individuals and groups, and monitor and review progress towards the goals each university has established.

Each university makes its own judgements, defines its own mission, goals and priorities, and plans its own course of action. The Framework assists universities in defining their goals and objectives by providing a tool to interrogate their practices, identify sources of evidence, and implement a process of review and monitoring. This project does not require every university to respond in the same way. The intent is to provide tools that support universities in achieving their own teaching and learning ambitions based on the use of sound evidence and measures.

Eight universities agreed to pilot the framework under the leadership of their Deputy Vice-Chancellors/Pro Vice-Chancellors (Academic). All have employed a project officer, undertaken a university audit, and are currently implementing their chosen dimension of the Teaching Quality Framework. Although each university has chosen the section of the dimension of most relevance to it, the broad dimensions chosen are:

• Assessment: Griffith University, RMIT University, The University of Queensland
• Institutional Climate & Systems (Reward & Recognition): Macquarie University, The University of Western Australia
• Engagement & Learning Communities: Deakin University, University of South Australia
• Diversity: University of Tasmania

Initially, project officers conducted audits and formed steering groups to determine their university's priorities, and held focus groups to explore staff and student practices and perceptions in relation to teaching and learning quality. Implementation of the framework differs at each of the pilot universities; highlights include:

• provision of a summary of good practice in assessment, based on the literature, ongoing research and student and staff feedback - Griffith
• input into teaching promotion pathways for academic promotion - RMIT
• revision of assessment and other policy and practice that impact teaching and learning quality - UQ
• examination of student engagement in six programs of study - Deakin
• development and implementation of an employer satisfaction survey - UniSA
• enhancement and promotion of the pathways/transition program - UTAS
• development of an online database of teaching and learning practice across the university - UWA
• academic policy development and revision - Macquarie.

Conclusion

While it is still too early to assess the overall impact of the project, the work undertaken to date, the experiences of the pilot universities and the support of the Australian higher education sector have been extremely positive. The project deliverables and projected outcomes listed below are well on the way to being achieved.

Project Deliverables and Outcomes

• Contribution to scholarship on teaching and learning indicators
• Testing a framework and model of teaching quality indicators, trialled in different types of universities
• Building a shared language regarding teaching performance
• A multilevel approach to teaching quality
• Improved links and increased transparency to reward and recognise quality teaching and learning throughout the university
• Enhanced opportunities and tools for benchmarking
• Opportunity for institutional renewal
• A core set of indicators that can be shared between institutions
• A core set of materials that can be used to undertake the process of developing and embedding institutional indicators around the framework.

REFERENCES

Australian Vice-Chancellors' Committee (AVCC). (2004). Summary on university performance indicators for learning and teaching. Data available from: http://www.universitiesaustralia.edu.au/documents/universities/key_survey_summaries/Summary_Analysis_Jan04.xls

Borden, V. & Bottrill, K. (1994). Performance indicators: Histories, definitions and methods. New Directions for Institutional Research, 82, 5-21.

Bormans, M. J., Brouwer, R., In’t Veld, R. J. & Mertens, F. J. (1987). The role of performance indicators in improving the dialogue between government and universities. International Journal of Institutional Management in Higher Education, 11(2), 181-193.

Bruwer, J. (1998). First destination graduate employment as key performance indicator: Outcomes assessment perspectives. Paper presented at the Australasian Association for Institutional Research (AAIR) Annual Forum, Melbourne, Australia.

Burke, J.C. (1998). Performance Funding: Arguments and Answers. New Directions for Institutional Research, 97, 85-90.

Burke, J.C. & Minassians, H. (2001). Linking state resources to campus results: From fad to trend. The fifth annual survey (2001). Albany, New York: The Nelson A. Rockefeller Institute of Government.

Burke, J.C. & Minassians, H. (2002). Performance Reporting: The preferred “No Cost” Accountability Program. The sixth annual report (2002). Albany, New York: The Nelson A. Rockefeller Institute of Government.

Burke, J.C., Minassians, H. & Yang, P. (2002). State performance reporting indicators: What do they indicate? Planning for Higher Education, 31 (1), 15-29.

Cabrera, A.F., Colbeck, C. L., & Terenzini, P. T. (2001). Developing performance indicators for assessing classroom teaching practices and student learning: The case of engineering. Research in Higher Education, 42(3), 327-352.

Carter, N., Klein, R. & Day, P. (1992). How organisations measure success: the use of performance indicators in government. London: Routledge.

Cave, M., Hanney, S., Henkel, M. & Kogan, M. (1997). The Use of Performance Indicators in Higher Education: The Challenge of the Quality Movement (3rd ed.). London: Jessica Kingsley.

Cave, M., Hanney, S. & Kogan, M. (1991). The Use of Performance Indicators in Higher Education: A Critical Analysis of Developing Practice (2nd ed.). London: Jessica Kingsley.

CESC (2006). Education indicators in Canada: Report of the Pan-Canadian Education Indicators Program 2005. Ottawa, Toronto: Council of Ministers of Education & Canadian Education Statistics Council.

Chalmers, D. (2007). A review of Australian and international quality systems and indicators of learning and teaching. Carrick Institute for Learning and Teaching in Higher Education Ltd, Sydney, NSW.

Chalmers, D., Cunningham, T. & Thomson, K. (forthcoming). Higher education teacher and teaching indicators, outcomes and evidence base at institution, department and individual teacher levels. Carrick Institute for Learning and Teaching in Higher Education Ltd, Sydney, NSW.

Chalmers, D., Lee, K. & Walker, B. (2008). International and national indicators and outcomes of quality teaching and learning currently in use. Carrick Institute for Learning and Teaching in Higher Education Ltd, Sydney, NSW.

Chalmers, D. & Thomson, K. (2008). Snapshot of Teaching and Learning Practice in Australian Higher Education Institutions. Carrick Institute for Learning and Teaching in Higher Education Ltd, Sydney, NSW.

Department of Education, Science and Training (DEST). (2002). Appendix B: Higher education outcomes indicators - sources and methods. Technical Note 1: Student Outcome Indicators of Australian higher education institutions, 2002 and 2003. Canberra: Strategic Analysis and Evaluation Group, Commonwealth Department of Education, Science and Training. Retrieved January 9, 2007 from: http://www.dest.gov.au/sectors/higher_education/policy_issues_reviews/key_issues/assuring_quality_in_higher_education/technical_note_1.htm

Doyle, W. (2006). State Accountability Policies and Boyer’s Domains of Scholarship: Conflict or Collaboration? New Directions for Institutional Research, 129, 979-113.

Fisher, D., Rubenson, K., Rockwell, K., Grosjean, G. & Atkinson-Grosjean, J. (2000). Performance indicators: A summary. Retrieved from: http://www.fedcan.ca/english/fromold/perf-ind-impacts.cfm

Greer, L. (2001). Does changing the method of assessment of a module improve the performance of a student? Assessment & Evaluation in Higher Education, 26 (2), 127-138.

Guthrie, J. & Neumann, R. (2006). Performance Indicators in Universities: The Case of the Australian University System. (Submission for Public Management Review Final February 2006).

Harvey, L. (1998). An Assessment of Past and Current Approaches to Quality in Higher Education. Australian Journal of Education, 42(3), 237-255.

Harris, K.L. & James, R. (2006). Facilitating reflection on assessment policies and practices: A planning framework for educative review of assessment. Studies in Learning, Evaluation, Innovation and Development, 3(2), 23-36.

Hayford, L. (2003). Reaching Underserved Populations with Basic Education in Deprived Areas of Ghana: Emerging Good Practices. Ghana: CARE International.

Kuh, G. D. (1995). The other curriculum: Out-of-class experiences associated with student learning and personal development. Journal of Higher Education, 66(2), 123-155.

Kuh, G. D. (1993b). Ethos: its influence on student learning. Liberal Education, 79(4), 22-31.

Kuh, G. D., Pace, C. R., & Vesper, N. (1997). The development of process indicators to estimate student gains associated with good practices in undergraduate education. Research in Higher Education, 38(4), 435-454.

McDaniel, E. A., Dell Felder, B., Gordon, L., Hrutka, M. E. & Quinn, S. (2000). New faculty roles in learning outcomes education: The experiences of four models and institutions. Innovative Higher Education, 25(2), 143-157.

Marginson, S., & van der Wende, M. (2007). Globalisation and Higher Education. Education Working Paper 8, OECD, pp.1-85. Retrieved December 8, 2007 from http://www.cshe.unimelb.edu.au/research/pubs.html

Mowl, G., McDowell, L. & Brown, S. (1996). Innovative assessment. Retrieved March 19, 2007 from: http://www.city.londonmet.ac.uk/deliberations/assessment/mowl_content.html

OECD. (2007). Education at a Glance. Retrieved January 12, 2008 from: http://www.oecd.org/document/30/0,3343,en_2649_39263238_39251550_1_1_1_1,00.html

Pascarella, E. T. (2001). Identifying excellence in undergraduate education. Are we even close? Change, 33 (3), 18-23.

Pascarella, E. T., Palmer, B., Moye, M. & Pierson, C. T. (2001). Do diversity experiences influence the development of critical thinking? Journal of College Student Development, 42(3), 257-271.

Peterson, M. W. & Augustine, C. H. (2000). Organisational practices enhancing the influence of student assessment information in academic decisions. Research in Higher Education, 41 (1), 21-52.

Rojo, L., Seco, R., Martínez, M. & Malo, S. (2001). Internationalisation Quality Review presentation: Institutional Experiences of Quality Assessment in Higher Education - Universidad Nacional Autonoma de Mexico (Mexico). Organisation for Economic Co-operation and Development (OECD). Retrieved April 11, 2007 from: http://www.oecd.org/dataoecd/49/4/1870961.pdf

Reindl, T. & Brower, D. (2001). Financing state colleges and universities: What is happening to the "public" in public higher education? Perspectives. College and University, 77(1), 29-43.

Richardson, J. T. E. (1994). A British evaluation of the Course Experience Questionnaire. Studies in Higher Education, 19, 59-68.

Romainville, M. (1999). Quality Evaluation of Teaching in Higher Education. Higher Education in Europe, 24(3), 414-424.

Rowe, K. (2004). Analysing & Reporting Performance Indicator Data: 'Caress’ the data and user beware! ACER, April, background paper for The Public Sector Performance & Reporting Conference, under the auspices of the International Institute for Research (IIR).

Rowe, K. & Lievesley, D. (2002). Constructing and using educational performance indicators. Background paper for Day 1 of the inaugural Asia-Pacific Educational Research Association (APERA) regional conference, ACER, Melbourne April 16-19, 2002. Available at: http://www.acer.edu.au/research/programs/documents/Rowe&LievesleyAPERAApril2002.pdf

Smart, J. C., Feldman, K. A. & Ethington, C. A. (2000). Academic disciplines: Holland's theory and the study of college students and faculty. Nashville, TN: Vanderbilt University Press.

Trowler, P., Fanghanel, J. & Wareham, T. (2005). Freeing the chi of change: the Higher Education Academy and enhancing teaching and learning in higher education. Studies in Higher Education, 30 (4), 427-444.

Ward, D. (2007). Academic Values, Institutional Management, and Public Policies. Higher Education Management and Policy, 19(2).

Warglien, M., & Savoia, M. (2001). Institutional Experiences of Quality Assessment in Higher Education - The University of Venice (Italy). Organisation for Economic Co-operation and Development (OECD). Retrieved from: http://www.oecd.org/dataoecd/48/49/1871205.pdf