
Transnational European Evaluation Project

Methodological Reflections

ENQA Occasional Papers 6

European Network for Quality Assurance in Higher Education • Helsinki

TEEP was financially supported by the European Commission


Cover design: Jussi Hirvi / Green Spot Media Farm
Page layout: Anja Kauppi / Pikseri Julkaisupalvelut

© European Network for Quality Assurance in Higher Education, 2004, Helsinki
Copying allowed only with source reference.

ISBN 952-5539-00-8 (paperbound)
ISBN 952-5539-01-6 (pdf)
ISSN 1458-1051

Printed by Multiprint
Helsinki, Finland 2004


Foreword

The Transnational European Evaluation Project (TEEP) was a pilot project which investigated the operational implications of European transnational quality evaluation of study programmes in three subject areas: History, Physics, and Veterinary Science. In total, fourteen programmes in eleven different European countries were evaluated.

The objectives of TEEP were to develop further a method for transnational external evaluation, building on experiences such as the TUNING Project and the Dublin descriptors developed through the Joint Quality Initiative; to identify potential obstacles to transnational evaluation and to indicate strategies that may be used to overcome them; and to contribute to greater awareness, transparency and compatibility within European higher education. The overall conclusion of this report is that these objectives were indeed satisfactorily met.

One of the aims of TEEP was to test the use of common criteria. It is important to emphasise that this criteria approach has provided the basis for making comparisons possible. The common criteria have functioned as shared reference points, and ensured that the same topics were evaluated across the three disciplines and the fourteen programmes. Any future transnational evaluation project should establish criteria or reference points that are compatible with national and local contexts, and use terminology that is, to a large extent, familiar and useful for the programmes being evaluated.

The TEEP project has made a significant contribution to stimulating discussion about, and recognition of, the need for programmes to develop explicit quality assurance strategies. The project has also shown that when states have committed themselves to political objectives (aligned to the Bologna process) it is easier to reach a common interpretation. In that respect the project has provided valuable insight into the conditions for the implementation of the Bologna process at programme level.

This methodological report is the final phase of the TEEP project. The publication of the TEEP methodological report and subject reports, available at www.enqa.net/pubs.lasso, contributes to greater transparency and increasing awareness of the potential and importance of compatible approaches to quality assurance within a transnational framework.

It is my hope that the reader will find this report useful and of value.

Christian Thune
Chairman
ENQA Steering Group


Contents

1 Introduction
1.1 The Bologna Declaration
1.1.1 European transnational projects on quality in higher education
1.2 Introduction to TEEP
1.2.1 Anticipated benefits from TEEP
1.2.2 Report structure

2 Executive summary

3 Results from the subject-specific reports
3.1 Degree structure and definition
3.2 Competences and learning outcomes
3.3 Quality assurance

4 Methodology
4.1 Organisation
4.1.1 Process
4.1.2 Scope of evaluation
4.1.3 Recruitment and selection of programmes
4.1.4 Budget
4.1.5 Language
4.1.6 Selection and composition of the expert panels
4.2 Self-evaluation process
4.2.1 Self-evaluation manual
4.2.2 Launch meeting
4.2.3 Focused approach
4.2.4 Self-evaluation process
4.2.5 Self-evaluation group
4.2.6 Quality of self-evaluation reports
4.3 Site visits
4.3.1 Organisation of the site visit
4.3.2 Interview guides
4.4 Report

5 Results and conclusion on the use of criteria
5.1 Criteria-based evaluation
5.1.1 Formulation of the criteria
5.1.2 Applicability of the criteria
5.1.3 Conclusions

6 Reflections on methodology
6.1 Reflections from the management group
6.2 Reflections from the institutions
6.2.1 Report from the evaluated Physics programmes
6.3 Reflections from the expert panels
6.3.1 Comments on the TEEP Methodology from the Physics Expert Group

7 Appendices
Appendix A
Appendix B


1 Introduction

1.1 The Bologna Declaration

Since 1999, European perspectives on the quality of higher education have been strongly influenced by the follow-up processes to the Bologna Declaration of that year, signed by 29 European Ministers of Education. By signing this declaration, the Ministers agreed on coordinating their policies towards achieving a number of objectives, which they considered to be of primary relevance in establishing a European area of higher education and promoting the European system of higher education worldwide.

This general background, together with subsequent initiatives and developments occurring between the ministerial meetings in Bologna, Prague and beyond, has provided the main motivation for setting up the Transnational European Evaluation Project (TEEP).

TEEP was supported by the European Commission through the SOCRATES programme. It was part of a package of measures initiated by the European Commission in order to stimulate the Bologna process (“From Prague to Berlin, the EU contribution”). The project was coordinated by the European Network for Quality Assurance in Higher Education (ENQA), with the participation and contribution of the SOCRATES Thematic Networks of the three disciplines: History, Physics and Veterinary Science. Representatives of ENQA, the chairpersons of the SOCRATES Thematic Networks, representatives of the European Commission and representatives of the relevant quality assurance agencies constituted the management group for the project.

1.1.1 European transnational projects on quality in higher education

There are a number of projects that are of particular relevance to the establishment and development of TEEP. The most important projects are:

• The wide-ranging European Pilot Projects conducted in 1994/1995, supported by the European Commission. Seventeen countries, the fifteen EU members as well as Norway and Iceland, were involved in this project, in which no fewer than 46 programmes within higher education were evaluated simultaneously. The main idea of the project was to test a common methodology for programme evaluations which was at the same time suitable for national adaptations.

• The international evaluation of electrical engineering programmes in Belgium, the Netherlands, Switzerland, Sweden and Germany, initiated by the Dutch quality assurance agency, VSNU, and conducted in 1991/1992. The purpose of this project was to reach a mutual understanding and recognition of diplomas of the chosen programmes across the countries involved.

• The international research project initiated by CHEPS (Center for Higher Education Policy Studies) and conducted by researchers from the Netherlands, Germany and the UK is another example of an international evaluation. In this project, from 1991/1992, ten programmes in Economics from the three countries mentioned above were evaluated. The project was primarily oriented towards methodological development. More specifically, the aim was to develop a valid, reliable and effective methodology for comparing educational quality across the systems of higher education in a number of European countries.



• The two-year project TUNING Educational Structures in Europe, launched in May 2001, organised by European universities and supported by the European Commission through the SOCRATES programme (http://www.relint.deusto.es/TUNINGProject/index.htm). The project aimed, within the context of the Bologna process, to “tune” educational structures in Europe, to open a debate on the nature and importance of subject-specific and general competences involving stakeholders, to identify subject-specific and general competences and, lastly, to develop the use of ECTS credits. The first phase of the project came to an end in May 2002, and a second phase, testing the key findings of the project, has since started.

• The Cross-border Quality Assessment of Physics conducted in 2000/2001, which involved five programmes from four universities in three different countries. Four national/regional quality assurance agencies were involved in conducting the evaluation. The aim of the project was to compare the programmes and to analyse whether students received equivalent qualifications. The method applied for the evaluation drew heavily on the lessons learned from the evaluation of engineering programmes mentioned above. The overall approach, with an international committee responsible for formulating minimum requirements and conducting the site visits, resembled the one used in the evaluation of engineering programmes. However, the principles behind the composition of the international committee differed: in the Physics evaluation it was decided that the committee members should all be independent of the participating institutions.

• The International Comparative Evaluation of Programmes in Agricultural Science conducted by the Danish Evaluation Institute (EVA). The evaluation included programmes offered in Denmark, Germany, Ireland and the Netherlands. The evaluation was a Danish response to the Bologna process and to its specific objective of promoting European cooperation in quality assurance with a view to developing comparable criteria and methodologies.

European ministers have recognised the vital role that quality assurance systems play in ensuring high quality standards and facilitating the comparability of qualifications throughout Europe. They have also encouraged closer cooperation between recognition and quality assurance networks and sought to promote European cooperation in quality assurance.

Whilst debates continue about the relative roles and merits of different quality assurance approaches, several notable initiatives have been established. These include:

• Development of the roles of ENQA. Reflecting on the Bologna process, the EU Ministers of Education have assigned responsibility for quality assurance development in higher education to the ENQA network. The ENQA network is supported by the European Commission through the SOCRATES programme. ENQA has taken action to disseminate information, experiences, good practices and new developments in the field of quality assessment and quality assurance in higher education between interested parties, public authorities, higher education institutions and quality assurance agencies;

• The pilot scheme ‘Promoting a “quality culture” in universities’, which will help universities introduce internal quality assurance mechanisms that they can consider their own. The project is supported by the European Commission and conducted by the European University Association (EUA) (http://www.eua.be). The expected outcome is a critical mass of universities with concrete experience of internal quality assurance mechanisms, helping them to improve their quality levels and to be better prepared for external evaluations;

• The Joint Quality Initiative supported by the Dutch and Flemish governments, and in particular the development of the Dublin descriptors, which are shared descriptors for first and second cycle degrees (http://www.jointquality.org/).


1.2 Introduction to TEEP

The Transnational European Evaluation Project (TEEP) was a pilot project with the objective of investigating the operational implications of a European transnational quality evaluation of study programmes in three subject areas: History, Physics and Veterinary Science.

The three subject areas of Physics, History and Veterinary Science were covered by five, five and four participating European universities respectively. In total, fourteen programmes in eleven different European countries were evaluated. For a list of the participating programmes, please see Appendix A. In Physics and History the scope of the evaluation was confined to the first cycle degree level or equivalent. In Veterinary Science the full one-cycle programme¹ was evaluated.

The objectives of TEEP were:

• To further develop a method for transnational external evaluation, building on experiences such as the TUNING Project and the Dublin descriptors developed through the Joint Quality Initiative, using common criteria as the basis of an evaluation process in three different disciplines.

• To identify potential obstacles to transnational evaluation and indicate strategies that might be used to overcome them.

• To contribute to greater awareness, transparency and compatibility within European higher education.

The TEEP project was initiated and supported by the European Commission through the SOCRATES programme, with ENQA responsible for its delivery. A Planning Group chaired by ENQA included representation from the European University Association (EUA), the National Unions of Students in Europe (ESIB), and the Commission. The project was overseen by a Management Group, chaired by ENQA and consisting of representatives from each of the participating quality assurance agencies, the chairs of the subject area Thematic Networks, and an EC representative. This group was involved in the design of the project and the selection of participating institutions and nominations for expert panels, and received and discussed intermediate reports. A Project Group, composed of staff members from the participating agencies, oversaw the details of the organisation and implementation of the project and acted as secretariat at the site visits. Expert panel members contributed to the relevant subject reports, but the Project Group was responsible for their final preparation, after the institutions and experts had checked the accuracy of relevant details and recommendations. The Project Group also prepared the draft methodological report. Representatives from the institutions, the subject experts and the Management Group all contributed to the final version of the methodological report (for their contributions, see chapter 6).

1.2.1 Anticipated benefits from TEEP

The anticipated benefits of TEEP include:

For European higher education:

• A method for transnational evaluation building on predefined criteria which are commonly agreed, which have been tested and which offer a dimension of transparency and comparability to the quality of programmes across borders;

• A contribution to the development of each subject on the basis of the recommendations of the experts and good practice from comparable programmes in other countries.

For the participating institutions:

• The opportunity for each of the participating institutions to promote both their institution and their programmes;

• The opportunity to receive feedback as a contribution towards improving their quality assurance culture;

• An opportunity to share experiences between programmes and peers, and the possibility of establishing networks to assure continuous improvement of programme quality.

¹ The term ‘full programme’ should be understood as covering both the first and the second cycle, leading to a second cycle degree, in this case a master’s degree in Veterinary Science.


1.2.2 Report structure

The methodological report is structured in six chapters. The first chapter introduces the TEEP project. Chapter two consists of an executive summary of the overall results of testing the TEEP methodology.

Chapter three provides the reader with a short comparative view of the main findings of the three subject-specific reports concerning: 1) the level of implementation of the first and second cycle degree structure, including correspondence with the Dublin descriptors; 2) the extent to which the programmes have formulated and used definitions of competences and learning outcomes, including knowledge and applicability of the TUNING criteria; and 3) the level of implementation of quality assurance processes in the programmes.

In chapter four the method and its different elements are discussed in more depth. The reflections on the experiences of the different methodological elements of the transnational evaluation are based on feedback from the expert panels, the programmes, and the Project Group’s experiences from the project.

In chapter five the applicability of the common criteria in TEEP is discussed.

The report is completed with chapter six, in which the Management Group, the experts and representatives from the programmes reflect on their experiences with the TEEP project and methodology.


2 Executive summary

TEEP followed upon earlier initiatives in trans-European evaluation, such as the European Pilot Evaluations of 1994/95. TEEP was an ambitious as well as a realistic move forward in methodological experience precisely because it took into consideration these earlier cases, as well as the now-developed principles for quality assurance in higher education institutions. TEEP must also be seen in the parallel context of the ‘Promoting a “quality culture” in universities’ project organised by the European University Association, likewise an initiative supported by the European Commission.

The objectives of TEEP were to

• further develop a method for transnational external evaluation, building on experiences such as the TUNING Project and the Dublin descriptors developed through the Joint Quality Initiative, using common criteria as the basis of an evaluation process in three different disciplines

• identify potential obstacles to transnational evaluation and indicate strategies that might be used to overcome them

• contribute to greater awareness, transparency and compatibility within European higher education.

The overall conclusion of this report is that these objectives have been satisfactorily met.

Conclusions on testing the use of common criteria

One of the TEEP project aims was to test the use of common criteria. In general, the use of common criteria was important in order to ensure that all of the programmes were evaluated through a single approach. Across the three subject areas, however, the experts and programme representatives met some difficulties in understanding and interpreting the criteria as these had been set out in the project manual. This was especially the case with regard to articulating competences and learning outcomes, and some aspects of quality assurance.

The conclusion on the applicability of the criteria is that it depends on their formulation and ‘readability’, and on the extent to which they can be related to a nationally as well as internationally accepted threshold. Further, their applicability depends on the extent to which programmes have developed and implemented those aspects of quality assurance covered by the criteria. The criteria can play an important role in stimulating and supporting such developments where these are appropriate within national and institutional contexts, and where their implementation relates to desirable or necessary aspects of quality assurance.

Conclusions on obstacles to transnational evaluation and measures to overcome them

One general conclusion is that the project would have benefited from an extended timeframe at certain stages of the process.

The project was further challenged throughout by budgetary constraints, particularly relating to travel arrangements. The process would have benefited from more opportunities for the expert panels to meet and discuss, for instance during the drafting stage of the subject-specific reports.

Another general conclusion relates to the methodological implications of conducting an evaluation in one language, which for most of the participants will be their second language. The project demonstrates the importance, and additional value, of including within each panel at least one member who speaks the local language and ideally also has knowledge of the national and local contexts in which the evaluated programme operates. Further, it is important that the experts represent different profiles of experience and are drawn from different countries.



The project’s contribution to greater awareness, transparency and compatibility within European higher education

The overall conclusion is that the participants in general found TEEP an interesting and stimulating project. The inclusion of institutions and experts from a variety of countries has contributed, in a small but no doubt significant way, to a greater sharing of knowledge about approaches to quality assurance within European universities.

The publication of the TEEP methodological report and subject reports should therefore contribute to greater transparency and increasing awareness of the potential and importance of compatible approaches to quality assurance within a transnational framework. In this way TEEP has provided valuable insight into aspects of the implementation of the Bologna and Prague processes, particularly at the level of academic programmes.


3 Results from the subject-specific reports

A comparison of the three subject areas at the European level has been developed through the subject reports, in accordance with the three themes identified within the TEEP manual: educational context, competences and quality assurance. The following chapter gives an overview of the main conclusions across the three subject areas concerning: 1) the level of implementation of the first and second cycle degree structure, including correspondence with the Dublin descriptors; 2) the extent to which the programmes have formulated and used definitions of competences and learning outcomes, including knowledge and applicability of the TUNING criteria; and 3) the level of implementation of quality assurance processes in the programmes. It is recommended that the three subject-specific reports be read in order to gain a comprehensive view of the evaluation of the fourteen programmes; each programme’s evaluation should be read in its specific subject, national and institutional context. The three subject reports can be found on the ENQA website (www.enqa.net/pubs.lasso).

3.1 Degree structure and definition

One of the criteria evaluated through the transnational project was the extent to which the programmes have formulated and established a first cycle degree programme. The evaluation attempts to establish whether the programmes have formulated goals for the first cycle degree, and to what extent these formulations may match the ‘Dublin descriptors’ for the first cycle degree.

The extent to which the evaluated programmes have implemented a first cycle degree programme varies considerably across the programmes. The Veterinary Science programme is a full programme leading to a degree that is recognised as equivalent to second cycle degrees. In contrast, the evaluated Physics and History programmes are almost all now in the process of implementing the first and second cycle degree structures, but they are at different stages of development.

The new History programmes within the TEEP evaluation are 3 or 4 years in length and most are planned within the Bologna concept of first and second cycle degrees. Most of the Physics programmes have established first cycle degrees of 3 years’ duration; three of the programmes have just been restructured according to the Bologna model. One of the Physics programmes does not offer a first cycle degree, but instead an integrated five-year programme leading to a second cycle degree.

It is evident that the extent to which the evaluated History and Physics programmes have implemented the first and second cycle degree structure is related to the commitment of the respective countries with regard to the Bologna process. It is not surprising that in those countries where the first and second cycle degree structure has become part of the government’s higher education policy or regulation, universities are undertaking the implementation of this new structure.

Compared with the evaluations of History and Physics, the evaluation in Veterinary Science was carried out at second cycle degree level. This is a consequence of the present “long unique cycle” degree in this discipline. The two-cycle structure, as defined in the Bologna Declaration, is not applied in these programmes. Moreover, the evaluation showed that it would not be easy to formulate a shorter or intermediate degree without affecting the whole structure of education in Veterinary Science.



It should be noted, however, that the Veterinary Science programmes had characteristics of both first and second cycle degrees. The Veterinary Science programmes are clearly oriented towards the labour market, but at the same time the education of a veterinarian requires an amount of training that is evidently equivalent to a second cycle degree. All of these aspects raise a relevant question about whether the new European framework is suitable for professionally oriented degrees, such as those of veterinarians or medical doctors, which require longer education programmes in comparison with other disciplines in which the degrees may be more or less targeted at a labour market.

Some of the evaluated first cycle degree programmes in Physics and History have formulated specific level descriptors for the first cycle degrees. The range and balance of competences appear to match to some extent those included within these descriptors. The competences formulated for the first cycle degrees mainly aim at giving an academic grounding in the subject, leading to the second cycle and/or doctoral degree, and at preparing for the labour market. It is apparent that the relationships between the aims and structures of the first and second cycle degree programmes are being discussed, and that the TEEP project has contributed to this discussion.

The evaluation has shown that the extent to which the programmes have formulated specific aims for the first cycle degree varies considerably. The evaluation of the History programmes shows that most have identified explicit aims for the first cycle degree programme and some are planning articulation with second cycle awards. Three out of the five institutions are delivering new programmes in line with Bologna but have yet to produce graduates.

In Physics one of the programmes has formulated explicit aims, stating that the first cycle programme leads both to employment and to further study; the other programmes have not explicitly formulated their aims for the first cycle programme. For these degrees, it is implicit that the first cycle degree is the first step towards the second cycle or PhD degree. The extent to which the programmes have formulated specific aims for the first cycle degree seems, however, to depend on the interaction with the labour market and on whether labour market representatives have been involved in formulating the expected learning outcomes.

3.2 Competences and learning outcomes

The term ‘competence’ for the purpose of this project refers to a large extent to the outcomes of the TUNING project. The evaluation of the programmes looked to see how far the definitions of both subject-specific and generic competences have been used in the formulation of courses and programmes, the extent to which these competence definitions have been used by staff to develop a shared understanding about the delivery and expectations of the programme, and to what extent these have been communicated to students so that they know what is expected of them. The evaluation also looked at whether or not teaching and learning methods, as well as assessment methods, were designed to support the development of the desired competences.

Although limited in its scope, this project has demonstrated that there are apparent differences in the extent of awareness about and knowledge of the TUNING subject-specific and generic competences amongst relevant academic and administrative staff, both across programmes and across disciplines. Furthermore, there are differences regarding the extent to which the programmes have formulated and used (or are planning to use) the TUNING descriptors.

Consideration of ‘competences’ has proved a useful focus for discussions within TEEP. This has been the case at a generic level, in the discussion of the sorts of abilities and approaches that are relevant to students reaching the end of the first cycle; at the same time, the particular aspects relevant both within and between disciplines are apparently different.

In History the subject-specific competences from the TUNING work are beginning to be recognised as relevant and are being used by some of the staff within the departments visited. The TUNING criteria are seen as external reference points that can assist in the development and articulation of programmes and their expectations. However, those who considered them less relevant saw the TUNING criteria as generic competences, regarded as too numerous to be of real value. It was apparent that in the majority of cases the staff found it easier to relate to the subject-specific competences than to the generic ones.

The extent to which the Physics programmes are familiar with and employ the competence terminology varies considerably. Two of the programmes have formulated both subject-specific and generic competences at programme and course level. For the remaining three programmes the competence terminology is unfamiliar and therefore not actively used.

In Veterinary Science, due to the requirements of EU directives, national regulations and other European developments, the programmes are defined so as to deliver all subject-related competences through compulsory subjects. At present, the programmes have defined a set of generic skills for veterinary education, but the balance of subject and generic competences is to be modified.

Another important dimension of the TEEP criteria and the application of competences is the extent to which teaching and learning strategies and assessment methods support the development of both subject-specific and generic competences. Although some of the History and Physics programmes have not yet explicitly formulated expected competences or communicated them effectively, the teaching and assessment methods do seem to support the development of both subject-specific and generic competences. There seems to be an implicit understanding amongst the staff as to what the expected competences are; however, these have not yet been made explicit to students in all cases. In Veterinary Science, too, the teaching and learning methods permit the achievement of subject-related competences. However, in some areas a development of teaching methods would facilitate improved acquisition of skills and attributes.

Although the Veterinary Science programmes have not participated in the TUNING project, they are accustomed to thinking in competence terms and to reflecting on external reference points such as the TUNING descriptors. The exercise and the vocabulary are familiar to the programmes, since the European Association of Establishments for Veterinary Education has formulated similar transnational common core competences.

This was not the case in History and Physics, where the evaluated programmes had some problems understanding and applying the terminology of competences and learning outcomes as set out. A substantial effort will therefore still be needed in order to ensure a shared vocabulary and a culture of internal and external reference points, such as the TUNING outcomes and competences.

3.3 Quality assurance

The quality assurance criteria used within the TEEP project included consideration of whether programmes formulate their quality assurance procedures to ensure that the programmes remain current and valid in the light of developing knowledge within the particular discipline and its practical application, and the extent to which the aims and intended outcomes of the programmes remain appropriate in relation to factors such as changes in student demand, student entry qualifications, employer expectations and employment opportunities.

From the documentation it appears that many of the evaluated programmes are at an early stage of formulating and implementing a general quality assurance strategy. The extent to which the programmes have a quality assurance culture seems to depend on national traditions of internal quality assurance mechanisms in higher education programmes. It should be noted that the professional dimension of Veterinary Science degrees, together with the existence of a European directive, has encouraged the implementation of some wider quality control measures.

The TEEP evaluations indicate that the quality assurance activities of the programmes mainly focus on course evaluation. A very limited number of the programmes have established a coherent framework for quality assurance that includes a broad range of quality assurance activities, e.g. programme evaluation, course evaluation and alumni surveys.


The extent to which the programmes collect systematic feedback from stakeholders is low. In terms of curriculum development, very few of the programmes have established, for instance, a systematic procedure for feedback from the labour market at programme level. Neither do the programmes collect feedback from graduates on a systematic basis. Mostly, feedback is provided sporadically and depends on personal relations.

It has been evident that few of the programmes systematically gather and use information on student progress and graduate employment. According to some of the evaluated programmes, the exercise of collecting statistical information on student progress for the transnational evaluation has been a very valuable experience, making the programmes more aware of their student population.

Though limited in its scope, the TEEP project appears to have made a significant contribution in stimulating discussions about, and recognition of, the need for the programmes to develop explicit quality assurance strategies. The programmes visited commented that they recognise that quality assurance strategies are necessary for the future and intend to build on their TEEP experiences.


4 Methodology

This chapter provides a description of and reflections on the methodological elements included in the evaluation. The evaluation model follows the European Council Recommendation of 24 September 1998 on European Cooperation in Quality Assurance in Higher Education, in which a so-called four-stage model for good practice in quality assurance is introduced. The four stages of this evaluation model are: 1) autonomy and independence, in terms of procedures and methods concerning quality evaluation, both from government and from institutions of higher education; 2) internal self-examination; 3) an external element based on appraisal and a visit by external experts; and 4) the publication of a report.

The four-stage model is today generally accepted as the shared foundation of European quality assurance; it has a prominent place both in the 1998 Council Recommendation and in the criteria for ENQA membership.

The following sections present the process of the TEEP evaluation and the main methodological choices made. The Project Group reflects on the experiences of the different methodological elements of the transnational evaluation, based in part on feedback from the expert panels and the programmes, and in part on the Project Group’s own experiences from the project. Each section concludes with a summary of ‘lessons learned’. The Project Group hopes that these reflections will be of value to future transnational evaluations.

4.1 Organisation

4.1.1 Process

The TEEP project started in June 2002 and was concluded with this methodological report. The whole process is illustrated in Figure 1 below and described in detail in the following sections.

After the organisational framework of the project had been set up, consisting of the Project Planning Group, the Management Group and the Project Group, a call for programmes to participate within each subject area was made. The project required five institutions/programmes from different countries for each of the subject areas. A project plan and an introductory letter were sent to all the programmes participating in CLIOHnet, EUPEN and the EAEVE network (the three Thematic Networks). A parallel process involved the writing of a manual, which contained guidelines for self-evaluation and the formulation of the criteria that would be used by the panels in their evaluations.

The next step was the selection of the participating programmes by the Management Group. The manual was sent to the selected institutions in September 2002 and they were invited to participate in a two-day working seminar in Brussels at the beginning of October. Here the programme representatives had the opportunity to work with the manual before starting their self-evaluation processes.


Figure 1. The process


From October to December 2002 each of the programme teams conducted their self-evaluations. During the self-evaluation period the experts were identified, approved by the Management Group and by the programmes they would be evaluating, and then recruited. As many of the expert panel members as were available met in Brussels in January 2003 to undertake inductions and discussions about their roles and responsibilities. Following the submission of the self-evaluation reports (by December 2002 – January 2003), site visits by groups of 3–4 experts drawn from the relevant panels were organised for the period February to April 2003.

Following the completion of the site visits, the three subject reports were drafted in May 2003 by the three national agencies respectively, drawing on the extensive notes taken and on the conclusions reached by the panels during the site visits. These draft reports were sent to the experts for their consideration and comments.

In June 2003 the three draft subject reports that had been approved by the relevant experts were sent to the institutions for consultation on factual matters. The three subject reports were published in August 2003 on the ENQA website (www.enqa.net/pubs.lasso).

The methodological report, coordinated by ENQA and prepared by the Project Group, was drafted from June to August 2003. The main results of the project were presented to the ENQA members at the ENQA General Assembly in September 2003. A closing seminar was held in October 2003, with members of the expert subject panels and representatives from the evaluated programmes invited to contribute their final reflections and formulate their recommendations for future projects. The project was completed with the publication of this methodological report.

One of the overall conclusions concerning the process is that the project would have benefited from more time, particularly at certain stages. The period from the institutions receiving the invitation to participate to the completion of the final report was only eight months. Considering that the project involved a transnational evaluation covering three different subjects and eleven different countries, with experts participating from an even higher number of European countries, this may be considered a quite extraordinary achievement. Many of the experts’ and programme representatives’ reflections refer to the time frame of the project. These comments will be presented in connection with the discussion of the different methodological elements in the following sections.

4.1.2 Scope of evaluation

The transnational evaluation covered programmes in three subject areas: History, Physics and Veterinary Science. The European Commission was responsible for the identification of the three subject areas. The advantage of these three subject areas is that they represent both arts and science subjects, whilst Veterinary Science also represents a professionally oriented subject. It was considered more of a challenge to test common criteria across different subjects than it would have been if the programmes were from the same or similar areas.

From a methodological point of view it is interesting to compare more general degrees with professionally oriented degrees. However, the evaluation was also complicated by the choice of Veterinary Science. The Veterinary Science area has a long-standing tradition of international evaluation and it was therefore difficult to recruit institutions to volunteer for the project. Those institutions that participated found it interesting and beneficial to compare the TEEP method with the EAEVE method.

In Physics and History the scope of the evaluation was from the outset confined to the programme level, more specifically to the first cycle degree or equivalent. However, the intention to focus on the first cycle degree was complicated by the fact that Veterinary Science, due to European directives, is not committed to implementing a first cycle degree. Therefore, it was decided that it would be more appropriate for the full Veterinary Science programme to be evaluated.

The focus of the evaluation on the first cycle level or equivalent was motivated by the emphasis in the Bologna Declaration on the application of a transparent system of qualifications in higher education based on two cycles, and by the increasing interest in the development of first cycle programmes.

Furthermore, the background of previous experiences with transnational evaluation and the TUNING project suggested that the programmes would be more likely to have greater common content during their first cycles than during the second cycle, when there is a higher degree of specialisation. The greater proportion of common content was considered beneficial, as the evaluation covered a range of countries as well as different subject areas.

4.1.3 Recruitment and selection of programmes

As the project was based on cooperation between ENQA and the three Thematic Networks, invitations to participate in the project were generated through the members of the Thematic Networks via the lists on the network websites. It was an advantage for this pilot project that the recruited and selected institutions were part of a well-functioning network, as information flowed smoothly through existing channels of communication and the project benefited from existing links, understandings and mutual trust.

The disadvantage of including only network members was that the effects of the project were perhaps more limited. It may be assumed that within Europe the network institutions are amongst those already more likely to be aware of the mechanisms for explicit quality assurance and their importance. It would probably be beneficial for a project of this type to reach out to a wider range of institutions interested in European evaluation and in the exchange of information and practice. Furthermore, by inviting all European universities to participate in such a project, the scope for selection would be broader and the effects of the project would be likely to spread to more programmes.

The Management Group selected the programmes using the following criteria:

• Involve as close to 15 different countries as possible, with five different countries in each subject group;

• One country from Central and Eastern Europe should be included within each subject;

• Participation should be based on each institution’s own initiative;

• There should be encouragement for the inclusion of some institutions without former external evaluation experience;

• Participating programmes should have a minimum of 100 enrolled students;

• Institutions that had not participated in the TUNING project should be encouraged to participate.

An overall consideration of the geographical spread was also included, with a North/South/East dimension and a small country/large country dimension considered within the selection process.

In order to ensure that the programmes evaluated had a sufficient number of students, programmes with a total enrolment (not per year or per cohort) of not less than 100 students were invited to participate. As the project builds upon the TUNING experience, it was interesting to invite both institutions with and without TUNING experience to participate.

The selection of institutions and countries was complicated by the fact that there were different levels of response from the three subject areas. In Physics many participants volunteered, whilst there were fewer in the other two subjects.

As mentioned in 4.1.2, the fact that the Veterinary Science area has a well-established international evaluation system was the main reason that very few Veterinary Science institutions volunteered to be included within TEEP; in the end only four Veterinary Science programmes were evaluated. The scope for selection was therefore narrow and, as an exception, one of the selection criteria had to be adjusted; as a consequence, two EU Associated countries participated in the Veterinary Science evaluation.

The geographical criteria were valuable in order to ensure a European perspective. In all three subject areas Northern, Southern and Eastern Europe were represented. An important dimension within the pilot project was testing whether the same criteria could be applied across different European countries despite their different educational traditions. The three reports together provide a case study with a perspective on eleven different European university systems and a provisional insight into where the different countries stand in the Bologna process.

Lessons learned

• How to engage with a wider spectrum of institutions, and not solely institutions active in the Thematic Networks, should be considered.

• The appropriateness of choosing a subject area that has its own quality assurance system should be considered.

• The geographical criteria were important in ensuring the European dimension.

4.1.4 Budget

Throughout the project the institutions and the experts have shown significant commitment to participating in the evaluations. From the experts’ point of view, their commitment has been challenged by the budgetary constraints of the project.

All the panel members participated without receiving a fee or other compensation for their time or work. All travel, board and lodging for visits and seminars were paid for by the project, but due to European Commission rules requiring economy class travel, the members of the expert panels inevitably had to stay over a Saturday night on some occasions because of airline restrictions.

The fact that the site visits were carried out during weekends and over a short period meant that the project relied to a large extent on the panel members’ goodwill and enthusiasm. Whilst many were interested in this new type of project, it would be unreasonable to expect that a sustained and extensive international evaluation programme could be established and supported through such goodwill and the generous free time of institutional staff and visiting experts. Additional time allocated for the overall TEEP project must be regarded as essential in the planning of any further transnational evaluations based on TEEP.

It is the view of the expert panel in Physics that the number of expert panel meetings was insufficient. Both the time and the budget constraints limited the possibilities for the panel members to meet and sufficiently discuss the preparation and the recommendations.

Lessons learned

• Transnational evaluation should not normally be conducted on the timescale used by TEEP; additional time should be included.

• Time schedules and budgets should include a meeting for the panel members to meet and discuss the draft report and recommendations.

4.1.5 Language

It was decided that the transnational evaluation should be conducted in English. All self-evaluation reports, site visits in general, and panel discussions were in English. In practical terms, it is necessary to use a common language in a transnational evaluation.

Methodologically, there are some implications of conducting an evaluation in one language, which for most of the participants will be their second language. It has been essential for the project that the experts and secretaries participating in the evaluation had good English language skills. The same is the case for the (majority of the) self-evaluation groups and the students and staff interviewed at the site visits. However, the extent to which the evaluated programmes use English as a means of communication varies.

In general, the staff and students at the site visits had a good command of English. However, this was not always the case. Fortunately, it generally proved possible, through both design and coincidence, to find shared languages other than English when necessary. The project did, however, demonstrate the importance and additional value of including within each panel at least one member who spoke the local language. The TEEP evaluations would not have been as complete had they been restricted only to those who could contribute in the shared language (English).

Another problem connected with the evaluation was the availability of documentation about the programmes in English. In some cases material describing a programme was only provided in the local language, and in others the programme team or institution had to contribute additional (scarce) resources to have the material translated. The language issue thus set limits on how deeply a transnational evaluation can study a subject.

A last problem arising from conducting an evaluation in a foreign language is that a lack of sufficient knowledge of the local language may limit knowledge and understanding of the roles played by national, cultural and historical contexts. It is therefore an advantage to include within each panel an expert who has some knowledge and understanding of such specific contexts and their current impacts. In the Veterinary Science evaluations a local graduate participated in the initial meeting of the expert panel before the site visit. This proved of considerable benefit to the project, as the expert group gained a good first-hand impression of the programme context before the actual site visit. This meant that more time was available for in-depth discussions instead of gathering factual information in order to understand the programme context.

Lessons learned

• The primary language used for such a project should be considered carefully.
• A common language is important for ensuring communication; in practice English has been most useful.
• It is valuable to have at least one expert who has some competency, if not fluency, in the local languages.

4.1.6 Selection and composition of the expert panels

4.1.6a Criteria for selection and composition of expert panels

When the expert groups were being selected and appointed, it was emphasised that the expert groups should each represent different profiles and countries, thereby supplementing each other in different European aspects of quality assurance within each specific subject. It was decided that each expert panel should include at least one subject-area specialist, an expert with TUNING experience, an expert with managerial experience in the subject field, and an expert with didactic-pedagogical experience, and that different specialisations should be represented.

It has been an advantage for the project that some of the experts have had extensive experience with quality assurance reviews and/or with setting up quality assurance systems. Both institutions and experts have gained a mutual insight into different European quality assurance cultures through sharing such experiences.

In establishing the panels it was noted that the chairperson should have the necessary qualifications to fulfil the role and lead the panel. Thus, the chairperson should, for example, have a good reputation in the subject field being evaluated, have managerial experience and, most importantly, have credible experience with quality assurance and preferably with external reviews.

Other fundamental criteria applied to the appointment of members were:

• Members should speak and write the working language (English) fluently;
• Members should not be a resident of, or have the same nationality as, one of the countries participating in the particular evaluation;
• Members should not be directly connected to any of the institutions participating in the evaluation of the particular programme;
• Members should not be involved in the board of a Thematic Network.

There was an intention that a student member should join each of the visiting expert panels. The student should be from the country being visited in order to contribute national context knowledge. In accordance with the criteria, the student should come from another institution in order to be independent from the institution(s) being evaluated. The role of the student should be to focus on areas and questions relevant for the student body. In the Project Planning Group, the National Unions of Students in Europe (ESIB) took responsibility for identifying the students through their national organisations. However, ESIB met with unexpected difficulties in this task, with the result that unfortunately only one visit included a student member.

Opinions have been divided as to the effects and consequences for the quality of the evaluations of including a student member. The History panel, for example, was not convinced that its work would have been enhanced by the inclusion of a student member. Some members of the Physics panel, however, regarded it as a weakness that the project was unable to include student members, as this should have been part of the method testing.

In Veterinary Science another model was used. Some countries only have one Veterinary Science programme, and the independence criteria for the students were therefore unlikely to be met. Therefore, graduates of the evaluated programme were identified to meet with the expert group and facilitate the experts' understanding of the national context of the programme. This approach proved to be a good contribution to the evaluation and generated valuable experience.

4.1.6b Expert panel

Due to the budget and time constraints of the project it was decided that the expert panels for each site visit would be drawn from a pool of 8 members. The visiting panel to each institution consisted of 4 members. For continuity, the panel chairperson and secretary participated in each visit.

The ideal number of experts for a site visit is open to discussion. The experience of the project shows that a panel of 3–4 members proved to be a good size, with the experts able to represent a range of different profiles and countries. In one case, due to late and unavoidable circumstances, one of the History panels included only 2 subject specialists and the secretary. This proved to be less satisfactory, and additionally placed a considerable burden on the team.

It was an advantage for the European dimension of the project that the experts represented different countries and educational cultures and were able to draw upon their different professional backgrounds when contributing to the project.

The appointment of a pool of experts from which each site visit panel could be selected was conceived as a solution to a practical problem. However, it might have been a methodological advantage to have one expert group visiting all institutions; this would have provided greater continuity, but might have reduced the range of experience and expertise contributing to the evaluation.

4.1.6c Division of labour between expert panels and secretaries

The quality assurance agencies have been responsible for the practical implementation of the evaluations and for applying relevant methods. Furthermore, they have acted as secretaries for the expert panels, have had the contact with the institutions, and have drafted the reports for the expert panels. The expert panels have been responsible for the academic content of the evaluations and for the recommendations and conclusions in the reports.

This division of labour is well known from most of the quality assurance agencies in operation in the European Union countries; it allows the experts to concentrate on the content, leaving the practical and methodological responsibility for the evaluation with the agencies, which have professional expertise in evaluation.

The division of labour functioned well in History and Veterinary Science, whereas some of the members of the Physics team found that the lack of a common meeting between the experts to discuss the recommendations of the report made ownership of the recommendations by all the experts difficult.

Lessons learned

• The experience of the project has shown that 3 or 4 members is a good size for the composition of an expert panel for site visits, with the experts representing different profiles of experience and drawn from different countries.
• It has to be clear to the experts, when they are recruited, what the division of labour is between the quality assurance agency and the expert panel.

4.2 Self-evaluation process

4.2.1 Self-evaluation manual

It has been valuable to have the same self-evaluation manual for a comparative project where a shared method is used for three different subject areas, and where the participants have experience of different European national quality systems. The fact that the Management Group has been involved in the development of the method, and that the institutions and the expert panels have participated in training, has provided a shared understanding of and commitment to the terms of the project.

The use of a single self-evaluation manual for the three subjects has generally ensured that the information provided by each of the institutions has been presented in a similar and consistent way, facilitating a comparison both across programmes within a subject and between subjects on a European level. Also, the provision of formatted tables for the insertion of required quantitative data proved to be a valuable exercise. However, there were also some problems in fitting national data into standard tables. It is therefore recommended that future transnational evaluations pay careful attention to the design and definitions of comparable indicators.

Furthermore, the responses from institutions and experts offered other suggestions for criteria and indicators; for example, there was a single suggestion for the addition of an indicator of the ratio of female to male students.

The institutions reported that they generally found the manual to be informative and useful and that it included all the information and guidelines needed for the evaluation. However, the institutions also agreed that the manual needs editing for consistency and clarification. The need for a revision of the manual and criteria is discussed in chapter 5.

Lessons learned

• A common self-evaluation manual is essential to ensure a shared understanding of and commitment to the terms of the project.
• The manual used for the TEEP project would need revising prior to any further evaluations.

4.2.2 Launch meeting

A launch meeting was held between representatives of the self-evaluation groups and the project secretaries early in the process. Considering the pilot status of the project, such an early meeting was considered necessary to guide the institutions to provide comprehensive self-evaluation reports covering not only descriptions of existing practices but also reflections on these. Although there were variations in how analytical the content of the self-evaluation reports was, the early initiation of the self-evaluation process is thought to have led to more analytical reports than would otherwise have been the case.

Throughout the process of the evaluation, the institutions and the Thematic Network chairmen have shown commitment to participating in the evaluation. It has been very important for the process that the Thematic Network chairmen and institutions were given the opportunity to comment upon the framework of the evaluation. The commitment from the institutions was also linked to the opportunity for the institutions to discuss and test the questions and criteria of the self-evaluation manual at the launch seminar. The launch meeting was also important in establishing a shared understanding and interpretation of the self-evaluation manual and self-evaluation process.

During the site visits, the self-evaluation groups expressed some uncertainties about the interpretation of some of the questions and terms included in the self-evaluation guide. These had come up during later stages in the self-evaluation process. Whilst it was possible to raise questions with the secretariat via, for example, e-mail, it was noted that the representatives from the self-evaluation groups would have found it more helpful to have a joint discussion meeting with the secretary later in the self-evaluation process. Whilst such a meeting might not have functioned well at an early stage in the process, a meeting during the self-evaluation exercise could have facilitated more discussion and clarification of the questions, the terminology and, not least, the interpretation and application of the criteria used.

Lessons learned

• Consideration should be given to how to ensure a shared understanding and interpretation of the self-evaluation manual and self-evaluation process.
• Regular contact with the institutions during the self-evaluation phase could help with the interpretation of some of the questions and terms, and be valuable for the development of the self-evaluation reports.


4.2.3 Focused approach

The transnational evaluation was a 'light' version of typical national evaluations; it applied a focused approach structured around three pre-selected points:

• Educational context;
• Competences and learning outcomes;
• Quality assurance mechanisms.

The experience of using such a focused approach has generally been positive. It has provided the evaluation with an opportunity to make comparisons between programmes more easily and more comprehensively. Further, the approach has been valuable in that it has produced focused self-evaluation reports that have been a stimulus to reflection within the institutions, and have been of help in structuring the site visits. It is perhaps worth noting that, with regard to the length and focus of the guidelines for self-evaluation and the project, a broader evaluation with more areas would have been practically impossible for both institutions and experts to conduct in the timeframe given for both the preparation of the self-evaluation reports and the site visits.

A focused evaluation does, however, also imply that some issues are left out. The evaluation therefore presents only a part of the picture of the programmes compared with that provided by broader or more extensive evaluations. The Veterinary Science institutions and experts note that, although there is an EAEVE system to evaluate Veterinary Science faculties, the TEEP approach was appreciated as it introduced another method of evaluation. TEEP was lighter, less wide-ranging and less rigorous in those areas that it addressed, and introduced some new aspects related to quality assurance.

The choice of the three areas for the focused approach was also an important aspect of the development of the TEEP method. The themes for self-evaluation generally seemed to be suitable, although not all institutions could readily provide all of the requested statistical data.

Educational context as focus area

The focus on educational context proved valuable; it would be difficult to conduct a transnational evaluation without a focus on both the national and the programme contexts. It was evident that the national legal systems have a significant impact on programme structure and composition, as well as on the organisation of quality assurance mechanisms. It would have been very difficult to understand the rationale underpinning the programmes without the information on national contexts. It cannot be emphasised enough that this focus is of vital importance when conducting transnational evaluation. Without it there is a significant risk that conclusions and recommendations will not be relevant to the evaluated programmes.

In the Physics evaluations the information provided about programme structure and content was not sufficient for the experts to fully understand the context in which the programme functioned. This influenced the quality of the evaluation. The Physics experts found it hard to assess the level and quality of the programme without information on the curriculum (or, alternatively, detail of the subject competences to be acquired). In their view, the manual should have been more explicit in asking the programmes to provide such material. However, the other panels did not express such a view, and indeed different arguments can be made on the extent to which the evaluation should go into detail on the programmes' curricula and content.

In transnational evaluations a detailed consideration of curriculum and content could also have significant disadvantages. An in-depth examination of curricula would require individual institutions/programmes to provide details of their rationale for curriculum design, strategies for teaching and learning, and how these are then reflected in approaches to (different types of) assessment, the setting and application of assessment criteria, and the rationale for progression routes and 'hurdles'. All of these would need to be provided in the agreed language for the evaluation. This would mean immense work for the participating institutions and would also increase the workload of the panel substantially.


It could also be argued that it would be difficult for a transnational evaluation panel to assess programme quality through an in-depth study of the curriculum, when the extent to which the curriculum can be varied depends, to differing degrees, on national standards and requirements. The rationale of TEEP from the outset was that evaluations should be part of and support a process of development of quality assurance cultures within individual institutions through application of, and reflection on the relevance of, shared criteria.

Competences and learning outcome as focus area

The focus on competences and learning outcomes was an essential ingredient if the evaluation was to build on previous experiences with the TUNING project; this approach also provided an opportunity to consider the applicability of the recently developed generic descriptors for first- and second-cycle degrees (the so-called 'Dublin descriptors').

An aspiration associated with the creation of a European Higher Education Area is the establishment of comparable degrees with easily readable nomenclatures, within a system essentially based on two cycles. This requires the identification of relevant competences that are associated with each of the degrees. These are in part subject-specific, but competences that are shared across programmes must also be identified. The identification of such competences, and the development of a shared understanding of their importance, should add to mutual trust and confidence about HE qualifications within Europe, and enhance those distinctive features of European cooperation that are closely linked with the transparency required to facilitate greater labour market mobility.

The relevance of competences and learning outcomes as a focus area was a subject of discussion among experts and institutions. One group considered that it was particularly valuable to focus on competences and learning outcomes, as this provides the basis for the important change from comparing one syllabus against another to comparing the real outcomes of the programmes. This then allows for different approaches and choices within a syllabus whilst still providing for the development of comparable competences.

Another group, however, found this approach difficult in concept and in operation. The majority of the institutions and experts were not familiar with the vocabulary of the TUNING project, and it was therefore difficult to initiate a discussion of definitions at a sufficiently detailed level to take the comparative evaluation forward quickly. It proved difficult to convert what appeared to some to be an abstract terminology quickly into operational terms. For those institutions that were familiar with the competence terminology and had managed to use it operationally (i.e. to provide explicit course and programme outcomes), it turned out to be an important tool in informing students of the content and goals of the programme.

Quality assurance as focus area

The last focus of the evaluation was on the programmes' quality assurance mechanisms. In general, both institutions and experts agreed that this focus area was the one from which the programmes benefited the most. Several emphasised that the provision of statistics on student cohorts and progress had been highly valuable to the programme.

It was also suggested that the focus on quality assurance should be seen as a way of helping institutions to set up a quality assurance system and make it effective. The fact that the evaluation is a transnational one, with a focus on quality assurance, can also be used as an instrument to help strengthen the arguments of individual universities in their requests for a greater focus on quality assurance within an international context, and in this way provide opportunities for greater mobility and international recognition.

Other focus areas

As mentioned earlier in this section, a focused evaluation does leave out some issues. Other focus areas, such as research or information on infrastructure, have been mentioned as important by some experts and institutions, and in the case of Veterinary Science practical training has been pointed out as a focus area of relevance. However, a broader evaluation with more areas would have been practically impossible for both institutions and experts to conduct in the time frame given for both the preparation of the self-evaluation reports and the site visits.

Lessons learned

• The focused evaluation approach makes comparison easier and the self-evaluation reports and the site visits more structured.
• Focus on the national and local context is vital in a transnational evaluation.
• It is important to have sufficient information on programme structure.
• The competence area is difficult to approach if there is a lack of knowledge of the terminology.
• Quality assurance is an important focus area and helps the programmes in developing an effective quality assurance system.

4.2.4 Self-evaluation process

The preparation of the self-evaluation reports was designed to serve three distinct aims. It should provide:

• A framework to stimulate internal discussions of strengths and weaknesses related to the three themes (degree structure and definition; competences and learning outcomes; quality assurance) that are the foci for the evaluation. This was intended to assist the continuous improvement in the quality of the programme.
• Comparable documentation to be used by the panels of experts in their preparations, site visits, evaluations and reports.
• Comments on the utility of the criteria when applied to different programmes delivered within different national contexts.

The self-evaluation reports, together with the information gathered during the site visits, constituted the documentation for the evaluation.

In general, the programmes have pointed out that the self-evaluation process stimulated internal discussion about the quality of the programme. The evaluation was also seen as a starting point for (further) awareness of quality assurance, and the self-evaluation process helped the programmes to discuss and see things in a new way. With more time for the process, the evaluations would have been more deeply rooted in the institutions.

However, some institutions also stated that they found the time too limited for self-reflection and for a broad participation by different stakeholders. The evaluation was nevertheless seen as a very valuable exercise that ought to be shared with the rest of the staff.

Lessons learned

• The self-evaluation process is important in stimulating internal discussion about the quality of the programme.
• It is important that the time schedule allows time for reflection and for the participation of other stakeholders of the programme.

4.2.5 Self-evaluation group

A self-evaluation group was established for each of the programmes involved in TEEP. Under its chairperson the group prepared a self-evaluation report. The group consisted of five or more members and included representatives of each of the relevant stakeholders at the programme level, including management, staff actively involved in teaching, students and administrative staff.

The self-evaluation groups generally functioned as anticipated within the project. Not only members of the programme's management team, but also students and staff represented in the self-evaluation group, contributed to the site visits, and appeared to have an important sense of ownership of the exercise and commitment to bringing about the changes suggested by the overall process. Students and staff participating in the interviews at the site visits expressed an appreciation of the value of participating in the process.

However, a point for consideration in future evaluations is how to ensure that the results and ownership of the evaluation process have a wide 'spreading effect' among the remaining staff and students who are involved in the programme. Often the self-evaluation group appeared to have prepared the self-evaluation report with little interaction with the rest of the programme. The manual should perhaps be more explicit in encouraging the self-evaluation group to initiate a dialogue with the rest of the programme during the self-evaluation process. If the information does not get through to the rest of the programme, there is a significant risk that the results of the evaluation will be of only formal value and not have any real impact or consequences.

Lessons learned

• A self-evaluation group with representatives of the different contributors and stakeholders in the programme should ensure broad ownership of the process and support widespread commitment to bring about any necessary changes.
• Consideration should be given to ensuring that the ownership and results of the evaluation process are widely spread among the staff and students who participate in the programme.

4.2.6 Quality of self-evaluation reports

The purpose of using a single self-evaluation manual was to facilitate a comparison across programmes from different national contexts, by seeking to ensure that the information provided by each of the institutions was presented in a similar and consistent way. The experience in this evaluation shows that even with such a framework there will be diversity among the self-evaluation reports.

Some self-evaluation reports were rather descriptive in nature; some thought this was due to the transnational context in which the evaluation was conducted. For the experts to be able to evaluate the programmes, it is important that they are provided with a suitable framework. Since much of the self-evaluation document, and the discussions during the site visits, were concerned with the significance of the national and local context, the reports are not necessarily as evaluative as those produced within a single country, where the context is recognised and understood. Significant consideration must be given within a transnational evaluation to the guidelines for the self-evaluation manual and the proforma tables for the inclusion of comparable data sets. The different ways in which some of the institutions interpreted the instructions for the self-evaluation indicate that further development of the self-evaluation manual and report is required.

The descriptive nature of some self-evaluation reports may also have been influenced by the fact that a number of the questions raised in the self-evaluation requested descriptive answers. To counteract the descriptive tendency it would be valuable if questions encouraged the self-evaluation panels to be reflective.

Lessons learned

• Transnational evaluations carry the risk of being descriptive due to the need for explanation of the national context.
• Consideration must be given to how the guidelines for the self-evaluation report and the proforma tables fit with national interpretations.
• The self-evaluation format should include questions that encourage reflectiveness.

4.3 Site visits

The self-evaluation was followed by site visits by the expert panels. The site visits took place in January–March 2003 and lasted 1.5 days per institution. All site visits were structured in a similar way, in accordance with a standard programme. The site visits provided the panel with an opportunity to invite the institutions to elaborate on unclear and less substantiated sections of the self-evaluation reports. At the same time, the site visits served to validate the information provided in the self-evaluation reports. Furthermore, the site visits allowed the experts to get a comprehensive and clear view of the programme through discussions and interviews with the main stakeholders.

4.3.1 Organisation of the site visit

The length of the site visits was generally regarded as being appropriate. It would have been possible to complete each of the site visits in one day, but the inclusion of an extra half-day provided an opportunity to go into more depth on important issues. A one-day visit would have required each of the interviews to be shortened in order to allow for interviews with all of the stakeholders. The experience from the site visits showed that an interview of approximately 1.5 hours is appropriate, and that less time would be insufficient to cover all of the necessary areas. Furthermore, it would not have provided the panels with the opportunity to prepare for each interview.

The organisation of the site visits, with separate interviews with different groups of stakeholders, served the intended purpose of validating the content of the self-evaluation reports by providing the panel with a clear picture of the opinions and perspectives of the different stakeholders.

The stakeholders interviewed at the site visits included the self-evaluation group, the senior management, the teaching staff, students and graduates. The institutions were asked to provide 5–8 members for each group. This number proved to be appropriate and prevented potential difficulties arising where the experts outnumbered the group being interviewed. This was thought to be particularly important for the student group, to ensure that they did not feel intimidated. A group larger than 8 would have been very time-consuming and would have increased the difficulty of keeping a structured conversation.

Accordingly, it was emphasised before the site visits that an overlap of participants in the different sessions should not occur unless it was unavoidable due to the organisational structures relating to the programme. This proved to be difficult, as there was often significant overlap between the self-evaluation group and the senior management team. In the Physics evaluation it was therefore decided to substitute the meeting with the management group with an additional meeting with the self-evaluation group. Both institutions and experts found this solution beneficial.

It had been anticipated that the student representatives would be (randomly) selected according to the required programmes/specialisations. This also proved to be difficult, in part because of the language barriers. The students were often, although not always, selected according to their ability to speak English. Institutions did a remarkable job in encouraging the students to participate and in identifying students who met all the criteria, and it was a general opinion among the experts that the interviews with the students were some of the most rewarding. In one of the evaluations the institutions had chosen to include in the group to be interviewed both international students on mobility programmes and a home student who had studied abroad; this proved to be particularly valuable, as they provided a comparative European perspective to the discussions.

The institutions were also asked to invite 5–8 graduates according to a number of criteria, i.e. the graduates should represent different sources of employment. As in other areas of the evaluation, the institutions showed a strong commitment to the project and provided a sufficient number of graduates. It was interesting for the experts to gain an insight into the range of jobs the graduates were qualified to take up. Unfortunately, some of the graduates for the interview groups seemed to have been selected on the strength of their outstanding curricula vitae and could therefore not be considered representative of the average graduates. The extent to which an evaluation with a focus on the first-cycle degree benefits from interviews with graduates who are mostly taking their second-cycle degree or PhD degree has been questioned. The evaluation might benefit more from interviewing both second-cycle degree students and graduates with a first-cycle degree who are in employment.

One problem that should be considered is the extent to which the points of view of a small group can be considered representative. This is pertinent to the interviews with the students, teaching staff and graduates. Interviews with stakeholders at the site visit can only serve as an indication of the reflections of the particular individuals interviewed, not as representative of the stakeholder group as a whole. In evaluations conducted by the national quality assurance agencies, surveys of students, graduates and employers normally constitute a very important element of the evaluation of quality. Conducting surveys at a transnational level would demand considerable resources. However, the surveying of more representative groups should be considered in any future transnational evaluation.

The Physics evaluation included a tour of facilities within the programme of the site visit. This was generally regarded as a positive experience in gaining an insight into the facilities. However, it was also noted that there is a particular and explicit need to identify just what facilities the experts feel they need to see, so that time aspects can be controlled effectively. Any tour must have a specific purpose related to supporting teaching and learning; it should not just cover research equipment and computers that are not generally available to students.

Lessons learned

• 1.5 days for site visits, with 1.5 hours for each interview, has proved appropriate.
• Separate interviews with different groups of stakeholders provide the panel with broad perspectives on the programme.
• 5–8 members for each interview group appears to work effectively.
• Overlap of participants in the different sessions should be avoided whenever possible.
• Interviewing international students on mobility programmes contributes a comparative European perspective.
• A solution to the selection of students and graduates to provide a reasonably representative panel must be found.

4.3.2 Interview guides

For each site visit an interview guide was prepared; this provided the framework for the sessions to be held during the site visits. Reflecting the function of the site visits, the guide primarily contained questions related to those areas where more information, or validation of information, was needed in order to facilitate the subsequent comparative evaluation of the programmes.

The guide contained questions relevant to each of the groups to be interviewed, including some initial questions related to the self-evaluation process, about how the various stakeholders had been involved in or informed about the process and the self-evaluation report. This was followed by a number of questions related to the content of the self-evaluation report and the focus areas.

In general the interview guides proved useful for the site visits. As the time for each interview was short, the interview guide served to structure the interviews, and hence to generate the information needed to supplement the self-evaluation reports. The interview guides also functioned as a useful instrument for the expert group and secretary jointly to prioritise the questions before the meetings.

A disadvantage of using relatively strict interview guides was that they countered the flow of the interviews to some extent, and often meant that there was insufficient time to follow up on answers. Experience also showed that interview guides with too many questions limited the room for reflective aspects. However, the interview guides did ensure that all focus areas were treated equally at each site visit, thus facilitating the comparative element of the evaluation. Comparable, institution-specific interview guides are therefore recommended, provided that the length of the individual interviews is carefully taken into account when deciding upon the number and type of questions to be posed.

Lessons learned

• An interview guide is useful to assist structured interviews and to get the information needed to supplement the self-evaluation reports.
• In comparative evaluations the interview guides ensure that all focus areas are treated equally at each site visit, thus facilitating the comparative element of the evaluation.
• The length of the individual interviews should be carefully taken into account when deciding upon the number and type of questions to be posed.

4.4 Report

TEEP resulted in the publication of three subject-specific reports, one for each discipline. Each report was structured in two parts. The first part provided a comparative view across the programmes according to the three focus areas. The second part contained programme reviews focusing on the strengths and weaknesses of each of the programmes according to the criteria of the manual. This part also included recommendations for further development of the programme.

The character of the reports was more descriptive than initially expected. However, due to the different national cultures and systems, it was considered necessary to use a considerable part of the report to describe the national context of each of the programmes evaluated. This dimension of the project was underestimated in the project planning. As previously mentioned in chapter 4.2.3 on the focus area of educational context, this part is worth giving a higher priority in the planning of future transnational evaluations.

The first step in the process of report writing was that, for each evaluation, the secretary wrote an initial draft of the report. The experts received this draft version with a request for comments. Unfortunately, the tight time schedule of TEEP did not allow an additional meeting of the experts, but the secretaries collected and reviewed comments and incorporated these into the final report. Subsequently, the institutions had the opportunity to comment on factual elements. Lastly, the report was sent to the experts for consultation before publication.

5 Results and conclusion on the use of criteria

5.1 Criteria-based evaluation

In national evaluations of educational programmes, quality is often assessed in terms of the extent to which the individual programmes achieve their own goals and comply with the legal regulations under which they operate. This approach is commonly referred to as assessing 'fitness for purpose'.

However, the goals of the programmes participating in this transnational evaluation, and the legal frameworks under which they operate, differ, and it is therefore necessary to have a common framework of reference for the evaluation in the form of criteria. Generally, the common criteria have been useful in making such comparison possible. They have provided the three external expert panels and the programmes with a transparent basis for the evaluation of programmes across three different disciplines and the 11 countries.

At the same time, the evaluation has demonstrated that the formulation and application of the criteria must be carefully considered. The section below describes how the criteria were formulated. This is followed by a critical discussion of the application of the criteria. Finally, the Project Group concludes on the applicability of common criteria in TEEP.

5.1.1 Formulation of the criteria

The two sets of criteria relate to two of the focus areas: 'competences and learning outcome' and 'quality assurance'. The third focus area, 'educational context', deals with the programme context and has served as a means to provide factual information about the programme; therefore no criteria were formulated for this area. The criteria have been formulated with reference to a number of different sources. Overall, the objectives of the Bologna Declaration and the agreements reached at the 2001 Prague meeting have constituted one important reference point for the formulation of the specific criteria. Another important source has been the TUNING Project.

Under the focus area 'competences and learning outcome', the criteria are particularly concerned with the content of the programmes in terms of subject-related and generic competences, a terminology that was applied in the TUNING project.

Other criteria for 'competences and learning outcome' focus on the extent to which the first-cycle degree programmes (in the case of Physics and History) and the second-cycle degree (in the case of Veterinary Science) correspond with the objectives formulated in the Bologna Declaration. The criteria have been prepared on the basis of the Dublin descriptors for first- and second-cycle degrees developed by the Joint Quality Initiative (http://www.jointquality.org). This developmental activity has been undertaken in line with the Bologna Declaration, which proposes the introduction, within a European higher education space, of a system of qualifications in higher (tertiary) education that is based on two cycles. Each descriptor indicates an overarching summary of the outcomes of a whole programme of study. The descriptor is concerned with the totality of the study, and with the student's abilities and attributes that have resulted in the award of the qualification. In this way the descriptors are also connected to the set of criteria that concentrate on the learning outcomes of the programme.

The set of criteria associated with the focus area 'quality assurance' was formulated to provide a basis for an analysis of the comparability of the systems and procedures applied by and on the participating programmes. They consider strategies, procedures and systems for quality assurance. One intention of the TEEP project was that the criteria on quality assurance should assist the programmes in the development of quality assurance processes. The selection of these criteria has rested upon the experience and knowledge that the European Network for Quality Assurance in Higher Education has gained from the implementation of numerous evaluations of higher education programmes.

The experts and programmes were asked to assess the formulation and applicability of the criteria used in the evaluation. In the self-evaluation, during the site visits and in the aftermath, the programmes and experts have been very thorough in their reflections and feedback on the criteria. The following section, reflecting upon the application of the different sets of criteria, is based on the feedback from the programmes and experts, and on the Project Group's own experiences with the project.

5.1.2 Applicability of the criteria

In general, the criteria have been important in ensuring that all of the programmes were evaluated through a single approach, thus ensuring comparability in the assessment of the three subjects and 14 programmes. It is important to emphasise that the criteria were not used for making formal judgements but primarily for testing their effectiveness in ensuring comparability. The criteria were directly linked to the focus areas and their related questions asked in the project manual. However, based on the feedback from programmes, it is clear that this link could have been made more explicit.

5.1.2a Criteria for competences and learning outcome

In this set of criteria the programmes were asked to identify the establishment of programme goals, and to reflect on the extent to which the goals were comparable to those encapsulated within the Dublin descriptors.

The evaluation revealed that there is considerable variation in the degree to which the programmes have formulated specific aims for the first-cycle degree, and the extent to which these align with the Dublin descriptors. Some programmes have an explicit aim, stating that the first-cycle programme leads both to employment and to further study. Other programmes have not explicitly formulated their aims for the first-cycle programme.

In general, the programmes and experts found it relevant to ask about programme goals. However, according to some experts and programmes, the questions concerning goals for the programme were not formulated clearly enough. The manual operated with the terms 'goals', 'aims' and 'goals for competences', and the difference between these terms was not made clear to the programmes. It is suggested that the terms be defined more precisely.

The programmes were also asked to describe and reflect on the formulation of subject-related and generic competences, their consistency with programme content, and how the students may achieve the competences through the programme.

The focus on 'competences and learning outcome' in the evaluation was an interesting and new experience for many of the programmes and experts. It was interesting to consider the generic competences and the subject-specific competences in relation to the discussion of which abilities and approaches are relevant to students reaching the end of the first cycle. However, the application and understanding of the terminology were very different across the disciplines.

In History the subject-specific competences from TUNING were used and recognised as relevant by some of the programmes. The TUNING criteria were seen as external reference points against which the programmes can place themselves. However, the TUNING generic competences functioned less effectively, because they were regarded as too numerous to be of real value. It was apparent that in the majority of cases the staff members found it easier to relate to the subject-specific competences than to the generic ones. The competence terminology was further complicated by the fact that the TEEP manual did not adequately distinguish between competences and learning outcomes.

Across the three subject areas the experts and programmes had some difficulties understanding and interpreting this set of criteria. For many of the programmes, the use of the competence terminology was unfamiliar and therefore not applicable. Evidence from the project suggests that the dissemination and impact of the TUNING terminology in general are limited and not necessarily widespread even within departments that include members of staff involved in TUNING. The application of the terminology therefore seemed artificial and too complicated to some of those involved in the project and site visits. It can be concluded that a considerable amount of effort is still needed in order to ensure a wider recognition of internal and external reference points, such as TUNING and the competence terminology, and acknowledgement of their value in articulating standards and ensuring their quality assurance.

5.1.2b Criteria for quality assurance

The experts and programmes in general found the criteria relating to quality assurance to be relevant. However, the extent to which they were applied varied according to subject area.

The History programmes approached the quality assurance criteria with varying degrees of enthusiasm, commitment and support, and with different expectations about their continuing applicability. Some considered them a starting point to be developed within the local context (after some initial scepticism). Others saw them as potentially authoritative within European developments and indicated that they would be seeking to continue their use in support of quality assurance changes, but that some detailed redrafting would be necessary. There was, however, a general recognition of the value of some/many of the criteria and, in particular, those related to the quality assurance role of self-evaluation.

In Physics the programmes generally appreciated the relevance of the focus area on quality assurance. However, the programmes also noted some difficulties in applying all the criteria to their practice.

The professional dimension of Veterinary Science degrees, together with the existence of a European directive, has traditionally encouraged the implementation of some quality control measures, and the approach of the criteria appears to have been more immediately recognised and valued. The reflections obtained from the self-evaluation reports and expressed during the site visits noted that the programmes found that the TEEP evaluation introduced some new aspects related to quality assurance. The Veterinary Science programmes, however, also noted some difficulties in understanding the definitions in the manual.

The Project Group suggests that the applicability of the quality assurance criteria is related to the maturity of the quality assurance systems of the programmes. Those programmes that have had experience in setting up systems that cover different aspects of quality assurance have found it worthwhile to be evaluated according to all the criteria. These have provided 'added value' through comparisons with other European systems. However, for those programmes that are still in the process of building up a broad quality assurance system, only some of the criteria were applicable. For these programmes the remaining criteria might serve more as an inspiration for the continuing development of quality assurance processes.

Project participants have provided suggestions for re-phrasing the criteria. Whilst these are not presented here in detail, it is anticipated that they will provide valuable help in future revisions of the criteria.

5.1.3 Conclusions

One of the aims of TEEP was to test the use of common criteria. The Project Group concludes that, at a general level, the applicability of the criteria depends on their formulation and 'readability', and the extent to which they represent a nationally as well as internationally accepted threshold. Further, their applicability also clearly depends on the extent to which the programmes have developed and implemented the aspects covered by the criteria. The criteria have important roles in stimulating and supporting such developments and implementation where they are recognised and acknowledged as relating to necessary aspects of quality assurance.

The TEEP project has illustrated that there are considerable differences in terms of the educational cultures, national traditions and regulatory systems within which the individual programmes operate. Though limited in its scope, TEEP has therefore clearly demonstrated that within European higher education national systems and educational priorities still predominate. For that reason it is important that the differences in educational cultures are carefully considered and that the criteria are formulated so as to be flexible enough to allow for the impact of local and national contexts, legislation, and developments.

The project has also shown, however, that where national states have committed themselves to political objectives (aligned to the Bologna process) it is easier to reach a common interpretation. The project has in that respect provided a valuable insight into the conditions for the implementation of Bologna and Prague at programme level.

It is important to emphasise that the criteria approach has provided the basis for making comparison possible. The common criteria have functioned as shared reference points, and have ensured that the same topics were evaluated across the three disciplines and the 14 programmes. Any future transnational evaluation project should seek criteria/reference points that are compatible with national and local contexts, and use terminology that is, to a large extent, familiar and useful for the programmes being evaluated.

Overall, it can be concluded that the participants in general found TEEP an interesting and stimulating project. Indeed, many of the programmes have noted that they considered the evaluation very useful in providing the stimulus for extensive discussions about quality assurance in general and the nature and relevance of the criteria in particular. The TEEP process has thus proved to be valuable from a number of perspectives, but it has also provided an indication of further developments and improvements that would better support a criteria-based, transnational evaluation.

6 Reflections on methodology

TEEP is a pilot project with the objective of investigating the operational implications of a European transnational quality evaluation using common criteria. It has therefore been an essential part of the process to engage the institutions and the experts in the evaluation of the method used. The first five chapters of this report were drafted by the Project Group on the basis of comments from the institutions and experts and of the Project Group's own experiences. The draft report was presented to the involved experts and institutions at a joint closing seminar in Brussels in October 2003. The experts and institutions were invited to discuss the method of TEEP at the seminar. Furthermore, they were invited to send their reflections in written format to be included in the report. In the following sections the comments are presented in a non-edited format. The Project Group has received comments from the Management Group, the Physics programmes and the Physics expert panel.

6.1 Reflections from the Management Group

The Management Group of TEEP discussed the TEEP methodological report at its meeting on 10 September. The overall judgement of the Management Group is that TEEP has succeeded well in reaching its objectives of testing a methodology for transnational evaluation in higher education and in discovering what obstacles are encountered and what improvements can be made, naturally within the framework provided by budget and other constraints. The Management Group considers the report to reflect fairly and precisely the stronger and weaker elements in the TEEP process, but the Management Group has at the same time a number of recommendations for additions and revisions.

First of all, the Management Group wishes to emphasize that all the phases of TEEP took place in a dynamic context. Most institutions are now involved, or have recently been involved, in reforming their degree structures and courses of study in accordance with the Bologna process; the TUNING I project was just closing as TEEP was launched; the final report of TUNING I became available several months later. As a consequence, the TUNING criteria had only just been elaborated and were being disseminated in the institutions, but knowledge of them was still limited. The Management Group reminds the reader that the methodological report should be read keeping this dynamic context in mind.

Furthermore, the Management Group emphasizes that TEEP should be considered in the context of several other transnational projects. In this way the positive added value of TEEP can be brought into sharper focus. TEEP, after all, is in some respects more transnational than earlier projects, because transnationality is the basis for the identification of institutions, programmes and experts as well as quality assurance agencies.

In general terms, the Management Group accepts the need for a project to achieve the optimal balance between existing resources and overall objectives. Accordingly, the Management Group is fully aware that the relatively limited resources available for TEEP have made it necessary to focus the project tightly, using a so-called "lighter touch". This "lighter touch" has made the project possible, but has at the same time put restrictions on the applied methodology and on the scope and level of the evaluations. An in-depth review of the quality of programmes was not feasible, and the scope could not, for instance, include an evaluation of the relation between teaching and research. However, because the "lighter touch" has been essential to the successful conclusion of TEEP and has provided credible evaluation results, as well as being appreciated by the institutions, the Management Group suggests that the "lighter touch" be reflected upon in upcoming transnational evaluation projects in order to investigate further the method's usefulness for evaluation and in developing a positive "quality culture".

The Management Group finds more problematic the extent to which the Dublin Descriptors or the TUNING criteria may be considered adequately tested. However, the Management Group points to the result of TEEP, which is that the criteria have, in general terms, been important in ensuring that all of the programmes were evaluated through a single approach.

At the same time it must be acknowledged that across the three subject areas the experts and programmes had some difficulties understanding and interpreting the sets of criteria concerning competences and learning outcomes and quality assurance. The Management Group agrees that the applicability of the criteria depends on their formulation and ‘readability’, and on the extent to which they represent a nationally as well as internationally accepted threshold. Further, their applicability also clearly depends on the extent to which the programmes have developed and implemented the aspects covered by the criteria. The criteria have important roles in stimulating and supporting such development and implementation where they are recognised and acknowledged as relating to necessary aspects of quality assurance.

In any case, the Management Group strongly suggests that it will be useful to feed the experiences obtained from TEEP back into TUNING II, especially in terms of analysing the methodology used and its feasibility. Further transnational projects will benefit from the work in TUNING II, while at the same time TUNING II will benefit from the TEEP experience, continuing the positive dynamism evident during the TEEP project itself.

The Management Group considers the TEEP evaluation manual to be generally well prepared. However, the Management Group also feels that the definitions and terminology applied could have benefited from clearer expression beforehand. If that had been the case, the participating institutions and programmes would have stood a better chance of understanding the criteria in the same manner. In future projects the evaluation manual must provide exact definitions of terminology and be drafted on the premise that most persons involved (experts, agency personnel and secretaries, staff and students of the self-evaluating institutions) will not be native speakers of English. For this reason special attention should be given to the simplicity and clarity of syntax and to clear definitions of terminology.

The Management Group takes note of the fact that Central and Eastern European institutions had a comparative disadvantage with respect to Western European institutions in terms of prior experience in transnational evaluations. For example, CEE institutions did not participate in TUNING, although some were informed about it through the respective Thematic Networks. The Management Group considers this unfortunate but notes with approval the fact that they are more directly involved in TUNING II. CEE institutions must not only be included in further transnational evaluation projects; they must also be expected to share fully in methodological developments and projects, such as TUNING.

Concerning the selection of external experts, transnational projects require that the appointed experts have knowledge of the different local higher education systems involved, and that the experts be selected according to criteria that include as wide a geographical distribution as possible. The Management Group makes a special recommendation for future projects by emphasising that the transnational aspect of the project must be very visible not only in the methodology but also in the composition of expert panels. This will be vital for the credibility and the acceptability of transnational evaluation. This is an issue not only of image but also of substance, as knowledge and experience of different national approaches will be necessary for all parties – including quality assurance agencies – involved in developing the European Higher Education Area.

The Management Group would in principle have wished for TEEP to contribute even more clearly than it has to the transparency and compatibility of quality in European higher education. However, as stated above, the Management Group accepts that the primary TEEP objective has been to act as a methodological pilot to explore how the proposed methodology actually works in a variety of concrete situations. If the scope and level of the subject evaluations have necessarily been limited, this fact in no way detracts from the success of TEEP in reaching its primary objectives. In terms of methodology, TEEP may have been slightly too optimistic in its initial ambitions. Still, the Management Group considers TEEP useful exactly because the results will help the planning of other projects.

To conclude, the Management Group recognises the importance of the fact that the final edition of the methodological report will include a further section reflecting the discussions to be held on 13 October 2003, when the institutions and the experts will meet in Brussels to share their views on the project.

6.2 Reflections from the institutions

6.2.1 Report from the evaluated Physics programmes4

4 Giovanni V. Pallottino, Dept. of Physics, University La Sapienza, Rome, Italy, October 20, 2003.

We provide our answers to the four questions addressed in the guidelines of the Seminar, with a preliminary remark: the scope of the evaluation was the 3-year programmes, but only one university had a real 3-year programme in full operation, as others were in the process of completing their new 3-year programmes and another had a 5-year programme.

1. One of the TEEP objectives was the development of a transnational evaluation manual with common criteria applicable to three different subject areas. To what extent has TEEP fulfilled this objective? Identify possible weaknesses as well.

This objective was only fulfilled to some extent, for several reasons, including the following: the manual was rather unclear; e.g. the criteria given in the appendix were too numerous, sometimes redundant or obvious, at other times confusing, not clearly enough formulated, or too difficult to assess. In addition, very little information was asked for about the contents of the teaching programmes and about the level of knowledge transferred to the students. As the manual was mostly devoted to the organizational and logistical aspects of the study programmes, any issue concerning the link of the studies to research and to research-oriented training was missing. Of course, the part related to the topic of quality assurance was much appreciated.

2. The scope of the evaluation was considered according to three main dimensions: educational context, competences and learning outcomes, and quality assurance mechanisms. To what extent were the focus areas beneficial? Take into consideration the international dimension of the evaluation, and the third TEEP objective to contribute to greater awareness, transparency and compatibility within European HE. What are your proposals to improve the evaluation approach?

The three focus areas were in general beneficial, as we found it useful to reflect on and answer the related questions. However, we feel that the TUNING criteria should not have been taken as a reference standard, because in the TUNING pilot they were clearly intended to be considered only as a starting point. We estimate that the contribution of the TEEP project to awareness, transparency and compatibility within European HE is quite limited. Surely the self-evaluation teams are now much more aware than before the evaluation, but little awareness has been created among the rest of the academic community, because we think that not many of our colleagues will read the reports (see also our answer to question 4).

3. The TEEP process consisted of an internal and an external evaluation. Could you underline good practices and improvement areas in both steps? (internal participation, site-visit schedule, calendar, meeting with stakeholders, the functioning of internal and external teams).

As far as we are concerned, the schedule of the site-visits was good, although the overall schedule of the project was too tight, particularly the phase devoted to finalizing the subject-specific report (leading to some factual errors, as indicated by the Vienna internal team). We also feel that the experts should have been more involved in the process, whereby the coverage of the teaching contents would have been improved. The experts should have been given the opportunity to visit all or most of the institutions as one group. Immediately after each site-visit, time should have been set aside for discussion and for writing up a first draft report. In addition, the group of visiting experts should have included (at least) one expert fluent in the local language and terminology, and with some knowledge of the local academic culture.

4. Reporting was done in two blocks: subject-specific reports containing the results of the external evaluations of the individual institutions, and a methodological report. To what extent are the reports useful tools for institutions? For the subject area at the European level? Identify strong and weak points of the reports and the reporting procedure.

The reports are certainly useful for the visited institutions. But in order to be really effective, the reports need to be written in such a way that they are worth reading. The crucial question here is: who in the academic community of our five visited institutions, and in other European universities, will really read them in the form in which they are now published, on the ENQA website only? As a matter of fact, both reports appear to be written by specialists in the evaluation discipline and to be addressed only to other specialists in this same discipline, using the appropriate jargon, rather than to their natural intended target, i.e. the academic communities of the subject areas concerned.

6.3 Reflections from the expert panels

6.3.1 Comments on the TEEP methodology from the Physics Expert Group5

5 Compiled by Prof R C Thompson, Imperial College London.

The Physics Expert Group found the TEEP project interesting and stimulating. We were all pleased to have had the opportunity to be involved with the project, and we have benefited from working together and from the opportunity to learn about the institutions we visited. The project, and our participation in it, will probably help to raise the profile of Quality Assurance in our own institutions and in those we visited. However, we were disappointed that the contributions we could have made as experts were not fully exploited, and we would have been happy to have had more asked of us.

We were asked to look at four questions concerning the evaluation manual, the scope of the evaluation, the internal and external aspects of the evaluation, and the reports. We comment on these in turn below.

1. The Manual. We think that the manual as supplied was a reasonable first attempt at writing such a document, and it shows that the approach of developing a manual with common criteria for a transnational evaluation could work. However, the language was in many places ambiguous and unclear, and much of the terminology would not be accessible to teaching staff. If the TUNING criteria are to be referred to, then it must be ensured that this terminology is familiar to the reader. A comprehensive glossary of all the terms used is essential – an attempt was made at this in the manual, but it was not very useful and the definitions were not clearly written.

In order for such a manual to work well, it is crucial that there is no ambiguity in the language, so that the information supplied for different programmes is mutually consistent. In particular, the tables of numerical data in this manual were imprecisely defined, so that even in the final report for physics there were still inconsistencies of interpretation between the five evaluated programmes. It was hard to compare programmes without the information being given in a consistent format.

2. The Scope of the Evaluation. We understand that for practical reasons it was necessary to limit the scope of the evaluation. The inclusion of Quality Assurance (QA) issues has helped to raise the profile of quality assurance in the expert group and in the institutions running the evaluated programmes, although this may be limited to the staff directly involved in the TEEP project and may not have spread effectively to all teaching staff. In general, the interpretation of QA in the manual is focussed on particular procedures, and we felt that more emphasis on the outcomes of QA practices, rather than on the procedures themselves, would have been fairer to the institutions and more helpful in general.

Our main concern here is that the interpretation of “Educational Context” should have been rather broader in the project. We would have liked to see more information provided on the subject-specific competences to be developed; this was requested in the manual but was mostly supplied only in a very general way. This would not be for direct evaluation of the details of the subject content of the degree programmes, but for the purpose of background information necessary for appreciating what the programme sets out to achieve. We would also have appreciated receiving some information on the research profile of the department, the provision of learning resources, the experience and qualifications of the teaching staff, and the involvement of the department in schemes such as ERASMUS (which is highly relevant to European mobility of students). This information would allow the experts to set the programme in context, especially as the student learning experience will be strongly influenced by the research environment.

3. The Evaluation Process. We found that the process of self-evaluation had a very positive effect on the institutions in general. It stimulated internal discussions about the programmes and a review of the teaching activities. However, this may have been limited to the members of the self-evaluation team in some cases, and these members may not have been representative of the whole department. The documents produced by the self-evaluation process are helpful and informative. For the external reviewers, they provide a very interesting insight into the evaluated programmes.

We were concerned about several aspects of the panel visit. In particular, we would have preferred to have a greater involvement in the evaluation process – both helping to set the agenda for the discussions and putting together conclusions and recommendations. The self-evaluation reports of all institutions should have been made available before the first visit took place. We felt that the visits should have been longer, in order to allow for more internal discussion among the visiting panel. An initial meeting at the start of each visit, to discuss the main issues to be addressed during the visit and to review the interview guide, would have helped to focus the meetings with the institution. At this stage (or even before the visit), each member could have been assigned a particular aspect of the evaluation so that he or she could concentrate on drawing together all the information on that topic.

We strongly recommend that at the end of the visit a further meeting of the panel take place, at which the preliminary conclusions and recommendations of the panel are discussed and formulated. We regard the lack of such a meeting as a serious weakness of the TEEP project. We suggest that at this meeting each member of the panel should present his or her conclusions on their assigned aspect of the review to the rest of the panel for discussion. An initial draft of the report for that visit should be prepared at this stage, and discussed with the self-evaluation group before departure.


4. The Final Reports. We believe that the attempt to find a common language for the description of degree programmes at different institutions in different countries is a good one, and that it has been generally successful. The physics report gives an interesting overview of the way the programmes run in different institutions.

However, we are disappointed that the process of preparing the report on the physics evaluation was not transparent. The way in which the report was prepared has resulted in a rather bland and repetitive report which does not contain as much substance as it could have. We believe that had the experts been more involved in the preparation of the report from the beginning, the result could have been a more substantial report which the experts would have been able to identify with, and which might have been of more use to the institutions. As it stands, we do not feel that we as the experts have true ownership of the report. In addition, we doubt that it will be seen to be of much use to the institutions involved in the evaluation. A summary version, intended for wider distribution, would have had more impact.

In conclusion, we are pleased to have been involved in the TEEP project, and we have all benefited from our involvement in various ways. It was unfortunate that the project was run at a time when some of the institutions were in the middle of a transition to a new Bologna-compliant programme structure, but this was unavoidable. We believe that the project will result in significant sharing of good practice between the evaluated institutions and our own institutions. TEEP was a pilot project. We see some good points in the project, but we do not believe that it can be used for further transnational evaluations without major revision. We have suggested ways of addressing some of the weaknesses we found. We strongly recommend that these suggestions be adopted in any future project along the lines of TEEP.


7 Appendices

Appendix A

History

History programmes:
• University of Bologna, Italy
• University Pierre-Mendès-France, Grenoble, France
• University of Coimbra, Portugal
• University of Aberdeen, Scotland (UK)
• University of Latvia, Latvia

Expert panel:
• Dr Colin Brooks, School of English & American Studies, University of Sussex
• Professor Steven Ellis, Department of History, National University of Ireland
• Dr Raphaela Averkorn, Universität Hannover
• Dr Taina Syrjämaa, Department of History, University of Turku
• Professor Tity de Vries, History Department, University of Groningen
• Professor Juan Pan-Montojo, Departamento de Historia Contemporánea, Universidad Autónoma de Madrid
• Professor Roumen Genov, University of Sofia

Physics

Physics programmes:
• Vienna University of Technology, Austria
• Warsaw University, Poland
• Paul Sabatier University, Toulouse, France
• University of Rome La Sapienza, Italy
• Copenhagen University, Denmark

Expert panel:
• Chairman: Professor David W. Hughes, University of Sheffield, Department of Physics and Astronomy
• Vice-Chairman: Professor Richard Thompson, Imperial College London, Department of Physics
• Professor Christoph Bargholtz, Stockholm University, Department of Physics
• Professor Gintaras Dikcius, Faculty of Physics, Vilnius University (Vilniaus Universitetas)
• Prof. Dr. Ramon Pascual, Universitat Autònoma de Barcelona, Departament de Física
• Clemens L.M. Pouw, Director of the Institute of Physics Education, University of Twente, Department of Applied Physics
• Professor Peter U. Sauer, University of Hanover, Institute for Theoretical Physics
• Dr. Frank Witte, Manager of the Master's programmes, Department of Physics and Astronomy, Utrecht University

Veterinary Science

Veterinary Science programmes:
• Universitat Autònoma de Barcelona, Catalonia (Spain)
• Szent István University of Budapest, Hungary
• University of Glasgow, Scotland (UK)
• University of Ljubljana, Slovenia

Expert panel:
• Chairman: Prof. Dr. Patrick Benard, Ministère de l’agriculture, de l’alimentation, de la pêche et des affaires rurales
• Vice-Chairman: Prof. Dr. André Parodi (Ecole Nationale Vétérinaire d’Alfort)
• Dr. Francis Anthony BVMS MRCVS (former President of the Federation of Veterinarians of Europe)
• Professor Dr. Jaroslav Hanák (University of Veterinary and Pharmaceutical Sciences Brno, Czech Republic)
• Professor Dr. Jürgen Gropp (Institut für Tierernährung, Ernährungsschäden und Diätetik, Veterinärmedizinische Fakultät Leipzig)
• Professor Dr. Henriette Strom (Royal Veterinary and Agricultural University)

In each site visit a young graduate in Veterinary Science participated:
• Mr. Simon Doherty BVMS MRCVS (veterinary practitioner) (Glasgow)
• Mr. Xavier de Paz (veterinarian at B&M) (Barcelona)
• Ms. Katarina Sveticic (LEK International Pharmaceutical Group) (Ljubljana)
• Ms. Noémi Szilagyi (veterinarian at CEVA Santé Animale) (Budapest)


Appendix B

List of the members of the Project Planning Group, Management Group and Project Group

PROJECT PLANNING GROUP
ENQA Steering Group: Christian Thune (PPG Chairman), András Róna-Tas
European University Association: Leslie Wilson, Andrée Sursock (deputy representative)
National Unions of Students in Europe: Martina Vukasovic, Mads Aspelin (deputy representative)
European Commission: Peter van der Hijden
Secretaries: Kimmo Hämäläinen (ENQA Secretariat); Karl Holm (Finnish Higher Education Evaluation Council, June 2002–September 2002)

MANAGEMENT GROUP
ENQA Steering Group: Christian Thune (MG Chairman), Ton Vroeijenstijn
Project agencies: Nick Harris (Quality Assurance Agency for Higher Education, UK), Dorte Kristoffersen (Danish Evaluation Institute), Gemma Rauret (Agencia per la Qualitat del Sistema Universitaria Catalunya)
Socrates/Erasmus Thematic Networks: Katherine Isaacs (CLIOHNET – European History Network), Hendrik Ferdinande (EUPEN – European Physics Education Network), Tito Horacio Fernandes (EAEVE – Interaction and Co-operation in European Veterinary Education)
European Commission: Peter van der Hijden
Secretaries: Tine Holm (Danish Evaluation Institute); Kimmo Hämäläinen (ENQA Secretariat); Karl Holm (Finnish Higher Education Evaluation Council, June 2002–September 2002)

PROJECT GROUP
Project agencies: Nick Harris (Quality Assurance Agency for Higher Education, UK), Fiona Crozier (Quality Assurance Agency for Higher Education, UK), Dorte Kristoffersen (Danish Evaluation Institute), Josef Grifoll (Agencia per la Qualitat del Sistema Universitaria Catalunya)
Secretary: Tine Holm (Danish Evaluation Institute)