Evaluation Practices and Methodologies: Lessons for University Ranking

By Bertrand Bellon

    1. Introduction

Hierarchical ranking is the most common and simplest instrument of comparison between discrete indicators. It is the quickest way to obtain a comparison between competing entities, as long as objectives, rules of behavior and the relevant measurement tools are shared within a community (a school class, an athletic group, a business, a technology measured by marks, speed, profit, financial assets, etc.). As an instrument of evaluation, ranking can be applied to almost any criterion. From school marks to the book of records (which holds attention on the individual who first performed record figures), ranking is used everywhere as a means of measurement and comparison. How can one improve this unavoidable measure? If one agrees with the choice of data, ranking presents no difficulties except the quality of data collection. According to a criterion or a set of criteria, the researcher ranks an entity number one until another entity surpasses it under the accepted criteria. This process has accelerated within an environment marked by increasingly open societies and expanding economies, providing ranking with a new era of development.
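
As a minimal illustration of this mechanism, the following sketch (with hypothetical entity names and scores) orders competing entities by a single shared criterion and assigns ranks:

```python
# Minimal sketch of hierarchical ranking: order entities by one shared,
# agreed-upon criterion. Entity names and scores are hypothetical.
scores = {
    "University A": 72.5,  # value of the agreed criterion (e.g., citations per staff)
    "University B": 91.0,
    "University C": 64.3,
}

# The entity with the highest score is ranked number one until another
# entity surpasses it under the same criterion.
ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
for rank, (name, score) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {score}")
```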

Even so, ranking appears highly problematic when dealing with complex and intangible goods such as knowledge, science, technology and innovation. In the case of the production of universities, simple criteria do not apply, due to the high level of complexity (which is the case in most dimensions of the Social Sciences). Ranking may help at first in making crude distinctions, but it immediately becomes a limited instrument, for there is no unique best way to apply it to any human activity.

Given the fact that there are many possibilities to improve the ranking process within its own rules and limits, this chapter intends to draw from the methodology of evaluation several elements with which to improve the ranking of world-class universities. The author will begin with the extension of needs regarding a better understanding of university structures and strategies in the present times in Section 2. He will then underline the diversity of the objectives of evaluation, comparing them to the simpler, thus easier to understand, objectives of ranking in Section 3. He will then recall a selected panel of indicators which can be managed within an evaluation process in Section 4. Section 5 will draw lessons from evaluation indicators to improve the ranking activity. In conclusion, the author will revisit a few core questions about the goals of ranking and evaluation.

    2. The Increasing Need for a Better Characterization of Universities

In an open economy and society, the characterization of academic activity and performance is not only a concern for transversal authorities [for years, the OECD has published indicators on education and research, and UNESCO has produced an important study on performance indicators and universities (Fielden and Abercromby, 2000)] but also an increasing need for each individual university. Characterization is thereby jointly related to measures of absolute excellence and toward self-improvement within each specific context. A better understanding of a university's standpoint, better management of the given assets, and better efficacy and output are thus mandatory, both for the locally embedded and for the world-class university.

    Yet, characterization raises two different questions:

how well do universities perform when their goals and means are taken into account?; and,

how is a particular university better, equivalent or worse than its competitors?

Interestingly, however, these questions are not the same for everybody, for they inevitably differ according to the values of the universities' stakeholders.

    2.1. External stakeholders

Universities gather a wide variety of stakeholders (internal and external) who are increasingly active and concerned with the way universities are managed and with their results. As these partners become drivers, their requirements differ widely from one to the other. That is to say:

Public authorities involved in university support are increasingly concerned with the use of public money. The main share of universities' budgets still depends mainly on public decisions (in coherence with Adam Smith's and Alfred Marshall's theory of the external effects attributed to knowledge and education). In a period of relative shortage of public budgets (due to increasing competition between states, to non-interventionist ideology and to cuts in budget deficits), an increasing responsibility is put on public project managers, with more attention being paid to results (and consequently to the universities' management mode).

Taxpayers are increasingly reactive to the way their money is used by public as well as private research institutions. This often justifies political and public financial short-term views, in contrast to the long-term dimension of the research process and the complex articulation between Fundamental and Applied Sciences.

Universities have become increasingly decisive tools for economic competitiveness, knowledge and innovation. This has led industries to be directly concerned with university processes (e.g., hiring skilled students and benefiting as directly as possible from new research). This concerns not only high-tech industries, but also every mid-tech and low-tech business that is involved in innovation and in the increasing use of general purpose technologies (Helpman, 1998).

Finally, journalists and other opinion makers are very active in universities' visibility. They create and convey the images given to people as proven reality, emphasizing both fictive and real strengths and weaknesses of universities.

So, one can well see that universities have become increasingly in debt to, or at least dependent upon, an increasing number of external partners, such as taxpayers, government administration and politicians, national and international organizations, business managers, journalists, as well as foundations and NGOs, etc. For various reasons, these external stakeholders focus on the final outputs of universities. At best, they require information concerning the relation between material and labor inputs (what they have paid for) and output or production. Thus, external stakeholders are largely unconcerned with the two central processes of university activity, i.e., the production of new knowledge, and the teaching-learning process between professors and students. Indeed, in most cases, the university remains a mysterious black box to them, a vision reinforced by the very complexity of these two intangible, ambiguous (and therefore hard-to-evaluate) production processes. Hence, the complex problems of education, learning, researching, and governing these institutions are left to specialists.

    2.2. Internal interest and need for self-evaluation of universities

External interests are not the university's only partners, however. University managers, students, professors, researchers and administrative staff are the other major internal partners who come to the institution with their specific interests and objectives.

Students also participate in the openness of economies. This is done by their shopping among universities worldwide, according to their own objectives, capabilities and means. Students might choose a university because it is nearest to their home but, more and more often, they will make their choices according to institutions' and diplomas' fame, given that their main concern is to increase their chances of finding rewarding jobs after graduation. They are found to be more discriminating between foreign universities than between their own country's universities, which increases the artificial differences carried by reputation and image, as compared to real relative capabilities.

Researchers and professors tend to file multiple job applications simultaneously among diverse universities, looking for the most famous one or, barring that, the one that provides them the best facilities for research and teaching. Quality-of-life issues, in their various dimensions, from day-to-day particulars to lifelong career prospects, are the main determinants of their final choice.

Working amid such driver behaviors, university managers carry the responsibility of achieving the optimum of production and productivity out of the two above-mentioned groups, by building a working coherence from heterogeneous and highly individualistic behaviors.

Each internal partner is thus a stakeholder, carrying forward his or her own objectives, governed, in part, by an interfacing of personal and social opinions and abilities of self and others. This makes the university resemble a network of competing interests. In this regard, the ultimate responsibility of its management is to provide a minimum degree of satisfaction to each partner. Therefore, universities have a great need for finer abilities of inward understanding and evaluation.

It is known that academic ranking has different meanings according to who is looking at it. In this context it is not surprising that university managers' interest in academic ranking has been two-fold, i.e., fostering better management of universities and consolidating a new field of research about the production and diffusion of knowledge. Other objectives can be added to these, each one bringing its own consequences to bear upon the work to be done, but the ranking-user issue remains the central one. As such, it will have effects on the whole process of academic measurement, including the choice of indicators and methods of data collection.

    2.3. The Multi-helix model

The new exigency of information and control, initiated from the double inside-outside stakeholder demand, is strengthened by the increasing interest of researchers in Science and Technology production and in learning activities and processes. This interest is situated at the intersection of four vectors.

A growing interest in macro studies, that is, making large comparisons of data and/or providing general historical perspectives of trends (Mowery and Sampat, 2004; Martin and Etzkowitz, 2000). Many trend studies are based on historical cases of specific universities (Jacob, Lundqvist and Hellsmark, 2003).

A renewed attention to excellence among competing universities, which can also be associated with scientometric benchmarking and patent analysis (Noyons et al., 2003), as well as with studies in the United States of America, Canada and Europe (Balconi, Borghini and Moisello, 2003; Carayol and Matt, 2004; Azagra Caro et al., 2001).

Emerging questions on the strategies of universities (European Commission, 2003) and related issues, including the governance of universities (Musselin and Mignot-Gérard, 2004; Reale and Potì, 2003) and the organization of research activities (laboratories and research centres versus teaching departments, interdisciplinary versus disciplinary organizations, allocation and rewarding mechanisms, articulation between teaching and research positions, role of research in career stages, etc.).

Finally, every public institution involved in R&D and education is increasingly interested in studies on the production process of knowledge. Regional observatories have therefore been created as instruments for the orientation of funding decisions (as has been done at the European, national and local levels, even including medium-sized cities).

This representation is currently enlarged with the relations developed between the university and the public at large, and with the multiplicity of organizations that belong neither to government nor to business (e.g., NGOs, international organizations, foundations, multilateral, European and regional entities, cities, etc.).

Except for the hardware necessary to conduct research, both academic inputs and outputs are intangibles. In consequence, only a small part of such intangibles is identified, and very limited instruments exist to measure them (Cañibano and Sánchez, 2004). Furthermore, research in such a science requires multidisciplinary work: Sociology, Economics, Science Policy, Management, etc. Yet, recent theoretical developments have brought some interesting benefits to this field of study. Partha Dasgupta and Paul David (1994) have suggested a framework for a new Economics of Science, Michael Gibbons (2004) has identified the 'Mode 2' concept of research, and Henry Etzkowitz and Loet Leydesdorff (1997) have popularized the 'Triple Helix' concept as a way to see government, academia and industry as parts of one structure. In sum, the complex relation system (University-Industry-Government) increasingly provides 'the knowledge infrastructure of society' (Etzkowitz and Leydesdorff, 2000, p. 1). The model verifies that: 1) the relationships among universities, industry and government are changing; and 2) there are internal transformations in each of these individual sectors as well. Consequently, universities are not just teaching or research institutions, but combine teaching, research and service toward and for society. In other words, within a knowledge-based economy the noted triple helix model turns into a multi-helix one, with the main function being given over to universities. It is this growing sphere of social function and responsibility that explains the growing pressure for an accounting of the resources employed and deployed by universities, yet universities are given no unique set of criteria by which to measure their performances.


    Figure 1. The Multi-helix Model

3. Ranking versus Evaluation Processes

In this context, evaluation processes will take different forms and include different objectives according to the different problems and different missions of institutions. These can be distinguished as follows:

evaluation (to fix the value, and measure the position regarding objectives or partners);

monitoring (to verify the process of the activity; to admonish, alert and remind);

control (to verify an experiment by duplication of it and comparison);

accountability (to be able to provide explanations to stakeholders for actions taken);

ranking (to put individual elements in order and relative position; to classify according to certain criteria).

Complicating matters, however, is the fact that the field of the university is composed of ideas, knowledge, information, communication, etc., which are typically unique, non-positional and, hence, non-rival goods.

3.1. Taking into account the diversity of higher educational missions

Because universities deal with the creation and the diffusion of knowledge, the varieties of their missions are endless. These composite varieties are the joint result of multiple knowledge characteristics and of the variety of each university's stakeholders and their concerns. The first partner will focus on the ability to train wider numbers of students; the second on increasing the international research network; the third will consider it a priority to produce Nobel Prize or Fields Medal winners, etc. However, whatever the diversity, every stakeholding group will be concerned with gaining better recognition and visibility for its university.

In addition, university functions are ever growing in diversity. This is due to the enlargement of the boundaries of scientific thought well into the area beyond the laboratory, thus facing increasing pressures to introduce new applied technologies into day-to-day life (i.e., into the production of goods and services). On a synthetic level, a university can be characterized as having a double mission of training (basic and continuing education) and of researching (production and diffusion of new knowledge). Beyond these, a 'third mission' of universities (SPRU, 2002), that of providing services to society, is growing in importance, with broad socio-economic impacts encompassing both profit and not-for-profit output.

3.2. Evaluation as a starting point

When using different individual evaluations to compare universities, researchers face a strong difficulty in agreeing upon indicators and in proceeding to efficient benchmarking. At best, universities will be comparable when they share similar goals and benefit from similar means, which is very rarely the case. Yet, one can consider the ongoing processes of evaluation as trials to identify the useful indicators for a given set of questions and for a given set of universities. The characterization of universities is thus a first step toward the benchmarking of universities worldwide.

    In the evaluation processes, various types of practices and meanings can be considered:

    discipline-based versus institution-based evaluation;

    inputs-based versus outputs-based evaluation;

    internal versus external evaluation;

    qualitative versus quantitative evaluation.

3.3. The ranking process

The worldwide liberalization of markets and societies has created a new global competition among universities. When considering research and teaching, the universities considered the best will attract the more talented students; attracting the best students, they will thus be able to reinforce their capabilities for autonomous selection processes for their own benefit. Academic ranking intends to provide means of comparison and identification between universities according to their academic or research performance. Yet, the result will be twofold: on the one side, an increasing need for worldwide universities of excellence; on the other side, an increasing need also for local universities that will be specialized in providing college-level (rather than doctoral-level) training, including a strong commitment to regional issues and development, corresponding to the eventual creation of disciplinary niches for research at a level of excellence.

There is a gap, and often an unbridgeable one, between evaluation processes and ranking processes. As far as this work is concerned, it will focus on ranking considerations. Given that ranking must be based upon incontestable (or at least objective) indicators, the objective of this author's contribution is to take into consideration a set of experiences growing from specific evaluation processes toward the methodology of benefit-ranking production.


4. Evaluation Indicators

This section will present data and indicators commonly used for evaluation, the objective being to deduce a few especially well-grounded data that can be adapted and used for general ranking processes. In so doing, however, it is vital to keep in mind that ranking on a worldwide scale induces very strict constraints that limit the number of possible indicators that can be employed. Therefore, indicators will be limited to those that are: already existing, comparable, and easy to collect.

This explains why the selection of indicators will be very limited. This brings new credit to some very restrictive measures (such as the Nobel Prize award) due to their worldwide recognition and already existing status.

4.1. Criteria for data collection

Criteria for data collection within an evaluation process have a much wider basis. Data are primarily selected at the level of each institution, for its specific purposes. The objective is to evaluate the variety and weight of universities' inputs and outputs, drawing out the relations between them. The university evaluation compares its production with its own goals and means, as opposed to those of its counterparts. An important share of the information process is done at the level of the university itself, allowing limited comparisons with other partners. The main objective is to identify indicators that represent the most complete range of intellectual activity: from the production of knowledge to its use.

Evaluation indicators must be both feasible and useful. For this, they must answer a set of criteria. Below are the criteria adopted by a European project, MERITUM, which agreed upon a set of characteristics leading to a set of quantitative indicators (MERITUM, 2002).

Figure 2. Characteristics required for evaluation indicators

Useful
    Relevant (Significant, Understandable, Timely)
    Comparable
    Reliable (Objective, Truthful, Verifiable)
    Feasible


The degree of fulfillment of these eleven characteristics determines the degree of quality of the overall process. The details of each of these characteristics are:

useful: allows decision-making to occur, both by internal and external users;

relevant: provides information that can modify or affirm the expectations of decision-makers. In such cases the information should be:

significant: related to issues critical for universities;

understandable: presented in a way that it can easily be understood by potential users;

timely: available when it is required for analysis, comparison or decision-making purposes.

Turning to the development of specific indicators, they should be comparable, that is, indicators should follow criteria generally accepted by all implicated organizations in order to allow for comparative analysis and benchmarking; and reliable, that is, users need to be able to trust them. To meet these criteria, indicators are required to be:

objective: the value is not affected by any bias arising from the interests of the parties involved in the preparation of the information;

truthful: the information reflects the real situation;

verifiable: it is possible to assess the credibility of the information provided.

Finally, the calculation of all indicators should be cost-efficient, or feasible. That is to say, the information required for the proposed indicator and its computation should be easily obtained from the university's information system, or the cost of modifying those systems in order to obtain the required information should be lower than the benefits (whether private or social) arising from the use of the indicator.
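
This checklist lends itself to a simple screening routine. The following sketch is illustrative only (not part of the MERITUM guidelines); the class name, fields and figures are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical encoding of the MERITUM-style checklist; names are invented
# for illustration, not taken from the guidelines.
@dataclass
class IndicatorAssessment:
    name: str
    relevant: bool           # significant, understandable and timely
    comparable: bool         # follows generally accepted criteria
    reliable: bool           # objective, truthful and verifiable
    collection_cost: float   # cost of obtaining and computing the data
    expected_benefit: float  # private or social benefit of using it

    def feasible(self) -> bool:
        # The cost of obtaining the information must stay below its benefits.
        return self.collection_cost < self.expected_benefit

    def useful(self) -> bool:
        # An indicator is retained only if every criterion holds.
        return self.relevant and self.comparable and self.reliable and self.feasible()

candidate = IndicatorAssessment(
    name="PhD graduates working in industry",
    relevant=True, comparable=True, reliable=True,
    collection_cost=2.0, expected_benefit=5.0,  # arbitrary illustrative units
)
print(candidate.useful())  # True
```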


4.2. A matrix of indicators

The European project on the Observatory of European University (OEU) has developed a framework to characterize universities' research activities (at the moment, it does not include teaching activities) (see Figure 3). The result is a two-dimensional matrix, devoted to:

    characterizing the status of university research management;

    identifying the best performing universities; and,

    comparing the settings within which universities operate.

Figure 3. Observatory of European University evaluation matrix

Tools (thematic aspects, in columns): Funding | Human resources | Academic outcomes | Third mission | Governance

Objectives (transversal issues, in rows): Attractiveness | Autonomy | Strategic capabilities | Differentiation profile | Territorial embedding

Source: Observatory of European University: PRIME Network.
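
To make the two-dimensional structure concrete, the following sketch (hypothetical; not OEU code) represents the matrix as a nested mapping in which each cell collects the indicators for one theme/issue pair:

```python
# Hypothetical sketch (not OEU code) of the evaluation matrix as a nested
# mapping: one cell per (theme, transversal issue) pair, holding indicators.
themes = ["funding", "human_resources", "academic_outcomes",
          "third_mission", "governance"]
issues = ["attractiveness", "autonomy", "strategic_capabilities",
          "differentiation_profile", "territorial_embedding"]

matrix: dict[str, dict[str, list[str]]] = {
    theme: {issue: [] for issue in issues} for theme in themes
}

# Example cells, with invented indicator names:
matrix["funding"]["attractiveness"].append("share of competitive external grants")
matrix["human_resources"]["attractiveness"].append("ratio of visiting and foreign fellows")

print(matrix["funding"]["attractiveness"])
```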

The matrix and its elements are as follows:

The first dimension of the matrix analyses thematic aspects of university management. The OEU research has considered five themes herein: the first two representing inputs, the next two representing outputs, and the fifth representing the governance of the institution:

Funding: includes all budget elements, both revenues and expenses (total budget, budget structure, sources of funding, rules for funding and for management);

Human Resources: includes professors, researchers, research engineers and administrative staff, plus PhDs and post-docs (number, distribution, functions between research, teaching and management, staff turnover, and visiting and foreign fellows). Human resources must be considered both as labor stocks (numbers of people) and as labor flows (human flows, mobility);

Academic Outcomes: includes articles and books, other academic publications, citations, and the knowledge embodied in PhDs being trained through research activities;

Third Mission: (the university third mission is noted as an addition to the two other traditional university missions, teaching and research) concerns the service outreach linkages between the university and its non-academic partners, e.g., industry, public authorities, international organizations, NGOs and the public-at-large (covering activities such as employment of alumni, patents and licenses, spin-offs and job creation, support of public policy, consultancy, and promotion and diffusion of Science and Research activities);

Governance: includes the process by which the university converts its inputs (funding and human resources) into research outputs (academic outcomes and third mission). It concerns the management of institutions, both from above the university (as in its manner of relations with government and other finance providers) and from within the university.

The second dimension of the matrix deals with transversal issues that can be applied to each thematic category, identifying or measuring the capabilities of the university regarding its various stakeholders. The OEU research team has considered five transversal issues:

Attractiveness: each university's capacity to attract different resources (money, people, equipment, collaboration, etc.) within a context of scarcity.

Autonomy: measures each university's margin of maneuver, formally defined as the limits, established by external partners (mainly government and other finance providers), to which a university must conform.

Strategic Capabilities: indicates each university's actual ability to implement its strategic choices.


Differentiation Profile: the main features of each university that distinguish it from other strategic actors (competing universities and other research organizations), such as its degree of specialization and degree of interdisciplinarity.

Territorial Embedding: the geographical distribution of each university's involvements, contacts, collaborations and so on within a defined locale, i.e., a measure of the territorial utility of the university's activity.

In the actual process of using the matrix, however, many adjustments have to be made, mainly to adjust complexity and feasibility. One of the most complex examples is the service or 'third mission' dimension, for it requires adjusting business-type dimensions (intellectual property, contracts with industry, spin-offs, etc.) to social and policy dimensions (public understanding of science, involvement in social and cultural life, participation in policy-making, etc.). Thus, the following chart detailing this sphere of activity recalls various dimensions of the previous one, with concise added presentations of relevant data.

    Figure 4. The third mission dimension, eight items for data collection

1. Human resources
Competencies trained through research are transferred to industry (a typical case of embodied knowledge). The essential indicator is: PhD students who work in industry, built upon numbers and ratios. The combination is important, since a ratio of 100 percent (i.e., all working with industry) with only one PhD delivered might be far less relevant for industry than a ratio of 25 percent based on twenty PhD students (a sketch of such composite indicators follows this chart).

2. Ownership
Research leading to publications or patents, with a changing balance between them. The key indicators are: patent inventors (number and ratio) and returns to the university (via licenses from patents, copyrights, etc., calculated as a total amount and as a ratio to non-public resources). Other complementary indicators reflect the proactive attitude of the university (existence of a patent office, number of patents taken by the university).

3. Spin-offs
The relevant indicators here are composite ones, that is to say they take into consideration the three following entries:

the number of incorporated firms;

the number of permanent staff involved;

more qualitative involvement, such as: the existence of support staff funded by the university; the presence of business incubators; incentives for creation; funds for seed capital; strategic alliances with venture capital firms, etc.

4. Contracts with industry
The traditional indicators are the number of contracts (some prefer the number of partners, which is more difficult to assess) and the total financial assets generated, the ratio of which can be calculated vis-à-vis external resources.

5. Contracts with public bodies
With this axis, the societal dimension is entered. The key indicators here are contracts asked for by a public body in order to solve problems (versus academic research funding). It is important here to differentiate local (or nearby-environment) contracts from others (mostly national in large countries, possibly quite international in small countries). Elements for analysis are the same as for industrial contracts, i.e., number, volume and ratio.

6. Participation in policy-making
Qualitative context: build a composite index based on involvement in certain activities, with yes/no entries and measures of importance included. The list of activities to consider includes: norms/standards/regulation committees, expertise, and formalized public debates.

7. Involvement in social and cultural life
Qualitative context: a composite index concerning specific investments, the existence of dedicated research teams, or involvement in specific cultural and social developments.

8. Promoting the public's understanding of science
Qualitative context: another composite index built on specific events to promote science and on the classical involvement of researchers in dissemination and other forms of public understanding of science, including articles, TV appearances, books, films, etc.

Source: Observatory of European University: PRIME Network.
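
As referenced under item 1 above, the following purely illustrative sketch (not OEU methodology; function names, weighting scheme and figures are hypothetical) combines an absolute count with a ratio, and builds a yes/no composite index of the kind suggested for participation in policy-making:

```python
# Purely illustrative composite indicators for the third-mission data above;
# the weighting scheme and all figures are hypothetical.

def phd_industry_indicator(phds_in_industry: int, phds_delivered: int) -> float:
    """Combine the ratio with the absolute count, so that 25 percent of
    twenty PhDs counts for more than 100 percent of a single PhD."""
    if phds_delivered == 0:
        return 0.0
    ratio = phds_in_industry / phds_delivered
    return ratio * phds_in_industry  # weight the ratio by the absolute count

print(phd_industry_indicator(1, 1))   # 1.0  (100% of one PhD)
print(phd_industry_indicator(5, 20))  # 1.25 (25% of twenty PhDs scores higher)

# Yes/no composite index for participation in policy-making,
# with hypothetical importance weights per activity.
weights = {"standards committees": 0.5, "expertise": 0.3, "public debates": 0.2}
involved = {"standards committees": True, "expertise": False, "public debates": True}
index = sum(w for activity, w in weights.items() if involved[activity])
print(index)  # 0.7
```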

5. Lessons Drawn from Evaluation Processes

Based on the author's considerations of the evaluation processes, this fifth section will suggest ideas to improve academic ranking processes, not with the aim of creating new conclusions, but to provide new elements for consideration in the ranking-versus-evaluation debate.

The multiplication of evaluation processes facilitates new competition between universities, greatly modifying the existing dynamics of science. Researchers are now faced with a multiple model, which challenges 'big science' (and the Nobel Prizes it brings) with new forms of co-operative science and more internally driven research strategies. This new landscape, with a wider variety of dynamic models, must now be taken into account.

At this point, strong arguments exist to advocate a radical divergence between evaluation (and its characterization of universities) and ranking. On a strictly critical approach, there exist, on one side, ranking processes limited to structurally crude bibliometric approaches, based on the smallest, most visible parts of the output of the complex process of knowledge. The risk is that such a limited focus will lead to a caricatured vision of university missions, providing almost no possibility to draw useful relations between input and output. The existing set of indicators for the Jiao Tong University ranking is:

Criterion: Indicator

Quality of Education: alumni of an institution winning Nobel Prizes and Fields Medals.
Quality of Faculty: staff of an institution winning Nobel Prizes and Fields Medals; highly cited researchers in 21 broad subject categories.
Research Output: articles published in Nature and Science; articles in the Science Citation Index-Expanded and Social Science Citation Index.
Size of Institution: academic performance with respect to the size of an institution.
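
For illustration, a ranking of this kind reduces to a weighted sum of normalized indicator scores. The sketch below uses hypothetical weights and invented data rather than the actual Jiao Tong weighting:

```python
# Hypothetical weighted-sum ranking over Jiao Tong-style indicators.
# Weights and scores are invented; each indicator is assumed to be
# already normalized to a 0-100 scale.
weights = {
    "alumni_awards": 0.10, "staff_awards": 0.20, "highly_cited": 0.20,
    "nature_science": 0.20, "indexed_articles": 0.20, "per_capita": 0.10,
}

universities = {
    "University X": {"alumni_awards": 40, "staff_awards": 55, "highly_cited": 60,
                     "nature_science": 70, "indexed_articles": 80, "per_capita": 50},
    "University Y": {"alumni_awards": 90, "staff_awards": 85, "highly_cited": 70,
                     "nature_science": 60, "indexed_articles": 75, "per_capita": 65},
}

def composite(scores: dict[str, float]) -> float:
    # Weighted sum of the normalized indicator scores.
    return sum(weights[k] * scores[k] for k in weights)

for name, scores in sorted(universities.items(),
                           key=lambda item: composite(item[1]), reverse=True):
    print(f"{name}: {composite(scores):.1f}")
```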

On the other side, there exist evaluation processes, which are over-complex, too qualitative and subjective, appearing restricted to internal use by each individually evaluated university. External comparisons are thus limited to the benchmarking of specific functions between selected universities. At first glance, therefore, it seems evaluation processes may not be well adapted to making global comparisons between universities.

From this perspective, ranking and evaluation processes stand opposed to one another. Yet, from the perspective of their objectives, they seem very close. That is, both aim to meet the need for better efficiency via a better management of university missions.

The ways and means for institutional ranking have already progressed, but they can still be greatly improved. In this respect, ranking can greatly benefit from certain indicators used in evaluation, but only when they can be generalized. The question now remaining is how best to identify relevant indicators and to discover the best way to produce them within the strict governing limit of means.

5.1. Ranking must reflect a minimum of diversity

The Nobel Prize model's main limitation is its strong reference to the 'one best way' model, which is conceptually inadequate considering the actual worldwide competition of universities, based on differentiation of competencies and on competition in a limited number of specific fields (e.g., Nano- and Macro-Sciences). At this point, a preliminary debate would be needed to clarify the objectives of academic ranking by moving away from a monolithic vision of world-class universities toward a set of criteria adequate to measure the diverse strategic objectives of universities with differentiated development trajectories.

Moreover, ranking processes must also take into account the variety of meanings given to each indicator. An indicator may be efficient in one case and totally misused in another. Thus it can rightly be argued that:

What is useful or relevant for one university, in one scientific field, is not systematically useful or relevant for another university specialized in other domains, with other constraints and objectives; and,

What is useful or relevant for a public authority or other stakeholders is not systematically useful or relevant for the university itself.

The choice of a universal set of defining characteristics of 'excellence' will nonetheless end with the splitting of universities into different categories, as is the case for any organized championship match.

5.2. The set of characteristics should fulfill input, output and governance indicators

The relative utility or relevance of an indicator is its ability to be used as a tool for university management (finance, governance and work). Indicators must provide access to the university's production spectrum and differentiation profile. The first two improvements concern the discontinuance of some existing indicators and the adoption of more appropriate ones, for example:


Differentiation by discipline or scientific field (including the Social Sciences).

Introduction of significant input data and production of some input/output ratios.

Development of indicators for local embeddedness and global reach (i.e., the local and global impact of universities).

Enlargement towards effective teaching indicators (as compared to research); a sketch of a simple input/output ratio follows this list.
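
As a hypothetical illustration of the input/output ratios proposed above (not a method prescribed by the chapter), an output indicator can be related to the inputs that produced it, field by field:

```python
# Hypothetical input/output ratio: publications per full-time-equivalent
# researcher, computed per field so that disciplines are compared fairly.
fte_researchers = {"physics": 120, "sociology": 45}  # inputs (invented data)
publications = {"physics": 480, "sociology": 90}     # outputs (invented data)

for research_field, fte in fte_researchers.items():
    ratio = publications[research_field] / fte
    print(f"{research_field}: {ratio:.1f} publications per FTE researcher")
```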

5.3. These changes will require specific collections of data: new indicators mean new work

Such a renewal project will demand specific computation of existing information as well as the creation of new information. New methodological work has to be done, in addition to the creation of the normalized measures necessary for the rebuilding of global indicators. For this to happen, effective connections with the OECD are crucial.

5.4. Enrich the debate in order to enrich the process and the result

The debate on academic ranking will grow in importance in the future, and will not be limited to a simple evaluation-ranking dispute. At this point, four questions arise:

What does 'excellence' mean, and what is its impact on research and teaching orientations and activities?

How important is the degree of diversity within globalization (not being limited to the dualistic global/local debate)?

What are the differences and specificities within processes of production, productivity and visibility?

Finally (under a transversal approach), a debate arises questioning the quality of the data themselves and their adaptation to the diversity of legal, financial and administrative structures of the bodies that form universities.


6. Conclusion: The Missing Link

In looking over the ranking-versus-evaluation debate, one factor has come to be seen by the author as central: the impact of the world ranking process upon the development dynamics of universities. He posits this because the vast majority of universities in the world are not, and have no chance to be, listed within any 'world-class' list. They may possess niches of world-class excellence and they may produce excellent output. But, for these universities, the impact of academic ranking is either non-existent or negative (why would a university fight to get in if there is no hope of winning, or even of being visible and respected?).

On the opposite side, the 'elected' universities (those that find themselves within the top 1000, 500 or 100 universities, in one or many different rankings) will incorporate ranking commitment and criteria, both within their daily management and within their long-term development strategy. As a result, they will naturally select the indicators that have already been selected by the ranking producers and will make them mandatory for their component group members (e.g., professors will be pushed to 'publish or perish' even if the resulting research is less than useful). In such cases, artificial ranking criteria become the new rules that will be adopted and enforced by the universities themselves. The movement's ideology and methods are thereby self-reinforced (as in the case of the heavy elemental weight given to the Nature and Science reviews in the Shanghai ranking), with possible negative effects on the generation of new hypotheses and academic fields, on the diversity of supported research, and on interdisciplinary co-operation.

In some cases, academic ranking may have an unexpected structural side effect. If the university becomes the unit of evaluation and of action, the current fragmentation of the French higher education system into universities, Grandes Écoles and specialized research bodies (these may share academic staff and research activities with the university as well as with research bodies such as the CNRS; the ENSCM, for example, has a particular status as an independent body with its own research laboratories) is made to appear completely outdated, whatever its actual rationality. On the other hand, one of the results may be the reinforcement of recent moves to increase the size of universities by merging existing organizations, with limited consideration of their real coherence and synergies.


The evident impact on university management is of important consequence. University activity is increasingly embedded in a multi-actor social space that modifies the governance of research, of innovation and of teaching, taking part in a new dynamic within the public sector. Consequently, institutional ranking processes, along with other tools for unit characterization, may provide original and useful information in the difficult process of university management: to create and consolidate platforms of quantitative data in the act of measuring the multidimensional nature of performance.

Regarding external stakeholders, ranking introduces new rationales for public intervention and for the incorporation of new actors. Considering its implications on policy-making, both for governments and for the universities themselves, ranking opens a whole new field of research. In short, the debate on university ranking (and on differentiating characterizations in general) is just beginning.

References

Azagra Caro, J. M., Fernández de Lucio, I. and A. Gutiérrez Gracia (2001) 'University patent knowledge: the case of the Polytechnic University of Valencia', 78th International Conference AEA, November 22-23, 2001, Brussels, Belgium.

Balconi, M., Borghini, S. and A. Moisello (2003) 'Ivory tower vs. spanning university: il caso dell'Università di Pavia', in Bonaccorsi, A. (ed.), Il sistema della ricerca pubblica in Italia. Milano: Franco Angeli, pp. 133-175.

Carayol, N. and M. Matt (2004) 'Does Research Organization Influence Academic Production? Laboratory Level Evidence from a Large European University', Research Policy 33, pp. 1081-1102.

Dasgupta, P. and P. A. David (1994) 'Towards a New Economics of Science', Research Policy 23(5), pp. 487-521.

Etzkowitz, H. and L. Leydesdorff (2000) 'The dynamics of innovation: from National Systems and "Mode 2" to a Triple Helix of university-industry-government relations', Research Policy 29, pp. 109-123.


Etzkowitz, H. and L. Leydesdorff (1997) Universities in the Global Economy: A Triple Helix of academic-industry-government relations. London: Croom Helm.

European Commission (2003) 'Downsizing And Specializing: The University Model For The 21st Century?', Snap Shots from the Third European Report on Science and Technology Indicators 2003, pp. 1-2.

Fielden, J. and K. Abercromby (2000) 'Accountability and International Co-Operation in the Renewal of Higher Education', in UNESCO Higher Education Indicators Study. UNESCO and ACU-CHEMS.

Gibbons, M. (2004) The New Production of Knowledge. London: Sage.

Helpman, E. (1998) General Purpose Technologies and Economic Growth. Cambridge, MA: Massachusetts Institute of Technology Press.

Jacob, M., Lundqvist, M. and H. Hellsmark (2003) 'Entrepreneurial transformations in the Swedish University system: the case of Chalmers University of Technology', Research Policy 32, pp. 1555-1568.

Martin, B. R. and H. Etzkowitz (2000) 'The Origin and Evolution of the University Species', VEST 13(3-4), pp. 9-34.

MERITUM: Cañibano, L., Sánchez, P., García-Ayuso, M. and C. Chaminade (eds.) (2002) Guidelines for Managing and Reporting on Intangibles: Intellectual Capital Statements. Madrid: Vodafone Foundation.

Mowery, D. C. and B. N. Sampat (2004) 'Universities in national innovation systems', Chapter 8 in J. Fagerberg, D. C. Mowery and R. R. Nelson (eds.), Oxford Handbook of Innovation. Oxford: Oxford University Press.

Musselin, C. and S. Mignot-Gérard (2004) 'Analyse comparative du gouvernement de quatre universités' (Comparative analysis of the government of four universities).

Noyons, E. C. M., Buter, R. K., van Raan, A. F. J., Schmoch, U., Heinze, T., Hinze, S. and R. Rangnow (2003) Mapping Excellence in Science and Technology across Europe: Life Sciences. Report to the European Commission. Leiden University.

Observatory of European University (2005) PRIME Network of Excellence. October.

Reale, E. and B. Potì (2003) 'La ricerca universitaria', in Scarda, A. M. (ed.), Rapporto sul sistema scientifico e tecnologico in Italia. Milano: Angeli, pp. 79-99.
