
Options for Evaluating Research: Inputs, Outputs, Outcomes

Michele Garfinkel, Manager, Science Policy Programme

ICSTI ITOC Workshop, 19 January 2015, Berlin

Today’s talk

• About EMBO

• A policy view of research assessment

• Stakeholder roles

About EMBO

• European Molecular Biology Organization (Maria Leptin, Director)
• Founded 1964, Heidelberg, DE
• Funded by the European Molecular Biology Conference
  – 27 Member States
  – 3 cooperation agreements
• Advancing policies for a world-class European research environment
• Governance
• Three main areas: biotechnology, responsible conduct of research, scientific publishing
  – Technology assessment
• Scientific publishing
  – Open access
  – Data
  – Responsibilities of editors, administrators, authors

Science Policy Programme

Scientific publishing

The publication of scientific information is intended to move science forward. More specifically, the act of publishing is a quid pro quo in which authors receive credit and acknowledgment in exchange for disclosure of their scientific findings.

Journal name as proxy for quality

• Journal Impact Factor: a librarian’s number
• The concern is not use, but misuse
  – Research assessment
  – “JIF 38.597: a subscription for the price of the IF”
• Why has this been adopted for research assessment?
  – Cross-disciplinary
  – Intuitive and reflective
  – Prospective
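For reference, the two-year Journal Impact Factor for a year Y is a journal-level average:

  JIF(Y) = (citations received in year Y by items published in Y−1 and Y−2) / (citable items published in Y−1 and Y−2)

A figure such as 38.597 therefore describes a journal’s average citation rate, not the quality or impact of any individual article in it.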

Research assessment is an ecosystem

• Funders
• Researchers
• Journals
• Other assessors?

What DORA sets out

• Main recommendation: Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions

• Implementation?

What DORA sets out

• Research institutions and funding agencies: be clear on evaluation criteria and consider all contributions

• Publishers: do not use JIF as a marketing tool, make more article level metrics available, make all reference lists open, remove limits on reference list length

What DORA sets out

• Metrics suppliers: provide methodology and data in a useful form, account for variation in article types (reviews v. research articles)

• Researchers: as assessors, review for scientific content; as authors, cite appropriate (primary) literature; challenge bad practices

What DORA does not say

• Metrics-based research assessment is wrong
• JIF is flawed for assessing journals
• Citations are a flawed metric
• There is a simple alternative
• Publishers are to blame
• Thomson Reuters is to blame

What DORA does not say

• Metrics-based research assessment is wrong
• JIF is flawed for assessing journals
• Citations are a flawed metric
• There is a simple alternative
• Publishers are to blame
• X is to blame

Altmetric Score

• More institutions and funders emphasizing biosketches and 'select your 5 best papers' strategies over IF.

• Constructive discussions with Thomson Reuters. More interest in dialogue and a willingness to improve the JIF as a metric

• Competition is good for everyone

Incremental advances

• Engagement with funders

• Engaging additional research communities

• Study national/regional variations

• Editorials forthcoming
  – Key point: better analyses needed

• Policy analysis
  – Implementation and governance issues, metrics, stakeholders

Incremental advances

• This is not (just) about overworked or lazy promotion committees and rapacious journals

• The reward system in science is (becoming) warped

• Resources for thorough evaluation are not available

• Journal articles have become the currency of rewards rather than a contribution to knowledge

It’s the system (?)

• Researchers

• Publishers

• Research administrators

• Funders

• Metrics researchers

• Metrics providers

• Decision-makers

Research Assessment: Stakeholders

• We are great at measuring inputs (funding, numbers of students)

• We are good at measuring outputs (numbers of papers, some impact measures)

• Outcomes measurements are a problem

What should we be assessing?

• Papers
  – And how they are discovered?

• Data
  – And how they are discovered?

• Reviewing?
• Teaching?
• Committee work?
• Responsible conduct?

What should we be assessing?

• Workshops
  – Governance issues
  – Stakeholders

• Engagement with funders

Ongoing work
