Challenges and opportunities in research evaluation: toward a better evaluation environment

Prof Sergio Benedetto's keynote address from the ORCID and Casrai joint conference, Barcelona, May 2015



  1. Challenges and opportunities in research evaluation: Toward a better evaluation environment. Sergio Benedetto, Consiglio Direttivo ANVUR, sergio.benedetto@anvur.org. Barcelona, May 18, 2015.
  2. Research evaluation: A taxonomy
  3. Research evaluation
      • Ex ante: before research takes place, to assess its potential relevance, the prospects of success and the appropriateness of its cost
      • Ex post: after research has been concluded, to assess its results in terms of scientific quality and impact
      • Comparative: aimed at defining a ranking of individuals, research groups or HEIs, often within a homogeneous area of research
      • Individual: comparing qualifications against a threshold to decide whether or not to promote individuals or research groups
  4. National research assessments
      • What? Evaluated objects
      • Why? Goals
      • How? Evaluation methodologies
      • When? Continuity, frequency
      • With what consequences? On institutions, on researchers, on society
  5. Research evaluation: What?
  6. Objects of research evaluation
      • The volume of scientific outcomes
      • Their quality
      • Their scientific impact
      • Their impact on the economy, society and/or culture
      • More generally, the so-called "third mission" of HEIs, i.e. their involvement with society at large
  7. Research evaluation: Why?
  8. Goals of comparative research evaluation
      • To inform HEI governing bodies and other stakeholders about the status of national research
      • To help the Ministry of Education and Research (or other national bodies) distribute resources to HEIs
      • To help HEI governing bodies take strategic decisions that improve the quality and effectiveness of research, and guide the internal assignment of resources (positions, funds)
  9. Research evaluation: How?
  10. How? [Flow diagram: a sample of research outcomes is assessed by expert panels according to agreed criteria, through reading combined with bibliometry (informed peer review), external peer review and bibliometry, each path leading to a final evaluation]
  11. Research evaluation: When?
  12. The period of research evaluation
      • RAE-REF in UK: 1986, 1989, 1992, 1996, 2001, 2008, 2014 (intervals of 3, 3, 4, 5, 7 and 6 years)
      • VTR-VQR in Italy: 2006, 2013, 2017 (?) (intervals of 7 and 4 years)
      • ERA in Australia: 2010, 2012, 2015 (intervals of 2 and 3 years)
  13. Research evaluation: With what consequences?
  14. No measuring technique leaves the measured object unaffected, so: what are the intended consequences?
      • Evaluation leads to an improvement in the quality of research (UK: Adams & Gurney, 2010; Australia: Butler, 2003)
      • Evaluation modifies dissemination channels, e.g., it makes the article published in highly ranked journals the main publication outlet (RIN, 2009)
      • Resource distribution to HEIs based on assessment outcomes (UK, Italy, …)
      • Improved HEI infrastructure and archival repositories
      • Enhanced "quality" in recruitment
      • Strategic positioning and more consistent policies
      • Trust from society
  15. No measuring technique leaves the measured object unaffected, so: what could be the risks and unintended consequences?
      • Worse publication practices: excessive segmentation of research results, clinging to the mainstream, safe disciplinary research, citation stacking, coercive citation, …
      • Limitation of research freedom: too much emphasis on accountability
      • Underestimating teaching activity (J. Warner, 1998)
      • Misuse of assessment outcomes: evaluating individuals, applying national-level criteria automatically to local issues
      • …
  16. Research evaluation: The challenges
      • Collecting data and objects:
        – Local (often incompatible) repositories
        – Copyright issues
      • Cleaning data:
        – Human errors in uploading
        – Name ambiguity
        – Duplicated records
      • Connecting data to "owners":
        – Researchers, institutions
      • Transferring data and objects to evaluators (panels, peer reviewers, …):
        – IP protection, "big data" issues
  17. Research evaluation: The challenges
      What to evaluate and how to evaluate strictly depend on:
      • Size: individuals, research groups, departments, institutions
      • Scientific field: hard and life sciences, social sciences and humanities
      • Goal:
        – Performance-based HEI funding (to enhance average or excellent performance?)
        – Improving HEI-industry collaboration
        – Incentivizing the social impact of research
        – …
  18. Research evaluation: Indicators
      • Input indicators measure the resources (human, physical and financial) devoted to research
        – Typical examples are the number of (academic) staff employed, or revenues such as competitive project funding for research
      • Process indicators measure how research is conducted, including its management and evaluation
        – A typical example is the total human resources employed by university departments, offices or affiliated agencies to support and fulfil technology transfer activities
      • Output indicators measure the quantity of research products
        – Typical examples are the number of papers published or the number of PhDs delivered
      • Outcome indicators relate to a level of performance or achievement, for instance the contribution research makes to the advancement of scientific-scholarly knowledge
      • Impact and benefits refer to the contribution of research outcomes to society, culture, the environment and/or the economy
  19. Research evaluation: Outcome indicators
      The quality of a publication is an elusive attribute, measured through proxies:
      • Quantitative: bibliometric indicators
      • Qualitative: peers' opinion
  20. The bibliometric evaluation
      Based on measurable indicators of publication impact:
      • The "quality" (the ex-ante peer review process, the acceptance ratio, …) and the number of citations of the journal (Impact Factor, Eigenfactor, Source Normalized Impact per Paper (SNIP), …)
      • The number of citations of the article
      • The number of citations of the author (h-index and related indicators)
      • Alternative scholarly impact metrics (altmetrics), which cover other aspects of the impact of a work, such as:
        – how many data and knowledge bases refer to it
        – article views
        – article downloads
        – mentions in social media and news media
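The h-index mentioned above has a simple operational definition: an author has index h if h of their papers have each received at least h citations. A minimal sketch, using made-up citation counts (real values would come from a citation database):

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts for one author's papers.
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers have >= 3 citations, but not four with >= 4)
```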
  21. The citational behaviour
      (a) Normative theory of citations
        – Citation as recognition of scientific value (Smith, 1981; Merton, 1988)
      (b) Constructivist social theory of citations
        – Citation as an act of academic deference
        – Citation as an attempt at persuading (Gilbert, 1977)
        – Assertive citation (Moed and Garfield, 2004)
        – Citation as simple discourse articulation (Crossick, 2007)
      "The main point which emerges is that citations stand at the intersection between two systems: a rhetorical (conceptual, cognitive) system, through which scientists try to persuade each other of their knowledge claims; and a reward (recognition, reputation) system, through which credit for achievements is allocated" (Cozzens, 1989)
  22. The bibliometric evaluation
      • The reliability of bibliometric indicators tends to decrease as the size of the sample they are applied to shrinks (institutions, departments, research groups, individuals)
      • Never confuse the impact of journals with the impact of the articles they publish (skewness of the citation distribution)
      • Using a plurality of indicators (e.g., at journal level: IF, Article Influence, Eigenfactor, SJR, SNIP, …) reduces the risk of manipulation: self-citation, citation stacking, …
      • Always normalize within a coherent, uniform scientific area with respect to its traditions of publishing and citing
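To make the last point concrete, here is a minimal sketch of one common normalization approach (not necessarily the one used in any specific exercise): each paper's citation count is divided by the average for its field, so that fields with different citing traditions become comparable. Fields and counts below are invented:

```python
from collections import defaultdict

# Invented papers with their field and raw citation counts.
papers = [
    {"field": "cell biology", "citations": 40},
    {"field": "cell biology", "citations": 10},
    {"field": "mathematics", "citations": 6},
    {"field": "mathematics", "citations": 2},
]

# Average citations per field.
totals, counts = defaultdict(int), defaultdict(int)
for p in papers:
    totals[p["field"]] += p["citations"]
    counts[p["field"]] += 1
field_mean = {f: totals[f] / counts[f] for f in totals}

# Field-normalized score: 1.0 means exactly the field average.
for p in papers:
    score = p["citations"] / field_mean[p["field"]]
    print(f"{p['field']:>12}: {p['citations']:>3} citations -> {score:.2f}")
```

With these invented numbers, the mathematics paper with 6 citations (1.50) compares more favourably to its field than the cell biology paper with 10 citations (0.40), which is exactly what normalizing is meant to expose.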
  23. The bibliometric evaluation
      Pros:
      • Efficient
      • Fast
      • Economic
      • Not intrusive
      • Objective
      • Helps in identifying the origin and impact of scientific theories
      Cons:
      • Different citational behaviours among disciplines and publication types (books vs. articles)
      • Self-citations
      • Database transparency and pitfalls
      • Differences between OA and non-OA journals
      • Language of publication
  24. The bibliometric evaluation
      • The "quality" of a publication cannot be assessed through quantitative measures, just like the beauty of human beings or artworks
      • Can we assess the beauty of Leonardo's Gioconda from the number of tickets sold at the Louvre, or from the average time spent by visitors in front of the painting?
      • These are the arguments of those affirming the supremacy of peer review over bibliometrics
  25. The peer review evaluation
      Is peer review the solution? Citing Richard Horton, editor of The Lancet:
      "The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability—not the validity—of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong"
  26. Bibliometrics and peer review
      In several scientific areas a significant correlation has been found between bibliometric indicators and peer review evaluations:
      • Italian VTR 2001-2003, 9 areas of hard science and economics: high correlation (Franceschet, 2009)
      • Italian VQR 2004-2010, hard and life sciences and economics: higher correlation between bibliometrics and peer review than between the two peer reviews of the same article (Benedetto, 2013)
      • Research Assessment Exercise (RAE) 1992 (UK), genetics, anatomy, archaeology: high correlation (Holmes & Oppenheim, 2001)
      • Research Assessment Exercise (RAE) 2001 (UK), psychology: correlation equal to 0.86 (Smith and Eysenck, 2002)
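The studies cited on this slide typically report a rank correlation between the two methods' scores for the same set of outputs. A minimal sketch of that computation, with invented scores and assuming SciPy is installed:

```python
from scipy.stats import spearmanr

# Invented scores for six articles, evaluated both ways.
bibliometric_scores = [0.9, 0.7, 0.8, 0.3, 0.5, 0.2]  # e.g. normalized citation indicators
peer_review_grades = [4, 3, 4, 1, 2, 2]               # e.g. panel grades on a 1-4 scale

rho, p_value = spearmanr(bibliometric_scores, peer_review_grades)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```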
  27. Challenges in research evaluation in SSH
      Bibliometric indicators, based on indexing international journals (mainly written in English) and on extracting citation indicators, are not reliable in SSH.
      What about peer review? Publication characteristics make peer review even more difficult and less reliable in SSH:
      • Research outcomes are difficult to make objective and comparable
      • Belonging bias (different schools of thought, …)
      • Reduced number of potential peers (niche disciplines, marginal publication language, …)
  28. Challenges in research evaluation in SSH
      Journal classifications in SSH:
      • Who does it?
      • How is it done?
      • How many classes?
      • For what?
      • Four examples:
        – The ranking of the Australian Research Council (2008)
        – The ranking of the European Science Foundation (ERIH project, 2007-2008)
        – The ranking of AERES (2008)
        – The ranking of ANVUR within the National Research Habilitation
  29. Journal classification in SSH
      A tsunami of criticisms…
      • UK historians: "crude and oversimplified"
      • A set of journals classified in the best class (A) requested to be removed from the ESF-ERIH lists
      • Petition to AERES to withdraw the lists: "non-transparent criteria"
      • The ARC journal classification became an electoral issue in Australia, and the new government abandoned it
      • Increased degree of acceptance after initial resistance in Italy
  30. Journal classification in SSH
      • Journal classification in SSH is a rough quantisation of the continuous ranking induced by the impact factor in the hard and life sciences
      • The journal IF is based on the average number of citations received by published articles over a period of time (2 or 5 years): it generates a continuous ranking of journals within a homogeneous scientific area
      • Journal classification in SSH has a similar objective, but needs different classification criteria, mainly of a qualitative nature, and is limited to a small number of classes (typically 2-3): it is a bridge between peer review and bibliometrics
      • Since the number of citations is missing, journal classification cannot fully replace peer review
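As a concrete reference for the second bullet, here is a minimal sketch of the 2-year impact factor calculation: citations received in a given year to items published in the two preceding years, divided by the number of citable items published in those years. All figures are hypothetical:

```python
def two_year_impact_factor(citations_received, items_published, year):
    """Citations in `year` to items from the two previous years, per citable item."""
    previous = (year - 1, year - 2)
    total_citations = sum(citations_received[y] for y in previous)
    total_items = sum(items_published[y] for y in previous)
    return total_citations / total_items

# Hypothetical journal: citations received in 2014 to each year's articles,
# and the number of citable items published in each of those years.
citations_received = {2013: 420, 2012: 380}
items_published = {2013: 150, 2012: 140}
print(round(two_year_impact_factor(citations_received, items_published, 2014), 2))  # -> 2.76
```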
  31. Challenges in research evaluation in SSH
      Different kinds of research outputs beyond journal articles:
      • Books
      • Book chapters
      • Translations
      • Notes to court rulings (for law disciplines)
      • Exhibitions and their catalogues
      • Architectural designs
      • Archaeological excavations
      • Artistic performances
      • …
  32. Challenges in research evaluation in SSH
      Book evaluation:
      • Explore the feasibility of publisher classification (Spain)
      • Use of indicators such as:
        – Reviews in international journals
        – Characteristics of the publishing series:
          • Existence of an editorial board
          • Transparent review procedures governing the decision to publish
          • International diffusion of the publisher's books
          • …
  33. Toward an ideal evaluation environment
      • Avoid global rankings of institutions based on a single score aggregating many different indicators
      • Use a multidimensional approach based on five steps¹:
        – Define the purpose and audience of the research assessment
        – Involve the institutions to be evaluated in step 1
        – Identify the appropriate indicators
        – Perform the assessment
        – Identify the range of actions and decisions to be taken after the assessment
      1. Assessing Europe's university-based research, Final Report of the Expert Group on Assessment of University-based Research, 2010, http://ec.europa.eu/research/science-society/document_library/pdf_06/assessing-europe-university-based-research_en.pdf
  34. Toward an ideal evaluation environment
      • A crucial first step is the availability of a national database of researchers, with the list of their publications and other relevant information (research contracts, awards, editorial responsibilities, …)
      • An excellent example is the "Plataforma Lattes" in Brazil (http://lattes.cnpq.br)
      • A bad example is the "Anagrafe nazionale della ricerca" (ANPRePS) in Italy, prescribed by a law in 2009 and never implemented
      • The publication metadata records should be linked to the PDF of the publications (taking into account copyright issues where relevant)
  35. Toward an ideal evaluation environment
      • All researchers should be uniquely identifiable through a single identifier, linking the researcher to his/her publications and other information
      • Use of the ORCID identifier is one viable solution:
        – Non-profit organisation supported by members (the majority of them non-profit organisations)
        – Free to individuals
        – Growing adoption: Sweden, Finland, Denmark, Norway, UK, Spain, Portugal, Australia
      • Italy recently launched the I.R.ID.E project, aiming at providing an ORCID identifier to 80% of researchers by the end of 2016
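Since the deck recommends ORCID as the single identifier, here is a hedged sketch of how an iD can be resolved to a researcher's works through ORCID's public API. The v3.0 endpoint and the JSON field names used below follow the public documentation at the time of writing and may change; the iD shown is ORCID's own long-standing example record, and the `requests` package is assumed to be installed:

```python
import requests

ORCID_ID = "0000-0002-1825-0097"  # ORCID's public example record, not real researcher data
URL = f"https://pub.orcid.org/v3.0/{ORCID_ID}/works"

# The public API returns a works summary, grouped by external identifiers.
resp = requests.get(URL, headers={"Accept": "application/json"}, timeout=30)
resp.raise_for_status()

for group in resp.json().get("group", []):
    # Each group holds one or more summaries of the same work; take the first.
    summary = group["work-summary"][0]
    title = summary["title"]["title"]["value"]
    pub_date = summary.get("publication-date") or {}
    year = (pub_date.get("year") or {}).get("value", "n.d.")
    print(f"{year}: {title}")
```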
  36. Toward an ideal evaluation environment
      • Local assessments in institutions should be performed more frequently (yearly?) to provide a bridge between national assessments (4-5 year period)
      • Local assessments must consider a wider range of context variables:
        – The critical mass of research groups
        – The strategic promotion of some areas
        – The opening of new research frontiers, e.g., interdisciplinary ones
      • National and local research assessments must be coordinated, to present researchers with a coherent set of goals and incentives
  37. Toward an ideal evaluation environment
      • Research assessments must include an indicator of performance variation over time, so as to reward improvement even when the absolute performance is still poor
      • This implies a certain degree of persistence of the indicators
      • The evaluation of research outputs should use an informed peer review methodology, where the panel in charge acquires information from bibliometrics, expert peers, … to make the final decision
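The slide does not give a formula for the variation indicator; purely as an illustration, one simple option is the relative change in a unit's average normalized score between two assessment rounds, so that a still-weak department that improves is rewarded. Figures are invented:

```python
# Invented average normalized scores (1.0 = national average) in two assessment rounds.
previous_round, current_round = 0.42, 0.55

improvement = (current_round - previous_round) / previous_round
print(f"Relative improvement: {improvement:+.0%}")  # -> +31%, rewarded even though 0.55 is still below average
```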
  38. A few final hints
      • To be effective, research assessment must be a shared experience in goals and methodologies between evaluators and evaluated
      • Evaluation criteria must be known a priori
      • The evaluation results should not be applied to contexts different from the initial ones: outcome evaluations addressed to institutional performance should never be used to assess individuals
      • Performance-based funding should not erode the institutions' survival quota
      • The assessment methodology should not underestimate inter- (multi-) disciplinary research
  39. If you cannot measure it, you cannot improve it. (Lord Kelvin)
      Not everything that can be counted counts, and not everything that counts can be counted. (William B. Cameron, Informal Sociology: A Casual Introduction to Sociological Thinking, 1963)
  40. Thank you
