Universiteit Antwerpen
Conference "New Frontiers in Evaluation", Vienna, April 24th-25th 2006.
Reliability and Comparability of
Peer Review Results
Nadine Rons, Coordinator of Research Evaluations & Policy Studies
Research & Development Department, Vrije Universiteit Brussel
Eric Spruyt, Head of the Research Administration Department
Universiteit Antwerpen
“Three cheers for peers”
  ‘Three cheers for peers’, Editorial, Nature 439, 118 (12 January 2006).
•  "Thanks are due to researchers who act as referees, as editors resolve their often contradictory advice."
•  "Only in a minority of cases does every referee agree ..."
Presentation plan
I.  Validation of results
Reliability & comparability
II.  Material investigated
'Ex post' peer review + citation analysis of teams
III.  Investigation of results
Reliability: inter-peer agreement & different rating habits
Comparability: related concepts & intrinsic characteristics
IV.  Conclusions
Aimed at improved results, a better understanding, choosing the right method
I. Validation of results
1. Reliability
Peer review: principal method to evaluate research quality.
BUT: various kinds of bias & different rating habits.
& Not always feasible to use measures limiting their influence.
⇒  Possible to measure reliability?
2. Comparability
  H F Moed (2005), 'Citation Analysis in Research Evaluation', chapter
18: 'Peer Review and the Validity of Citation Analysis', Springer.
More reliable results ⇒ better correlations with other outcomes?
Correlations often relatively weak & depending on the discipline.
⇒  Can this be explained? (crucial for further acceptance!)
II. Material investigated
(Peer review)
1. Peer review
–  Shared principles for the panel-evaluations of teams per discipline:
•  Expertise-based
•  International level
•  Uniform treatment
•  Coherence of results
•  Multi-criteria approach
•  Pertinent advice
–  Exceptions:
•  Different experts for each team (1 discipline at VUB).
•  Specific methodology using different indicators (1 discipline at UA).
II. Material investigated
(Peer review @ VUB)
–  VUB-indicators:
  Standard procedure 'VUB-Richtstramien voor de Disciplinegewijze Onderzoeksevaluaties' (VUB guideline framework for the discipline-based research evaluations), VUB Research Council (2001).
•  Scientific merit of the research / uniqueness of the research
•  Research approach / plan / focus / coordination
•  Innovation
•  Quality of the research team
•  Probability that the research objectives will be achieved
•  Research productivity
•  Potential impact on further research and on the development of applications
•  Potential impact for transition to or utility for the community
•  Dominant character of the research (fundamental / applied / policy oriented)
•  Overall research evaluation
II. Material investigated
(Peer review @ UA)
–  UA-indicators:
  'Protocol 1998' for the Assessment of Research Quality, Association of Universities of
the Netherlands (VSNU, 1998).
•  Academic quality
•  Academic productivity
•  Scientific relevance
•  Academic perspective
Exception (1 discipline, "partial" indicators):
•  Publications
•  Projects
•  Conference participations
•  Other
•  Globally
II. Material investigated
(Citation analysis)
2. Citation analysis
  H F Moed et al. (1995), 'New Bibliometric Tools for the Assessment of National Research Performance: Database Description, Overview of Indicators and First Applications', Scientometrics 33.
–  Centre for Science and Technology Studies (CWTS), Leiden University.
–  Thomson ISI citation indexes, corresponding period, same teams.
–  Indicators include:
•  CPP/JCSm: citations / publication with respect to expectations for the journals
•  CPP/FCSm: citations / publication with respect to expectations for the field
•  JCSm/FCSm: journal citation score with respect to expectations for the field
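These three ratios can be sketched as simple means over a team's publication set. The flat record below (a citation count per paper, plus mean citation rates `jcs` of its journal and `fcs` of its field) is a hypothetical illustration, not the CWTS database schema:

```python
# Sketch of the CWTS-style normalized impact indicators.
# Each publication record is assumed to carry: its citation count,
# the mean citation rate of its journal (jcs), and of its field (fcs).
def impact_indicators(pubs):
    """pubs: list of dicts with keys 'cites', 'jcs', 'fcs'."""
    n = len(pubs)
    cpp = sum(p["cites"] for p in pubs) / n   # citations per publication
    jcsm = sum(p["jcs"] for p in pubs) / n    # mean journal citation score
    fcsm = sum(p["fcs"] for p in pubs) / n    # mean field citation score
    return {
        "CPP/JCSm": cpp / jcsm,    # impact vs. the team's own journals
        "CPP/FCSm": cpp / fcsm,    # impact vs. the team's fields
        "JCSm/FCSm": jcsm / fcsm,  # journal set vs. field expectation
    }

# invented two-publication team
team = [
    {"cites": 12, "jcs": 6.0, "fcs": 4.0},
    {"cites": 2,  "jcs": 4.0, "fcs": 4.0},
]
ind = impact_indicators(team)
```

A value above 1 means the team is cited more than expected for its journals (CPP/JCSm) or its field (CPP/FCSm); JCSm/FCSm above 1 means the team publishes in journals cited above the field average.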
III. Investigation of results
(Overview)
1. Reliability
a. Inter-peer agreement:
Three groups of evaluations according to measured level of agreement.
b. Rating habits:
Panel-procedures vs. exception with different experts for each team.
⇒  Influence on results & on correlations between peer review indicators
investigated.
2. Comparability
a. Related concepts:
'Global' vs. 'partial' indicators & variation with discipline.
b. Intrinsic characteristics of methods:
Contributions to ratings counted differently & scale effects.
⇒  Influence on comparability investigated.
III. Investigation of results
(1. Reliability, a. Inter-peer agreement)
1.  Reliability
1. a. Inter-peer agreement
In panels: different opinions ⇒ different positions of teams.
⇒  Level of inter-peer agreement measured by correlations
between the ratings from different peers.
⇒  3 groups compared: panels with high, intermediate and low
inter-peer agreement.
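Measuring agreement as correlations between peers' ratings can be sketched as the mean pairwise Pearson correlation across a panel (the exact statistic used in the study is not specified here; the ratings below are invented):

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def inter_peer_agreement(ratings):
    """ratings: {peer: [rating per team]}.
    Mean correlation over all peer pairs; higher = more agreement
    on the relative positions of the teams."""
    pairs = list(combinations(ratings, 2))
    return sum(pearson(ratings[a], ratings[b]) for a, b in pairs) / len(pairs)

panel = {  # hypothetical panel: three peers rating five teams (scale 1-10)
    "peer A": [8, 6, 9, 4, 7],
    "peer B": [7, 5, 9, 5, 6],
    "peer C": [9, 6, 8, 3, 7],
}
score = inter_peer_agreement(panel)
```

Panels could then be sorted into the high, intermediate and low agreement groups by thresholding this score.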
III. Investigation of results
(1. Reliability, a. Inter-peer agreement)
–  Influence on results:
Results compared to citation analysis:
⇒  Better inter-peer agreement = higher number of significant
correlations,
BUT: only at the higher aggregation level of the 3 groups.
⇒  Other mechanisms have a stronger impact on correlations.
–  Influence on correlations between peer review indicators:
Significant correlations for each pair of peer review indicators, for each of the 3 groups (also for individual disciplines).
⇒  Correlations between peer review indicators are relatively robust
for variations in inter-peer agreement.
III. Investigation of results
(1. Reliability, b. Rating habits)
1.b. Rating habits
Opinions → ratings: according to own habits, reference levels
in other evaluations, scores given to other files, known use
of scores, ...
Two cases compared:
•  Exception with different experts for each team ⇒ scores not
necessarily in line with opinions.
•  Standard panel-evaluations ⇒ uniform reference level.
III. Investigation of results
(1. Reliability, b. Rating habits)
–  Influence on results:
Results compared to citation analysis:
•  Panel-evaluations: significant correlations for all peer review
indicators with some or all citation analysis indicators (& vice versa).
•  Different experts: significant correlation for only 1 pair of indicators.
⇒  Rating habits can influence results significantly.
–  Influence on correlations between peer review indicators:
•  Panel-evaluations: significant correlations for all pairs of indicators.
•  Different experts: significant correlations for only 8% of the pairs.
⇒  Low observed correlations between indicators (expected to be
correlated) can indicate diverging rating habits.
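A crude version of this diagnostic — the share of indicator pairs that correlate — can be sketched as follows. A fixed |r| cut-off stands in here for the proper significance test used in the study, and the ratings are invented:

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlated_pair_fraction(scores, threshold=0.5):
    """scores: {indicator: [rating per team]}.
    Fraction of indicator pairs whose |correlation| reaches the
    threshold; a low fraction flags possibly diverging rating habits."""
    pairs = list(combinations(scores, 2))
    hits = sum(1 for a, b in pairs
               if abs(pearson(scores[a], scores[b])) >= threshold)
    return hits / len(pairs)

ratings = {  # hypothetical ratings per team for three indicators
    "quality":      [8, 6, 9, 4, 7],
    "productivity": [7, 5, 8, 5, 6],
    "relevance":    [9, 7, 9, 3, 6],
}
frac = correlated_pair_fraction(ratings)
```

In the study's terms, a panel evaluation would show a fraction near 1, while the different-experts exception showed only about 8% of pairs correlating.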
III. Investigation of results
(2. Comparability, a. Related concepts)
2. Comparability
2.a. Related concepts
–  Partial indicators (publications, projects, conferences, ...): no significant
correlations between peer review indicators, in contrast to global
indicators (scientific merit, productivity, relevance, ...).
⇒ Performances in different activities are not necessarily correlated.
–  Correlations of peer review with citation analysis indicators: the pairs
correlating best strongly vary with discipline.
⇒ An indicator may not represent the same concept for all subject areas.
⇒ Always use more than one indicator!
III. Investigation of results
(2. Comparability, b. Intrinsic characteristics)
2.b. Intrinsic characteristics
–  Contributions to ratings:
Different in the minds of peers (pro & contra) and in citation analysis
(positive counts).
–  Scale effects:
Minimum & maximum limits & their position with respect to the mean
value.
III. Investigation of results
(2. Comparability, b. Intrinsic characteristics)
•  Peer rating frequency distribution:
–  Peer ratings: pro & contra, also elements counted 'negatively'.
–  Scale: minimum & maximum limit.
[Figure: Relative frequency distribution of peer results. X-axis: peer ratings LOW (1-2), FAIR (3-4), AVERAGE (5-6), GOOD (7-8), HIGH (9-10); Y-axis: percentage of the number of teams (58), 0%-50%. One curve per peer review indicator: scientific merit / uniqueness of the research; research approach / plan / focus / co-ordination; innovation; quality of the research team; probability that the research objectives will be achieved; research productivity; potential impact on further research and on the development of applications; potential for transition to or utility for the community; overall research evaluation.]
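The relative frequency distribution behind such a plot can be recomputed from raw scores with a short sketch (the discrete 1-10 peer scale is taken from the slides; the scores below are invented):

```python
from collections import Counter

def relative_distribution(ratings):
    """ratings: one peer indicator's score (1-10) per team.
    Returns {score: fraction of teams}, i.e. the relative
    frequency distribution plotted per indicator."""
    n = len(ratings)
    counts = Counter(ratings)
    return {score: counts[score] / n for score in sorted(counts)}

# hypothetical scores for one indicator across eight teams
dist = relative_distribution([7, 8, 7, 6, 9, 7, 8, 5])
```

For the citation-impact distribution on the next slide the same idea applies, except that the continuous indicator values would first be binned (e.g. in steps of 0.3 from 0.1 upward) and the scale has no upper limit.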
III. Investigation of results
(2. Comparability, b. Intrinsic characteristics)
•  Citation impact frequency distribution:
–  Citation impact: only positive counts, strong influence of highly cited articles.
–  Scale: minimum limit closer to mean & no maximum limit.
[Figure: Relative frequency distribution of citation impact, all teams in the pure ISI analysis. X-axis: indicator value (0.1 to 3.1); Y-axis: percentage of the number of teams (60), 0%-40%. Curves for CPP/JCSm and CPP/FCSm.]
III. Investigation of results
(2. Comparability, b. Intrinsic characteristics)
⇒ Good correlations only when effects of intrinsic characteristics can be filtered out.
[Figure: Scientific relevance vs. field citation impact, high & intermediate inter-peer agreement group.]
IV. Conclusions
•  Reliability
–  Peer review results can be influenced considerably by rating habits.
–  It is recommended to create a uniform reference level (e.g. using
panel procedures) or check for signs of low reliability by analysing the
outcomes of the peer evaluation itself.
•  Comparability
–  Besides reliability, comparability of results depends on the nature of
the indicators, on the subject area, on intrinsic characteristics of the
methods, ...
–  Different methods describe different aspects. The most suitable
method should be carefully chosen or developed.
•  Evaluations should always be based on a series of indicators,
never on one single indicator.

More Related Content

What's hot

Article Critique
Article CritiqueArticle Critique
Article Critique
slyyy
 
Progressive focusing and trustworthiness in qualitative research: The enablin...
Progressive focusing and trustworthiness in qualitative research: The enablin...Progressive focusing and trustworthiness in qualitative research: The enablin...
Progressive focusing and trustworthiness in qualitative research: The enablin...
University of Glasgow
 

What's hot (19)

Jcdl2017 Poster
Jcdl2017 PosterJcdl2017 Poster
Jcdl2017 Poster
 
I conf2016
I conf2016I conf2016
I conf2016
 
Mixed method research
Mixed method researchMixed method research
Mixed method research
 
Relevance Clues: Developing an experimental research design to investigate a ...
Relevance Clues: Developing an experimental research design to investigate a ...Relevance Clues: Developing an experimental research design to investigate a ...
Relevance Clues: Developing an experimental research design to investigate a ...
 
Literature review in a research proposal
Literature review in a research proposalLiterature review in a research proposal
Literature review in a research proposal
 
Quality Qualitative Research
Quality Qualitative ResearchQuality Qualitative Research
Quality Qualitative Research
 
Mixed methods
Mixed methodsMixed methods
Mixed methods
 
Comparative Research Method. t.mohamed
Comparative Research Method. t.mohamedComparative Research Method. t.mohamed
Comparative Research Method. t.mohamed
 
07 data analysis
07 data analysis07 data analysis
07 data analysis
 
Writing the review or related literature
Writing the review or related literatureWriting the review or related literature
Writing the review or related literature
 
Article Critique
Article CritiqueArticle Critique
Article Critique
 
Systematic review
Systematic reviewSystematic review
Systematic review
 
Bolouri qualitative method
Bolouri qualitative methodBolouri qualitative method
Bolouri qualitative method
 
Identifying Research Problems, Hypothesis and Its Testing, Types of Variables...
Identifying Research Problems, Hypothesis and Its Testing, Types of Variables...Identifying Research Problems, Hypothesis and Its Testing, Types of Variables...
Identifying Research Problems, Hypothesis and Its Testing, Types of Variables...
 
Mixed Method
Mixed MethodMixed Method
Mixed Method
 
Academic writing - publishing in international journals and conferences
Academic writing - publishing in international journals and conferencesAcademic writing - publishing in international journals and conferences
Academic writing - publishing in international journals and conferences
 
Progressive focusing and trustworthiness in qualitative research: The enablin...
Progressive focusing and trustworthiness in qualitative research: The enablin...Progressive focusing and trustworthiness in qualitative research: The enablin...
Progressive focusing and trustworthiness in qualitative research: The enablin...
 
Aist academic writing
Aist academic writingAist academic writing
Aist academic writing
 
Khalid data collection
Khalid data collectionKhalid data collection
Khalid data collection
 

Similar to Reliability and Comparability of Peer Review Results

From RAE to REF
From RAE to REFFrom RAE to REF
From RAE to REF
David Clay
 
Publishing and impact : presentation for PhD Infoirmation Literacy course
Publishing and impact : presentation for PhD Infoirmation Literacy coursePublishing and impact : presentation for PhD Infoirmation Literacy course
Publishing and impact : presentation for PhD Infoirmation Literacy course
Hugo Besemer
 
Its a PHD not a nobel prize
Its a PHD not a nobel prizeIts a PHD not a nobel prize
Its a PHD not a nobel prize
Mahammad Khadafi
 
lecture 4 Quantitative research design.ppt
lecture 4 Quantitative research design.pptlecture 4 Quantitative research design.ppt
lecture 4 Quantitative research design.ppt
AbdallahAlasal1
 
Publishing and impact 20140617
Publishing and impact 20140617Publishing and impact 20140617
Publishing and impact 20140617
Hugo Besemer
 

Similar to Reliability and Comparability of Peer Review Results (20)

Quantitative CV-based indicators for research quality, validated by peer review
Quantitative CV-based indicators for research quality, validated by peer reviewQuantitative CV-based indicators for research quality, validated by peer review
Quantitative CV-based indicators for research quality, validated by peer review
 
Comparing scientific performance across disciplines: Methodological and conce...
Comparing scientific performance across disciplines: Methodological and conce...Comparing scientific performance across disciplines: Methodological and conce...
Comparing scientific performance across disciplines: Methodological and conce...
 
Challenges and opportunities in research evaluation: toward a better evaluati...
Challenges and opportunities in research evaluation: toward a better evaluati...Challenges and opportunities in research evaluation: toward a better evaluati...
Challenges and opportunities in research evaluation: toward a better evaluati...
 
Beyond the Factor: Talking about Research Impact
Beyond the Factor: Talking about Research ImpactBeyond the Factor: Talking about Research Impact
Beyond the Factor: Talking about Research Impact
 
Peer review uncertainty at the institutional level
Peer review uncertainty at the institutional levelPeer review uncertainty at the institutional level
Peer review uncertainty at the institutional level
 
From RAE to REF
From RAE to REFFrom RAE to REF
From RAE to REF
 
In metrics we trust?
In metrics we trust?In metrics we trust?
In metrics we trust?
 
Publishing and impact 20141028
Publishing and impact 20141028Publishing and impact 20141028
Publishing and impact 20141028
 
Publishing and impact : presentation for PhD Infoirmation Literacy course
Publishing and impact : presentation for PhD Infoirmation Literacy coursePublishing and impact : presentation for PhD Infoirmation Literacy course
Publishing and impact : presentation for PhD Infoirmation Literacy course
 
Citation analysis: State of the art, good practices, and future developments
Citation analysis: State of the art, good practices, and future developmentsCitation analysis: State of the art, good practices, and future developments
Citation analysis: State of the art, good practices, and future developments
 
Classification of Researcher's Collaboration Patterns Towards Research Perfor...
Classification of Researcher's Collaboration Patterns Towards Research Perfor...Classification of Researcher's Collaboration Patterns Towards Research Perfor...
Classification of Researcher's Collaboration Patterns Towards Research Perfor...
 
Moed - Towards new scientific development models
Moed - Towards new scientific development modelsMoed - Towards new scientific development models
Moed - Towards new scientific development models
 
Its a PHD not a nobel prize
Its a PHD not a nobel prizeIts a PHD not a nobel prize
Its a PHD not a nobel prize
 
LITERATURE REVIEWING WITH RESEARCH TOOLS, Part 1: Systematic Review
LITERATURE REVIEWING WITH RESEARCH TOOLS, Part 1: Systematic ReviewLITERATURE REVIEWING WITH RESEARCH TOOLS, Part 1: Systematic Review
LITERATURE REVIEWING WITH RESEARCH TOOLS, Part 1: Systematic Review
 
Do metrics match peer review
Do metrics match peer reviewDo metrics match peer review
Do metrics match peer review
 
lecture 4 Quantitative research design.ppt
lecture 4 Quantitative research design.pptlecture 4 Quantitative research design.ppt
lecture 4 Quantitative research design.ppt
 
Publishing and impact 20140617
Publishing and impact 20140617Publishing and impact 20140617
Publishing and impact 20140617
 
impact of COViD 19.pdf
impact of COViD 19.pdfimpact of COViD 19.pdf
impact of COViD 19.pdf
 
Research Critique
Research Critique Research Critique
Research Critique
 
Introduction to Peer review, updated 2015-03-05
Introduction to Peer review, updated 2015-03-05Introduction to Peer review, updated 2015-03-05
Introduction to Peer review, updated 2015-03-05
 

More from Nadine Rons

Testing Reviewer Suggestions Derived from Bibliometric Specialty Approximatio...
Testing Reviewer Suggestions Derived from Bibliometric Specialty Approximatio...Testing Reviewer Suggestions Derived from Bibliometric Specialty Approximatio...
Testing Reviewer Suggestions Derived from Bibliometric Specialty Approximatio...
Nadine Rons
 

More from Nadine Rons (10)

Testing Reviewer Suggestions Derived from Bibliometric Specialty Approximatio...
Testing Reviewer Suggestions Derived from Bibliometric Specialty Approximatio...Testing Reviewer Suggestions Derived from Bibliometric Specialty Approximatio...
Testing Reviewer Suggestions Derived from Bibliometric Specialty Approximatio...
 
4D Specialty Approximation: Ability to Distinguish between Related Specialties
4D Specialty Approximation: Ability to Distinguish between Related Specialties4D Specialty Approximation: Ability to Distinguish between Related Specialties
4D Specialty Approximation: Ability to Distinguish between Related Specialties
 
Investigation of Partition Cells as a Structural Basis Suitable for Assessmen...
Investigation of Partition Cells as a Structural Basis Suitable for Assessmen...Investigation of Partition Cells as a Structural Basis Suitable for Assessmen...
Investigation of Partition Cells as a Structural Basis Suitable for Assessmen...
 
Groups of Highly Cited Publications: Stability in Content with Citation Windo...
Groups of Highly Cited Publications: Stability in Content with Citation Windo...Groups of Highly Cited Publications: Stability in Content with Citation Windo...
Groups of Highly Cited Publications: Stability in Content with Citation Windo...
 
Characteristics of International versus Non-International Scientific Publicat...
Characteristics of International versus Non-International Scientific Publicat...Characteristics of International versus Non-International Scientific Publicat...
Characteristics of International versus Non-International Scientific Publicat...
 
Research Excellence Milestones of BRIC and N-11 Countries
Research Excellence Milestones of BRIC and N-11 CountriesResearch Excellence Milestones of BRIC and N-11 Countries
Research Excellence Milestones of BRIC and N-11 Countries
 
Interdisciplinary Research Collaborations: Evaluation of a Funding Program
Interdisciplinary Research Collaborations: Evaluation of a Funding ProgramInterdisciplinary Research Collaborations: Evaluation of a Funding Program
Interdisciplinary Research Collaborations: Evaluation of a Funding Program
 
Output and citation impact of interdisciplinary networks: Experiences from a ...
Output and citation impact of interdisciplinary networks: Experiences from a ...Output and citation impact of interdisciplinary networks: Experiences from a ...
Output and citation impact of interdisciplinary networks: Experiences from a ...
 
Quality related publication categories in social sciences and humanities, bas...
Quality related publication categories in social sciences and humanities, bas...Quality related publication categories in social sciences and humanities, bas...
Quality related publication categories in social sciences and humanities, bas...
 
Impact Vitality – A Measure for Excellent Scientists
Impact Vitality – A Measure for Excellent ScientistsImpact Vitality – A Measure for Excellent Scientists
Impact Vitality – A Measure for Excellent Scientists
 

Recently uploaded

Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
negromaestrong
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
PECB
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
QucHHunhnh
 

Recently uploaded (20)

Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdf
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural ResourcesEnergy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptx
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 

Reliability and Comparability of Peer Review Results

  • 1. Universiteit Antwerpen Conference "New Frontiers in Evaluation", Vienna, April 24th-25th 2006. Reliability and Comparability of Peer Review Results Nadine Rons, Coordinator of Research Evaluations & Policy Studies Research & Development Department, Vrije Universiteit Brussel Eric Spruyt, Head of the Research Administration Department Universiteit Antwerpen
  • 2. Universiteit Antwerpen Reliability and Comparability of Peer Review Results Nadine Rons (Vrije Universiteit Brussel) & Eric Spruyt (Universiteit Antwerpen) | pag. 2 “Three cheers for peers”   ‘Three cheers for peers’, Editorial, Nature 439, 118 (12 January 2006). •  "Thanks are due to researchers who act as referees, as editors resolve their often contradictory advice." •  "Only in a minority of cases does every referee agree ..."
  • 3. Universiteit Antwerpen Reliability and Comparability of Peer Review Results Nadine Rons (Vrije Universiteit Brussel) & Eric Spruyt (Universiteit Antwerpen) | pag. 3 Presentation plan I.  Validation of results Reliability & comparability II.  Material investigated 'Ex post' peer review + citation analysis of teams III.  Investigation of results Reliability: inter-peer agreement & different rating habits Comparability: related concepts & intrinsic characteristics IV.  Conclusions Aimed at improved results, a better understanding, choosing the right method
  • 4. Universiteit Antwerpen Reliability and Comparability of Peer Review Results Nadine Rons (Vrije Universiteit Brussel) & Eric Spruyt (Universiteit Antwerpen) | pag. 4 I. Validation of results 1. Reliability Peer review: principal method to evaluate research quality. BUT: various kinds of bias & different rating habits. & Not always feasible to use measures limiting their influence. ⇒  Possible to measure reliability ? 2. Comparability   H F Moed (2005), 'Citation Analysis in Research Evaluation', chapter 18: 'Peer Review and the Validity of Citation Analysis', Springer. More reliable results ⇒ better correlations with other outcomes? Correlations often relatively weak & depending on the discipline. ⇒  Can this be explained? (crucial for further acceptance!)
  • 5. Universiteit Antwerpen Reliability and Comparability of Peer Review Results Nadine Rons (Vrije Universiteit Brussel) & Eric Spruyt (Universiteit Antwerpen) | pag. 5 II. Material investigated (Peer review) 1. Peer review –  Shared principles for the panel-evaluations of teams per discipline: •  Expertise-based •  International level •  Uniform treatment •  Coherence of results •  Multi-criteria approach •  Pertinent advice –  Exceptions: •  Different experts for each team (1 discipline at VUB). •  Specific methodology using different indicators (1 discipline at UA).
  • 6. Universiteit Antwerpen Reliability and Comparability of Peer Review Results Nadine Rons (Vrije Universiteit Brussel) & Eric Spruyt (Universiteit Antwerpen) | pag. 6 II. Material investigated (Peer review @ VUB) –  VUB-indicators:   Standard procedure 'VUB-Richtstramien voor de Disciplinegewijze Onderzoeksevaluaties', VUB Research Council (2001). •  Scientific merit of the research / uniqueness of the research •  Research approach / plan / focus / coordination •  Innovation •  Quality of the research team •  Probability that the research objectives will be achieved •  Research productivity •  Potential impact on further research and on the development of applications •  Potential impact for transition to or utility for the community •  Dominant character of the research (fundamental / applied / policy oriented) •  Overall research evaluation
II. Material investigated (Peer review @ UA)
–  UA-indicators:
  'Protocol 1998' for the Assessment of Research Quality, Association of Universities in the Netherlands (VSNU, 1998).
•  Academic quality
•  Academic productivity
•  Scientific relevance
•  Academic perspective
–  Exception (1 discipline, "partial" indicators):
•  Publications
•  Projects
•  Conference participations
•  Other
•  Globally
II. Material investigated (Citation analysis)
2. Citation analysis
  'New Bibliometric Tools for the Assessment of National Research Performance: Database Description, Overview of Indicators and First Applications', H F Moed et al., Scientometrics 33 (1995).
–  Centre for Science and Technology Studies (CWTS), Leiden University.
–  Thomson ISI citation indexes, corresponding period, same teams.
–  Indicators include:
•  CPP/JCSm: citations per publication with respect to expectations for the journals
•  CPP/FCSm: citations per publication with respect to expectations for the field
•  JCSm/FCSm: journal citation score with respect to expectations for the field
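The three CWTS ratios can be sketched with toy numbers (invented here, not the study's data): CPP is the team's mean citations per publication, JCSm the mean citation level expected for the journals it published in, and FCSm the mean level expected for its field(s); each indicator is a ratio of such means.

```python
# Toy sketch of the CWTS-style normalized citation indicators.
# Per paper: (citations received, journal expected citations, field expected citations)
# All numbers are invented for illustration.
papers = [
    (12, 8.0, 5.0),
    (3, 6.0, 5.0),
    (25, 10.0, 6.0),
]

def mean(values):
    return sum(values) / len(values)

cpp = mean([c for c, _, _ in papers])    # citations per publication
jcsm = mean([j for _, j, _ in papers])   # mean journal citation score
fcsm = mean([f for _, _, f in papers])   # mean field citation score

print(f"CPP/JCSm  = {cpp / jcsm:.2f}")   # team impact vs. its journals
print(f"CPP/FCSm  = {cpp / fcsm:.2f}")   # team impact vs. its field
print(f"JCSm/FCSm = {jcsm / fcsm:.2f}")  # journal level vs. the field
```

A value above 1 means the team (or its journal set) is cited more than expected for the reference set; the field-normalized CPP/FCSm is the usual headline figure.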
III. Investigation of results (Overview)
1. Reliability
a. Inter-peer agreement: Three groups of evaluations according to measured level of agreement.
b. Rating habits: Panel-procedures vs. exception with different experts for each team.
⇒  Influence on results & on correlations between peer review indicators investigated.
2. Comparability
a. Related concepts: 'Global' vs. 'partial' indicators & variation with discipline.
b. Intrinsic characteristics of methods: Contributions to ratings counted differently & scale effects.
⇒  Influence on comparability investigated.
III. Investigation of results (1. Reliability, a. Inter-peer agreement)
1. Reliability
1.a. Inter-peer agreement
In panels: different opinions ⇒ different positions of teams.
⇒  Level of inter-peer agreement measured by correlations between the ratings from different peers.
⇒  3 groups compared: panels with high, intermediate and low inter-peer agreement.
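A minimal sketch of measuring agreement this way: the rank correlation between two peers' scores for the same set of teams. The ratings are invented for illustration, and the choice of Spearman's rho (rather than another correlation measure) is an assumption, not the study's stated method.

```python
# Inter-peer agreement as rank correlation between two peers' ratings
# of the same teams (illustrative data, 1-10 scale).

def rank(values):
    # Assign 1-based ranks, averaging ranks within tie groups
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors
    return pearson(rank(x), rank(y))

peer_a = [7, 8, 5, 9, 6, 4, 8]  # peer A's ratings of 7 teams
peer_b = [6, 8, 5, 9, 7, 5, 7]  # peer B's ratings of the same teams

print(f"inter-peer agreement (Spearman rho) = {spearman(peer_a, peer_b):.2f}")
```

A rho near 1 would put a panel in the high-agreement group; values near 0 signal that the peers rank the teams in essentially unrelated orders.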
III. Investigation of results (1. Reliability, a. Inter-peer agreement)
–  Influence on results:
Results compared to citation analysis:
⇒  Better inter-peer agreement = higher number of significant correlations,
BUT: only at the higher aggregation level of the 3 groups.
⇒  Other mechanisms have a stronger impact on correlations.
–  Influence on correlations between peer review indicators:
Significant correlations for each pair of peer review indicators, for each of the 3 groups (also for individual disciplines).
⇒  Correlations between peer review indicators are relatively robust to variations in inter-peer agreement.
III. Investigation of results (1. Reliability, b. Rating habits)
1.b. Rating habits
Opinions → ratings: according to own habits, reference levels in other evaluations, scores given to other files, known use of scores, ...
Two cases compared:
•  Exception with different experts for each team ⇒ scores not necessarily in line with opinions.
•  Standard panel-evaluations ⇒ uniform reference level.
III. Investigation of results (1. Reliability, b. Rating habits)
–  Influence on results:
Results compared to citation analysis:
•  Panel-evaluations: significant correlations for all peer review indicators with some or all citation analysis indicators (& vice versa).
•  Different experts: significant correlation for only 1 pair of indicators.
⇒  Rating habits can influence results significantly.
–  Influence on correlations between peer review indicators:
•  Panel-evaluations: significant correlations for all pairs of indicators.
•  Different experts: significant correlations for only 8% of the pairs.
⇒  Low observed correlations between indicators (expected to be correlated) can indicate diverging rating habits.
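The kind of check behind these percentages can be sketched as follows: for every pair of rating indicators, test whether the Pearson correlation across teams clears a significance threshold, and report the share of significant pairs. The ratings are invented, and the critical value is the textbook two-sided 5% value for Pearson's r with n = 10; this is a hedged illustration, not the study's actual test.

```python
from itertools import combinations

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented ratings of 10 teams on four 'global' indicators (1-10 scale)
ratings = {
    "quality":      [6, 7, 5, 8, 6, 9, 4, 7, 8, 5],
    "productivity": [5, 7, 6, 8, 5, 9, 4, 6, 8, 6],
    "relevance":    [6, 6, 5, 7, 7, 8, 5, 7, 7, 6],
    "perspective":  [5, 8, 4, 7, 6, 9, 5, 6, 7, 4],
}

R_CRIT = 0.632  # two-sided 5% critical value of Pearson's r for n = 10

pairs = list(combinations(ratings, 2))
significant = [(a, b) for a, b in pairs
               if abs(pearson(ratings[a], ratings[b])) > R_CRIT]

print(f"{len(significant)} of {len(pairs)} indicator pairs correlate significantly")
```

With panel evaluations one would expect nearly all pairs to pass this test; a share as low as 8% is the warning sign for diverging rating habits mentioned above.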
III. Investigation of results (2. Comparability, a. Related concepts)
2. Comparability
2.a. Related concepts
–  Partial indicators (publications, projects, conferences, ...): no significant correlations between peer review indicators, in contrast to global indicators (scientific merit, productivity, relevance, ...).
⇒ Performances in different activities are not necessarily correlated.
–  Correlations of peer review with citation analysis indicators: the pairs correlating best vary strongly with discipline.
⇒ An indicator may not represent the same concept for all subject areas.
⇒ Always use more than one indicator!
III. Investigation of results (2. Comparability, b. Intrinsic characteristics)
2.b. Intrinsic characteristics
–  Contributions to ratings:
Different in the minds of peers (pro & contra) and in citation analysis (positive counts).
–  Scale effects:
Minimum & maximum limits & their position with respect to the mean value.
III. Investigation of results (2. Comparability, b. Intrinsic characteristics)
•  Peer rating frequency distribution:
–  Peer ratings: pro & contra, also elements counted 'negatively'.
–  Scale: minimum & maximum limit.
[Chart: Relative frequency distribution of peer results — x-axis: peer results on a 10-point scale from LOW (1) to HIGH (10); y-axis: percentage of the number of teams (58); one series per peer review indicator (scientific merit / uniqueness of the research, research approach / plan / focus / co-ordination, innovation, quality of the research team, probability that the objectives will be achieved, research productivity, potential impact on further research and applications, potential for transition to or utility for the community, overall research evaluation).]
III. Investigation of results (2. Comparability, b. Intrinsic characteristics)
•  Citation impact frequency distribution:
–  Citation impact: only positive counts, strong influence of highly cited articles.
–  Scale: minimum limit closer to the mean & no maximum limit.
[Chart: Relative frequency distribution of citation impact, all teams in the pure ISI analysis — x-axis: indicator value (0.1 to 3.1); y-axis: percentage of the number of teams (60); series: CPP/JCSm and CPP/FCSm.]
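The contrast between the two distributions can be made concrete by comparing skewness: peer ratings live on a bounded scale and tend to be roughly symmetric, while citation ratios are bounded below by zero but not above, so a few highly cited teams stretch the distribution to the right. All numbers below are invented for illustration.

```python
# Population skewness: mean cubed deviation in standard-deviation units.
def skewness(values):
    n = len(values)
    m = sum(values) / n
    sd = (sum((v - m) ** 2 for v in values) / n) ** 0.5
    return sum(((v - m) / sd) ** 3 for v in values) / n

peer_ratings = [4, 5, 5, 6, 6, 6, 6, 7, 7, 8]       # bounded 1-10 scale, symmetric
citation_ratios = [0.3, 0.5, 0.6, 0.8, 0.9, 1.0,
                   1.1, 1.4, 2.6, 3.1]               # no upper limit, right-skewed

print(f"skewness of peer ratings:    {skewness(peer_ratings):+.2f}")
print(f"skewness of citation ratios: {skewness(citation_ratios):+.2f}")
```

This asymmetry is one of the intrinsic characteristics that weakens raw correlations between the two methods, independently of how reliable either one is.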
III. Investigation of results (2. Comparability, b. Intrinsic characteristics)
⇒ Good correlations only when effects of intrinsic characteristics can be filtered out.
[Chart: Scientific relevance vs. field citation impact, high & intermediate inter-peer agreement group.]
IV. Conclusions
•  Reliability
–  Peer review results can be influenced considerably by rating habits.
–  It is recommended to create a uniform reference level (e.g. using panel procedures) or to check for signs of low reliability by analysing the outcomes of the peer evaluation itself.
•  Comparability
–  Besides reliability, comparability of results depends on the nature of the indicators, on the subject area, on intrinsic characteristics of the methods, ...
–  Different methods describe different aspects. The most suitable method should be carefully chosen or developed.
•  Evaluations should always be based on a series of indicators, never on one single indicator.