A generic approach to creating virtual benchmarks for research assessment is presented. The benchmarks offer information on the performance of research units smaller than universities, e.g. research centres or departments.
STI 2017: Virtual benchmarks in bibliometric research assessment
STI 2017 Conference, Paris, 6 September 2017
Jens Peter Andersen, Postdoc
Danish Centre for Studies in Research and Research Policy,
Department of Political Science, Aarhus University
COMPARING APPLES TO…
The necessity of comparing like with like in evaluative scientometrics.
Jens Peter Andersen, Jesper W. Schneider & Fereshteh Didegah
VIRTUAL BENCHMARKS
Department of Political Science, Aarhus University
London School of Economics and Political Science
OUTLINE – THE CONCEPT
• Poor data quality impedes benchmarking of university departments
• And even if the data were good: is Dept. of Physics A comparable to Dept. of Physics B?
• Virtual benchmarks:
• Profiling the core research of the unit of analysis (the seed unit)
• Finding similar profiles
• Labelling and splitting external units
• Standard benchmarking indicators, and more
PROFILING
Topic profile:
• Number of papers per topic, e.g. per micro-cluster
• Seed core: the most frequent topics accounting for the majority of publications, e.g. 80%
Virtual units:
• All other universities publishing in the selected topics
• Limited to universities publishing comparable numbers of publications
• (May require splitting universities into smaller groups)
[Diagram: the seed unit's topic profile yields a seed core; virtual units at universities X, Y and Z are selected by profile comparison]
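As a sketch of the seed-core step described above: the snippet below keeps adding the most frequent topics until a chosen share of the unit's publications is covered. The function name, the 80% default and the micro-cluster ids are illustrative, not taken from the paper.

```python
from collections import Counter

def seed_core(topic_counts, coverage=0.8):
    """Return the most frequent topics that together account for a given
    share (default 80%) of a unit's publications.

    topic_counts: mapping of topic id -> number of papers in that topic.
    """
    total = sum(topic_counts.values())
    core, covered = [], 0
    for topic, count in Counter(topic_counts).most_common():
        core.append(topic)
        covered += count
        if covered / total >= coverage:
            break
    return core

# Toy profile: micro-cluster id -> paper count (invented numbers)
profile = {"mc_101": 40, "mc_202": 30, "mc_303": 20, "mc_404": 10}
print(seed_core(profile))  # ['mc_101', 'mc_202', 'mc_303'] covers 90% >= 80%
```

Virtual units would then be assembled from all other universities publishing in the topics this function returns.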
LABELLING AND SPLITTING
Labelling:
• Human interaction with benchmarks requires labels that convey their meaning,
e.g. "research in the same area as the seed unit, conducted at Harvard University".
Splitting:
• Large, specialised institutions surpass smaller institutions manyfold in publication counts; some will be highly productive in all topics.
• Such units require splitting, e.g. by clustering authors around hubs (roughly principal investigators or group leaders).
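A minimal sketch of hub-based splitting, assuming only a list of weighted co-authorship ties within one institution is available. All names, counts and the greedy hub selection are invented for illustration; a real implementation would use proper graph clustering.

```python
from collections import defaultdict

def split_by_hubs(coauthorships, n_hubs=2):
    """Split one institution's authors into groups around its most
    connected authors ("hubs", roughly PIs or group leaders).

    coauthorships: iterable of (author_a, author_b, n_joint_papers).
    Crude sketch: hubs are the authors with the largest total tie
    weight; every other author joins the hub they share most papers with.
    """
    weight = defaultdict(lambda: defaultdict(int))
    degree = defaultdict(int)
    for a, b, w in coauthorships:
        weight[a][b] += w
        weight[b][a] += w
        degree[a] += w
        degree[b] += w
    hubs = sorted(degree, key=degree.get, reverse=True)[:n_hubs]
    groups = {h: {h} for h in hubs}
    for author in degree:
        if author in hubs:
            continue
        best = max(hubs, key=lambda h: weight[author].get(h, 0))
        groups[best].add(author)
    return groups

# Toy institution with two collaboration clusters (invented data):
ties = [("alice", "bob", 5), ("alice", "carol", 4),
        ("dave", "erin", 6), ("dave", "frank", 3), ("bob", "carol", 1)]
groups = split_by_hubs(ties)
# two groups: {alice, bob, carol} and {dave, erin, frank}
```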
SIMILARITY
The best benchmark may not be the most impactful or the most productive; it could be the most similar. But what is similarity?
Similarity types:
• Cognitive distance (keyword distances, keyword overlap)
• Profile vector direction (cosine similarity)
• Collaborative distance
• …
SIMILARITY CONSIDERATIONS
• Aim of the benchmarking
• Availability of data
• Type of seed unit
• Profile type
• Diversity
• Collaboration
• …
TESTING TOPICALITY
• Title keywords: relevant results, but a large degree of noise.
• WoS subject categories: likewise, with more noise.
• Leiden article-level classification, meso/micro level: considerably less noise, depending on the type of seed unit. Very multidisciplinary units still create noise; more monodisciplinary units much less. Difficult to interpret.
TESTING SIMILARITY
Developing the benchmarking model, we used cosine similarity:
• Each unit (seed or virtual) is represented as a vector:
• Each dimension corresponds to a particular topic in the seed core
• The magnitude of each dimension equals the number of papers in that topic
• Cosine similarity measures the "angle" between two n-dimensional vectors:
• The magnitude of an individual dimension matters less than the distribution across dimensions.
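The property described above (distribution matters, overall size does not) can be seen in a small sketch. The vectors and counts are invented; the formula is the standard cosine of the angle between two topic-count vectors.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two topic-count vectors.
    Insensitive to overall size; sensitive to how papers are
    distributed across the seed-core topics."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# A unit twice the size, with the same distribution, is perfectly similar:
seed = [40, 30, 20]      # papers per seed-core topic (invented)
virtual = [80, 60, 40]
print(round(cosine_similarity(seed, virtual), 3))  # 1.0

# Units with no overlapping topics score 0:
print(cosine_similarity([1, 0], [0, 1]))  # 0.0
```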
BRIEF DEMONSTRATION CASE
Seed unit: Interdisciplinary Nanoscience Centre, Aarhus University (iNANO).
Existing expert-confirmed benchmark institutions.
CASE RESULTS
Predicted             Top similarity        Top collaboration                Top impact
Univ Cambridge        TU Munich             Aarhus Univ                      Stanford Univ
Univ Copenhagen       Tech Univ Denmark     Lund Univ                        UC Berkeley
Lund Univ             Moscow Lomonosov SU   Tech Univ Denmark                Princeton Univ
Univ South Denmark    Univ Munster          National Ctr Nanosci & Technol   Caltech
Univ Basel            UC Berkeley           Univ Wisconsin                   Caltech

Predicted             Rank, similarity   Rank, collaboration   Rank, impact
Univ Cambridge        49                 102                   40
Univ Copenhagen       112                35                    100
Lund Univ             9                  4                     96
iNANO                 -                  -                     38
CURRENT ISSUES
Topic profiling: quality varies greatly depending on the unit in question.
Cosine similarity: likewise. Also difficult to discriminate between units: low variation in scores.
Size differences between units.
WHAT DOES OUR STUDY ADD?
A conceptual way of thinking about benchmarks, pointing the way towards a fairer and more relevant approach to benchmarking.
MOVING FORWARD
Splitting units: not yet implemented, but could potentially fix problems with topic similarity. Best candidate approach: clustering authors from the same institution by collaboration to identify hubs.
Testing other similarity measures, for better discriminatory power.
Granulating profiles: combining topic clusters with keywords for increased precision.
FUNDING
This study was funded by the Novo Nordisk Foundation. Funding was limited to wages for research time.
None of the involved researchers has, or has had, a personal financial interest in the study.