This document discusses approaches to broadening and opening up science and technology policy appraisal using indicators. It argues that conventional indicators often have perverse effects by reinforcing existing power structures and reducing diversity. The document presents conceptual frameworks for broadening appraisal inputs and making indicator outputs more open and plural rather than justifying specific decisions. Examples show how indicators can preserve multiple dimensions, represent different perspectives on concepts like excellence and interdisciplinarity, and explore directions of research portfolios. The goal is to use indicators to open up debate rather than provide unitary and prescriptive advice.
Towards indicators for ‘opening up’ science and technology policy
1. Towards indicators for ‘opening up’
science and technology policy
Ismael Rafols
Ingenio (CSIC-UPV), Universitat Politècnica de València
SPRU (Science Policy Research Unit), University of Sussex, Brighton, UK
Observatoire des Sciences et des Techniques (OST-HCERES), Paris
ORCID-CASRAI, Barcelona, May 2015
Building on work with Tommaso Ciarli and Andy Stirling (SPRU),
Loet Leydesdorff (Amsterdam), Alan Porter (Georgia Tech, Atlanta)
2. Pressing demands of research management and evaluation
• Increasing size of research endeavour
1.5 million papers per year in the Web of Science alone
Globalisation: many middle-income countries have multiplied their
publication output (e.g. China)
Within a single country: 3,000 postgraduate programmes are evaluated in 48
panels in Brazil
• Increasing competition for funding – globally and locally
Success rates of research calls are very low in the US, EU (10%-20%)
• Increasing societal demands
Interactions with industry and social actors (NGOs)
Grand challenges (climate change, epidemics, water & food security)
Traditional qualitative techniques of management cannot cope.
The hope is that the use of indicators can help...
3. Can indicators help?
Yes, indicators can help make decisions…
Increase transparency and sense of objectivity
Reduce complexity
Reduce time and costs
The dream of rationality, “the science of science policy”
(De Solla Price, Garfield, 1960s… Marburger, Julia Lane, 2000s)
but do they lead to the “right” decisions?
Evaluation gap (Wouters):
“discrepancy between evaluation criteria and the social
and economic functions of science”
4. Perverse effects of conventional indicators
Conventional indicators (such as IFs, or h-index)
are (often) biased against:
Field research (epidemiology)
Applied research
Social science and humanities
Peripheral countries
Non-English publications and authors
Some topics outside the mainstream (e.g. preventive
medicine)?
This reinforces existing power structures in S&T
and reduces diversity, making S&T less relevant to society
(Q: would use of peer review lead to the same biased outcomes?)
5. Current use of S&T indicators
Use of conventional S&T indicators is *problematic*
Narrow inputs (only pubs!)
Scalar outputs (rankings!)
Aggregated solutions --missing variation
Opaque selections and classifications (privately owned
databases)
Large, leading scientometric groups embedded in government /
consultancy, with limited possibility of public scrutiny
Sometimes even mathematically debatable
Impact Factor of journals (only a 2-year window, large error bars)
Average number of citations per publication in highly skewed distributions
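To see why averages mislead for skewed citation distributions, here is a minimal Python sketch (not from the slides): the lognormal shape and all numbers are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated, made-up citation counts with a heavy right skew,
# standing in for real citation data (which is highly skewed)
citations = np.round(rng.lognormal(mean=1.0, sigma=1.2, size=10_000))

print("mean citations:   ", citations.mean())      # pulled up by a few highly cited papers
print("median citations: ", np.median(citations))  # what a 'typical' paper receives
print("share of papers below the mean:", (citations < citations.mean()).mean())
```

In such a distribution most papers sit below the mean, which is why average-based indicators such as the Journal Impact Factor can misrepresent typical performance.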
6. The Leiden Manifesto (in the “making”) on use of indicators
Metrics
• Should support, not replace expert evaluation.
• Should match institutional mission
• Should not suppress locally relevant research
• Should be simple, transparent, accessible and verifiable by those evaluated
• Should take into account field and country differences/contexts
• Metrics for individual researchers must be based on qualitative
judgment.
• Intended and unintended effects of metrics should be reflected upon
before use
Hicks, Wouters, Waltman, de Rijcke and Rafols (Nature, in press)
7. How can S&T indicators help in science policy?
What type of “answer” should indicators provide?
Model 1: Unique and prescriptive
Proposing “best choices”
Rankings -- a ranked list of preferences
Model 2: Plural and conditional
Exploring complementary choices
Facilitating options/choices in landscapes
8. From S&T indicators for justification and disciplining…
Justification in decision-making
• Weak justification: “Give me a number, any number!”
• Strong justification: “Show in numbers that X is the best choice!”
S&T indicators have a performative role:
They do not just measure; they do not ‘just happen to be used’ in science
policy (neutral)
They are a constitutive part of the incentive structure for “disciplining” (loaded)
They signal to stakeholders what is important.
Institutions use these techniques to discipline subjects
Articulate framings, goals and narratives on performance,
collaboration, interdisciplinarity…
9. … towards S&T indicators as tools for deliberation
Yet it is possible to design indicators that foster plural reflection rather
than justify or reinforce dominant perspectives
This shift is facilitated by trends pushed by ICT and visualisation tools
More inputs (pubs, pats, but also news, webs, etc.)
Multidimensional outputs (interactive maps)
Institutional repositories
Multiple solutions -- highlighting variation, confidence intervals
More inclusive and contrasting classifications (bypassing
private data ownership? PubMed, arXiv)
More possibilities for open scrutiny (new research groups)
12. Policy use of S&T indicators: Appraisal
Appraisal:
‘the ensemble of processes through which knowledges are
gathered and produced in order to inform decision-making
and wider institutional commitments’ Leach et al. (2010)
Example:
Allocation of resources based on research “excellence”
Breadth: extent to which appraisal covers diverse dimensions of
knowledge
Narrow: citations/paper
Broad: citations, peer interview, stakeholder view, media coverage, altmetrics
Openness: degree to which outputs provide an array of options for
policies.
Closed: fixed composite measure of variables -- unitary and prescriptive
Open: consideration of various dimensions -- plural and conditional
16. Broadening out S&T indicators
[Diagram: range of appraisal inputs (issues, perspectives, scenarios, methods), from narrow to broad, plotted against the effect of appraisal ‘outputs’ on decision-making, from closing down to opening up; conventional S&T indicators sit in the narrow, closing-down corner]
Broadening out:
Incorporating plural analytical dimensions: global & local networks,
hybrid lexical-actor networks, etc.
New analytical inputs: media, blogosphere.
17. Appraisal methods: broad vs. narrow & closing vs. opening
[Diagram: the same inputs/outputs space, locating journal rankings, university rankings and the European Innovation Scoreboard among narrow, closing-down appraisals]
These are unitary measures that are opaque, with a tendency to favour the
established perspectives... and are easily translated into prescription.
18. Opening up S&T indicators
[Diagram: the same inputs/outputs space; opening up shifts conventional S&T indicators towards outputs that open up decision-making]
Opening up: making explicit the underlying conceptualisations and
creating heuristic tools to facilitate exploration.
NOT about the uniquely best method,
or the unitary best explanation,
or the single best prediction.
19. 2. Examples of Opening Up
a. Broadening out AND Opening up
b. Opening up WITH NARROW inputs
21. Composite Innovation Indicators (25-30 indicators)
European (Union) Innovation Scoreboard
Grupp and Schubert (2010) show that the ranking order
is highly dependent on the weighting of the indicators.
Sensitivity analysis
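As a rough illustration of this sensitivity (the countries, sub-indicators and scores below are invented for the example, not taken from the Scoreboard), a minimal Python sketch shows how a composite ranking can reverse when the weights change:

```python
import numpy as np
import pandas as pd

# Invented scores on three normalised sub-indicators (0-1 scale); purely illustrative
scores = pd.DataFrame(
    {"human_resources": [0.9, 0.6, 0.3],
     "firm_activities": [0.5, 0.7, 0.6],
     "outputs":         [0.2, 0.5, 0.8]},
    index=["Country A", "Country B", "Country C"],
)

def ranking(weights):
    """Rank countries by a weighted composite of the sub-indicators."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    composite = scores.values @ w
    return list(scores.index[np.argsort(-composite)])

print(ranking([1, 1, 1]))  # equal weights
print(ranking([4, 1, 1]))  # weights favouring human resources
```

With equal weights the league table reads B, C, A; weighting human resources heavily reverses it to A, B, C, which is exactly the kind of weight-dependence Grupp and Schubert highlight.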
22. Solution: representing multiple dimensions
(critique by Grupp and Schubert, 2010)
Use of spider diagrams
allows comparing
like with like
U-rank, university performance
comparison tools
(Univ. Twente)
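A minimal sketch of the spider-diagram idea (not the U-rank tool itself; the dimensions and values below are invented for illustration), drawn with matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented profiles of two units on five dimensions (0-1 scale); purely illustrative
dimensions = ["Citations", "Peer review", "Stakeholder view", "Media", "Altmetrics"]
unit_a = [0.8, 0.6, 0.3, 0.2, 0.4]
unit_b = [0.4, 0.5, 0.8, 0.7, 0.6]

# One angle per dimension; repeat the first point so each polygon closes
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, values in [("Unit A", unit_a), ("Unit B", unit_b)]:
    closed = values + values[:1]
    ax.plot(angles, closed, label=label)
    ax.fill(angles, closed, alpha=0.15)

ax.set_xticks(angles[:-1])          # one axis per dimension
ax.set_xticklabels(dimensions)
ax.set_ylim(0, 1)
ax.legend(loc="upper right")
plt.show()
```

Keeping the dimensions visible side by side, instead of collapsing them into one score, lets readers apply their own weighting when comparing units.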
23. 2. Examples of Opening Up
b. Opening up WITH NARROW inputs
25. 1. Excellence: Opening Up Perspectives
Provide different perspectives of scientific impact
26. Measures of “scientific excellence”
[Figure: bar charts comparing six units (ISSTI, SPRU, MIoIR, Imperial, WBS, LBS) under different metrics: ABS journal rank, citations per publication, journal field-normalised citations, and Journal Impact Factor]
Which one is more meaningful?
Rafols et al. (2012, Research Policy)
29. Multiple concepts of interdisciplinarity:
Conspicuous lack of consensus but
most indicators aim to capture
the following concepts
Integration (diversity & coherence)
• Research that draws on
diverse bodies of knowledge
• Research that links different
disciplines
Intermediation
• Research that lies between or
outside the dominant disciplines
[Diagrams: a diversity (low/high) vs. coherence (low/high) plane distinguishing monodisciplinary, multidisciplinary and interdisciplinary research; and an intermediation axis (low to high) running from monodisciplinary to interdisciplinary]
33. Summary: IS (blue) units are more interdisciplinary than BMS (orange)
[Figure: IS and BMS units compared on three indicators: Rao-Stirling diversity (more diverse), observed/expected cross-citation distance (more coherent), and average similarity (more interstitial)]
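For readers who want to see what the diversity indicator computes, here is a minimal sketch of Rao-Stirling diversity; the reference shares and distance matrix are invented for illustration.

```python
import numpy as np

def rao_stirling_diversity(proportions, distances):
    """Rao-Stirling diversity: sum over pairs i != j of p_i * p_j * d_ij,
    where p_i is the share of references in discipline i and d_ij is the
    cognitive distance between disciplines (e.g. 1 - citation similarity)."""
    p = np.asarray(proportions, dtype=float)
    d = np.asarray(distances, dtype=float)
    # The diagonal of d is zero, so p @ d @ p only counts pairs with i != j
    return float(p @ d @ p)

# Invented example: shares of a unit's references over three disciplines
shares = [0.5, 0.3, 0.2]
# Invented distance matrix between the disciplines (0 = identical, 1 = unrelated)
dist = np.array([[0.0, 0.8, 0.9],
                 [0.8, 0.0, 0.4],
                 [0.9, 0.4, 0.0]])

print(rao_stirling_diversity(shares, dist))  # higher = a more diverse and more distant mix
```

The measure rises both when references are spread over more disciplines and when those disciplines are cognitively distant from each other, which is why it is used as an integration-style indicator of interdisciplinarity.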
34. 3. Research focus: Opening Up Perspectives
Explore directions of research
35. Thinking in terms of research portfolios: the case of rice
[Map of rice research topics: varieties (classic genetics, transgenics, molecular biology, genomics); pests and weeds (plant protection); plant nutrition; production & socioeconomic issues; consumption (human nutrition, food technologies)]
Ciarli and Rafols (2014, unpublished)
41. S&T indicators as tools to open up the debate
• ‘Conventional’ use of indicators (‘Pure scientist’ -- Pielke)
Purely analytical character (i.e. free of normative assumptions)
Instruments of objectification of dominant perspectives
Aimed at legitimising /justifying decisions (e.g. excellence)
Unitary and prescriptive advice
• Opening up scientometrics (‘Honest broker’ -- Pielke)
Aimed at locating the actors in their context and dynamics
Not predictive, or explanatory, but exploratory
Construction of indicators is based on choice of perspectives
Make explicit the possible choices on what matters
Supporting debate
Making science policy more ‘socially robust’
Plural and conditional advice
Barré (2001, 2004, 2010), Stirling (2008)
42. Strategies for opening up or
how to “keep it complex” yet “manageable”
• Presenting contrasting perspectives
At least TWO, in order to give a taste of choice
• Simultaneous visualisation of multiple properties /
dimensions
Allowing the user to take their own perspective
• Interactivity
Allowing the user to give their own weight to criteria / factors
Allowing the user to manipulate visuals
43. Is ‘opening up’ worth the effort? (1)
Sustaining diversity in the S&T system
A decrease in diversity is a potential unintended consequence of the
evaluation machine
Why diversity matters
Systemic (‘ecological’) understanding of the S&T system
S&T outcomes depend on synergistic interactions between
disparate elements.
Dynamic understanding of excellence and relevance
New social needs, challenges, expectations from S&T
Manage diverse portfolios to hedge against uncertainty in research
Office of Portfolio Analysis (National Institutes of Health)
http://dpcpsi.nih.gov/opa/
Open possibility for S&T to work for the disenfranchised
Topics outside dominant science (e.g. neglected diseases)