2. Originally, the aim was to reveal important discoveries and to allow researchers to navigate the ‘citation network’ to uncover relevant literature.
WHAT IS CITATION ANALYSIS FOR?
3. ‘“Do results of research really become valueless by remaining buried in the literature?” If there is anything but a YES answer to this question then perhaps we should burn our libraries and the literature in them.’
EUGENE GARFIELD
CHEMICAL AND ENGINEERING NEWS 30(50), 5232 (1952)
http://garfield.library.upenn.edu/papers/chemicalandengineeringnews30%2850%29y1952.html
4. Since 1975, citation data has been used to:
rank journal titles
understand how science works
And more recently to:
measure the impact and influence of a work
evaluate an author’s or institution’s influence
act as a filter (most worrying!)
WHAT IS CITATION ANALYSIS FOR?
5. Citations are used to calculate ‘Research Influence’, which accounts for 30% of the overall score in the THE World University Rankings.
EVALUATE AN INSTITUTION’S INFLUENCE
6. EVALUATE AN INSTITUTION’S INFLUENCE
Some Panels use citation data and ‘…consider the number of times an output
has been cited as additional information about the academic significance of
submitted outputs.’
REF Panel criteria and working methods (2012)
7. First, stop and think!
Why do you want to do this?
Are citations the best measure?
What other information are you going to use?
Are you interested in citations in relation to:
A journal title?
A particular work, e.g. a journal article?
An author, or group of authors?
A subject category?
An institution?
HOW DO I ANALYSE CITATIONS?
8. Covers approx. 12,000 selected, peer-reviewed journal titles
Mostly English language
Citation data from 1945
Source data for the THE World University Rankings (*changing to Scopus)
Source data for the annual Journal Citation Reports and the Journal Impact Factor
Useful for:
Tracking citations over time
Following citations forward and backward in the literature
Citation reports for any set of results, e.g. author, topic, article (a small scripting sketch follows this slide)
WEB OF SCIENCE
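The slides do not prescribe a tool for this, but as an illustration of the kind of per-year citation report described above: given citing records exported from a database to a CSV file (the file name citations.csv and the column name Publication Year below are assumptions for the sketch, not the actual Web of Science export format), citations per year can be tabulated with standard-library Python:

import csv
from collections import Counter

def citations_per_year(path="citations.csv"):
    """Count citing items per publication year from a hypothetical CSV export."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            year = row.get("Publication Year", "").strip()
            if year.isdigit():  # skip blank or malformed years
                counts[int(year)] += 1
    return counts

if __name__ == "__main__":
    counts = citations_per_year()
    running_total = 0
    for year in sorted(counts):
        running_total += counts[year]
        print(f"{year}: {counts[year]} citations (cumulative {running_total})")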
9. Reports produced annually based on the data from Web of Science
Subject Categories to enable comparison within a discipline
Many metrics, including the Journal Impact Factor and the Eigenfactor (a worked sketch of the Impact Factor follows this slide)
Useful for:
Ranking journal titles in subject categories
Comparing journals within subject categories
JOURNAL CITATION REPORTS
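A rough illustration of the Journal Impact Factor calculation (the worked figures below are invented for the example, not taken from the slides). The Impact Factor for year Y is a two-year citation ratio:

\[
\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where \(C_{Y}(y)\) is the number of citations received in year \(Y\) by items the journal published in year \(y\), and \(N_{y}\) is the number of citable items the journal published in year \(y\). For example, a journal that published 100 citable items in \(Y-1\) and 120 in \(Y-2\), and whose items from those two years were cited 660 times in year \(Y\), would have \(\mathrm{JIF}_{Y} = 660/220 = 3.0\).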
10. Covers approx. 22,000 titles
Citation data from 1996
Source data for the REF
Many metrics, including SJR, SNIP and IPP (a sketch of two of these follows this slide)
Useful for:
Analysing an author’s citations
Visual citation reports for any set of results, e.g. topic, author, article
Discovering trends
SCOPUS
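A similar sketch for two of the Scopus metrics named above (the outlines follow the published CWTS definitions, but treat the details as indicative rather than exact). IPP (Impact per Publication) is a three-year analogue of the Impact Factor, and SNIP normalises it by how heavily papers in the journal’s field tend to cite:

\[
\mathrm{IPP}_{Y} = \frac{\sum_{k=1}^{3} C_{Y}(Y-k)}{\sum_{k=1}^{3} N_{Y-k}}, \qquad
\mathrm{SNIP}_{Y} = \frac{\mathrm{IPP}_{Y}}{\text{citation potential of the journal's subject field}}
\]

with \(C_{Y}(y)\) and \(N_{y}\) as in the Impact Factor sketch above. Dividing by the field’s citation potential is what lets SNIP compare journals across subject areas, which the raw Impact Factor cannot do.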
11. Name ambiguity – getting a comprehensive list is not easy
(but getting easier)
No one metric can capture all citations or ‘impacts’
Early career researchers will be at a disadvantage
Citations ≠ endorsements of quality!
Each resource will give you different results
WHAT’S WRONG WITH CITATION ANALYSIS?
Example: citation counts for the same article differ by source:

Source of data        Citations
Web of Science        2137
Scopus                2418
Google Scholar        3072
Publisher’s website   1538

Martinez, F. D., Wright, A. L., Taussig, L., et al. (1995). Asthma and wheezing in the first six years of life. New England Journal of Medicine, 332(3), 133-138.
12. ‘no set of numbers is likely to be able to capture the nuanced judgments that the REF process currently provides’
Metrics cannot replace peer review in the next REF, 8th July 2015, HEFCE
‘Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation.’
Bibliometrics: The Leiden Manifesto for research metrics, 22nd April 2015, Hicks et al.
WHAT’S WRONG WITH CITATION ANALYSIS?
13. “…both administrators and the management discipline will be well served by efforts to evaluate each article on its own merits rather than abdicate this responsibility by using journal ranking as a proxy for quality.”
Singh, Haddad & Chow (2007: 319)
14. Publish or Perish
Google Scholar
Altmetrics
Research Evaluation tools such as SciVal or InCites
OF FURTHER INTEREST
15. Harzing, A. W. (2010). The publish or perish book. Melbourne: Tarma Software Research. http://www.harzing.com/popbook/toc.htm
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). The Leiden Manifesto for research metrics. Nature, 520, 429-431. http://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351
Singh, G., Haddad, K. M., & Chow, C. W. (2007). Are articles in “top” management journals necessarily of higher quality? Journal of Management Inquiry, 16(4), 319-331.
REFERENCES
Editor's Notes
Citations: Research influence (30%)
Our research influence indicator is the flagship. Weighted at 30 per cent of the overall score, it is the single most influential of the 13 indicators, and looks at the role of universities in spreading new knowledge and ideas. We examine research influence by capturing the number of times a university's published work is cited by scholars globally. This year, our data supplier Thomson Reuters examined more than 50 million citations to 6 million journal articles, published over five years. The data are drawn from the 12,000 academic journals indexed by Thomson Reuters' Web of Science database and include all indexed journals published between 2008 and 2012. Citations to these papers made in the six years from 2008 to 2013 are also collected.

The citations help show us how much each university is contributing to the sum of human knowledge: they tell us whose research has stood out, has been picked up and built on by other scholars and, most importantly, has been shared around the global scholarly community to push further the boundaries of our collective understanding, irrespective of discipline.

The data are fully normalised to reflect variations in citation volume between different subject areas. This means that institutions with high levels of research activity in subjects with traditionally high citation counts do not gain an unfair advantage. We exclude from the rankings any institution that publishes fewer than 200 papers a year to ensure that we have enough data to make statistically valid comparisons.
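The note above says the data are ‘fully normalised’ by subject area but does not give the method. A common field-normalisation approach (an assumption here, not necessarily the Thomson Reuters formula) divides each paper's citation count by the expected count for its field, year and document type:

\[
\mathrm{NCI}_{i} = \frac{c_{i}}{e_{i}}
\]

where \(c_{i}\) is the number of citations received by paper \(i\) and \(e_{i}\) is the mean citation count of papers of the same subject area, publication year and document type. A paper cited 20 times where comparable papers average 10 citations scores 2.0; a value of 1.0 means cited exactly at the world average, and an institution's indicator is then an average of these ratios across its papers.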