Presented to members of the Psychology department as part of the New Tricks Seminar series (February 2016)
• journal metrics using WoS and Scopus
• article-level metrics in WoS, Scopus and Google Scholar, and from publishers, and the differences in each. Touch on altmetrics.
• author metrics in the above. Touch on Publish or Perish
Tanya Williamson, Academic Liaison Librarian
Updated 30/01/2015
This session included discussions around the value of bibliometrics for individual performance management/promotion and the REF.
What are bibliometrics?
Journal metrics
Personal metrics
Article level metrics and altmetrics
2. Overview
Learn about the most common bibliometrics, how to find them, the data they are based on, and their limitations.
1. What is citation analysis?
2. Journal metrics
3. Author metrics
4. Article level metrics and altmetrics
3. • Citation analysis is the quantitative analysis of research publications and citations
• Based on the assumption that citations = academic impact
• Bibliometrics are the measures produced by citation analysis
What is citation analysis?
Citation counts in Web of Science
Graphic from Eigenfactor.org
Citation counts in Scopus
Article-level metrics in Scopus
4. SciVal and InCites are research intelligence tools based on citation data from Scopus and WoS respectively
Key tools for finding bibliometric data

Web of Science (incl. Journal Citation Reports)
Pros:
• Citation data back to 1945
• Produces the Journal Impact Factor
• Can compare journals within subject category
Cons:
• Less intuitive user interface
• English language bias
• Excludes some new OA journals
• Journal Citation Reports does not include journals which are solely concerned with Arts & Humanities

Scopus
Pros:
• Broader coverage than WoS
• Produces IPP and SNIP
• Source data for the REF and World University Rankings
• Can compare journals across subject categories
Cons:
• Citation data back to 1996 (backfilling to 1972)
• Journal comparisons limited to 10 titles
• No subject category rank
• English language bias

Google Scholar
Pros:
• Free and easy to use
• Very broad coverage, i.e. yields high results
• Good coverage of grey literature and books
• ‘Top journals’ by language and H-index
Cons:
• No clear coverage policy
• Author metrics rely on authors creating profiles or using additional software, e.g. Publish or Perish
5. An attempt to compare the influence and impact of journal titles in a particular discipline based on citations received.
Can be useful to inform decisions on strategic publishing, alongside judgements about reaching the best audience for the research.
Commonly used metrics
• Journal Impact Factor, available through Journal Citation Reports, based on Web of Science data
• SJR (SCImago Journal Rank), based on Scopus data
• SNIP (Source Normalised Impact per Paper), based on Scopus data
Journal metrics
7. The Journal Impact Factor is the average number of times articles from the journal published in the past two years have been cited in the JCR year.
Also available: 5-year Journal Impact Factor
Journal Impact Factor
An example: Cognition
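The two-year calculation described above divides citations received in the JCR year by the number of articles published in the two preceding years. A minimal sketch in Python (the journal and figures are invented for illustration):

```python
def journal_impact_factor(citations_in_jcr_year, articles_prev_two_years):
    """Two-year Journal Impact Factor.

    citations_in_jcr_year: citations received in the JCR year to items
        published in the two preceding years.
    articles_prev_two_years: number of citable items published in those years.
    """
    return citations_in_jcr_year / articles_prev_two_years

# Hypothetical journal: 500 citations in 2015 to the 200 articles
# it published in 2013-2014 gives a JIF of 2.5
print(journal_impact_factor(500, 200))  # 2.5
```

So a JIF of 2.5 means the "average article" from the previous two years was cited two and a half times in the JCR year.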
8. • Based on source data from Web of Science
• “The theory behind the eigenfactor metrics is that a single citation from a high-quality journal may hold more value than multiple citations from more peripheral publications.”
Eigenfactor and Article Influence
An example: Cognition
Demo of Journal Citation Reports
9. • Source Normalised Impact per Paper (SNIP) – enables comparison between disciplines with different citation conventions
• Impact Per Paper (IPP) – same method as the JIF with different source data, and based on 3 years of citations
Scopus – SNIP and IPP
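As the Editor's Notes put it, SNIP is the ratio of a source's average citation count per paper to the citation potential of its subject field. A rough sketch of that ratio (the figures are invented; the real CWTS indicator derives citation potential from the citing literature):

```python
def snip(citations_per_paper, field_citation_potential):
    """SNIP: raw impact per paper, normalised by how heavily the
    source's subject field tends to cite (its 'citation potential')."""
    return citations_per_paper / field_citation_potential

# Hypothetical journal averaging 4 citations per paper, in a field
# whose citation potential is 2, gets a SNIP of 2.0
print(snip(4.0, 2.0))  # 2.0
```

The normalisation is what lets a journal in a low-citing field (say, mathematics) be compared with one in a high-citing field (say, cell biology).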
10. Scopus – Compare Journals
Select and compare up to 10 journals
Demo of Scopus
11. SCImago Journal Rank
An example: Developmental and Educational Psychology
• SCImago Journal Rank uses source data from Scopus
• Based on the Google PageRank algorithm
• Weights citations from high-prestige journals and attempts to balance the influence of the size of a journal
Developmental and Educational Psychology category rank
12. • Journal Impact Factor doesn’t work (at all/well) for every discipline, due to different publishing patterns and coverage of different publication types
• Journal Impact Factor mustn’t be used to compare different disciplines
• Focusing only on journal metrics could lead you to overlook smaller, specialist or emerging publications
• Dependent on the coverage and biases of the source data, i.e. will include only citations from other items in the database
• The overuse of the JIF has encouraged gaming and false precision
• Can we infer individual impact from journal impact?
Limitations of journal metrics
13. Key metrics
• Citation counts – uses a particular dataset to count how many documents an author has had published, and how many citations those documents have received over time
• H-index, devised by J. E. Hirsch in 2005
• There are several ‘improvements’ on the h-index, e.g. the g-index, the hc-index, the hI-index, the ACW index, but none has gained the same popularity as the h-index
Author metrics
Easy explanation:
If a scholar has 20 articles which have each been cited at least 20 times, s/he has an h-index of 20
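The easy explanation above can be sketched in a few lines of Python (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have
# at least 4 citations each, so h = 4
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how a single blockbuster paper barely moves the h-index, which is part of why it is popular as a measure of sustained output, and part of why the g-index and similar variants were proposed.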
16. Google Scholar
• A way to find your own h-index
• The author needs to have a Google Scholar profile
Author metrics from Google Scholar
Publish or Perish
• Uses Google Scholar data
• Many nuanced author metrics
• Needs careful checking of results for duplicates and false results
Demo of Google Scholar
17. • Name ambiguity – getting a comprehensive, accurate list is not easy, even in Web of Science
• No one metric can capture all citations or ‘impacts’
• May not be enough publications to be valid
• Early career researchers will be at a disadvantage
• Results will vary based on the source data
• Citations ≠ endorsements of quality!
Limitations of author metrics
Resistance against the h-index, ImpactStory
18. • Citation data in Web of Science, Scopus and Google Scholar
• Publishers’ interfaces often include citing articles
Metrics
• Total cites
• Cites per year
• Average cites per paper, or per year
• Field Weighted Citation Impact
• Benchmark percentile (based on age of paper and subject area)
• Usage: downloads and views
• Emerging ‘altmetrics’
Article level metrics
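The simple count-based metrics in the list above can be sketched directly (the yearly citation figures here are invented for illustration):

```python
def article_metrics(cites_by_year):
    """Basic article-level metrics from a {year: citations} mapping
    for a single paper."""
    total = sum(cites_by_year.values())
    years = len(cites_by_year)
    return {
        "total_cites": total,
        "cites_per_year": total / years,
    }

# A paper cited 3, 7 and 5 times across 2013-2015
m = article_metrics({2013: 3, 2014: 7, 2015: 5})
print(m)  # {'total_cites': 15, 'cites_per_year': 5.0}
```

Field Weighted Citation Impact and benchmark percentiles need a reference set of comparable papers (same age, subject and document type), which is why those figures only come from tools such as SciVal rather than a raw citation list.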
24. “Article-Level Metrics (ALMs) leverage the acceleration of research communication made possible by the networked landscape of researcher tools and services. Also by incorporating the manifold ways in which research is disseminated, these article impact indicators are made available rapidly after publication and are continually updated.”
PLOS One Article Level Metrics Information
Frontiers: Article level metrics for an OA article published in Frontiers in Psychology
Article level metrics
25. Alternative metrics or altmetrics
ImpactStory: like a living publications CV. Adding metrics all the time
https://impactstory.org/metrics
• Emerging alternative metrics which capture the attention that articles/works receive online
• Show the broader, societal attention which may or may not translate into citations
Loop: metrics presented on an author’s profile
26. Altmetrics
Altmetric.com: watches social media sites, newspapers, government policy documents and other sources for mentions of scholarly articles, and compiles them into article-level metrics.
27. Example: A highly cited clinical article from 1995?
Limitations of article level and altmetrics

Source of data       Citations to article
Web of Science       2214
Google Scholar       3184
Scopus               2500
Publisher’s website  1626

Example:
Martinez, F. D., Wright, A. L., Taussig, L. M., Holberg, C. J., Halonen, M., & Morgan, W. J. (1995). Asthma and wheezing in the first six years of life. New England Journal of Medicine, 332(3), 133-138.
Citation data from 10/2/2016
28. • There are multiple sources of citation data to consider
• No single source is perfect; all have different coverage
• Citation counts/metrics alone only tell part of the story
• Publishing and citing patterns are different in different disciplines (and even within disciplines like Psychology)
• ‘Outputs’ other than journal articles and conference proceedings may not be well represented
• Altmetrics give a view of the online attention surrounding a work
Summary
29. • Journal metrics (Elsevier)
• The State of Journal Evaluation (Thomson Reuters)
• Publish or Perish (Harzing)
• Measure Your Research Impact toolkit (Irish academic libraries)
• Leiden Manifesto, as published in Hicks, Diana, et al. “The Leiden Manifesto for research metrics.” Nature 520 (2015): 429-431.
• PlumX from Plum Analytics: another altmetrics tool not discussed in this presentation
The University has a current subscription to SciVal and InCites, research intelligence tools which include benchmarking based on citation data from Scopus and WoS respectively.
Follow up resources
Editor's Notes
Learn about the most common bibliometrics used to measure research impact and influence. Have a go at finding a journal's impact factor and calculating the h-index.
This is a complex area, I do not aim to teach you how to be bibliometricians – I’m not one myself!
Ask: I’d like to gauge where your priorities lie for the session today. Is there something specific you hope to get out of the session?
This is a complex area, I do not aim to teach you how to be bibliometricians – I’m not one myself!
There are metrics that you can access, with source data, which can help you to gain a picture of research impact. Ideally bibliometrics should be used alongside other ‘measures’ of impact or quality, which include:
-- successful funding applications
-- influence on policy/society
-- peer review
-- analysis of public engagement
We will look into a range of metrics and their limitations
Two versions of JCR – basic and InCites (graphical). Not much difference; basic is probably easier, though InCites allows you to use BOTH the Science Citation Index and the Social Sciences Citation Index
journal impact factors in the Journal Citation Reports database which ranks journals using citation data from ISI Citation Indexes in the Web of Knowledge based on the previous 2 years
5 year impact factor which extends the citation data over 5 years
Eigenfactor score which aims to measure total influence by the use of an algorithm which includes the journal impact factor
SCImago (see mah go)
The Impact Factor is calculated by dividing the number of citations in the JCR year by the total number of articles published in the two previous years.
An Impact Factor of 1.0 means that, on average, the articles published one or two years ago have been cited one time. An Impact Factor of 2.5 means that, on average, the articles published one or two years ago have been cited two and a half times.
The citing works may be articles published in the same journal. However, most citing works are from different journals, proceedings, or books indexed by Web of Science.
Eigenfactor scores are scaled so that the sum of the Eigenfactor scores of all journals listed in Thomson's Journal Citation Reports (JCR) is 100. In 2006, the journal Nature has the highest Eigenfactor score, with a score of 1.992. The top thousand journals, as ranked by Eigenfactor score, all have Eigenfactor scores above 0.01.
A journal's Article Influence score is a measure of the average influence of each of its articles over the first five years after publication.
Article Influence score measures the average influence, per article, of the papers in a journal. As such, it is comparable to Thomson Scientific's widely-used Impact Factor. Article Influence scores are normalized so that the mean article in the entire Thomson Journal Citation Reports (JCR) database has an article influence of 1.00.
In 2006, the top journal by Article Influence score is Annual Reviews of Immunology, with an article influence of 27.454. This means that the average article in that journal has twenty seven times the influence of the mean journal in the JCR.
SNIP is the ratio of a source's average citation count per paper and the citation potential of its subject field.
Si-MAhG-o
Global data – country ranks
A common one is the H-index, which is a peculiar measure. Anybody heard of it? What does it mean? What value is placed on it?
Not an average (though you can get average citations per article from WoS)
Demo WoS
Topic: wheezing in children
Sort: most highly cited
Choose author: Martinez
Create Citation Report
Mention that you would actually have to work a little harder to ensure that you have everything in the database by this author.
Possible to create a Scholar account which enables you (and others, if the account is public) to view citations to your work, and citation analysis.
The i10-index indicates the number of academic publications an author has written that have at least ten citations from others. It was introduced in July 2011 by Google as part of their work on Google Scholar, a search engine dedicated to academic and related papers.
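That definition can be sketched in one line of Python (citation counts invented for illustration):

```python
def i10_index(citations):
    """i10-index: number of publications with at least ten citations."""
    return sum(1 for cites in citations if cites >= 10)

# Papers cited 25, 12, 10, 9 and 3 times: three reach ten citations
print(i10_index([25, 12, 10, 9, 3]))  # 3
```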
Growing area of development in bibliometrics, that goes hand in hand with the Open Access movement. Rather than using the journal metrics as a proxy for article quality, it’s now possible to split that link and focus on the actual article – Have been able to do this in Web of Science for a long time.
Highlight differences in the count.
From OA journal PLOS One (Public Library of Science). No citations from other
Altmetric.com focuses, like traditional metrics, on journal articles.
ImpactStory looks at datasets, slides, videos, patents etc., not just articles
(>2000 cites) Older articles won’t necessarily get fresh attention.
Which source do you trust?