This document discusses several metrics for measuring scientific performance: the Impact Factor, H-Index, and Quality Factor. The Impact Factor is a metric calculated annually that measures how often a journal is cited. The H-Index measures both an individual scientist's productivity and impact by considering both the number of papers published and citations received. The Quality Factor is a newer system that aims to improve publication standards by auditing journals based on various criteria and providing a ranking. While these metrics provide useful indicators, they each have limitations and should be considered alongside other factors when evaluating scientific work.
The document discusses the h-index, a metric created by Jorge Hirsch to measure both the productivity and impact of a researcher's publications. The h-index is defined as the number of papers with at least h citations. This index improves upon metrics like total citations or publications by considering both quantity and citation impact. While originally created for physics, the h-index is now commonly used across disciplines to evaluate researchers and is also applied to journals and groups. The index has advantages over journal impact factors but also limitations, such as not accounting for variations between fields.
The document discusses research performance measurement (RPM) and metrics used to evaluate researchers and research output. RPM is used to identify research trends, evaluate researchers for tenure and promotion, support funding and grant applications, track institutional research performance, and inform policy decisions. Key metrics discussed include the h-index, which measures both research productivity and citation impact, and the impact factor, which indicates the average number of citations to articles published in a journal. The document provides details on calculating the h-index and where to find impact factors listed in the Journal Citation Reports.
Journal and author impact measures: Assessing your impact (h-index and beyond) – Aboul Ella Hassanien
This seminar was presented at the Faculty of Computers, Menoufia University, on Saturday 12 Dec. 2015. It was aimed at researchers and graduate students at Egyptian universities, to raise awareness of the importance of publication and scientific research: how a researcher's impact is measured and calculated, how journal impact measures are calculated (including newer metrics that differ from the journal impact factor), tips for postgraduate students on increasing citations of their published papers, and how to use open access publishing. It also discusses issues in the field of open access, including its advantages and disadvantages.
This document summarizes a virtual workshop on thesis writing and publication organized by Lavender Literacy Club and Cape Comorin Trust in collaboration with other institutions. It discusses research metrics, which are quantitative measures used to assess scholarly research outputs and impacts. Various metrics are explained, including journal metrics like impact factor, author metrics like h-index, and alternative metrics. The importance of research profiles, publishing ethics, and increasing research visibility and impacts are also covered.
This document discusses various metrics used for assessing research outputs and impact. It describes journal impact factor, which measures the average number of citations to recent articles published in that journal. It also discusses author-level metrics like the h-index, g-index, and i10-index, which measure an individual researcher's productivity and citation impact. These metrics are useful for tasks like grant allocation, benchmarking, hiring, promotions, and reviewing faculty/departments. However, no single metric should be considered in isolation as results can vary depending on the database or time period used.
It’s important to remember that the impact factor only looks at an average citation and that a journal may have a few highly cited papers that greatly increase its impact factor, while other papers in that same journal may not be cited at all. Therefore, there is no direct correlation between an individual article’s citation frequency or quality and the journal impact factor.
The document discusses author level metrics and how they are used to measure the impact of individual authors. It defines author level metrics as citation metrics that measure the bibliometric impact of individual researchers. It also discusses different types of author level metrics, including article-level metrics, journal-level metrics, h-index, i10-index, g-index, and altmetrics. Finally, it discusses tools that can be used to measure author metrics, such as Google Scholar, Web of Science, Scopus, and Publish or Perish.
The document discusses various academic metrics used to measure the impact and quality of scholarly work, including journals, authors, and institutions. It defines ISSN numbers, journal finders, DOIs, SJR, impact factors, indexing services, and Google Scholar metrics. It explains how to calculate the h-index, i10-index, and g-index for authors. It distinguishes between reference lists and bibliographies and discusses various referencing styles. It provides examples of citations in the Vancouver referencing style. The document is intended as a guide to help understand different methods used to evaluate academic work.
Journal ranking metrics: new perspective in journal performance management – Aboul Ella Hassanien
The document discusses various metrics for evaluating journals and research, including impact factor, immediacy index, and the h-index. It provides definitions and explanations of how these metrics are calculated. For example, it explains that impact factor is calculated by dividing the number of citations in the current year by the total number of articles published in the previous two years. It also discusses some limitations and criticisms of solely relying on impact factor for evaluation.
The presentation discusses about a Thesis, Research paper, Review Article & Technical Reports: Organization of thesis and reports, formatting issues, citation methods, references, effective oral presentation of research. Quality indices of research publication: impact factor, immediacy factor, H- index and other citation indices. A verbal consent of Prof. Dr. C. B. Bhatt was obtained (at 4.15pm on Dt. 26-11-2016 at Hall A-2, GTU, Chandkheda) to float the presentation online in benefits of the research scholar society.
Paradoxical betweenness in Academic endeavors and research metrics – Saptarshi Ghosh
"Publish or perish" is an aphorism describing the pressure to publish academic work in order to succeed in an academic career. The pressure to publish has been cited as a cause of poor work being submitted to academic journals.
The document discusses reasons for publishing research, types of journals (open access), the peer review process, indexing of journals, impact factor and its limitations, and the H-index. It recommends publishing in peer-reviewed, open access journals that are indexed in reputed databases like PubMed and have no or low publication fees. It also notes impact factor should not be the sole criteria for journal selection.
Bibliometrics presentation, Window on Research June 2010 – Jenny Delasalle
The document discusses bibliometrics and how they are used to measure research impact and performance. It describes journal impact factors, the H-index, and citation metrics. Bibliometrics are used by universities and funding bodies like HEFCE to evaluate staff performance and target support. The document provides tips for authors to increase their citation counts and research profiles, such as publishing in open access journals and high impact journals, and using their university's research repository to boost visibility.
This document discusses research metrics and how they are used to measure the impact and influence of scientific research. It defines several types of metrics including journal impact factors, author metrics, article metrics, and altmetrics. It also explains how impact factors are calculated for journals and describes other measures like the h-index, SNIP, and IPP that provide additional ways to evaluate research outputs and impacts. Scopus and the Web of Science are identified as databases used to find citation counts and metrics.
The document discusses various quality indices used to evaluate research publications and authors. It defines indices such as the impact factor, immediacy index, Eigenfactor, SCImago Journal Rank, H-index, G-index, and HB-index. It provides details on how each index is calculated and its significance. It also discusses limitations of impact factor and compares different journal quality indices. The document aims to explain these quality metrics to evaluate journals and authors.
This document discusses various quality indices used to evaluate research publications and authors. It defines indices such as the impact factor, immediacy index, Eigenfactor, SCImago Journal Rank, H-index, G-index, and HB-index. It provides details on how each index is calculated and its purpose. For example, the impact factor measures the average number of citations to articles in a journal, while the H-index quantifies an individual author's scientific research output based on both their productivity and citation impact. The document also notes some criticisms of these indices and how they can be determined using databases like Web of Science and Scopus.
Publishing in Credible Journals and disseminating Research to different Audi... – tccafrica
This document provides information on publishing research in credible journals and disseminating research to different audiences. It discusses the history of scholarly publishing, reasons for publishing, what makes a journal credible, issues with impact factor and predatory journals. Specifically, it outlines the brief history of scholarly publishing dating back to the 14th century. It explains that publishing can improve careers by increasing one's h-index measure. It also provides tips on assessing the credibility of journals based on peer review process, citation indices, publishing history and impact factor. Finally, it warns about predatory open access journals and provides indicators for identifying them.
What Faculty Need to Know About Open Access & Increasing Their Publishing Im... – Charles Lyons
This document discusses open access publishing and alternative metrics to measure scholarly impact beyond traditional journal impact factors. It notes that open access publishing can provide more readers and citations, leading to greater impact. The document explores metrics like the h-index and eigenfactor that may better capture an individual researcher's impact across disciplines. It finds that open access articles tend to be cited more frequently than non-open access articles, including a 64% citation advantage for social work articles. The document encourages researchers to consider open access options and institutional repositories to broaden the reach of their work.
Impact Factor and the Evaluation of Scientists - a book chapter by Nicola de ... – Xanat V. Meza
Disclaimer: all original texts and images belong to their rightful owners.
Chapter 6 of the Book "Bibliometrics and citation analysis" by Nicola de Bellis.
Research metrics are quantitative analyses used to assess the quality, impact, and influence of scholarly research outputs. Key metrics include journal impact factors, author metrics, article metrics, and altmetrics. Journal impact factors are calculated based on the number of citations a journal's articles receive. Author metrics measure researcher impact and productivity. Article metrics track citations of individual works. Altmetrics provide broader measures of online attention and impact.
This document introduces two new journal metrics, SJR and SNIP, that have been endorsed by Elsevier's Scopus database. SJR measures journal prestige by weighting citations based on the status and reputation of the citing journal. SNIP accounts for differences in citation potential across research fields by normalizing a journal's raw citation impact based on the average citations in its subject field. The document compares the two new metrics to traditional journal impact factors and discusses their potential uses for publishers, librarians, and researchers to evaluate journal performance and research impact.
This document provides guidance on selecting an appropriate journal to publish research. It discusses factors to consider like the paper's content, intended audience, and journal scope. It also covers differences between indexed and non-indexed journals, as well as open access and subscription models. Metrics for evaluating journals are defined, including impact factor, eigenfactor, h-index, and quartiles. The differences between Scopus and Web of Science databases are outlined. Tools for preliminary journal searches like Ulrich's and journal finder databases are recommended. The presentation emphasizes understanding journal metrics and selection criteria before submitting to ensure matching research with a suitable publication outlet.
4. History
The creators of the impact factor, Eugene Garfield and Irving Sher, set out to devise a metric for evaluating journals. Following Garfield's 1955 proposal of citation indexing and an experimental Genetics Citation Index, the Science Citation Index (SCI) was launched in the early 1960s, and Garfield and Sher introduced the metric as a way of selecting new journals for inclusion in the SCI. Since then, a journal's impact factor has commonly been assumed to correlate directly with its quality.
5. What is an Impact Factor?
The Journal Impact Factor (JIF) is a measure of the authority and
importance of academic journals, calculated every year. Since its
establishment in 1975, the JIF has been calculated by different
organizations. Currently, Clarivate (formerly Thomson Reuters) calculates
and publishes the Journal Citation Reports (JCR) annually; the reports are
accessible through the Web of Science platform.
6. Impact Factor Calculation
The Impact Factor for a given year is calculated by dividing the number of
citations the journal received that year to items published in the two
preceding years by the number of citable items it published in those two
years. You should therefore read the metric as an indicator of how often a
journal is cited in other publications. Logically, the more a journal is
cited, the more people trust and rely on the validity of its information,
so high-impact journals can be considered a good source for your research.
Conversely, the Impact Factor can help you choose where to publish your
Ph.D. or other research work.
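The two-year calculation above can be sketched in a few lines of Python. The figures here are hypothetical, purely to illustrate the arithmetic; they do not describe any real journal.

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Journal Impact Factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: in 2023 it received 1,200 citations to articles
# from 2021-2022, which together contained 400 citable items.
print(impact_factor(1200, 400))  # 3.0
```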
7. Criticisms of the Impact Factor
However, critics of the metric point to disadvantages and flaws in its
calculation. One suggestion is that it would be more appropriate to report
the median rather than the mean of the citation data the Impact Factor
uses. Critics also argue that the JIF cannot accurately predict where to
publish: an individual article may receive few citations even if it
appears in a frequently cited journal. Addressing this wave of skepticism
toward the metric, the European Association of Science Editors restated
its purpose in 2017: the JIF should be used to assess the influence of an
entire journal, not individual publications or researchers.
9. The H-Index
In 2005, Jorge E. Hirsch of UCSD published a paper in PNAS in which he put
forward the h-index as a metric for measuring and comparing the overall
scientific productivity of individual scientists. The h-index was quickly
adopted as the metric of choice by many committees and bodies.
10. The H-Index
Conceptually, the h-index is quite simple. We plot the number of papers
versus the number of citations you (or someone else) have received, and
the h-index is the number of papers at which the 45-degree line
(citations = papers) intersects the curve, as shown in the diagram. That
is, h equals the number of papers that have received at least h citations.
For example, do we have one publication that has been cited at least once?
If yes, we move on to the next publication. Have our two most-cited
publications each been cited at least twice? If yes, our h-index is at
least 2. We keep going until we reach a “no.”
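The stop-at-the-first-“no” procedure above translates directly into code. A minimal sketch in Python, using a made-up citation list:

```python
def h_index(citations):
    """h = the largest number such that h papers have at least h citations.
    Sort citation counts in descending order, then walk down the list
    until a paper's citations fall below its rank."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank   # this paper still has at least `rank` citations
        else:
            break      # the first "no": stop here
    return h

# Hypothetical researcher with five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: one blockbuster paper can't raise h alone
```

Note how the second example illustrates the point made below: a single highly cited paper does not by itself produce a high h-index.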
11. The H-Index
So, if we have an h-index of 20, then that means we have 20 papers
with at least 20 citations. It also means that we are doing pretty well
with our science!
The advantage of the h-index is that it combines productivity (i.e.
number of papers produced) and impact (number of citations) in a
single number. So, both productivity and impact are required for a
high h-index; neither a few highly cited papers nor a long list of
papers with only a handful of (or no!) citations will yield a high h-
index.
What is a Good h-Index?
Hirsch reckons that after 20 years of research, an h-index of 20 is
good, 40 is outstanding, and 60 is truly exceptional.
12. Limitations of the H-Index
Although having a single number that measures scientific performance
is attractive, the h-index is only a rough indicator of scientific
performance and should only be considered as such.
Limitations of the h-index include the following:
• It does not take into account the number of authors on a paper. A
scientist who is the sole author of a paper with 100 citations should
be given more credit than one who is on a similarly cited paper with
10 co-authors.
• It penalizes early-career scientists. Outstanding scientists with only
a small number of publications cannot have a high h-index, even if
all of those publications are ground-breaking and highly cited. For
example, if Albert Einstein had died in early 1906, his h-index would
be stuck at 4 or 5, despite his being widely acknowledged as one of
the most important physicists, even considering only his publications
to that date.
13. Limitations of the H-Index
• Review articles have a greater impact on the h-index than original
papers since they are generally cited more often.
• Use of the h-index has now broadened beyond science. However, fields
and disciplines are difficult to compare directly, so a universally
‘good’ h-index is impossible to define.
14. Calculating the H-Index
There are several online resources and h-index calculators for
obtaining a scientist’s h-index. The most established are ISI Web of
Knowledge, and Scopus, both of which require a subscription
(probably via your institution), but there are free options too, one of
which is Publish or Perish.
If you check your own (or someone else’s) h-index in each of these
databases, you may get different values. This is because each uses a
different database to count total publications and citations: ISI and
Scopus use their own databases, while Publish or Perish uses Google
Scholar. Each database has different coverage, so each will produce a
different h-index value. For example, ISI has good coverage of journal
publications but poor coverage of conferences, while Scopus covers
conferences better but has poor journal coverage pre-1992.
h-index = the largest number h such that h publications have received at
least h citations each.
15. The H-Index Summed Up
The h-index provides a useful metric for scientific performance, but only
when viewed in the context of other factors. When making decisions that
matter to you (funding, a job, finding a PI), be sure to read through
publication lists, talk to other scientists, students, and peers, and take
career stage into account. Keep in mind that the h-index is only one
consideration among many; you should certainly know your h-index, but it
doesn’t define you (or anyone else) as a scientist.
17. Quality Factor Approach
The Quality Factor is a system that seeks to improve publication services
with an emphasis on future results. It is based on Deming’s PDCA
(Plan-Do-Check-Act) cycle, applied to measure the standard of journals and
publications. If necessary, the plan is revised on the basis of the
results, so that improvement is ongoing.
19. Quality Factor Calculation
Journal selection criterion: to be eligible for Quality Factor
measurement, an academic journal must have an ISSN. The Quality Factor is
calculated with the formula below:
Quality Factor =
Q1+Q2+Q3+Q4+Q5+Q6+Q7+Q8+Q9+Q10+Q11+Q12+Q13+Q14+Q15
The audit parameters and grading grid used for the Quality Factor
calculation are listed in a separate table.
All journals are measured yearly against these criteria, and the Quality
Factor rankings are shared in order to improve journal standards. The
Journal Quality Factor Reports (JQR) are published annually; each report
also includes the previous year's Quality Factor and shows rankings of
journals by Quality Factor.
20. Quality Factor Criticisms
Frequent criticisms have been made of the use of the Quality Factor. For
one thing, the Quality Factor might not be consistently reproducible in an
independent audit based on the same journal data. There is also a more
general argument about the rationality of the Quality Factor as a measure
of journal standards, and about the strategies editors and publishers may
adopt to improve their Quality Factor. Other criticism focuses on the
consequences of the Quality Factor for the behavior of scholars, editors,
and other stakeholders. A further weakness of the system is a general
tendency for individuals citing a journal to be influenced by its already
published QF. Not all of these criticisms may be valid, however, and the
public should also understand the benefits of the Quality Factor.
21. Categories of Quality Factor Status
There are three classes of Quality Factor status defined by NOT-2016/21:
General, Special, and Roster. These classes are the equivalents of the
Category I, Category II, and Roster statuses previously defined in
NOT-2016/21. The current definitions are below.
General Status: a journal with a high Quality Factor (QF 5.0 and above) is
a General Status journal.
Special Status: a journal with an intermediate Quality Factor (QF 3.5 to
4.9) is a Special Status journal.
Roster Status: a journal with a low Quality Factor (QF below 3.4) is a
Roster Status journal.
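The QF formula and the status thresholds above combine into a short sketch. The audit scores here are hypothetical; the thresholds are taken from the text, and since the source leaves the 3.4–3.5 range undefined, this sketch treats everything below 3.5 as Roster.

```python
def quality_factor(scores):
    """Quality Factor = sum of the 15 audit parameter scores Q1..Q15."""
    if len(scores) != 15:
        raise ValueError("expected exactly 15 audit parameter scores")
    return sum(scores)

def qf_status(qf):
    """Map a Quality Factor to its status class (thresholds from the text;
    the 3.4-3.5 gap in the source is resolved as Roster below 3.5)."""
    if qf >= 5.0:
        return "General"
    if qf >= 3.5:
        return "Special"
    return "Roster"

# Hypothetical journal scoring 0.25 on each of the 15 audit parameters:
qf = quality_factor([0.25] * 15)
print(qf, qf_status(qf))  # 3.75 Special
```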