Defining the h index and its calculation process, along with its main advantages and limitations, and how to increase the h index.
Dr. Hassan Najman MUHAMED
hassan.muhamed@uod.ac
The University of Duhok - Kurdistan region of Iraq
Scopus is Elsevier’s abstract and citation database, launched in 2004. Scopus covers 36,377 titles from 11,678 publishers, of which 34,346 are peer-reviewed journals in the top-level subject fields: life sciences, social sciences, physical sciences, and health sciences.
Journal impact measures: the Impact Factor (Torres Salinas)
The seminar on impact measures will first shed light on the best-known and most controversial indicator, namely Garfield’s Journal Impact Factor. Its strengths and weaknesses, as well as its correct use, will be discussed thoroughly. Moreover, the corresponding analytical tool, Clarivate Analytics’s Journal Citation Reports, will be demonstrated.
Presented at the European Summer School for Scientometrics (ESSS), July 16, 2019, Louvain.
Elsevier's Scopus.com upgraded the Journal Analyzer with Source Normalized Impact per Paper (SNIP), which measures a source's contextual impact, and SCImago Journal Rank (SJR), which measures the scientific prestige of scholarly sources.
These indicators will be applied to all journals indexed by Scopus and will be freely available to both subscribers and non-subscribers at scopus.com and www.journalmetrics.com.
RESEARCH METRICS
Research metrics is the quantitative analysis of scientific and scholarly outputs and their impacts. Research metrics measure impact and provide insight into the influence of specific journal publications, individual articles, and authors.
The presentation discusses theses, research papers, review articles, and technical reports: organization of theses and reports, formatting issues, citation methods, references, and effective oral presentation of research. Quality indices of research publication: impact factor, immediacy factor, h index, and other citation indices. A verbal consent of Prof. Dr. C. B. Bhatt was obtained (at 4.15 pm on 26-11-2016 at Hall A-2, GTU, Chandkheda) to float the presentation online for the benefit of the research scholar community.
Scopus: the largest abstract and citation database of peer-reviewed literature (Sumit Kumar Gupta)
Scopus is the largest abstract and citation database of peer-reviewed literature: scientific journals, books and conference proceedings. Delivering a comprehensive overview of the world's research output in the fields of science, technology, medicine, social sciences, and arts and humanities, Scopus features smart tools to track, analyse and visualise research.
As research becomes increasingly global, interdisciplinary and collaborative, you can make sure that critical research from around the world is not missed when you choose Scopus.
There is no simple formula for determining the best journal in which to publish your manuscript. However, analyzing various parameters may help you decide which journal best suits your publication. The following are some of those criteria:
https://www.cognibrain.com/criteria-for-selecting-journal-for-publication/
Impact Factor journals as per JCR, SNIP, SJR, IPP, CiteScore (Saptarshi Ghosh)
Journal-level metrics
Metrics have become a fact of life in many, if not all, fields of research and scholarship. In an age of information abundance (often termed ‘information overload’), having a shorthand signal for where in the ocean of published literature to focus our limited attention has become increasingly important.
Research metrics are sometimes controversial, especially when in popular usage they become proxies for multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis based on its underlying data source, method of calculation, or context of use. For this reason, Elsevier promotes the responsible use of research metrics, encapsulated in two “golden rules”: always use both qualitative and quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more than one research metric as the quantitative input. The second rule acknowledges that performance cannot be expressed by any single metric, and that all metrics have specific strengths and weaknesses. Using multiple complementary metrics can therefore provide a more complete picture and reflect different aspects of research productivity and impact in the final assessment. (Elsevier)
Citation metrics across disciplines - Google Scholar, Scopus, and the Web of ... (Anne-Wil Harzing)
Key conclusions:
1. Will the use of citation metrics disadvantage the Social Sciences and Humanities?
* Not if you use a database that includes publications important in those disciplines (e.g. books, national journals)
* Not if you correct for differences in co-authorship
2. Is peer review better than metrics for the Social Sciences and Humanities?
* Yes, in a way: the ideal version of peer review (informed, dedicated, and unbiased experts) is better than a reductionist version of metrics
* However, an inclusive version of metrics is probably better than the likely reality of peer review (hurried semi-experts, potentially influenced by journal outlet and affiliation)
Metrics vs peer review: why metrics can (and should?) be applied in the Socia... (Anne-Wil Harzing)
Reviews the debates on metrics vs peer review and suggests that we are comparing the idealised version of peer review to the reductionist version of metrics. Instead, we should compare the reality of peer review with the inclusive version of metrics.
Impact Factor and the Evaluation of Scientists - a book chapter by Nicola de ... (Xanat V. Meza)
Disclaimer: all original texts and images belong to their rightful owners.
Chapter 6 of the book "Bibliometrics and Citation Analysis" by Nicola de Bellis.
Citation metrics versus peer review: Google Scholar, Scopus and the Web of Sc... (Anne-Wil Harzing)
This presentation reports on a systematic and comprehensive comparison of the coverage of the three major bibliometric databases: Google Scholar, Scopus, and the Web of Science. Based on a sample of 146 senior academics in five broad disciplinary areas, we provide both a longitudinal and a cross-disciplinary comparison of the three databases.
Our longitudinal comparison of eight data points between 2013 and 2015 shows a consistent and reasonably stable quarterly growth for both publications and citations across the three databases. This suggests that all three databases provide sufficient stability of coverage to be used for more detailed cross-disciplinary comparisons.
Our cross-disciplinary comparison of the three databases includes four key research metrics (publications, citations, h-index, and hI,annual, an annualised individual h-index) and five major disciplines (Humanities, Social Sciences, Engineering, Sciences and Life Sciences). We show that both the data source and the specific metrics used change the conclusions that can be drawn from cross-disciplinary comparisons.
Citations, often termed intellectual transactions, acknowledgments of intellectual debts, or conceptual associations, are links between the author’s current study and previously published work. They not only lend credibility to the author’s work but also help funders evaluate the impact of the research. Citation indexes are maintained for information retrieval of both cited and citing works, facilitating the literature search process, and they help authors identify the number of citations their papers have received. Citation data is considered a legitimate measure for ranking authors, journals, and publishers. Through this webinar, we aim to provide information about citation indexing and how authors and publishers can get indexed in established citation databases.
2. Today’s article
Bornmann L, Daniel HD. What do we know about the h index? J Am Soc Inf Sci Technol. 2007;58(9):1381-1385.
Review of research inspired by: Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569-16572.
Also will allude to: Bornmann L, Daniel HD. The state of h index research. EMBO Rep. 2009;10(1):2-6.
3. What is the h index?
Proposed by Hirsch (2005).
Quantifies the scientific output of a single researcher as a single number: an h index of 40 means that a scientist has published 40 papers that each had at least 40 citations.
“… original and simple new measure incorporating both quantity and visibility of publications”
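The counting rule on this slide can be sketched in a few lines of Python (an illustrative sketch, not part of the original deck; the function name is mine): sort a researcher's papers by citation count, then find the last rank at which the paper's citations still match or exceed its rank.

```python
def h_index(citations):
    """Return the largest h such that the researcher has h papers
    with at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)       # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:                          # paper at this rank still clears the bar
            h = rank
        else:
            break                                  # ranks only grow, citations only shrink
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([0, 0, 0]))         # 0: published but uncited papers give h = 0
```

This is the same procedure as scanning a citation-sorted results list in Web of Science or Scopus until the rank overtakes the citation count.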
4. The h index and the scientific life cycle
A scientist’s h index never decreases. It increases as:
New high-impact papers are published
Sleeping Beauties come to life (van Raan, 2004)
A scientist’s papers attract citations
5. Advantages of h
Estimates the broad impact of cumulative research contributions (Hirsch, 2005).
Insensitive to citation extremes: infrequently cited or uncited papers, and “one-hit wonders”.
It “favors enduring performers that publish a continuous stream of papers with lasting and above-average impact.”
9. (Screenshot of a citation-sorted results list) Article #111 in the results list is cited by 111 articles, so h = 111.
10. Applications of the h index
Micro vs. meso level applications:
Micro: a single researcher
Meso (intermediate): a group of researchers, scientific facilities, or countries (Csajbók et al., 2007)
The h-b index, based on a topic or compound search in Web of Science (Banks, 2006): how much work has already been done on a topic? What topic is hot or mainstream now?
11. Convergent validity of the h index
Hirsch computed h indices for:
Nobel Prize physicists of the last 20 years
Physicists and astronomers elected to the National Academy of Sciences in 2005
Highly cited scientists in the biological and biomedical sciences
Hirsch’s threshold values:
Successful scientist: h = 20 after 20 years
Outstanding scientist: h = 40 after 20 years
Truly unique individual: h = 60 after 20 years, or h = 90 after 30 years
28. Potato, potahto
Simplistic, BUT it measures the broad impact of one’s work.
Can’t differentiate between active and inactive scientists, BUT it never decreases.
Can’t differentiate between works that are no longer significant and those now shaping scientific thinking, BUT it also can’t spotlight works that are merely “trendy” (is this a bad thing?).
29. Different disciplines have different citation patterns: h should be used to compare only scientists of similar professional age working in similar disciplines, BUT Hirsch himself noted that h in the biosciences runs higher than in physics.
Combining publication and citation frequencies into one value “posits an equality between two quantities with no evident logical connection”, BUT it is preferable to other single-number criteria (total number of papers; total number of citations; citations per paper).
30. “Think of two scientists, each with 10 papers with 10 citations, but one with an additional 90 papers with 9 citations each; or suppose one has exactly 10 papers of 10 citations and the other exactly 10 papers of 100 each. Would anyone think them equivalent?” BUT two individuals with similar h values are comparable in terms of their overall scientific impact, even if their total numbers of papers or citations are very different.
31. Indices that correct or complement h
One that corrects for self-citations*
m quotient: corrects for bias toward researchers with longer careers and more papers*
Ones that look at a definite time period, such as c (for the most recent calendar year)
hI: normalizes for different numbers of co-authors in different fields*
a and g indexes: more sensitive to highly cited papers, the scientist’s “top performers”
* Proposed or developed by Hirsch himself
32. Conclusion
There is no thorough validation of the h index yet in its various applications. Validation would need to entail cross-discipline comparison and broad statistical data. So the h index and its derivatives and alternatives “should not (yet) be used as a criterion to inform decision making in science…”
33. What do we know about the h index?
Pat Weiss, HSLS Journal Club, March 4, 2009
Editor's Notes
h presumably stands for “Hirsch”, the physicist at UCSD who first proposed the index in 2005.
The paper has been cited 267 times (last time I checked).
h = 0: the author may have published papers; they just haven’t been cited.
Hirsch: an individual’s h should increase more or less directly with time.
van Raan (2004): “A ‘Sleeping Beauty in Science’ is a publication that goes unnoticed (‘sleeps’) for a long time and then, almost suddenly, attracts a lot of attention (‘is awakened by a prince’).”
Data needed for the calculation is available in: Web of Science, Scopus, Google Scholar, Chemical Abstracts.
In WoS, do this with an author search in the straight Search (not Cited Ref Search). *CLICK* Or, use the Citation Report creator…
… and read h directly off the report.
In Scopus, Starzl’s h = 87 (different database, different results).
Countries: in biology and biochemistry for the period 1996–2006, the US, the UK, and Germany have h indices of 400, 219, and 206 respectively (Csajbók et al., 2007).
Complement to journal impact factors (Braun et al., 2005). Caveat: don’t include review journals, because the upper limit of a journal’s h is the number of papers published, and review journals don’t publish very many.
Banks (2006): the h-b index for interesting topics and compounds is found by entering a topic (a search string, like superstring or teleportation) or a compound (name or chemical formula) into the Web of Science database and then ordering the results by citations, largest first. It shows how much work has already been done on certain topics or compounds, what the hot topics (or older topics) of interest are, or what topic or compound is mainstream research at the present time.
Convergent validity: the ability of a measurement scale to correlate (or converge) with other measures of the same variable. How does h relate to other standards for evaluating research performance? Other bibliometric indicators; outcomes of peer review.
Boehringer Ingelheim Fonds (BIF) = an international foundation for the promotion of basic biomedical research.
van Raan’s actual conclusion is more nuanced:“Results show that the h-index and our bibliometric ‘crown indicator’ [a composite measure] both relate in a quite comparable way with peer judgments. But for smaller groups in fields with ‘less heavy citation traffic’ the crown indicator appears to be a more appropriate measure of research performance.” [emphasis mine]
Database = SPIRES (Stanford Physics Information Retrieval System database)
As derived from Web of Science records (the same holds true for any other resource whose records don’t include a definitive author ID, which covers all products).
From the WoS documentation: “The h-index factor is based on the depth of your Web of Science subscription and your selected timespan. Items that do not appear on the Results page will not be factored into the calculation. If your subscription depth is 10 years, then the h-index value is based on this depth even though a particular author may have published articles more than 10 years ago.”
h as an alternative to the inappropriate use of journal impact factors to report personal achievement in tenure and promotion decisions.
My Dec 2006 HSLS Update article on this problem: http://www.hsls.pitt.edu/about/news/hslsupdate/2006/december/personal_cit_counting
But for every advantage of h cited by Hirsch and colleagues who see the glass half full, someone (sometimes the same person) points out a problem.
Bornmann & Daniel (2007) citing Glänzel (2006)
Hirsch (2005)
Bornmann & Daniel (2007) citing Sidiropoulos et al. (2006)
Bornmann & Daniel (2007) citing Sidiropoulos, Katsaros, & Manolopoulos (2006)
Bornmann & Daniel (2007) citing Sidiropoulos et al. (2006)
Bornmann & Daniel (2007) citing Sidiropoulos et al. (2006)
Bornmann & Daniel (2007) citing Kelly & Jennions (2006); Sidiropoulos et al. (2006)
Hirsch (2005)
Bornmann & Daniel (2007) citing Lehmann et al. (2008)
Hirsch (2005)
Bornmann & Daniel (2007) citing the Joint Committee on Quantitative Assessment of Research (2008)
Hirsch (2005)
N.B.: The need for several such indices was suggested by Hirsch himself.
m quotient: corrects for bias toward researchers with longer careers and more papers.
hI: more co-authors, more articles, and potentially more citations (?)
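The co-author normalization behind hI can be sketched as well. One common variant (hI,norm, as implemented in Harzing's Publish or Perish; the exact formula here is my assumption, not stated in these notes) divides each paper's citation count by its number of authors and then applies the ordinary h counting rule:

```python
def hi_norm(papers):
    """Co-author-normalized h: divide each paper's citations by its
    author count, then apply the usual h counting rule.
    (hI,norm-style normalization; an illustrative assumption, not from the slides.)"""
    normalized = sorted((cites / authors for cites, authors in papers), reverse=True)
    return sum(1 for rank, c in enumerate(normalized, start=1) if c >= rank)

papers = [(20, 2), (9, 3), (6, 1), (4, 4)]  # (citations, number of authors)
print(hi_norm(papers))  # 3, versus a plain h of 4 for the raw counts [20, 9, 6, 4]
```

Heavily co-authored papers are discounted, which addresses the "more co-authors, potentially more citations" bias noted above.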