Elsevier - MedicReS World Congress 2011

Measuring Journal Prestige

  • Opening questions to be considered one by one with optional audience participation and discussion after each. These are meant to gauge the audience’s understanding of journal prestige and bibliometrics coming into the presentation.
    The goal is to get the audience to think about why certain journals are considered good and others bad.
  • The 3 questions that the presentation will attempt to answer are:
    Why is there a need to discern the quality of journals?
    What metrics are used to compare journals?
    Which journal is appropriate for a researcher to submit their research to?
  • Section title Slide
  • For every generation of scholars, the problem of information overload has seemed insurmountable. The annual launch of new journals has often been seen as a contributing factor to the burgeoning literature, rather than a consequence of it. In actuality, the increase in journals over time has mirrored the increase in the number of researchers. Information overload may be occurring, but it is not because there are too many journals; it is due to the growth in the number of scientists.
    Bibliometric analysis can serve as a tool to demonstrate how a newly launched journal contributes uniquely to the community it serves and paves the way for the dissemination of research.
  • Here is one example of how crowded some subject areas have become.
    In the category of “Analytical Chemistry”, 11 journal titles are shown here.
    The point is that not only do the titles sound very similar, but it is also difficult to judge which journal might be better than the others unless you have a deeper understanding of the journals’ histories, editors, prestige, and audiences.
    Furthermore, these are just 11 of more than 60 journals in this subject area.
    The second point is that journals must vigorously compete with one another to attract the best authors and best editors and elevate their reputation and prestige.
    There are various metrics which can serve as useful tools to help in assessing the relative standing of journals.
  • Section Title Slide
  • Citation data and bibliometrics can be used effectively as one means of measuring the impact or influence of articles, authors, and journals.
    Academic institutions often use journal metrics to evaluate faculty for tenure and promotion.
    Some of the more commonly used journal metrics are listed here and each will be described in detail in the slides ahead:
    Impact Factor
    H-index
    SCImago Journal Rank
    Usage
  • In the field of bibliometrics, or citation metrics, Thomson has developed a suite of products that track the number of citations a journal receives. Thus, the most common way we judge the “quality” of a journal is by how many times its articles are cited by other articles.
    The ISI Web of Science is a Citation index that tracks citations for science, social science, and arts journals.
    The Impact Factor is a metric that is calculated for science and social science journals as part of the Journal Citation Reports.
    The Impact Factor has become the traditional metric that is used to track citations and compare journals in the same field. It is the most well-known and most established metric.
  • The Impact Factor is defined as a ratio between the citations received and the recent citable items published in a journal. In other words, it is the average number of citations received per published article.
    The IF calculations are standardized by taking the total citations in the current year to articles published in the past two years (blue box);
    And dividing that by the total number of articles published in the past two years (red box).
    These calculations are carried out for all of the journals listed in Thomson’s ISI database and are published 6 months into each new year.
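The two-year calculation described above can be sketched in a few lines of code. The numbers below are hypothetical, not taken from any real journal:

```python
def impact_factor(citations_to_recent, citable_items_recent):
    """IF for year Y: citations received in year Y to items published in
    Y-1 and Y-2 (the blue box), divided by the number of citable items
    published in Y-1 and Y-2 (the red box)."""
    return citations_to_recent / citable_items_recent

# Hypothetical journal: 1200 citations in 2006 to the 400 articles
# it published in 2004-2005 gives a 2006 Impact Factor of 3.0.
print(impact_factor(1200, 400))  # 3.0
```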
  • Another weakness of IFs is that they are very discipline-dependent.
    As can be seen from this first chart, the IF varies greatly by subject area with biomedical fields receiving many more citations on average than the mathematics, social science, or materials science fields.
    Even when we look within the same subject area at different categories (such as in Materials Science in this second chart), there are still widely varying IF values.
    It is ineffective to compare journals across categories and especially across subject areas as the research communities in each exhibit different behaviors when it comes to citations and referencing.
  • Finally, it is important to summarize the weaknesses and proper uses of the IF:
    It should be used as a tool for libraries to assess their collection development and whether the resources they are providing are meeting the needs of their users.
    Even though the IF may serve as a good benchmark for a journal’s importance to its field, it should not be used as a direct proxy for quality alone considering how the IF can be manipulated and considering that in many scientific circles, citations are not the purest form of a journal’s worth.
    The IF should not be used to compare different types of journals (e.g. review journal versus a letter journal).
    The IF should not be used to compare journals in different fields.
    The IF should not be used by authors to derive “vanity” ratings.
    And, though it can be, the IF should not be manipulated by authors, reviewers, editors, and publishers in the way they treat citations.
  • There are other journal metrics that are commonly associated with Impact Factors and provide supplementary ways to describe the citation behavior for a particular journal:
    1. The cited half-life is the median age of the journal’s articles cited in the current year.
    A higher or lower cited half-life does not imply any particular value for a journal. For instance, a primary research journal might have a longer cited half-life than a journal that provides rapid communication of current information. Cited half-life figures may be useful to assist in collection management and archiving decisions.
    2. The immediacy index is the average number of times an article is cited in the year it is published. For comparing journals specializing in cutting-edge research, the immediacy index can provide a useful perspective.
    3. The influence indicates the share of citations that an individual title or publisher has within a given subject category or subject group.
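The first two of these metrics follow directly from their definitions; a minimal sketch with invented citation records (not Thomson's actual implementation):

```python
from statistics import median

def cited_half_life(citing_year, cited_pub_years):
    """Median age (in years) of the journal's articles cited in citing_year."""
    return median(citing_year - y for y in cited_pub_years)

def immediacy_index(same_year_citations, articles_published):
    """Average number of times an article is cited in its publication year."""
    return same_year_citations / articles_published

# Citations made in 2010 to articles the journal published in these years:
print(cited_half_life(2010, [2009, 2008, 2008, 2005, 2001]))  # 2
# 30 same-year citations to the 60 articles published that year:
print(immediacy_index(30, 60))  # 0.5
```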
  • A metric that is gaining weight and momentum is the H-index.
    It was developed just a few years ago by Jorge Hirsch and is used to rate not only journals, but also individuals.
    The H-index takes into account both the quantity of output by a journal or author as well as the quality of the journal or author (as determined by the number of citations).
  • By definition, the H-index is most easily calculated when the papers are lined up in decreasing order of the number of citations they received.
    The H-index is the largest number h such that h papers have each received at least h citations.
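That definition translates directly into code; a minimal sketch with made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

# Five papers with these citation counts: four of them have >= 4 citations
# each, but there are not five with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```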
  • In Elsevier’s case, we have adopted the H-index as a built-in feature of our abstracting and indexing database, Scopus.
    If we searched for Hirsch’s articles and ordered them according to the number of citations they received (right-hand column of the table), we can then calculate Hirsch’s own H-index.
    A curve can then be generated which pinpoints his H-index as 16.
  • Analogously, journals can also have H-indexes which are based on all the articles that have ever been published in the journal and based on all the citations that the journal may have ever received.
  • Another major alternative to the IF is the SCImago Journal Rank, or SJR.
    This metric was developed in Spain and uses Scopus data as its source.
    Unlike the IF and H-index, the equation used to calculate the SJR is much more complicated.
    Qualitatively, the SJR uses the number of citations a journal receives for articles published in the last 3 years, but weights those citations according to the SJR of the citing journal. In other words, citations from highly ranked journals count for more than citations from lower-ranked journals. This is similar to the way Google PageRank works.
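The core idea of prestige-weighted citations can be illustrated with a small PageRank-style iteration. This is a toy sketch of the principle, not the published SJR algorithm, and the citation matrix is invented:

```python
def prestige_scores(cites, damping=0.85, iterations=50):
    """cites[i][j] = citations from journal i to journal j.
    Each journal's score is fed by incoming citations weighted by the
    citing journal's own score, PageRank-style (self-cites ignored)."""
    n = len(cites)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        new = []
        for j in range(n):
            inflow = sum(
                scores[i] * cites[i][j] / sum(cites[i])
                for i in range(n)
                if i != j and sum(cites[i]) > 0
            )
            new.append((1 - damping) / n + damping * inflow)
        scores = new
    return scores

# Journal 0 draws most of the citations from the other two journals,
# so it ends up with the highest prestige score.
cites = [[0, 1, 1],
         [5, 0, 1],
         [4, 1, 0]]
scores = prestige_scores(cites)
print(scores.index(max(scores)))  # 0
```

The damping factor plays the same role as in PageRank: it guarantees every journal keeps a small baseline score even with few incoming citations.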
  • 1. In the case of the SJR, even though it is called a “rank”, the higher the SJR, the better the quality of the journal.
  • Finally, a metric which is getting more and more attention as a way to measure journal value and impact is usage.
    Rather than being citation-based, usage is based on the reader and end-user behavior. One “use” is typically defined as when a full-text article is downloaded or viewed.
    The development of usage metrics is still in its infancy. The industry has formed a project organization called COUNTER to standardize usage reporting and develop a formal “Usage Factor”.
    Libraries already use usage statistics heavily to evaluate their collections and spending, so a formal metric will further facilitate these efforts.
    Authors are also very interested to see how much their work is being read.
    This chart gives a sample spreadsheet for how a library could track the usage of its subscribed titles on a monthly basis.
    The advantages of usage over citation-based metrics include immediacy: usage data can be compiled and reported right away, giving a more current view of the impact of an article or journal. Usage is also not prone to self-citation manipulation by authors, though it can be prone to other manipulations.
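The monthly tracking described above amounts to a simple tally over download records; a hypothetical sketch (the records are invented, and real COUNTER reports carry more detail than this):

```python
from collections import defaultdict

# Each record is (journal, month) for one full-text download.
downloads = [
    ("Analytica Chimica Acta", "2011-01"),
    ("Analytica Chimica Acta", "2011-01"),
    ("Analyst", "2011-01"),
    ("Analytica Chimica Acta", "2011-02"),
]

# Tally downloads per journal per month, like the sample spreadsheet.
usage = defaultdict(int)
for journal, month in downloads:
    usage[(journal, month)] += 1

for (journal, month), count in sorted(usage.items()):
    print(f"{journal}  {month}  {count}")
```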
  • In summary:
    The H-index, SJR, and usage all serve as alternative metrics that are gaining weight and being used more frequently to evaluate authors and journals.
    There are even more metrics being developed, for example, the Eigenfactor.
    But the IF has been the established metric and remains the industry standard for a first assessment of journal importance.
  • Section Title Slide
  • In looking back at our original example of journals in Analytical Chemistry, how can we better determine which journal is right for a researcher to submit to?
  • Selection of a journal will depend on many factors in addition to journal metrics:
    1. Some factors make the decision easier, such as the type of manuscript written (reviews should ideally be submitted to review journals; letters to letter journals).
    2. Another factor to consider is the specific subject area (electroanalysis is more likely to fit with one of the “Electroanalysis” titled journals).
    3. The significance or expected impact of the work will indicate whether you should publish in a top journal or a middle-of-the-pack journal.
    4. The prestige/quality of the journal
    5. The respect of the editors in the field
    6. The editorial and production speed of the journal (ideally you would want to get your work published as quickly as possible).
    7. The community and audience associated with the journal
    8. The coverage and distribution (regional, international)
  • Summary points, as written.

    1. Measuring Journal and Research Prestige. Stephen Troth, 26th March 2011, s.troth@elsevier.com. MedicReS International Conference on Good Medical Research
    2. Opening Questions  What is the best scientific journal that you know of?  Why is that one the best?  Why can’t another journal be just as good as that one?
    3. Objectives  Why is there a need to discern quality among different journals?  What metrics are used to compare journals?  Which journal is appropriate for me to submit my research to?
    4. Why is there a need to discern quality among different journals?
    5. Growth of peer-reviewed journals. “This is truly the decade of the journal and one should seek to limit their number rather than to increase them, since there can be too many periodicals.” (1789) Neues medicinisches Wochenblatt fur Aerzte
    6. Growth of scholarly journals. [Chart: number of active, peer-reviewed journals per decade, from before 1900 to after 2000, growing at roughly 3% per annum] • The number of journals and disciplines is increasing • The need to discern journal quality becomes that much more important
    7. Journal Competition • Journals must vigorously compete with each other for the best papers and the best authors • The concept of journal prestige originates from this competition. For example, in the category of “Analytical Chemistry”: Analytical Chemistry, Analytica Chimica Acta, Analytical Biochemistry, Analytical and Bioanalytical Chemistry, Journal of Electroanalytical Chemistry, Analyst, Electroanalysis, Analytical Sciences, Journal of Analytical Chemistry, Current Analytical Chemistry, Reviews in Analytical Chemistry, and >50 others! How can you tell which of these are high-quality journals?
    8. What metrics are used to compare journals?
    9. Overview of Journal Metrics  Impact Factor  H-index  SCImago Journal Rank  Usage  Others • Journal citation data and bibliometrics can be used to measure the impact or influence of articles, authors, and journals
    10. Impact Factor • Citation index of Science, Social Science, Arts & Humanities journals • Impact Factors of Science and Social Science journals • The Impact Factor is the most well-known citation metric
    11. Impact Factor Definition & Calculation. Definition: a ratio between citations and recent citable items published in a journal (i.e. the average number of citations received per published article). The IF is published 6 months after the end of the year it relates to (i.e. 2006 data published in mid-2007)
    12. The Impact Factor
    13. The Impact Factor anomaly: the numerator counts citations to all items (regardless of type), while the denominator counts only source items (“articles” and “reviews”). Citations to non-source items (editorials, letters, news items, book reviews, abstracts, etc.) may therefore inflate the IF
    14. Influences on the IF: Article Type. [Chart: citations vs. years after publication for articles, reviews, and notes, with the Impact Factor window marked]
    15. Influences on the IF: Subject Area. [Charts: mean Impact Factor (1998) by subject area, from Mathematics & Computer Sciences and Social Sciences at the low end to Clinical Medicine, Neuroscience, and Fundamental Life Sciences at the high end; aggregate 2006 IF across Materials Science disciplines, from Characterization & Testing, Paper & Wood, and Textiles to Polymer Science, Nanoscience & Nanotechnology, and Biomaterials] Impact Factors carry little meaning unless they are compared within the same subject area and discipline
    16. Influences on the IF: Subject Area
    17. 2008 IF
    18. Impact Factor Use and Abuse • Used for library collection development  • Open to manipulation by authors, reviewers, editors and publishers  • Used to compare journals of different types  • Used to compare journals in different fields  • Used to derive a ‘personal IF’  • Used as a lone proxy for journal ‘quality’ 
    19. Impact Factor doubts (June 5, 2006; October 14, 2005)
    20. Elsevier’s philosophy on the IF: “Elsevier uses the Impact Factor as one of a number of performance indicators for journals. It acknowledges the many caveats associated with its use and strives to share best practice with its authors, editors, readers and other stakeholders in scholarly communication. Elsevier seeks clarity and openness in all communications relating to the IF and does not condone the practice of manipulation of the IF for its own sake.”
    21. Other IF-related metrics  Cited Half-life: the median age of the journal’s articles cited in the current JCR year  Immediacy Index: the average number of times an article is cited in the year that it is published  Influence: the share of citations that an individual title or publisher has within a given subject category or subject group
    22. h-index • Proposed by physicist Jorge Hirsch in 2005 • Rates individuals or journals based on career publications • Incorporates both quantity (no. of publications) and quality (no. of citations)
    23. Calculating the h-index: if you list a scientist’s papers in descending order of the number of citations received to date, his/her h-index is 8 if 8 papers have each received 8 or more citations
    24. h-index
    25. h-index for journals
    26. SCImago Journal Rank  Produced by experts in Spain  Data sourced from Scopus and incorporated in it  A ratio of citations in the current year to articles published in the previous 3 years  Citations are weighted by the SJR of the citing journal (like Google weights links to webpages)
    27. SCImago Journal Rank. Note European decimal notation!
    28. New metrics: Popularity
    29. Eigenfactor and Article Influence  Free (eigenfactor.org); also now part of the JCR  Similar to the Impact Factor, but considers 5 years  Self-citations excluded  Citations weighted by the EF of the citing journal
    30. Usage  Usage is a new concept for measuring journal value and impact  Typically defined as when a full-text article is downloaded or viewed  COUNTER is attempting to standardize usage reporting and develop a “Usage Factor” metric  Libraries already use usage statistics heavily to evaluate their collections and spending  Authors are also interested to see how much their works are used
    31. Summary of Different Metrics  H-index, SJR, and usage are gaining weight as more users include them as evaluative tools  Other metrics not mentioned here are also being studied and developed: the Eigenfactor  But the IF is still the industry standard and the first metric used to assess journal importance
    32. Which journal is appropriate for me to submit my research to?
    33. Journal Selection: how would you select among the journals of “Analytical Chemistry”? Analytical Chemistry, Analytica Chimica Acta, Analytical Biochemistry, Analytical and Bioanalytical Chemistry, Journal of Electroanalytical Chemistry, Analyst, Electroanalysis, Analytical Sciences, Journal of Analytical Chemistry, Current Analytical Chemistry, Reviews in Analytical Chemistry, and >50 others!
    34. Journal Selection  Selection of a journal will depend on many factors in addition to journal metrics  The aims and scope of the journal  The type of manuscript you have written (review, letter, article)  The specific subject area  The significance of your work  The prestige/quality of the journal  The respect of the editors in the field  The editorial and production speed of the journal  The community and audience associated with the journal  The coverage and distribution (regional, international)
    35. Summary  Why is there a need to discern quality among different journals?  Increasing number of journals and disciplines  What metrics are used to compare journals?  Impact Factor  H-index  SCImago Journal Rank  SNIP/Eigenfactor  Usage  Which journal is appropriate for me to submit my research to?  Consider the significance and scope of your work; ask professors in your field what journal would be appropriate for the area and level of research you have conducted  Consider the aims, scope, subject area, prestige, editors, editorial and production speed, community/audience, and coverage of a journal
