This document discusses criticisms of using the journal impact factor (IF) as a measure of a journal's influence. It begins by explaining how IF is calculated based on the number of citations to articles published in a journal over the past two years. While IF is widely used, the document outlines several criticisms, including that IF is not a transparent measure, excludes some citation sources, does not account for article quality, and can be manipulated through self-citation. The document concludes that while IF is easy to measure, it has limitations and should not be used to evaluate individual articles or researchers.
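The two-year impact factor described above can be sketched as a simple ratio. This is a minimal illustration with made-up numbers, not data for any real journal:

```python
# Hedged sketch of the two-year journal impact factor (JIF) calculation.
# All figures below are illustrative, not real journal data.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """IF for year Y = citations received in Y by items published in
    years Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Example: a journal published 120 citable items in 2021-2022, and
# those items were cited 300 times during 2023.
print(round(impact_factor(300, 120), 2))  # 2.5
```

Note that real Journal Citation Reports calculations involve editorial decisions about what counts as a "citable item" (the denominator), which is one of the transparency criticisms the document raises.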
Does a Long Reference List Guarantee More Citations? Analysis of Malaysian Hi... - Nader Ale Ebrahim
Earlier publications have shown that both the number of references and the number of citations received are field-dependent. Consequently, a long reference list may lead to more citations. The purpose of this article is to study the relationship between the number of references and citation counts for the concrete case of Malaysian highly cited papers and Malaysian review papers. A Malaysian paper is a paper with at least one Malaysian affiliation. A total of 2466 papers, consisting of two sets of 1966 review papers and 500 highly cited articles, are studied. The statistical analysis shows that an increase in the number of references leads to a slight increase in the number of citations, but this increase is not statistically significant. Therefore, researchers should not try to increase their citation counts by artificially lengthening their reference lists.
This document provides an overview of citation indexing and describes some key tools and concepts. Citation indexing traces the use of ideas across research by identifying papers that cite older publications. The Institute for Scientific Information pioneered citation indexing databases like the Web of Science. While comprehensive, the WoS has limitations in coverage of non-English language and developing world journals. The Indian Citation Index was created to index more Indian publications and support research evaluation in India. Impact factors are calculated based on citations in the Journal Citation Reports to measure journal influence.
The presentation covers theses, research papers, review articles, and technical reports: organization of theses and reports, formatting issues, citation methods, references, and effective oral presentation of research. It also covers quality indices of research publications: impact factor, immediacy index, h-index, and other citation indices. Verbal consent from Prof. Dr. C. B. Bhatt was obtained (at 4.15 pm on 26-11-2016 at Hall A-2, GTU, Chandkheda) to post the presentation online for the benefit of the research scholar community.
The document discusses author level metrics and how they are used to measure the impact of individual authors. It defines author level metrics as citation metrics that measure the bibliometric impact of individual researchers. It also discusses different types of author level metrics, including article-level metrics, journal-level metrics, h-index, i10-index, g-index, and altmetrics. Finally, it discusses tools that can be used to measure author metrics, such as Google Scholar, Web of Science, Scopus, and Publish or Perish.
This document discusses the rise of electronic journals (e-journals) as a means for research in the 21st century. It notes that e-journals now make up 96% of scholarly articles in sciences, technology, medicine, and 86% in arts, humanities, and social sciences. The advantages of e-journals include unlimited resources, international access, cost-effectiveness, enhanced access to articles, and 24/7 access without geographical boundaries. The document also examines the history of e-journals, publishers' role in marketing them, and issues librarians face in supporting e-journals, such as budget constraints and developing infrastructure.
The document discusses citation indexing. It defines citation indexing as a process that detects relationships between documents through citations: when one document cites another, there is a conceptual relationship between the ideas in the two documents. The document outlines the history and development of citation indexing, including the first citation index created by Frank Shepard and the important contributions of Eugene Garfield. It describes the major citation indexes produced by the Institute for Scientific Information (ISI), later part of Thomson Reuters, including the Science Citation Index, Social Sciences Citation Index, and Arts and Humanities Citation Index.
This document provides an overview of various bibliometric products and metrics that can be used to measure research impact, including journal impact factor, h-index, citation counts, and journal/article ranking tools from Journal Citation Reports, Scopus, and Google Scholar. It discusses the purpose and calculations of metrics like impact factor, eigenfactor, and source normalized impact per paper (SNIP). It also covers limitations of bibliometrics and recommends using multiple metrics and tools to evaluate research. Exercises are provided to help understand how to analyze journals, articles, and individual researchers using different bibliometric resources.
This document discusses journal performance metrics available through Journal Citation Reports on the ISI Web of Knowledge. It defines common metrics like the Journal Impact Factor, Immediacy Index, Journal Cited Half-Life, Eigenfactor Score, and Article Influence Score which provide objective means for evaluating leading academic journals. Additional information and questions about Journal Citation Reports can be directed to Linda Galloway.
The document discusses various quality indices used to evaluate research publications and authors. It defines indices such as the impact factor, immediacy index, Eigenfactor, SCImago Journal Rank, H-index, G-index, and HB-index. It provides details on how each index is calculated and its significance. It also discusses limitations of impact factor and compares different journal quality indices. The document aims to explain these quality metrics to evaluate journals and authors.
Bibliometric portrait of SRELS Journal of Information Management for the peri... - Ghouse Modin Mamdapur
This study analyses articles published in the SRELS Journal of Information Management during 2004-2013 and examines the key dimensions of its publication trends. Ten volumes containing 48 issues are taken up for evaluation, and the necessary bibliometric measures are applied to the different publication parameters. In all, 499 articles were published, an average of about 50 per year. The collaborative measures are calculated per Ajiferuke et al. (0.35), Lawani (2.28), and Subramanyam (0.65). The average number of authors per article is 1.83 for the 499 articles. Lotka's law is tested and conforms with a value of n = 2.27. A total of 6224 citations are appended to the 499 articles over 2004-2013. Journals (44.49%) are the top form of source used by authors, followed by books (22.51%) and web pages (15.60%). Ranked lists of prolific authors and of cited journals are prepared and presented in the respective tables. Khaiser Nikam and MP Satija top the ranked list of prolific authors with 11 articles each, and the SRELS Journal of Information Management tops the ranked list of journals with 197 (7.11%) citations. Among the contributing countries, authors from India (94.75%) made the maximum contribution, and authors from Karnataka (43.59%) contributed the majority of articles among Indian states. Bradford's Law of Scattering is verified through the Leimkuhler model and found to fit the data.
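The three collaboration measures cited above follow standard bibliometric formulas: Lawani's collaboration index is the mean number of authors per paper, Subramanyam's degree of collaboration is the share of multi-authored papers, and Ajiferuke et al.'s collaborative coefficient is 1 minus the mean of the reciprocals of the author counts. A minimal sketch (the input list is invented for illustration, not the SRELS data):

```python
from collections import Counter

def collaboration_measures(authors_per_paper):
    """authors_per_paper: list giving the number of authors of each paper."""
    n = len(authors_per_paper)
    f = Counter(authors_per_paper)  # f[j] = number of papers with j authors
    lawani = sum(authors_per_paper) / n                            # mean authors per paper
    subramanyam = sum(1 for a in authors_per_paper if a > 1) / n   # degree of collaboration
    ajiferuke = 1 - sum(fj / j for j, fj in f.items()) / n         # collaborative coefficient
    return lawani, subramanyam, ajiferuke

# Hypothetical sample: two single-authored, one two-author, one three-author paper.
l, s, a = collaboration_measures([1, 1, 2, 3])
print(round(l, 2), round(s, 2), round(a, 3))  # 1.75 0.5 0.292
```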
This document summarizes a study analyzing the Twitter usage of journals indexed in the Science Citation Index (SCI). The researchers identified 857 SCI journals with Twitter accounts and extracted over 950,000 tweets. They analyzed the Twitter conversation network and found SCI journals had a higher rate of intra-index conversations than humanities and social sciences journals. The network structure of SCI journals formed a "tight crowd" graph centered around Nature journals, unlike the "community cluster" structure of humanities graphs. News portals, scientific organizations, and top journals were identified as prominent authorities in the SCI Twitter network. The top hub journals in the network represented medicine, biology, and chemistry and had high journal impact factors.
This document provides information about accessing and using Journal Citation Reports (JCR) through Web of Science (WoS). JCR offers quantitative tools to evaluate and rank journals. It uses citation data from over 20,000 journals to demonstrate the most influential journals in different fields and categories. WoS provides access to JCR, allowing users to find journal impact factors and rankings. The document outlines how to create JCR reports for specific journals or browse categories to find the most impactful journals in different subject areas. It also explains several common metrics for measuring journal impact, including total citations, journal impact factor, and Eigenfactor score.
Impact Factor and the Evaluation of Scientists - a book chapter by Nicola de ... - Xanat V. Meza
Disclaimer: all original texts and images belong to their rightful owners.
Chapter 6 of the Book "Bibliometrics and citation analysis" by Nicola de Bellis.
Primary research presentation r leap 1st section of the manual - ResearchLeap
This presentation discusses citation indexes and the h-index metric for measuring research impact. It explains that citation indexes allow users to establish which documents cite earlier works, and that the h-index considers both the number of papers published and the number of citations received. The presentation then provides tips for finding one's h-index using tools like Google Scholar and Publish or Perish, and for increasing one's h-index through publishing in journals with high citation rates, sharing work online, and judiciously self-citing relevant past papers. Finally, it outlines several citation indexes, including the Science Citation Index, Social Sciences Citation Index, and Journal Citation Reports.
The Science Citation Index (SCI) was created in 1960 by Eugene Garfield to allow searching by cited references. It has since evolved into the Web of Science database, which provides access to multiple citation indexing databases covering science, social science, arts and humanities journals. Web of Science allows searching by author, cited references, and keywords to find relevant research and analyze impact metrics like citation counts and the h-index. Access is generally through institutional subscriptions.
This document provides guidance on finding citation information using Web of Science. It describes how to use Web of Science to find out how many times an article has been cited to gauge its impact, discover related articles, access citation reports for a subject, and conduct a cited reference search. It explains how to view citation counts, citing articles, journal impact, create citation reports, analyze search results, and search by cited references. The document includes screenshots to demonstrate these functions within the Web of Science interface.
This document provides an introduction to the information literacy skills module at Universiti Sains Malaysia (USM) Library. It outlines the objectives of the module which are for students to identify library facilities and services, find information resources in all formats, and learn search strategies. It then provides details on the contents which include an introduction to the USM Library and its branches and facilities, and descriptions of the library's searching platforms such as KRISALIS, WorldCat Discovery, and the repository.
The document discusses various metrics for measuring the impact of authors, journals, and articles. It describes the h-index, m-quotient, and g-index for measuring an author's impact based on their scholarly output and citations. Journal Impact Factor and SJR are discussed for comparing journals. Metrics for articles include citations in databases like Web of Science, Scopus, and Google Scholar as well as altmetrics from social media. Broadening research impact involves platforms like Academia.edu, Mendeley, ResearchGate, and Twitter.
This document provides information on resources for evaluating journals and identifying appropriate journals for publication. It discusses Journal Citation Reports (JCR), SJR, Directory of Open Access Journals (DOAJ), Mycite, journal suggestion tools from Springer and Elsevier, Endnote Web, and SciRev and MedSci resources for reviewing processes. Key indicators for journal evaluation include impact factor, immediacy index, eigenfactor, and SJR rank. Open access options within JCR and Scimago are also outlined.
This document provides an overview of information search and reference management tools available through the UKM Medical Centre Library Portal. It discusses how to perform literature searches in various subscribed databases accessible through the portal, including OvidSP, EBSCOhost Medical, ProQuest, and the UKM Knowledge Portal. It also reviews two popular reference management software programs - EndNote and Wizfolio - that can be used to organize references and citations. Key functions and features of each tool are described such as importing references, attaching PDFs, and citing sources within documents.
The document analyzes citations in 5439 master's theses in economics from Kermanshah Razi University in Iran from 2007-2015. It finds that:
1) Books and journals accounted for the majority of citations at 50.14% and 37.69% respectively.
2) Persian references accounted for 54.38% of citations while Latin references were 45.38%.
3) Most references dated from 1997 onward, showing they were generally up-to-date. Persian references were more updated than Latin ones.
4) On average, each thesis contained 65.53 citations.
5) Based on Bradford's rule, the core journals cited included 8 Persian journals and 13
This document provides instructions on how to write references in the Harvard and Vancouver styles. It explains that references are important to avoid plagiarism, show the breadth of research, acknowledge direct quotes, and provide evidence to support arguments. It then outlines the key elements to include for different types of references such as books, e-books, journal articles, and works with no author. Finally, it describes how to format in-text citations and structure a reference list in the Vancouver style.
Research impact metrics for librarians: calculation & context - Library_Connect
This document summarizes a presentation about research impact metrics for librarians. It discusses various metrics for measuring research impact, including the h-index, g-index, and altmetrics. It explains how the h-index and g-index are calculated using examples. It also discusses limitations of citation-based metrics and the importance of using a "basket of metrics" to provide context. The presentation emphasizes using appropriate metrics depending on the level of analysis (article, author, or journal) and considering factors like career stage and field of research. It provides information on data sources for different metrics and principles for evaluating research performance.
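The h-index and g-index calculations mentioned above can be sketched in a few lines. Under the usual definitions, the h-index is the largest h such that at least h papers have at least h citations each, and the g-index (in the common convention capped at the number of papers) is the largest g such that the top g papers together have at least g² citations. The citation counts in the example are invented:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers total at least g**2 citations
    (capped at the number of papers)."""
    total, g = 0, 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Hypothetical author with five papers:
cites = [10, 8, 5, 4, 3]
print(h_index(cites), g_index(cites))  # 4 5
```

The example shows why the g-index rewards a few very highly cited papers more than the h-index does: the 10-citation paper lifts the cumulative total enough to push g past h.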
Soran University: Harvard style of referencing and citation guide - Jagar Ali
"Soran University: Harvard style of referencing and citation guide" is a student guide to referencing and citation in the Faculty of Engineering at Soran University.
This document introduces an algorithm for preparing thesis documents using MS Word 2013. The algorithm addresses common problems authors face, such as inconsistent page numbering, manually created tables of contents, and time-consuming formatting. It provides a series of ordered steps to set up the document, insert different page types, apply styles, and add references according to writing standards. The key aspects of the algorithm are familiarizing users with MS Word tools, following a set order of processes, and using special techniques for issues like automatic page numbering and tables of contents. Following this algorithm allows authors to prepare research documents efficiently while meeting the required standards.
Problem statement: The literature demonstrates the importance of technology to the effectiveness of virtual Research and Development (R&D) teams for new product development. However, the factors that make up the technology construct in a virtual R&D team remain ambiguous, and managers of virtual R&D teams for new product development do not know which type of technology should be used. Approach: To address this gap, the study presents a set of factors that make up a technology construct. The proposed construct was modified based on the findings of a field survey (N = 240). We empirically examine the relationship between the construct and its factors by employing Structural Equation Modeling (SEM). A measurement model was built on 19 preliminary factors extracted from the literature review. Results: Ten of the 19 factors were retained in the technology construct; these 10 factors can be grouped into two constructs, namely Web-based communication and Web-based data sharing. The findings can help new product development managers concentrate on the main factors for leading an effective virtual R&D team, and they provide a guideline for software developers as well. Conclusion: Second- and third-generation technologies are now more suitable for developing new products through virtual R&D teams.
Does it Matter Which Citation Tool is Used to Compare the h-index of a Group ... - Nader Ale Ebrahim
1) The study compares the h-index of 12 Nobel Prize winning scientists as reported by Scopus, Google Scholar, and Web of Science citation indexes.
2) It finds the Google Scholar h-index is higher on average than the other two indexes.
3) There is a significant positive relationship between the h-indices reported by Google Scholar and Scopus.
The document discusses various quality indices used to evaluate research publications and authors. It defines indices such as the impact factor, immediacy index, Eigenfactor, SCImago Journal Rank, H-index, G-index, and HB-index. It provides details on how each index is calculated and its significance. It also discusses limitations of impact factor and compares different journal quality indices. The document aims to explain these quality metrics to evaluate journals and authors.
Bibliometric portrait of srels journal of information management for the peri...Ghouse Modin Mamdapur
Analyse articles in SRELS journal of information management published during the years 2004-2013 and to study the key dimensions of its publication trends. For this study 10 volumes containing 48 issues have been taken up for evaluation. Necessary bibliometric measures are applied to analyse different publication parameters. In all 499 articles are published with an average 49 articles each year. The collaborative measures are calculated as per Ajiferuke et al (0.35), Lawani (2.28) and Subramanyam (0.65). The average author per article is 1.83 for 499 articles. Lotka’s law is tested and conforms to a value of n=2.27. There are 6224 citations found appended to 499 articles during the period 2004-2013. Journals (44.49%) are the top form of source used by authors followed by books (22.51%) and web pages (15.60%). Ranked list of prolific authors and ranked list of journals is prepared and presented in respective tables. Khaiser Nikam and MP Satija have topped the ranked list of prolific authors with 11 articles each. SRELS Journal of Information Management topped the ranked list of journals with 197 (7.11%) citations. Among different countries contributing to this journal, authors from India (94.75%) have made maximum contributions. Authors from Karnataka (43.59%) have contributed majority of articles among Indian States. Bradford’s Law of Scattering is verified through Leimkuhler model and found fitting the data.
This document summarizes a study analyzing the Twitter usage of journals indexed in the Science Citation Index (SCI). The researchers identified 857 SCI journals with Twitter accounts and extracted over 950,000 tweets. They analyzed the Twitter conversation network and found SCI journals had a higher rate of intra-index conversations than humanities and social sciences journals. The network structure of SCI journals formed a "tight crowd" graph centered around Nature journals, unlike the "community cluster" structure of humanities graphs. News portals, scientific organizations, and top journals were identified as prominent authorities in the SCI Twitter network. The top hub journals in the network represented medicine, biology, and chemistry and had high journal impact factors.
This document provides information about accessing and using Journal Citation Reports (JCR) through Web of Science (WoS). JCR offers quantitative tools to evaluate and rank journals. It uses citation data from over 20,000 journals to demonstrate the most influential journals in different fields and categories. WoS provides access to JCR, allowing users to find journal impact factors and rankings. The document outlines how to create JCR reports for specific journals or browse categories to find the most impactful journals in different subject areas. It also explains several common metrics for measuring journal impact, including total citations, journal impact factor, and Eigenfactor score.
Impact Factor and the Evaluation of Scientists - a book chapter by Nicola de ...Xanat V. Meza
Disclaimer: all original texts and images belong to their rightful owners.
Chapter 6 of the Book "Bibliometrics and citation analysis" by Nicola de Bellis.
Primary research presentation r leap 1st section of the manualResearchLeap
This presentation discusses citation indexes and the h-index metric for measuring research impact. It explains that citation indexes allow users to establish which documents cite earlier works, and that the h-index considers both the number of papers published and the number of citations received. The presentation then provides tips for finding one's h-index using tools like Google Scholar and Publish or Perish, and for increasing one's h-index through publishing in journals with high citation rates, sharing work online, and judiciously self-citing relevant past papers. Finally, it outlines several citation indexes, including the Science Citation Index, Social Sciences Citation Index, and Journal Citation Reports.
The Science Citation Index (SCI) was created in 1960 by Eugene Garfield to allow searching by cited references. It has since evolved into the Web of Science database, which provides access to multiple citation indexing databases covering science, social science, arts and humanities journals. Web of Science allows searching by author, cited references, and keywords to find relevant research and analyze impact metrics like citation counts and the h-index. Access is generally through institutional subscriptions.
This document provides guidance on finding citation information using Web of Science. It describes how to use Web of Science to find out how many times an article has been cited to gauge its impact, discover related articles, access citation reports for a subject, and conduct a cited reference search. It explains how to view citation counts, citing articles, journal impact, create citation reports, analyze search results, and search by cited references. The document includes screenshots to demonstrate these functions within the Web of Science interface.
This document provides an introduction to the information literacy skills module at Universiti Sains Malaysia (USM) Library. It outlines the objectives of the module which are for students to identify library facilities and services, find information resources in all formats, and learn search strategies. It then provides details on the contents which include an introduction to the USM Library and its branches and facilities, and descriptions of the library's searching platforms such as KRISALIS, WorldCat Discovery, and the repository.
The document discusses various metrics for measuring the impact of authors, journals, and articles. It describes the h-index, m-quotient, and g-index for measuring an author's impact based on their scholarly output and citations. Journal Impact Factor and SJR are discussed for comparing journals. Metrics for articles include citations in databases like Web of Science, Scopus, and Google Scholar as well as altmetrics from social media. Broadening research impact involves platforms like Academia.edu, Mendeley, ResearchGate, and Twitter.
This document provides information on resources for evaluating journals and identifying appropriate journals for publication. It discusses Journal Citation Reports (JCR), SJR, Directory of Open Access Journals (DOAJ), Mycite, journal suggestion tools from Springer and Elsevier, Endnote Web, and SciRev and MedSci resources for reviewing processes. Key indicators for journal evaluation include impact factor, immediacy index, eigenfactor, and SJR rank. Open access options within JCR and Scimago are also outlined.
This document provides an overview of information search and reference management tools available through the UKM Medical Centre Library Portal. It discusses how to perform literature searches in various subscribed databases accessible through the portal, including OvidSP, EBSCOhost Medical, ProQuest, and the UKM Knowledge Portal. It also reviews two popular reference management software programs - EndNote and Wizfolio - that can be used to organize references and citations. Key functions and features of each tool are described such as importing references, attaching PDFs, and citing sources within documents.
The document analyzes citations in 5439 master's theses in economics from Kermanshah Razi University in Iran from 2007-2015. It finds that:
1) Books and journals accounted for the majority of citations at 50.14% and 37.69% respectively.
2) Persian references accounted for 54.38% of citations while Latin references were 45.38%.
3) Most references dated from 1997 onward, showing they were generally up-to-date. Persian references were more updated than Latin ones.
4) On average, each thesis contained 65.53 citations.
5) Based on Bradford's rule, the core journals cited included 8 Persian journals and 13
This document provides instructions on how to write references in the Harvard and Vancouver styles. It explains that references are important to avoid plagiarism, show the breadth of research, acknowledge direct quotes, and provide evidence to support arguments. It then outlines the key elements to include for different types of references such as books, e-books, journal articles, and works with no author. Finally, it describes how to format in-text citations and structure a reference list in the Vancouver style.
Research impact metrics for librarians: calculation & context (Library_Connect)
This document summarizes a presentation about research impact metrics for librarians. It discusses various metrics for measuring research impact, including the h-index, g-index, and altmetrics. It explains how the h-index and g-index are calculated using examples. It also discusses limitations of citation-based metrics and the importance of using a "basket of metrics" to provide context. The presentation emphasizes using appropriate metrics depending on the level of analysis (article, author, or journal) and considering factors like career stage and field of research. It provides information on data sources for different metrics and principles for evaluating research performance.
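As a minimal illustration of how these two author-level metrics work (a sketch from their standard definitions, not code taken from the presentation itself), both can be computed from a list of per-paper citation counts:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """g-index: the largest g such that the author's top g papers
    together have received at least g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Five papers cited 10, 8, 5, 4 and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # 4
print(g_index([10, 8, 5, 4, 3]))  # 5
```

The example shows why the g-index tends to exceed the h-index: it rewards a few highly cited papers, whereas the h-index caps each paper's contribution.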
Soran University: Harvard style of referencing and citation guide (Jagar Ali)
"Soran University: Harvard style of referencing and citation guide" is a student guide for referencing and citation in faculty of engineering - Soran University.
This document introduces an algorithm for preparing thesis documents using MS Word 2013. The algorithm addresses common problems authors face such as inconsistent page numbering, manually creating tables of contents, and time-consuming formatting. It provides a series of ordered steps to setup the document, insert different page types, apply styles, and add references according to writing standards. The key aspects of the algorithm are familiarizing users with MS Word tools, following a set order of processes, and using special techniques for issues like automatic page numbering and tables of contents. Following this algorithm allows authors to efficiently prepare research documents that meet requirements.
Problem statement: Although the literature demonstrates the importance of the role of technology in the effectiveness of virtual Research and Development (R&D) teams for new product development, the factors that make up the technology construct in a virtual R&D team are still ambiguous, and managers of virtual R&D teams for new product development do not know which type of technology should be used. Approach: To address this gap and answer the question, the study presents a set of factors that make up a technology construct. The proposed construct was modified based on the findings of a field survey (N = 240). We empirically examine the relationship between the construct and its factors by employing Structural Equation Modeling (SEM). A measurement model was built based on 19 preliminary factors extracted from the literature review. Results: 10 of the 19 factors were retained in the technology construct. These 10 technology factors can be grouped into two constructs, namely Web-based communication and Web-based data sharing. The findings can help new product development managers of enterprises concentrate on the main factors for leading an effective virtual R&D team; they also provide a guideline for software developers. Conclusion: Second and third generation technologies are now more suitable for developing new products through virtual R&D teams.
Does it Matter Which Citation Tool is Used to Compare the h-index of a Group ... (Nader Ale Ebrahim)
1) The study compares the h-index of 12 Nobel Prize winning scientists as reported by Scopus, Google Scholar, and Web of Science citation indexes.
2) It finds the Google Scholar h-index is higher on average than the other two indexes.
3) There is a significant positive relationship between the h-indices reported by Google Scholar and Scopus.
Online Repository: Improving the research visibility and Impact (Nader Ale Ebrahim)
Institutional repositories are platforms where a university's faculty and graduate students can preserve their research outputs. Depositing papers in Open Access repositories will increase the visibility and citation of an article by removing barriers to knowledge sharing. It is highly recommended that documents without DOIs be deposited in a repository that assigns DOIs to deposited documents. There are several different types of repository that can host your research outputs, depending upon your discipline. I will dig into some of them in this workshop.
Research tools can be defined as vehicles that broadly facilitate research and related activities. Scientific tools enable researchers to collect, organize, analyze, visualize, mobilize and create outputs. Tools can be created as part of a research or related undertaking, or purchased off the shelf. They have been created by credible research institutes to enable students to follow the correct path in research and to ultimately produce high-quality research papers.
Introduction to the “Research Tools” for Research Methodology course (Nader Ale Ebrahim)
“Research Tools” can be defined as vehicles that broadly facilitate research and related activities. “Research Tools” enable researchers to collect, organize, analyze, visualize and publicize research outputs. Dr. Nader has collected over 800 tools that enable students to follow the correct path in research and to ultimately produce high-quality research outputs with more accuracy and efficiency. It is assembled as an interactive Web-based mind map, titled “Research Tools”, which is updated periodically. “Research Tools” consists of a hierarchical set of nodes. It has four main nodes: (1) Searching the literature, (2) Writing a paper, (3) Targeting suitable journals, and (4) Enhancing visibility and impact of the research.
Reference Management system offers an easy way of collecting references from online databases, organizing them in a database, and citing them in documents in Microsoft Word. Documents can be shared with colleagues/publish online. A reference management system can facilitate the keeping track of the literature.
N. Ale Ebrahim, “How to write a Bibliometrics paper,” in Organized by: Moheb... (Nader Ale Ebrahim)
N. Ale Ebrahim, “How to write a Bibliometrics paper,” organized by the Mohebban Scientific Committee, Jalan Muhibah 12, Serdang, Selangor, Malaysia, 2014, pp. 1-82.
How to increase h-index: “Advertise and disseminate publications” (Nader Ale Ebrahim)
Publishing a high quality paper in scientific journals is only halfway towards receiving citations in the future. The rest of the way is advertising and disseminating the publications by using the proper “Research Tools”. Familiarity with the tools allows researchers to increase their h-index in a short time. The h-index shows an academician's influence in a specified field of research; therefore, a person with a higher h-index has more high-quality publications with a high number of citations. This presentation covers the following topics: why publish and increase the h-index, definitions of the h-index and g-index, the importance of the h-index, how to use the “Research Tools” mind map, paper title preparation, selecting keywords, selecting the proper journal, advertising the published article, and finally tracing citations to the published article.
Conducting a Literature Search & Writing Review Paper, Part 3: Writing Liter... (Nader Ale Ebrahim)
22- The paraphrasing & editing tool
23- Avoid plagiarism
24- Organize the references (Reference management) tool
25- Writing a Literature Review
26- A Structured Abstract
27- Integrating arguments in paragraphs
28- Verbs for referencing
Enhancing Research Visibility and Improving Citations: Publication Marketing ... (Nader Ale Ebrahim)
Publishing a high quality paper in scientific journals is only the midpoint towards receiving citations in the future. The balance of the journey is completed by advertising and disseminating the publications using the proper “Research Tools”. Familiarity with the tools allows researchers to increase their h-index in a short time. The h-index shows an academician's influence in a specified field of research; therefore, an academician with a higher h-index is deemed to have higher-quality publications resulting in more citations.
The number of citations contributes over 30% to university rankings. Therefore, most scientists are looking for an effective method to increase their citation record. Nader has developed and introduced a method for increasing the visibility of research, which directly affects the number of citations.
The Effectiveness of Virtual R&D Teams in SMEs: Experiences of Malaysian SMEs (Nader Ale Ebrahim)
The number of small and medium enterprises (SMEs), especially those involved with research and development (R&D) programs that employ virtual teams to create the greatest competitive advantage from limited labor, is increasing. Global and localized virtual R&D teams are believed to have high potential for the growth of SMEs. Due to the fast-growing complexity of new products, coupled with new emerging opportunities of virtual teams, a collaborative approach is believed to be the future trend. This research explores the effectiveness of virtuality in SMEs' virtual R&D teams. Online questionnaires were emailed to Malaysian manufacturing SMEs and 74 usable questionnaires were received, representing a 20.8 percent return rate. In order to avoid biases which may result from pre-suggested answers, a series of open-ended questions were put to the experts. This study focused on analyzing one open-ended question, from which four main themes were extracted regarding the effectiveness of virtual teams for the growth and performance of SMEs. The findings of this study should be useful to product design managers of SMEs in realizing the key advantages and significance of virtual R&D teams during the new product development (NPD) process. This, in turn, leads to increased effectiveness in the new product development process.
Academic social networking (ResearchGate & Academia) and the research impact (Nader Ale Ebrahim)
The document discusses academic social networking sites and their role in research visibility. It begins with an abstract explaining that academic social networks like ResearchGate and Academia.edu allow researchers to connect, share work, and stay up to date in their fields. These sites make work more discoverable and increase citations. The document then provides information on setting up profiles on these sites and using them to maximize research impact. It emphasizes networking, sharing publications, and tracking metrics to enhance visibility.
Conducting a Literature Search & Writing Review Paper, Part 2: Finding proper... (Nader Ale Ebrahim)
12- Evaluate a paper quality
13- H-index and g-index
14- Publish or Perish
15- Evaluate a journal quality
16- The Institute for Scientific Information (ISI)
17- Impact Factor-Journal Ranking
18- Keeping up-to-date (Alert system)
19- How to Read a Paper
20- Mind mapping tools
21- Indexing desktop search tool
How to write a bibliometrics paper (Nader Ale Ebrahim)
This document provides guidance on how to write a bibliometrics paper. It outlines the key steps, which include searching for keywords, creating a literature database, and writing a journal article based on the collected data. Examples are given of potential topics for bibliometric papers, such as analyses of global research trends in areas like stem cells, proteomics, and ocean circulation. The document also discusses tools for bibliometric analysis, including the Web of Science and its keywords/KeyWords Plus features.
Strategies to enhance research visibility, impact & citations (Nader Ale Ebrahim)
Do you know that “over 43% of ISI papers have never received any citations”? (nature.com/top100, 2014). Now it's time to start spreading the word about your findings and analysis. Publishing a high quality paper in scientific journals is only halfway towards receiving citations in the future. The rest of the journey depends on disseminating the publications by using the proper “Research Tools”. Proper utilization of the tools allows researchers to increase the research impact and citations of their publications. This workshop will provide you with various techniques for increasing the visibility, and hence the impact, of your research work.
"Research Tools": Tools for supporting research and publicationsNader Ale Ebrahim
“Research Tools” can be defined as vehicles that broadly facilitate research and related activities. “Research Tools” enable researchers to collect, organize, analyze, visualize and publicize research outputs. Dr. Nader has collected over 700 tools that enable students to follow the correct path in research and to ultimately produce high-quality research outputs with more accuracy and efficiency. It is assembled as an interactive Web-based mind map, titled “Research Tools”, which is updated periodically. “Research Tools” consists of a hierarchical set of nodes. It has four main nodes: (1) Searching the literature, (2) Writing a paper, (3) Targeting suitable journals, and (4) Enhancing visibility and impact of the research.
Academic Leadership Journal: management challenges (Nader Ale Ebrahim)
This document discusses virtual teams and their management challenges. It begins by defining virtual teams as groups of geographically dispersed workers brought together through information technologies to accomplish organizational tasks. It then outlines some of the key advantages and disadvantages of virtual teams, such as reduced costs but also challenges with trust and cultural diversity. The document also compares virtual and traditional co-located teams. Finally, it discusses some of the main management challenges of virtual teams, such as effectively sharing knowledge, managing conflicts, and cultural differences across the dispersed team members.
Create a publication database for enhancing research visibility (Nader Ale Ebrahim)
In a competitive research landscape, researchers can no longer afford to just publish and hope for impact. To leave a mark, researchers have to take their impact into their own hands. Researchers need to keep all of their scholarly output in a database, to share through academic social networking sites. Researchers should also create a "permalink" for each of their publications. A "permalink" is a stable, permanent (or persistent) link to an online journal article or subscription resource that can be accessed from any computer at any time. It is often different from the URL that appears at the top of your browser - the link displayed there may be session dependent, and will only work for a particular person in a particular time period. This presentation provides guidelines on how to create a publication database for enhancing research visibility and citations.
The document discusses reasons for publishing research, types of journals (open access), the peer review process, indexing of journals, impact factor and its limitations, and the H-index. It recommends publishing in peer-reviewed, open access journals that are indexed in reputed databases like PubMed and have no or low publication fees. It also notes impact factor should not be the sole criteria for journal selection.
The document discusses the impact factor (IF), which is used to measure the importance of academic journals. It provides details on how the IF is calculated based on the number of citations over a certain period. While the IF has become influential in research, it also faces criticisms. The document aims to explain the IF to help young academics understand and apply it appropriately when publishing their work.
The impact factor (IF) is a metric that measures the average number of citations received in a given year by articles published in a journal over the previous two years. Impact factors are calculated annually and published in the Journal Citation Reports to indicate the relative significance and influence of journals within their fields. While impact factors help identify influential research and select publication targets, they should not be the sole consideration and have limitations due to variability in disciplines, editorial policies, and self-citations. Alternatives to the IF include the h-index and Eigenfactor, which aim to provide more robust assessments of research influence and output.
Rankometrics / Bibliometrics - Les Oxley (Charlies1000)
This document discusses various metrics and rankings used to evaluate academic journals, known as "rankometrics". It defines several common metrics including two-year impact factor (2YIF), Eigenfactor score, h-index, and immediacy. It also introduces some new proposed metrics such as the Impact Factor Inflation score, Self-citation Threshold Approval Rating, and PI-BETA score. Economic journals are analyzed as an example, finding the 2YIF ranges from 1.369 to 5.048, journal h-indices range from 10 to 149, and PI-BETA scores indicating the percentage of uncited articles range from 0.055 to 0.856. In summary, the document outlines many existing and proposed
Impact Factor: the Journal Competition, Scientific Excellence or Fool’s Game ... (crimsonpublisherscojrr)
This document discusses the impact factor (IF), a metric used to evaluate journals. It was originally intended to identify high quality research, but is now often used to assess individual researchers and influence hiring/funding decisions. However, the IF has been criticized as it can be manipulated by editors and does not accurately measure any individual researcher's work. While still widely used due to its commercial value, many scientific organizations have rejected its use in research evaluation and alternative metrics are being explored, though none have replaced the IF yet. The origins and evolution of bibliometrics and the challenges around developing new qualitative measures are also examined.
This document discusses various quality indices used to evaluate research publications and authors. It defines indices such as the impact factor, immediacy index, Eigenfactor, SCImago Journal Rank, H-index, G-index, and HB-index. It provides details on how each index is calculated and its purpose. For example, the impact factor measures the average number of citations to articles in a journal, while the H-index quantifies an individual author's scientific research output based on both their productivity and citation impact. The document also notes some criticisms of these indices and how they can be determined using databases like Web of Science and Scopus.
The document discusses various academic metrics used to measure the impact and quality of scholarly work, including journals, authors, and institutions. It defines ISSN numbers, journal finders, DOIs, SJR, impact factors, indexing services, and Google Scholar metrics. It explains how to calculate the h-index, i10-index, and g-index for authors. It distinguishes between reference lists and bibliographies and discusses various referencing styles. It provides examples of citations in the Vancouver referencing style. The document is intended as a guide to help understand different methods used to evaluate academic work.
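Of the author metrics mentioned above, the i10-index is the simplest to state: it counts the publications with at least 10 citations each. A minimal sketch (from the standard Google Scholar definition, not from this document):

```python
def i10_index(citations):
    """i10-index (used by Google Scholar): the number of
    publications with at least 10 citations each."""
    return sum(1 for c in citations if c >= 10)

# Example: four papers cited 25, 12, 10 and 3 times.
print(i10_index([25, 12, 10, 3]))  # 3
```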
All researchers have heard about the impact factor. Read to learn what you may not know about the impact factor. Other measures of journal quality are now available as well.
Journal ranking metrics: new perspective in journal performance management (Aboul Ella Hassanien)
The document discusses various metrics for evaluating journals and research, including the impact factor, immediacy index, and the h-index. It provides definitions and explanations of how these metrics are calculated. For example, it explains that the impact factor is calculated by dividing the number of citations received in the current year to items published in the previous two years by the total number of citable items published in those two years. It also discusses some limitations and criticisms of relying solely on the impact factor for evaluation.
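That two-year calculation can be sketched as follows (the figures are hypothetical, for illustration only, not real JCR data):

```python
def impact_factor(citations_in_year, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received in year Y
    to items the journal published in years Y-1 and Y-2, divided by
    the number of citable items it published in those two years."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 480 citations in 2013 to its 2011-2012
# articles, of which there were 200 citable items.
print(impact_factor(480, 200))  # 2.4
```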
Simon Linacre, Emerald: An insider's guide to getting published in research j... (sainsburylibrary)
This document provides an overview of publishing research in academic journals. It discusses journal rankings from Thomson Reuters, including the Impact Factor. While Impact Factor is widely used, it has some limitations - it is biased towards North American journals, citations are not the only measure of quality, and usage is a better metric of utility. Other factors like internationality and strategy should also be considered when evaluating journals. The document encourages authors to improve dissemination by using descriptive titles and abstracts, complete references, and promoting their work through various channels.
The document discusses various citation databases and research metrics used to evaluate scholarly publications and researchers. It describes major citation databases like Web of Science, Scopus, and Google Scholar that compile citations from bibliographies. It also explains common research metrics like the Impact Factor, h-index, g-index, i10 Index, Cite Score, SJR, and SNIP used to measure the influence and impact of publications and researchers. These metrics are calculated based on factors like the number of citations a publication or researcher receives.
Chapter 1: Introduction to scientific writing (dedy hartama)
Publishing research papers in international journals is important for career progression and gaining recognition. There are thousands of academic journals that follow basic publishing standards. The Institute for Scientific Information (ISI) and Scopus are important databases for indexing journals. Good writing requires planning content before focusing on language elements. Strong writing involves outlining ideas and rewriting drafts substantially.
This document provides guidance on selecting an appropriate journal to publish research. It discusses factors to consider like the paper's content, intended audience, and journal scope. It also covers differences between indexed and non-indexed journals, as well as open access and subscription models. Metrics for evaluating journals are defined, including impact factor, eigenfactor, h-index, and quartiles. The differences between Scopus and Web of Science databases are outlined. Tools for preliminary journal searches like Ulrich's and journal finder databases are recommended. The presentation emphasizes understanding journal metrics and selection criteria before submitting to ensure matching research with a suitable publication outlet.
This document introduces two new journal metrics, SJR and SNIP, that have been endorsed by Elsevier's Scopus database. SJR measures journal prestige by weighting citations based on the status and reputation of the citing journal. SNIP accounts for differences in citation potential across research fields by normalizing a journal's raw citation impact based on the average citations in its subject field. The document compares the two new metrics to traditional journal impact factors and discusses their potential uses for publishers, librarians, and researchers to evaluate journal performance and research impact.
The impact factor (IF) is used to evaluate the importance of academic journals within their field. It represents the average number of citations received in a year by articles published over the previous two years. While IF is widely used, it has several limitations - it is not associated with quality of peer review or content, and journals publishing more review articles tend to receive higher IFs. The IF of a new journal also cannot be calculated until it has been publishing for at least three years. Individual articles or scientists are also better evaluated by metrics like citation impact rather than a journal's overall IF.
It’s important to remember that the impact factor only looks at an average citation and that a journal may have a few highly cited papers that greatly increase its impact factor, while other papers in that same journal may not be cited at all. Therefore, there is no direct correlation between an individual article’s citation frequency or quality and the journal impact factor.
This slide aims to help and guide students on how to start a literature search through WoS and Scopus. The content is excerpted from various sources available on the internet and is solely meant for educational purposes.
Asian Social Science; Vol. 9, No. 5; 2013
ISSN 1911-2017 E-ISSN 1911-2025
Published by Canadian Center of Science and Education
Does Criticisms Overcome the Praises of Journal Impact Factor?
Masood Fooladi¹, Hadi Salehi², Melor Md Yunus³, Maryam Farhadi¹, Arezoo Aghaei Chadegani¹, Hadi Farhadi⁴ & Nader Ale Ebrahim⁵
¹ Department of Accounting, Mobarakeh Branch, Islamic Azad University, Mobarakeh, Isfahan, Iran
² Faculty of Literature and Humanities, Najafabad Branch, Islamic Azad University, Najafabad, Isfahan, Iran
³ Faculty of Education, Universiti Kebangsaan Malaysia (UKM), Malaysia
⁴ School of Psychology and Human Development, Faculty of Social Sciences and Humanities, Universiti Kebangsaan Malaysia (UKM), Malaysia
⁵ Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP), University of Malaya, Malaysia
Correspondence: Masood Fooladi, Department of Accounting, Mobarakeh Branch, Islamic Azad University, Mobarakeh, Isfahan, Iran. E-mail: foladim57@gmail.com
Received: February 16, 2013 Accepted: March 18, 2013 Online Published: April 27, 2013
doi:10.5539/ass.v9n5p176 URL: http://dx.doi.org/10.5539/ass.v9n5p176
Abstract
The journal impact factor (IF), a gauge of the influence and impact of a particular journal compared with other journals in the same area of research, reports the mean number of citations to the articles published in that journal. Although the IF attracts more attention and is used more frequently than other measures, it has been subjected to criticisms that overcome its advantages. Critically, extensive use of the IF may distort editorial and researchers' behaviour, which could compromise the quality of scientific articles. Therefore, it is time for the invention of new journal ranking techniques beyond the journal impact factor.
Keywords: impact factor (IF), journal ranking, criticism, praise, Scopus, Web of Science, self-citation
1. Introduction
A citation is a reference acknowledging the source of information used in an article. Each article typically ends with a reference section listing all works cited; each listed reference is a citation. The number of times other articles cite a particular article is its citation count. A citation index is a type of bibliographic database that traces the references in published articles: it shows how many times an article has been cited and which later articles cite which earlier ones. Citations are used to measure the importance of the information contained in an article and the influence of a journal in its area of research activity and publication.
The first citation index for articles published in scientific journals was the Science Citation Index (SCI), founded by Eugene Garfield's Institute for Scientific Information (ISI, previously known as Eugene Garfield Associates Inc.) in 1960 (Nigam & Nigam, 2012). It was later expanded to produce the Arts and Humanities Citation Index (AHCI) and the Social Sciences Citation Index (SSCI). The SSCI was one of the first databases developed on the Dialog system, in 1972. Small (1973) developed co-citation analysis as a self-organizing classification mechanism, termed "Research Reviews". Garner, Lunin, and Baker (1967) described the graphical nature of the worldwide citation network. Giles, Bollacker, and Lawrence (1998) introduced autonomous citation indexing, which automatically extracts and classifies the citations in any digital scientific or academic article. Subsequently, several citation indexing services were created to automate citation indexing, including Google Scholar, EBSCOhost, the Institute for Scientific Information (ISI) and Elsevier (Scopus).
Among all citation indexing services, ISI has dominated the citation indexing field. ISI is part of Thomson Reuters, which publishes the citation indexes on compact disc and in print. ISI's products can be accessed through the 'Web of Science' (WOS) website, which provides access to seven databases: the Social Sciences Citation Index (SSCI), Index Chemicus, the Science Citation Index (SCI), the Arts & Humanities Citation Index (A&HCI), Current Chemical Reactions, the Conference Proceedings Citation Index: Science, and the Conference Proceedings Citation Index: Social Science and Humanities.
2. Impact Factor
One of the most important products of ISI is the journal impact factor (IF), first devised by Eugene Garfield. The IF is published by ISI as a standardized measure of the mean number of citations to the articles published in a particular journal. The Journal Citation Reports (JCR) define the IF as the frequency of citation to the "average paper" published in a journal in a particular year. In practice, the IF is widely used by research institutions, universities and governmental agencies to measure the influence and impact of a professional journal relative to other journals in the same area of research (Kuo & Rupe, 2007). The IF is also applied to evaluate the individual careers of scientists and researchers. Editors therefore have a strong incentive to increase the IF of their journals.
To calculate a journal's IF, the number of citations in the JCR year to papers the journal published during the two previous years is divided by the total number of papers it published in those two years. Citations may come from articles published in the same journal or in other proceedings, journals or books indexed in the WOS. For example, an IF of 3.0 means that, on average, the papers a journal published during the previous two years have each been cited three times. Although the IF is calculated over articles published and cited in the previous two years, the average citation rate can also be computed over longer periods.
The 5-year journal IF is the number of citations to a journal divided by the total number of papers it published during the previous five years. In certain fields of study the 5-year IF is more appropriate than the 2-year version, because the body of citations accumulated over five years is large enough to allow a reasonable comparison. In some research areas it takes longer than two years to publish a piece of work and respond to previous articles.
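The two- and five-year calculations described above can be sketched as follows. This is a minimal illustration with hypothetical citation counts; the real JCR denominator counts only "citable items" such as articles and reviews:

```python
def impact_factor(citations_by_pub_year, papers_by_year, jcr_year, window=2):
    """Mean citations in the JCR year to items published in the preceding
    `window` years (2 for the classic IF, 5 for the 5-year IF).

    citations_by_pub_year: citations received during jcr_year, keyed by the
    cited paper's publication year; papers_by_year: items published per year.
    Illustrative only -- the official JCR counts only 'citable items'.
    """
    years = range(jcr_year - window, jcr_year)
    cites = sum(citations_by_pub_year.get(y, 0) for y in years)
    items = sum(papers_by_year.get(y, 0) for y in years)
    return cites / items

# A journal publishing 100 papers in each of 2011 and 2012 that drew
# 600 citations to them during 2013 has a 2013 IF of 600/200 = 3.0.
citations = {2009: 60, 2010: 70, 2011: 280, 2012: 320}
papers = {y: 100 for y in range(2008, 2013)}
print(impact_factor(citations, papers, 2013))             # 3.0
print(impact_factor(citations, papers, 2013, window=5))   # (60+70+280+320)/500 = 1.46
```

The 5-year variant differs only in the window, which is why fields with slow citation dynamics prefer it: the larger numerator and denominator smooth out year-to-year noise.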
The aggregate IF applies to a subject category rather than a single journal (Khan & Hegde, 2009). It is calculated in the same way as a journal IF, but uses the total number of citations to all journals in the subject category and the number of papers published in those journals. The median IF is the median of the IFs of all journals in the category. The immediacy index is the number of citations to a journal's articles in the current year divided by the number of articles it published that year. A journal's half-life in a given year is the median age of its articles cited in the JCR. For example, if a journal's half-life in 2007 is six, then articles published from 2002 to 2007 account for 50% of all citations to the journal in 2007. The journal ranking table ranks each journal within its subject category by IF (Khan & Hegde, 2009).
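The immediacy index and cited half-life defined above can be computed as follows, using a hypothetical citation profile in which ages are counted back from the JCR year:

```python
def immediacy_index(cites_to_current_year_items, items_published_this_year):
    """Citations in the JCR year to items the journal published that same
    year, divided by the number of items published that year."""
    return cites_to_current_year_items / items_published_this_year

def cited_half_life(citations_by_age):
    """Number of most recent publication years (counting the JCR year itself)
    whose items account for at least 50% of the journal's citations in the
    JCR year. citations_by_age[k] = citations to items published k years
    before the JCR year."""
    total = sum(citations_by_age)
    running = 0
    for age, cites in enumerate(citations_by_age):
        running += cites
        if running >= total / 2:
            return age + 1
    return len(citations_by_age)

# Hypothetical 2007 profile for ages 0..10: a half-life of 6 means items
# from 2002-2007 draw half of all 2007 citations to the journal.
profile = [10, 20, 30, 40, 50, 60, 50, 40, 30, 20, 10]
print(immediacy_index(50, 100))   # 0.5
print(cited_half_life(profile))   # 6
```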
3. Praises for Impact Factor
As a measure of journal influence, the IF offers several advantages and benefits for researchers as well as librarians, knowledge managers and information professionals. Some of these advantages are listed below:
1) The primary advantage of the IF is that it is very easy to measure.
2) Another advantage of the IF is that it moderates absolute citation frequencies. Since large journals produce a considerable body of citable literature that cites small journals, the IF discounts the advantage of large journals over small ones. It likewise discounts the advantage of frequently published journals over less frequently published ones, and of mature journals over newly established ones. In other words, the journal IF is a useful evaluation measure because it offsets the advantages of age and size among journals.
3) The IF is a useful tool for comparing different research groups and journals, and it is commonly applied to the administration of scientific library collections. Librarians, knowledge managers and information professionals face limited budgets when selecting journals for their institutions and departments; the ISI provides them with the IF as a measure for choosing the most frequently cited journals (Khan & Hegde, 2009). About 9000 social science and science journals from 60 countries are indexed in the Web of Knowledge.
4) IFs for indexed journals are widely available and easy to understand and use. The IF is more accepted and popular than alternative measures (Khan & Hegde, 2009).
4. Criticisms for Impact Factor
However, the IF has several inherent shortcomings that outweigh its advantages. The following problems concern the method by which the IF is calculated:
1) Thomson Scientific is a commercial institution that sells its products and evaluations to research institutions and publishers. The data and measures behind the IF are not publicly available, and it is not easy for scientists to access them; the IF is therefore not subject to a peer review process. Researchers demand full transparency about how Thomson Scientific collects data and calculates citation metrics. Because Thomson Scientific does not disclose the data supporting its published IFs, those IFs cannot be verified; in general, scientists do not accept the findings of a scientific article when the primary data are unavailable.
2) The coverage of the database is incomplete, since Thomson Scientific excludes certain types of sources from the calculation. Books, for example, are excluded as a source of citations, and citations from journals not indexed by the ISI are not counted in the IF (Khan & Hegde, 2009). Falagas, Kouranos, Arencibia-Jorge, and Karageorgopoulos (2008) assert that a major shortcoming of the journal IF is that it considers only "citable" papers, mainly original articles and reviews.
3) The IF is an arithmetic mean of the number of citations per paper, and the arithmetic mean is not an appropriate summary statistic for skewed citation distributions (Joint Committee on Quantitative Assessment of Research, 2008). The IF does not consider the actual quality of research articles, their impressiveness, or the long-term impact of journals. The IF and citation indexes capture a specific, and relatively uncertain, type of performance; they provide no direct measure of quality (Moed, 2005). Using the IF to infer the quality of individual articles or their authors is therefore futile (Baum, 2011). Moreover, the validity and appropriate use of the IF as a measure of journal importance is controversial, and an independent examination might reproduce a different IF (Rossner, Van Epps, & Hill, 2007). The IF is even more controversial when used to assess the articles published in a journal: it reflects the reputation of the publishing journal, not the quality of the content of individual papers. In addition, Falagas et al. (2008) assert that the IF counts received citations only in a quantitative manner; papers published in a journal arguably have a greater influence on science if they are cited by articles of higher scientific quality.
4) The two-year or five-year window for measuring the IF may be reasonable for fields with a fast-moving research process, but not for fields that require a long period for research or empirical validation, or a long review process. Such research may take more than two years to complete and publish, so citations to the original papers will not be counted in the publishing journal's IF. Likewise, articles published many years ago that are still cited have a significant impact on their research area, but citations to them are not counted because of their age.
5) Assigning the same IF to every article published in the same journal obscures the large variability in article citation rates, and lets most journals and articles free-ride on a small number of highly cited ones. Only some articles are cited anywhere near the mean journal IF. In effect, the IF greatly overstates the impact of most articles, while the impact of the few highly cited articles in the journal is understated (Baum, 2011). The journal IF therefore cannot portray the impact of individual articles and bears little relation to the citedness of individual articles in the journal (Yu & Wang, 2007). Seglen (1997) states that the IF is wrongly used to estimate the importance of a single article based on where it is published: because the IF averages over all published papers, it underestimates the importance of highly cited papers while overestimating the citations to average publications.
6) Although it is feasible to evaluate the IFs of the journals in which an individual has published, the IF is not applicable to individual scientists or articles (and ISI does not publish a JCR for the humanities). The number of citations received by a single paper is a better measure of the importance and influence of that paper and its authors (Khan & Hegde, 2009). Garfield (1998) cautions against the "misuse in evaluating individuals" via the IF because of the wide variation in citations from paper to paper within a single journal.
7) The IF depends heavily on the discipline, such as the physical sciences and mathematics, because of the speed with which articles are cited in a research area (Van Nierop, 2009). It is therefore not logical to use the IF to compare journals across disciplines, and the absolute value of an IF is meaningless on its own. For example, a journal with an IF of two would have little impact in a field such as Microbiology, while it would be impressive in Oceanography. The IF is thus invalid as a measure of comparison across fields and disciplines. Nevertheless, such comparisons are widely made, not only between journals but also between scientists and university departments, even though absolute IFs cannot estimate the scientific level of a department. In evaluation programmes, especially at doctoral-degree-granting institutions, reviewers consider the IF of journals when examining scholarly outputs. This biases the evaluation, undervaluing some types of research and falsifying the total contribution of each scientist (Khan & Hegde, 2009).
8) The IF is not relevant in some disciplines, such as engineering, where the main scientific outputs are technical reports, conference proceedings and patents. Nor is it relevant in literature, where the most important publications are books, which cite other books.
9) The number of citations from journals in less widespread languages and from less-developed countries is understated, since only the ISI database is used to calculate the IF.
It is argued that the simple methodology used to calculate the IF allows editors to adopt practices that inflate the impact factor of their journals (Garfield, 1996; Hemmingsson, Skjennald, & Edgren, 2002; The PLoS Medicine Editors, 2006). Reviewers may ask authors to expand an article's citations before publication; this is usual in peer review and can improve the quality of the article. At the same time, the IF signals the importance of a journal within its subject category, so editors have a strong incentive to raise their journal's IF (Yu, Yang, & He, 2011). There are several ways for a journal to increase its IF, one of which is simply to manipulate it (Dong, Loh, & Mondry, 2005). References to a journal can be managed so as to increase its citation count and hence its IF (Smith, 1997). Such journals appear to gain additional impact, quality and quantity, but there is no real improvement in the journal's influence; manipulation of the IF yields only an artificial evaluation.
1) Self-citation is a common way to manipulate and increase a journal's IF (Miguel & Marti-Bonmati, 2002; Fassoulaki, Papilas, Paraskeva, & Patris, 2002; Falagas & Kavvadia, 2006; Falagas & Alexiou, 2007; Yu et al., 2011). Editorial boards may press authors to expand an article's citations, not to improve the quality of the article but to inflate the journal's IF and artificially raise its scientific reputation (Wilhite & Fong, 2012). This practice is called coercive citation: editors require authors to cite papers published in the same journal (self-citation) as a condition of acceptance, even when those citations are irrelevant to the research. Wilhite and Fong (2012) find that 20% of researchers in psychology, sociology, economics and several business disciplines have experienced coercive citation, and that journals with lower IFs are more inclined to coerce citations in order to inflate their IF. Another study documents that coercive citation is also common in other scientific disciplines.
2) Another way to manipulate the IF is to focus on publishing review articles rather than research articles, because review articles survey a large body of literature and are highly citable (Andersen, Belmont, & Cho, 2006; Falagas et al., 2008; Arnold & Fowler, 2011). Review articles therefore yield higher IFs for the journals that publish them, and such journals attain the highest IFs in their research areas. Journals also tend to publish mostly articles likely to be cited heavily in the near future, and to reject papers, such as case reports in medical journals, that are not expected to be cited.
Given the above drawbacks, it has been suggested that any measure used to evaluate journals should have certain characteristics (Offutt, 2008). It should be independent of the length and number of the papers published in the journal; it should recognise essential advances in the area as well as short-term or incremental contributions; and it should be relatively stable from year to year. Most importantly, it should evaluate the impact of a journal over a long period, since some articles are still cited 10, 20 or even 50 years after publication. We should also heed Garfield's warnings against comparing IFs across different scientific fields. Besides the ISI IF, other internet databases, such as Elsevier's Scopus, can be used to calculate citation measures for journals or articles.
The journal IF is an indicator of journal performance, and many of the general recommendations for performance indicators apply equally to its use (Greenwood, 2007). In November 2007 the European Association of Science Editors (EASE) published an official statement recommending that the journal IF be used only, and cautiously, for assessing and comparing the influence of journals, not for evaluating single articles, and certainly not for appraising researchers or research programmes (EASE, 2007). The International Council for Science (ICSU) Committee on Freedom and Responsibility in the Conduct of Science (CFRS) issued a "Statement on publication practices and indices and the role of peer review in research assessment" in July 2008, suggesting that only a limited number of articles per scientist per year be considered, or even that scientists be penalised for an excessive number of publications in a year (Smith, 1997). The Deutsche Forschungsgemeinschaft (German Research Foundation, 2010) issued new guidelines requiring that only the papers themselves, and no bibliometric information on candidates, be assessed in decisions regarding:
"...performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding
proposals, [where] increasing importance has been given to numerical indicators such as the h-index and
the impact factor".
The citation pattern of manipulated journals, especially the number of self-citations within the last two years, tends to be unusual: the self-citation rate of manipulated journals is higher than that of normal journals. The self-citation rate can therefore be used to flag manipulated journals and to calculate a "real" IF by deducting self-citations from the reported IF. Inspecting every journal in the JCR database manually is, however, a difficult task. As discussed, journals may also publish a larger number of review articles to inflate their IF, so the Thomson Scientific website could provide directions for removing review articles from the calculation (Khan & Hegde, 2009).
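The adjustment described above, deducting the self-citation share from the reported IF, can be sketched as follows. This is a hypothetical screening heuristic in the spirit of the discussion, not an official JCR metric:

```python
def corrected_if(reported_if, self_citations, total_citations):
    """Deduct the self-citation share from a journal's reported IF -- a rough
    screen for self-citation-driven inflation (hypothetical adjustment,
    not an official JCR metric)."""
    self_cited_rate = self_citations / total_citations
    return reported_if * (1 - self_cited_rate)

# A journal reporting IF 4.0 in which 30% of the counted citations are
# self-citations has a corrected IF of 4.0 * 0.7 = 2.8.
print(corrected_if(4.0, 300, 1000))
```

Comparing the corrected value with the reported one, or simply ranking journals by `self_cited_rate`, would surface the outliers that merit manual inspection in the JCR.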
5. Conclusion
Although the IF is a popular measure of journal quality, we conclude that it has its own restrictions. We believe there should be more examination of whether and how the IF reflects journal quality before it is broadly accepted as such a measure. The IF certainly cannot assess the peer review process of journals (Offutt, 2008), which means it is not an appropriate measure of the quality of published articles. Counting the number of published articles is one element of assessing a promotion case, but it is an imperfect method because it considers only the number of articles rather than the quality of the publications. However, it is very hard to assign professionals to read articles and give an independent opinion about the impact of a particular scientist on his or her research area.
The implication for researchers and journals is that they should not rely on this indicator alone. If the limitations above are ignored, decisions based on this measure are potentially misleading. Generally, a measure falls into disuse and disrepute within the scientific community if it is found to be invalid, unreliable or ill-conceived. Yet despite the many limitations discussed in this paper, the IF has not lost its standing: indeed, it continues to attract attention and to be used frequently by scientists, librarians, knowledge managers and information professionals. Critically, extensive use of the IF may distort editorial and researcher behaviour, which could compromise the quality of scientific articles. The way the IF is calculated, and journals' policies for increasing it, may push researchers to treat publication as a business rather than a contribution to their research area. It is not fair that the quality of our efforts should be appraised by such an unscientific method. It is time to develop journal ranking techniques beyond the journal impact factor. A new research trend should aim at journal rankings that consider not only the raw number of citations received by published papers but also the influence or importance of the documents issuing those citations (Palacios-Huerta & Volij, 2004; Bollen, Rodriguez, & Van de Sompel, 2006; Bergstrom, 2007; Ma, Guan, & Zhao, 2008). The new measure should represent scientific impact as a function not just of the quantity of citations received but of a combination of their quality and quantity.
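A citation-source-weighted ranking of the kind these authors propose can be illustrated with a small PageRank-style iteration. This is a simplified, Eigenfactor-flavoured sketch with made-up citation counts, not the published Eigenfactor or PageRank-for-citations algorithm:

```python
def journal_rank(cites, damping=0.85, iters=200):
    """PageRank-style journal scores: a citation from an influential journal
    counts for more than one from a marginal journal, unlike the raw counts
    behind the IF. cites[i][j] = citations from journal i to journal j
    (diagonal self-citations set to zero here). Simplified illustration."""
    n = len(cites)
    out = [sum(row) for row in cites]           # total outgoing citations
    rank = [1.0 / n] * n
    for _ in range(iters):
        rank = [(1 - damping) / n
                + damping * sum(rank[i] * cites[i][j] / out[i]
                                for i in range(n) if out[i])
                for j in range(n)]
    return rank

# Three hypothetical journals; journal 0 receives the most citations from
# the others, so it ends up with the highest influence score.
scores = journal_rank([[0, 2, 1],
                       [4, 0, 1],
                       [3, 1, 0]])
print(max(range(3), key=lambda j: scores[j]))   # 0
```

The damping term plays the same role as in web PageRank: it guarantees every journal a small baseline score so the iteration converges even when the citation graph is sparse.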
References
Andersen, J., Belmont, J., & Cho, C. T. (2006). Journal impact factor in the era of expanding literature. Journal
of Microbiology, Immunology and Infection, 39, 436-443.
Arnold, D. N., & Fowler, K. K. (2011). Nefarious numbers. Notices of the American Mathematical Society, 58(3),
434-437.
Baum, J. A. C. (2011). Free-riding on power laws: Questioning the validity of the impact factor as a measure of
research quality in organization studies. Organization, 18(4), 449-466.
http://dx.doi.org/10.1177/1350508411403531
Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals, College & Research
Libraries News, 68(5), 314-316.
Bollen, J., Rodriguez, M. A., & Van de Sompel, H. (2006). Journal status. Scientometrics, 69(3), 669-687.
http://dx.doi.org/10.1007/s11192-006-0176-z
Dong, P., Loh, M., & Mondry, A. (2005). The impact factor revisited. Biomedical Digital Libraries, 2.
European Association of Science Editors (EASE). (2010). Statement on inappropriate use of impact factors.
Retrieved from http://www.ease.org.uk/publications/impact-factor-statement
Falagas, M. E., & Alexiou, V. G. (2007). Editors may inappropriately influence authors’ decisions regarding
selection of references in scientific articles. International Journal of Impotence Research, 19, 443-445.
http://dx.doi.org/10.1038/sj.ijir.3901583
Falagas, M. E., & Kavvadia, P. (2006). Eigenlob: Self-citation in biomedical journals. Federation of American
Societies for Experimental Biology Journal, 20, 1039-1042. http://dx.doi.org/10.1096/fj.06-0603ufm
Falagas, M. E., Kouranos, V. D., Arencibia-Jorge, R., & Karageorgopoulos, D. E. (2008). Comparison of
SCImago journal rank indicator with journal impact factor. The Federation of American Societies for
Experimental Biology Journal, 22(8), 2623-2628.
Fassoulaki, A., Papilas, K., Paraskeva, A., & Patris, K. (2002). Impact factor bias and proposed adjustments for
its determination. Acta Anaesthesiologica Scandinavica, 46, 902-905.
http://dx.doi.org/10.1034/j.1399-6576.2002.460723.x
Garfield, E. (1996). How can impact factors be improved? British Medical Journal, 313, 411-413.
http://dx.doi.org/10.1136/bmj.313.7054.411
Garfield, E. (1998). Der impact faktor und seine richtige anwendung. Der Unfallchirurg, 101(6), 413-414.
Garner, R., Lunin, L., & Baker, L. (1967). Three Drexel information science research studies. Madison: Drexel
Press.
Giles, C. L., Bollacker, K. D., & Lawrence, S. (1998). An automatic citation indexing system, Digital Libraries
98- Third ACM Conference on Digital Libraries, New York, 89-98.
Greenwood, D. C. (2007). Reliability of journal impact factor rankings. BMC Medical Research Methodology,
7(48), 1-6.
Hemmingsson, A., Mygind, T., Skjennald, A., & Edgren, J. (2002). Manipulation of impact factors by editors of
scientific journals. American Journal of Roentgenology, 178(3), 767-767.
http://dx.doi.org/10.2214/ajr.178.3.1780767
Joint Committee on Quantitative Assessment of Research. (2008, June 12). Citation Statistics, 1-26.
Khan, & Hegde, (2009). Is impact factor true evaluation for ranking quality measure? Journal of Library &
Information Technology, 29(3), 55-58.
Kuo, W. & Rupe, J. (2007). R-impact: reliability-based citation impact factor. IEEE Transactions on Reliability
56(3), 366-367. http://dx.doi.org/10.1109/TR.2007.902789
Ma, N., Guan, J., & Zhao, Y. (2008). Bringing PageRank to the citation analysis. Information Processing and
Management, 44(2), 800-810. http://dx.doi.org/10.1016/j.ipm.2007.06.006
Miguel, A., & Marti-Bonmati, L. (2002). Self-citation: comparison between Radiologia, European radiology and
radiology for 1997–1998. European Radiology, 12(1), 248-252. http://dx.doi.org/10.1007/s003300100894
Moed, H. F. (2005). Citation analysis in research evaluation. Information Science and Knowledge Management.
Springer.
Nigam, A., & Nigam, P. K. (2012). Citation index and impact factor. Indian Journal of Dermatology,
Venereology, and Leprology, 78(4), 511-516. http://dx.doi.org/10.4103/0378-6323.98093
Offutt, J. (2008). The journal impact factor. Journal of software testing verification and reliability, 18(1).
Palacios-Huerta, I., & Volij, O. (2004). The measurement of intellectual influence. Econometrica, 72(3), 963-977.
http://dx.doi.org/10.1111/j.1468-0262.2004.00519.x
Rossner, M., Van Epps, H., & Hill, E. (2007). Show me the data. Journal of Cell Biology, 179(6), 1091-1092.
http://dx.doi.org/10.1083/jcb.200711140
Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. British
Medical Journal, 314(7079), 498-502. http://dx.doi.org/10.1136/bmj.314.7079.497
Small, H. (1973). Co-citation in the scientific literature: A new measure of the relationship between two
documents. Journal of the American Society for Information Science, 24(4), 265-269.
http://dx.doi.org/10.1002/asi.4630240406
Smith, R. (1997). Journal accused of manipulating impact factor. British Medical Journal, 314(15), 461.
http://dx.doi.org/10.1136/bmj.314.7079.461d
The PLoS Medicine Editors. (2006). The impact factor game: It is time to find a better way to assess the
scientific literature. PLoS Med., 3, 291. http://dx.doi.org/10.1371/journal.pmed.0030291
Van Nierop, E. (2009). Why do statistics journals have low impact factors? Statistica Neerlandica, 63(1), 52-62.
http://dx.doi.org/10.1111/j.1467-9574.2008.00408.x
Wilhite, A. W., & Fong, E. A. (2012). Coercive citation in academic publishing. Science, 335(6068), 542-543.
http://dx.doi.org/10.1126/science.1212540
Yu, G., & Wang, L. (2007). The self-cited rate of scientific journals and the manipulation of their impact factors.
Scientometrics, 73(3), 356-366. http://dx.doi.org/10.1007/s11192-007-1779-8
Yu, G., Yang, D. H., & He, H. X. (2011). An automatic recognition method of journal impact factor manipulation.
Journal of Information Science, 37(3), 235-245. http://dx.doi.org/10.1177/0165551511400954