Paper 8: Business and Management Journal Quality (Mingers)



  1. Estimating Business and Management Journal Quality from the 2008 Research Assessment Exercise in the UK<br />Professor John Mingers<br />Director of Research<br />Kent Business School<br />Centre for Evaluating Research Performance (CERP)<br />July 2010<br />
  2. 1. Journal rankings<br />Journal quality<br />Journal rankings<br /><ul><li>Many different journal rankings, each with its own biases and prejudices
  3. They are based on often arbitrary criteria; they can be compiled by peer review or behaviourally (e.g., from impact factors)
  4. The original Kent ranking was simply a statistical combination of other rankings
  5. “Objectivity results from a combination of subjectivities” (Ackoff)</li></ul>Why are they so contentious?<br />
  6. Paper quality<br />Journal quality<br />Journal rankings<br />Researcher quality<br /><ul><li>Paper quality is unknown unless we peer review it – hence the RAE; so is researcher quality – there is no little Lion mark. So we impute them from the journal ranking
  7. THEORY 1: The quality of a journal purely reflects the quality of its papers (editors/publishers/common sense)
  8. THEORY 2: Low-quality papers may be published in high-quality journals and vice versa (RAE)
  9. It matters in terms of publication strategy and decision-making</li></ul>Is the journal a good proxy for paper quality?<br />
  10. 2. Reconstructing the 2008 RAE Grades<br />Table 1 Submission statistics for the last three RAEs<br />Adapted from Geary et al (2004), Bence and Oppenheim (2004), RAE (2009a)<br /><sup>a</sup> Totals differ slightly between different sources. Figures for 2008 are after data cleaning as described later<br />
  11. Table 2 Number of publications by output type<br />Adapted from Geary et al (2004), Bence and Oppenheim (2004), RAE (2009a). Categories with zero entries have been suppressed<br />
  12. Figure 1 Pareto curve for the number of entries per journal in the 2008 RAE<br />
  13. Figure 2 Numbers of journals in the RAE and the ABS list<br />
  15. 2.1 The LP Model<br />Initial model (QP1)<br />Let:<br />j index the journals (j = 1 .. number of journals)<br />g index the grades 0*–4* (g = 0 .. 4)<br />i index the universities (i = 1 .. number of institutions)<br />e<sub>ig</sub> be the estimated proportion of research at grade g for university i<br />p<sub>jg</sub> be the estimated proportion of the outputs of journal j graded at grade g<br />u<sub>ig</sub> be the actual proportion of research at grade g for university i<br />n<sub>ij</sub> be the number of entries of journal j submitted by university i<br />subject to constraints for each institution (i) and grade (g), and for each journal (j)<br />
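The equations on this slide did not survive transcription. Given the variable definitions above, a plausible reconstruction of QP1 (hedged – the exact form on the original slide may differ) is a least-squares fit of the estimated institutional grade profiles to the actual ones:

```latex
% Hedged reconstruction of QP1 from the variable definitions above;
% the equations on the original slide are not recoverable verbatim.
\min_{p}\; \sum_{i}\sum_{g}\left(e_{ig}-u_{ig}\right)^{2}
\quad\text{s.t.}\quad
e_{ig}=\frac{\sum_{j} n_{ij}\,p_{jg}}{\sum_{j} n_{ij}}
\;\;\text{for each institution } i \text{ and grade } g,
\qquad
\sum_{g} p_{jg}=1,\;\; p_{jg}\ge 0
\;\;\text{for each journal } j.
```

Under this reading, each institution's estimated grade profile is the entry-weighted average of the grade profiles of the journals it submitted, and the journal profiles are chosen to reproduce the published RAE profiles as closely as possible.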
  16. 2.2 Validity of the Results<br />
  18. Table 9 Proportions of journals in particular ranks comparing ABS with RAE grades<br />Note: we show the proportions as percentages for ease of comparison, but all Chi-Square tests were performed on the underlying frequencies<br />
  19. Conclusions from Table 9<br /><ul><li>Overall RAE grades were higher than overall ABS grades (cols 1, 4), but this was because of the selectivity of submissions
  20. This can be seen by comparing the ABS journals submitted with the ABS journals not submitted (cols 2, 3)
  21. Comparing those journals that are in common, the level of grading is very similar (cols 3, 6)
  22. In the RAE, ABS journals were graded more highly than non-ABS journals (cols 5, 6)</li></ul>Figure 3 Scattergram showing association between GPA and proportion of an institution’s submitted journals that are in ABS<br />
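The note under Table 9 stresses that the Chi-Square tests were run on the underlying frequencies, not the displayed percentages. As a minimal sketch of that test, the Pearson statistic for a contingency table of counts can be computed as follows (the counts in the example are purely illustrative, not the RAE data):

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table.

    `table` is a list of rows of observed frequencies (raw counts, not %).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative 2x2 comparison (hypothetical counts, not the RAE figures)
print(chi_square([[10, 20], [20, 10]]))  # -> 6.666...
```

Running the test on percentages instead of counts would discard the sample sizes and make the statistic meaningless, which is why the slide's note matters.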
  23. There are at least three possible explanations of this association:<br /><ul><li>“RAE bias”: a higher % of ABS journals in itself produces better RAE grades</li><li>“Better depts. more mainstream”: higher-quality departments publish a higher % of their work in ABS journals</li><li>“Greater selectivity”: higher-quality departments both earn better RAE grades and submit a higher % of ABS journals</li></ul>
  24. 2. Citations<br />Paper quality<br />Journal quality<br />Journal rankings<br />Researcher quality<br />Citations<br /><ul><li>The REF and the Leiden methodology
  25. Mean citations per paper, field-normalised (cpp)
  26. Paper quality generates citations, which can then measure journal, paper and researcher quality, but …</li></ul>Is there a journal effect, or a researcher effect as well?<br />
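The field-normalised cpp mentioned on slide 25 – mean citations per paper divided by the field's average citation rate, in the Leiden style – can be sketched in a few lines. This is a minimal illustration; the function and variable names are mine, not from the talk:

```python
def field_normalised_cpp(paper_cites, field_mean_cites):
    """Mean citations per paper divided by the field's mean citation
    rate: a value above 1.0 means above-field-average impact."""
    if not paper_cites or field_mean_cites <= 0:
        raise ValueError("need at least one paper and a positive field mean")
    cpp = sum(paper_cites) / len(paper_cites)
    return cpp / field_mean_cites

# A unit whose papers average 6 cites, in a field averaging 3 cites/paper:
print(field_normalised_cpp([4, 6, 8], 3.0))  # -> 2.0
```

The normalisation step is what lets citation rates be compared across fields with very different citing cultures – the raw cpp of a management department and a biology department are not comparable, but their ratios to their own field means are.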
  27. 2.1 Journal and citations?<br />A study of 6 OR journals, from Management Science to Omega<br />Looked at all 600 papers published in 1990<br />
  28. 2.2 How long do citations take?<br />
  29. 2.3 Other factors affecting citations<br /><ul><li>A negative binomial regression found the following:
  30. Strongly significant factors:
  31. ManSci, EJOR
  32. Rank of author’s institution
  33. Moderately significant factors:
  34. Theoretical papers, review papers
  35. No. of pages, no. of references
  36. OpsRes
  37. Not significant:
  38. Empirical, methodological, case study papers
  39. JORS, DecSci
  40. Author’s country, no. of publications
  41. Title words, keywords, no. of authors</li></ul>3. Counting citations – WoS vs GS<br />
  42. 4. Is cpp the best measure: the h-index?<br />Cites per paper = no. of cites / no. of papers<br />Clearly this can be increased by getting more citations or by producing fewer papers.<br />There is a built-in behavioural effect to lessen research productivity<br />The h-index:<br />“a scientist has index h if h of his/her N papers have at least h citations each and the other (N − h) papers have no more than h citations each” (Hirsch, 2005, p. 16569)<br />It measures both impact and productivity<br />
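Hirsch's definition above translates directly into code. A minimal sketch computing both measures for an illustrative citation list (the numbers are invented for illustration):

```python
def cites_per_paper(cites):
    """cpp = total citations / number of papers."""
    return sum(cites) / len(cites)

def h_index(cites):
    """Largest h such that h of the papers have at least h citations
    each (Hirsch, 2005)."""
    ranked = sorted(cites, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank  # this paper still clears the h-threshold
        else:
            break
    return h

cites = [10, 8, 5, 4, 3]
print(cites_per_paper(cites))  # -> 6.0
print(h_index(cites))          # -> 4
```

Note the behavioural point from the slide: dropping the least-cited paper ([10, 8, 5, 4]) raises cpp to 6.75 while the h-index stays at 4 – cpp rewards producing fewer papers, whereas the h-index does not.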
  46. 5. Technical conclusions<br /><ul><li>Rankings are just a heuristic device and should not be taken as synonymous with quality
  47. We can use the RAE data to reconstruct the judgements the panels made
  48. Citations are a reasonable measure of impact – citations are in fact peer review by the world
  49. There are significant problems in measuring citations
  50. WoS (and Scopus) are extremely limited in their coverage of non-science fields
  51. GS has better coverage but is unreliable
  52. Citations per paper is not a good measure – the h-index is better but has its own limitations</li></ul>5.1 Strategic questions<br /><ul><li>Current measurement regimes hugely distort research:
  53. Narrow focus on types of outputs – i.e., “4*” English-language journal articles
  54. Narrow focus on types of measurements
  55. Narrow focus on types of impact
  56. Should we stop now and develop a system that aims to evaluate quality in a variety of forms and a variety of media, through a variety of measures, with the ultimate goal of answering significant questions?</li></ul>