Reputation Matters
Towards an author economy
Melinda Kenneway
Director, TBI Communications
Director and co-founder, Kudos
Melinda.Kenneway@tbicommunications.com
melinda@growkudos.com
Topics
1. Once upon a time …
2. What is reputation?
3. How is it measured?
   - publications and other digital assets
   - researchers
   - institutions
4. What this means
5. 3 predictions
Once upon a time …
Every day, a miracle …
Distance selling …
Wake up call: NAR goes OA
Seeing the bigger picture
Libraries
Publishers
Universities
Government
Researchers
Funders
Intermediaries
Learned societies
Why submit? Two views…
what is reputation?
A definition
reputation /rɛpjʊˈteɪʃ(ə)n/
noun: reputation; plural noun: reputations
1. the beliefs or opinions that are generally held about someone or something.
Publication performance
Institutional performance
Researcher performance
Publication performance
• Rise of article level metrics
  article-level-metrics.plos.org
• Introduction of altmetrics
  www.altmetric.com
  www.impactstory.org
• New units of publishing: data/images/blogs
  www.datadryad.org
  www.figshare.com
• Pre-publication evaluation
  www.rubriq.com
  www.peerageofscience.org
• Tools for institutional assessment
  www.plumanalytics.com
• Anti-impact factor: DORA
  am.ascb.org/dora
Researcher performance
• Publication output
• Publication impact
• Funding
• Other income (e.g. patents)
• Affiliations (institutional reputation)
• ‘Esteem factors’
• Membership of societies/editorial boards etc
• Conference activity
• Awards and prizes
www.google.com/scholar
www.klout.com
www.researchgate.com
www.peerindex.com
The problem with single score systems…
www.researchcorecard.com
Institutional performance
UK Research Excellence Framework
Outputs 65%: “originality, significance and rigour”
Impact sub-profile 20%: “unit’s reach and significance”
Environment sub-profile 15%: “research environment vitality and sustainability”
Research assessment by sector and country:
• Government: UK – RCUK assessment (pathways to impact); US – STAR Metrics; Germany – ESF Guidelines; China – NNSFC (expert review panels); Japan – CSTP (focus on peer review)
• Higher education: UK – Research Excellence Framework (20% now based on ‘impact’: case studies + pilot of Impactfinder); US – peer review and citation analysis (no formal framework); Germany – Research Rating (introduced early 2013); China – publication metrics (e.g. impact factor); Japan – NIAD-UE (evolving assessment framework; no assessment prior to 2008)
• Private non-profit: UK – funder-specific; US – funder-specific (focused on peer review); Germany – funder-specific (informed peer review); China – “no routine evaluation conducted”; Japan – Research Center for Science Systems (peer review)
“13 carefully calibrated performance indicators”
“subjective judgment of senior, published academics”
http://www.snowballmetrics.com
https://becker.wustl.edu/impact-assessment/model
www.researchfish.com
convergence
what this means
Metrics are here to stay
“If you measure something people change their behaviour.”
3 predictions
1. Emergence of scoring and reward systems for all kinds of academic activity
www.publons.com
2. Complex algorithms for assessment of publication and author impact: “what happened next”
www.frontiersin.org
3. Emergence of systems and tools to influence performance metrics
www.growkudos.com
Thank you
Melinda Kenneway
Director, TBI Communications
Director and co-founder, Kudos
Melinda.Kenneway@tbicommunications.com
melinda@growkudos.com

Speaker notes
  • Welcome to this breakout session on academic reputation
    My name is Melinda Kenneway, and I’m the director of TBI Communications,
    A strategic marketing consulting company
    But more recently I’m also the director and co-founder of Kudos
    A new web-based platform for authors through which they can increase
    usage and citations to their publications
    I'll come back to Kudos a little later in this presentation
    Because it’s very much a response to some of the topics I’m going to cover today
  • I wanted to start this presentation with a story.
    A little bit of a journey through my career in academic publishing
    Which I hope will explain why I’ve chosen to talk today on the topic of reputation
    I’ll also run through some of the various emerging initiatives relating to the measurement of reputation
    Then reflect on what I think this is going to mean in terms of strategic imperatives for the future
  • My story starts back in the early 90s, when I got my first job in publishing at Oxford University Press.
    When I googled images of OUP for this presentation, there were lots of shots taken looking skywards like this,
    and certainly I remember feeling on the one hand rather small and intimidated, but also a glorious sense of history and tradition.
    Reputation of the publisher and publication – particularly for journals – has dominated for such a long time in the minds of authors.
    Being published in a top journal brand or with a prestigious press has been the foundation of building a successful career in academia.
    And this meant for a very long time that publishers didn’t really have to pay a huge amount of attention to authors
    It’s not that authors weren’t important back then, it’s just that they weren’t a focus – particularly for journals
    Because every day a miracle occurred.
  • Almost without any prompting whatsoever, academics would send in their articles to our journals – unpaid –
    and we would publish some of them, and then libraries paid to buy this content.
  • Back in the 1990s we were just starting to learn about libraries too.
    I knew they were out there somewhere, I’d even been to one very occasionally as a student, but I’d certainly never visited one as a publisher.
    I remember going to a presentation not long after I’d started work at OUP,
    talking about how it was becoming critical for publishers to have direct relationships with libraries.
    We hadn’t really gotten involved in the money part before
    Because this was all handled by agents.
    And because the money kept coming in, it didn’t feel like there was a pressing need to get involved.
    But gradually, this lack of direct contact started to be a problem.
    By the late 90s we’d started selling online content, and that changed everything.
     So the last decade has been consumed with publishers forging relationships more directly with libraries,
    and as a result we’ve learned a lot more about them. If you looked into a publishing house a few years ago
    you’d see teams of people dedicated to library selling and relationships,
    but still relatively little focused on author relations.
    You’ve only got to look at the history of identifier development to see where our priorities lay –
    the demand for institutional identifiers (with Ringgold for example) came long before that for authors (ORCID).
  • My wake up call came about 10 years ago. I was planning a marketing campaign
    for converting OUP’s largest and most profitable subscription journal – Nucleic Acids Research – to full, Gold Open Access.
    Suddenly I had to ask our authors, who’d been loyally sending in their articles year after year, for money to publish with us –
    knowing there were plenty of other places they could go and still get published for free.
    I remember doing an internal presentation on the “need to compete for authors” and what that was going to mean –
    that we had to rethink our services, our communications, we’d need an understanding of the kinds of author we wanted to attract and how to identify them.
    This was a whole new world. We didn’t have the information we needed on authors:
    who they were, why they published with us, what they really valued in the publishing process … we had a lot to learn.
    It was around that time that I decided to leave OUP. It was one of the hardest decisions I’ve had to make.
    But I had a niggling feeling that not being attached to a publisher was going to become increasingly important
    if I wanted to be truly free to explore new ideas and opportunities.
    The roles were blurring, and I didn’t want to feel allegiance to any one way of doing things.
     
  • So, I founded a consulting company called TBI and started working with a whole range of stakeholders in the industry –
    One of my first projects was working with a group of authors wanting to start their own OA press,
    which later became the Frontiers series, recently acquired by Nature;
    I’ve also worked with libraries looking to enhance their users’ experience and, more recently, to start their own publishing operations;
    And I’ve worked with a whole variety of publishers too of course – including new start-ups,
    and open access presses, coming into our market with new ideas and approaches.
    Working with publishers that have been open access from birth has been particularly interesting,
    because their entire focus is on attracting authors, often without the benefit of much of a publication or publisher brand to build on either.
     
  • The end result tends to be organizations that are focused on the author.
    Take this example of the difference between how eLife presents itself to potential authors compared to Nature.
    Of course, everyone wants to be published in Nature, and it’s likely to take a while yet to topple impact factor as a measure of quality for an author’s work …
    but one thing is for sure: that day will come.
    And those that have worked hard to gain authors’ loyalty and attention now will likely
    reap the greater rewards in a future where potentially the reputation of publisher and publication
    becomes less important, and the reputation of the author more so.
     
  • Coming back to reputation … let’s for a moment consider what this actually means.
  • This definition I think expresses how most of us might think about reputation.
    The key thing to note is that here it’s expressed as perceptual.
    There may be some metrics behind this, but it’s also an overall feeling that we might have.
    I remember a foundation report that ALPSP produced on “what authors want”, probably 10-15 years or so ago now,
    that found that impact factor wasn’t the top rated feature of a publication for author preference.
    The top rated feature was actually “perceived reputation”, which is clearly somewhat broader and less easy to define.
    When considering authors too, it’s likely that the way in which they are perceived by their employers and peers is
    also driven by a combination of metrics – for example, related to the journals they publish in, the funding they receive, but also their general visibility
    – their presentations at conferences, their peer network and so on.
    We’re moving now towards much more quantitative measures of individual article impact
    and also researcher influence and institutional performance.
    All of which are closely interrelated of course, but developing on somewhat separate tracks at the moment.
    But there will be a convergence soon, and that convergence may drive some substantial changes in the not too distant future
  • Let’s take a moment to examine these 3 tracks and the emerging systems of measurement in each area.
  • Starting with publication performance
  • Article-Level Metrics (ALMs) are a new approach to quantifying the reach and impact of published research. 
    Historically, impact has been measured at the journal level.  A journal’s average number of citations to recent articles (i.e., its impact factor) has for years served as a proxy for that publication’s importance. 
    Articles published in highly-cited journals were viewed as impactful by association
    Now it’s much easier to assess an individual article’s impact independently of the publication it appeared in
    It’s also possible to track different markers of an article’s reach, beyond just citations.   
    Tracking how a paper is used – and who is using it  -  is now becoming possible.
    Article-Level Metrics open the door to measures of both the immediacy and the socialization of an article. 
    These are critical components of impact that have not previously been captured.
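    As a rough illustration (not from the talk), the two-year impact factor is just an average: citations received this year to everything a journal published in the previous two years, divided by the number of citable items in those two years. A minimal Python sketch with made-up citation counts shows how that journal-level average can hide the article-level picture:

        # Hypothetical data: citations received in 2013 by each of the articles
        # a journal published in 2011-2012 (made-up numbers).
        citations = [0, 0, 1, 2, 2, 3, 5, 7, 12, 48]

        # Two-year impact factor: total citations divided by number of citable items.
        impact_factor = sum(citations) / len(citations)
        print(impact_factor)  # 8.0

        # The average hides the skew: most articles sit well below the headline figure,
        # while one highly cited article pulls the mean up. Article-level metrics report
        # each article's own counts instead of this single journal-level proxy.
        median = sorted(citations)[len(citations) // 2]
        print(median)  # 3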
  • I referred earlier to our ability to track many more measures beyond citations to help assess impact
    Altmetrics are the response to this opportunity
    Sometimes talked about as an alternative to established metrics such as citations
    But most of the people I know in the altmetric community talk about them as complementary, not competing
    Presenting a broader picture of how information is being shared, discussed and so on…
    There’s no doubt in my mind that these kinds of metrics are going to become a vital part of impact assessment in the future
    Particularly if we’re able to start relating altmetric scores to more traditional measures of influence
    For example, if we see patterns of high social media activity leading to subsequent high citations or downloads
    Current evidence around this is all a little embryonic, but some correlations are already being found
  • The main altmetric providers are PLOS, Impact Story, Plum Analytics and Altmetric
    Here’s the Altmetric donut, which gives an aggregated score for an article or other piece of digital content
    Based on social media activity
    The donut has mixed feedback
    Some people like the simplicity of a single score
    Others feel that an aggregated count of social media activity is pretty meaningless
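    To make the idea concrete, here is a minimal sketch (not from the talk) of that kind of weighted aggregation. The source names and weights below are invented for illustration; they are not Altmetric’s actual algorithm, which weights and curates sources in its own way:

        # Illustrative only: collapse mentions of an article into one "attention" score
        # by weighting each source differently. Weights here are assumptions, not
        # Altmetric's real ones.
        ILLUSTRATIVE_WEIGHTS = {"news": 8.0, "blog": 5.0, "tweet": 1.0, "facebook": 0.25}

        def attention_score(mentions):
            """Weighted sum of mention counts; unrecognised sources contribute nothing."""
            return sum(ILLUSTRATIVE_WEIGHTS.get(source, 0.0) * count
                       for source, count in mentions.items())

        # Hypothetical mention counts for a single article
        print(attention_score({"news": 2, "blog": 1, "tweet": 40, "facebook": 12}))  # 64.0

    Collapsing very different kinds of attention into one number like this is exactly why the donut divides opinion.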
  • Impact Story presents similar data but doesn’t create a score from it
    And as you can see here from the supported ID types
    Altmetrics aren’t just constrained to articles
    You can get analysis also on web pages, videos and datasets
    This really begins to make a reality of a future where the traditional journal article
    Is no longer the primary unit of scholarly communication
    Altmetrics still have a long way to go
    In the recent CIBER report on trust and authority in the light of the digital transition,
    they found that altmetrics were still not being taken seriously by the research community
    But they’re early in their evolution … and as informal digital communication channels become more accepted and widely used
    Metrics reflecting this activity will of course also grow in importance
  • So thinking more about these new units of publishing
    There’s clearly a lot more material that can now be published
    That previously wouldn’t have been possible in a print world
  • Data is particularly key
    And Dryad now tracks downloads of datasets
    Which can be used by an author to demonstrate the value of their work
  • And through figshare authors can set up DOIs for a whole range of materials
    And then track shares and views
    So the era of analysis of a whole range of research outputs is already upon us
    And we may discover that the article is not optimised for impact
    It’s exciting to think that metrics might help us determine
    What types of research output are actually most effective
    And that this might bring fundamental change to how we communicate research ideas and discoveries
  • Another interesting development is the idea of independent peer review for a publication
  • Rubriq is one example of this
    Which is a service that authors pay for, something in the region of 500-700 dollars
    To have their article reviewed and graded against a number of criteria
    A score is generated, called the R score
    Which is then portable across publishers
    The system was designed to reduce wastage (duplication) and speed up the process
  • Peerage of Science similarly is a pre publication peer review service
    In this system the reviewed and scored article is then made available to publishers
    To essentially express their interest in publishing it
    And the author can then choose from those that say yes
  • There are clearly benefits for institutions
    To be able to have a view on these metrics
  • 34,000 signed plos….
  • The h-index measures productivity and impact: a scholar with an index of h has published h papers, each of which has been cited in other papers at least h times.
    Thus, the h-index reflects both the number of publications and the number of citations per publication.
    The i10-index is the number of publications with at least 10 citations, introduced by Google Scholar in July 2011
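    As a concrete illustration (not from the talk), both indices can be computed directly from a researcher’s per-paper citation counts; the numbers below are made up:

        # Minimal sketch: compute h-index and i10-index from a list of citation counts.
        def h_index(citations):
            """Largest h such that at least h papers have h or more citations each."""
            ranked = sorted(citations, reverse=True)
            h = 0
            for rank, cites in enumerate(ranked, start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        def i10_index(citations):
            """Number of papers with at least 10 citations."""
            return sum(1 for c in citations if c >= 10)

        cites = [31, 18, 12, 9, 6, 4, 1, 0]   # hypothetical researcher with 8 papers
        print(h_index(cites))    # 5 (five papers have at least 5 citations each)
        print(i10_index(cites))  # 3 (three papers have 10+ citations)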
  • Potential to become a lot more sophisticated
    After all, when we think about the more subjective elements of reputation,
    this includes things like influence and visibility, which are harder to put a metric against
    Klout is an example of a system that attempts to do this through algorithms, and present an ‘influence score’
    In this case, what is being assessed is influence within the social media sphere
    So the question is how relevant this is to academia, when several studies have shown academics are slow to adopt social media
    Perhaps what is needed is more specialist services for the scholarly communications industry
    And certainly many have been attempting to establish this, with various levels of success
  • Here’s one that’s performed better than some
    Helped recently by an injection of funding amounting to $35 million last year, with Bill Gates included in the backers
    ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators
    They say they have 2 million members
    Blogger Beatrice Lugger reported in 2012 that her “RG score” reached the top 5% of ResearchGate users,
    although her contributions were restricted to occasional questions
    Usage varies by country, with some countries using it relatively heavily (e.g., Brazil, India) and others relatively little (e.g., China, South Korea, Russia)
  • PeerIndex measures influence by measuring Activity, Audience and Authority.
    Authority measures how relevant your activity is to the community. The Authority measure is boosted whenever others like, comment and/or engage with your activity.
  • I’m not really a big fan of these single scoring systems
    I don’t think they tell us very much
    A single aggregated score at the article level isn’t much more use than an impact factor at a publication level
  • The Times Higher Education World University Rankings 2013-2014 powered by Thomson Reuters
    are the only global university performance tables to judge world class universities across all of their core missions –
    teaching, research, knowledge transfer and international outlook.
    The top universities rankings employ 13 carefully calibrated performance indicators to provide the most comprehensive and balanced
    comparisons available, which are trusted by students, academics, university leaders, industry and governments.
  • Earlier we talked about reputation being subjective …
    Comparing the metrics-based ranking with the reputation-based one, eight of the institutions are the same …
    Yale does better, as does UCLA, but Imperial and Chicago do worse
  • Institutional performance assessment is highly complex, with many factors to consider
    How one institution judges success may differ substantially from another
    As with researchers, single score metrics are very limiting
    What offers more potential for meaningful assessment is a combination of metrics
    That can then be cut a number of different ways to give insight into an institution’s performance
    There are many projects underway looking at evaluating institutional performance
    And here’s an example, Snowball Metrics, which Elsevier is working on with a range of UK-based institutions
    The project started to meet the following objectives:
    - A defined and agreed national framework for data and metric standards is needed
    - Suppliers should participate in the development of these standards
    - Institutions and funders should collaborate to build best practices. They should also develop stronger relationships with suppliers
  • Here’s an example of another model, developed by the Becker Medical Library in Missouri,
    called the Becker Medical Library Model for Assessment of Research Impact,
    which helps demonstrate the potential complexity of institutional performance assessment
    There are 15 or so pages in this model, each as detailed as this…
    Clearly we’re at the start of a process here
    But one thing is for sure
    Standards, systems and processes will emerge over the next few years that will
    make this kind of analysis a regular part of every institution’s and researcher’s life
  • And direction will of course likely come from those whose money drives higher education:
    funders and government
    We’ve already touched on criteria laid down by the UK government as part of the research excellence framework
    And of course independent funding agencies are key too
    And they are also getting increasingly interested in performance indicators
    Here are the Wellcome Trust’s high-level indicators
    Which include not only measures of academic impact, but societal understanding and impact too
    Most of the researchers I know tell me that the competition for funding is getting more and more intense
    So demonstrating your effectiveness against these metrics will become increasingly critical
  • Intermediaries are of course already entering this space to help with institutional-level performance assessment.
    There are a range of dashboard providers
    And those gaining most traction tend to be collaborative systems that aren’t based on one single funder or institution’s needs
    ResearchFish is one example, which can be used by institutions to collate information on publications, partnerships, funding and intellectual property rights
    It enables researchers to report once across multiple funders, and re-use their data.
  • At the moment, the three tracks of publication performance, researcher performance
    and institutional performance are developing on separate, if somewhat overlapping tracks
    But they are already starting to come together
    As all interested parties begin to settle on what really matters, that’s when we’re really going to start seeing changes
    At the end of the day, government and funders hold the purse strings
    So what they want to see will inevitably and increasingly drive what authors do in terms of publication choices
    Publishers will need to consider what happens after publication as much as what happens before
    – helping ensure that researchers publishing with them are best placed to perform well against a broadening set of metrics
    Institutions will become ever more important managers of performance,
    working with and directing their research communities on performance improvement
  • For those of you familiar with The Hitchhiker’s Guide to the Galaxy,
    you’ll recall that the answer to life, the universe and everything was 42
    Well, things aren’t so simple in our industry
    There will be no single number answer to us understanding what reputation means
  • But of this I’m certain
    There is too much money at stake to imagine that we’re going to escape measurement
    And of course, once things start being measured, this becomes a driver for changing behaviours
    How, where and what people publish will certainly fall into that
    Publishers will need to compete for the authors whose work is most likely to tick the impact boxes against which publications will be assessed
    How institutions support their research communities in maximizing their performance against key metrics will become critical
    Researchers will be judged not only on how much they publish,
    but also on how effective they are in following through in ensuring their work has measurable impact after publication
    Publishing output will become ever more granular – with systems available to
    aggregate and slice and dice a range of research outputs to assess their value individually and combined
    Outreach within and between networks of specialist interest will become a critical skill for conducting effective research
    The phrase ‘social media’ will have disappeared into history as academics grasp these new communications tools,
    as they become just as essential as the postal system and email have been in the past
  • Our mission is to speed up and improve science by bringing peer review to the forefront of research.
    Publons gives reviewers credit for their work with:
    Open, quantifiable post-publication peer review
    Validated pre-publication peer review
    Citable reviews with DOIs
    Discussion and endorsement of reviews
    Public reviewer profiles
  • Post-publication performance beyond measures of usage will become more important