UKSG Conference 2017 Breakout - User Engagement Analytics: measuring and driving meaningful use of e-resources - Helen Adey and Andrea Eastman-Mullins
1. 12 April 2017
1
Helen Adey, Resource Acquisitions and Supply Team Manager,
Nottingham Trent University
Andrea Eastman-Mullins, COO, Alexander Street
User Engagement Analytics:
measuring and driving meaningful
use of e-resources.
2. Abstract
Nottingham Trent University (NTU) and Alexander Street have
partnered to pilot an in-depth view on analytics, demonstrating user
engagement and impact of use. This presentation shares findings on
how e-resources were used and how these analytics can go beyond
simple cost-per-use evaluation to support effective decision making
on the marketing and promotion of resources and improve our
understanding of how library users are engaging with the resources
we provide.
3. Content
• Background - Why User Engagement
Analytics? Examples of when usage reports
and conventional data aren’t enough
• User Engagement Analytics - the Publisher
view
– What is possible
– Issues to consider
• User Engagement Analytics – the library view
– What do they tell us and how might we use them
– Issues to consider
• Conclusions - what have we learned?
• Next steps…
4. Background - Why do we need
User Engagement Analytics?
• Scenarios when conventional usage data alone is
not enough
– Evidence Based Acquisitions plans: what
constitutes “good” usage?
– eBook short-term loan rentals: free if used
for less than 5 minutes
• How would User Engagement Analytics help and what might
engaged usage look like?
• How long is a resource used for rather than how many times
has it been used?
• Active engagement usage types: notes, cut and paste,
citations…?
5. User Engagement Analytics: The
Publisher View
Andrea Eastman-Mullins
COO, Alexander Street
UKSG, April 2017
8. There is more to “ROI” than cost-per-use
• New discovery or dead-end search?
• Shown in class to 50+ students or one mobile view?
• Preview or 100% viewed?
• Found via organic search or “curated” link?
• Met learning outcome?
Was it viewed?
How was it viewed?
What difference did it make?
14. How valuable was
this video?
(1 – 5 stars)
How were videos
used?
• Assigned for class
• Shown in class
• For research project
• Entertainment
• Training/Education
• Other (open ended)
Measuring Impact – Feedback
17. Publisher View: Challenges
• Balance need for transparency and consistency in stats.
• Librarians and publishers need to invest time to determine
what is useful.
• Definitions evolving
– On/off-campus
– What is a playback in an opera with multiple movements?
– Easy vs. tools to dig deeper
18. User Engagement Analytics - the
Library view: Acquisitions
• Much richer view than we get
from usage and cost-per-use
alone - many metrics showing
engagement.
• Most viewed - obvious EBA
candidates for final acquisition
decisions, but what about
“watched for longest” or %
played?
• Breadth of content used may
support decision to re-subscribe
rather than purchase specific
content
20. User Engagement Analytics - the
Library view: 2 - Technical
• Reassuring top referring URL data - the Library’s Resource
Discovery system is way out in front, supporting the practice of
loading MARC records whenever possible
21. Devices, Browsers and Operating Systems
Data
Some surprising
levels of use from
unsupported or less-
supported devices
22. Curated Views
• Through what paths are our
users finding and accessing
content and what does this tell
us about usage and
engagement?
• On and Off campus use - less
clear what the data is telling
us; data may be less reliable
because of use of proxy servers
etc?
23. User Engagement Analytics 3 - Strategic /
Policy
• Evidence strongly
supports policy of loading
records into our
Discovery system
• Evidence contradicts
received wisdom
regarding Google as a
starting point
• Evidence suggests
support for Apple devices
is increasingly important
24. Marketing and Promotion Strategies
• Targeted use of different
promotion routes could be
monitored for impact &
effectiveness.
• Could inform marketing
policies e.g. what is most
effective route for driving
usage and engagement -
Twitter, Facebook, Librarian
recommendation?
25. Subject use - Potentially useful for course
accreditation and departmental review
purposes
26. Conclusions & Questions- what
have we learned? (1)
1. Possible tension between seamless, non-intrusive ease
of access and the desire to know and understand more
about what our customers are doing by asking them questions.
2. Needs to be an addition to Counter Usage reports and
not a replacement.
3. If there is no standard, how might consistency in
reporting be reached?
4. Can we trust users to be honest with the self-declared
use data? None of the use was for entertainment
reasons?
5. How should we interpret low % used data - we don’t
expect users to start reading all eBooks at page 1, so
what would be the markers of good targeted usage of
resources as opposed to random dipping in and out?
27. Areas requiring more
investigation / thought
• Can we rely on Off Campus data? Do proxy servers and the lack of
WAYFless URL deep links make this meaningless?
• Very clear Resource List usage but comparatively few clips, playlists
or embedding - are there opportunities for engagement with
Academics to improve user experience & learning outcomes?
• We need to understand more about the % used data - what does it
tell us and do we know what “good” looks like?
• Are more information and data always a good thing or do we run
the risk of information overload preventing the decision making
processes that we are trying to improve?
• Would benchmarking of usage with other institutions be possible
and what would it tell us?
28. Where next – Options for
other Engagement
Analytics
How many different people are using
the resource? e.g. anonymised unique
user numbers. Are there conclusions to
be drawn between content used a lot by
a few users and content used a little by a
lot of users?
What might user engagement analytics
look like for other types of text-based
content:
– Notes
– Cutting and pasting
– Highlights
– Citations
– Pages turned / viewed
– URL referrals
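One way the anonymised unique-user idea floated above could work is to hash each user identifier with a secret salt before counting distinct values, so individuals cannot be recovered from the stats. A hedged sketch, with invented identifiers and an assumed salt-rotation policy:

```python
import hashlib

# Assumption: a secret salt, rotated periodically so hashes cannot be
# linked across reporting periods.
SALT = b"rotate-this-secret-each-period"

def anonymise(user_id):
    """One-way hash of a user identifier; irreversible without the salt."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()

def unique_users_per_title(events):
    """events: iterable of (user_id, title). Counts distinct users per title."""
    seen = {}
    for user_id, title in events:
        seen.setdefault(title, set()).add(anonymise(user_id))
    return {title: len(users) for title, users in seen.items()}

events = [("u1", "Opera A"), ("u1", "Opera A"), ("u2", "Opera A"), ("u3", "Doc B")]
print(unique_users_per_title(events))  # {'Opera A': 2, 'Doc B': 1}
```

Comparing these counts with raw views would help distinguish content used heavily by a few users from content used lightly by many.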
29. Opportunities: Evolving Standards
• Definitions where there are no agreed standards.
• Evolve standards to include impact.
• Learning analytics models can guide us.
30. Opportunities: Expand Referring Data
Emphasis on “curated view”
and referring URLs
• It’s not mostly Google.
• Do library efforts in discovery work?
• Does more meaningful use come
from library referrals vs. Google:
longer on site, richer experience?
• Influence of social network, word of
mouth.
32. Opportunities
• Expand beyond video—music, archives, scores, text.
• Partner with libraries willing to experiment to find most useful
data.
• How can measuring engagement drive it? Expose stats to
user community.
– User uploads stats
– Heat map of video
– Ratings
– Most viewed
– Etc.
To put the possibly rather vague idea of User Engagement Analytics into context, I want to start with a couple of recent examples from Nottingham Trent University where my colleagues and I had been looking at what might be termed conventional usage data and found that it wasn’t really enough to make decisions in which we could be confident; we really couldn’t work out what it was telling us.
As some of you may already know, in 2015 NTU Libraries launched a new service called Your Books More Books (YBMB), whereby we undertook to supply our final-year undergraduates and postgraduates with any items we didn’t already stock within 3 days of them asking. As part of this YBMB service we ran 5 EBA plans, and at the end of each plan we found that each had been used differently and that our initially deposited funds performed in widely different ways for each one. For one, our deposited funds would purchase all items used 9 times or more; for another plan, the level was set at items used 25 times or more; for the third, we could buy items used 7 times or more; and so on. At the time, we couldn’t really work out whether this was something we should worry about, and we had no clear idea of what our measure of good usage would be. In the end, we largely went with what was affordable, but for some of the plans we didn’t really feel confident about our decisions or whether we had “purchased” the correct things.
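The deposited-funds arithmetic just described can be sketched in a few lines. This is an illustrative Python sketch, not NTU’s or any vendor’s actual tooling, and all titles, prices and the deposit figure are invented: it finds the usage threshold a deposit can cover when buying from the most-used titles downward.

```python
from itertools import groupby

def usage_threshold(deposit, titles):
    """titles: list of (uses, price) pairs. Returns the smallest usage
    count N such that every title used N or more times fits within the
    deposit, or None if even the most-used band is unaffordable."""
    # Buy from most-used downward, whole usage bands at a time, until
    # the money runs out.
    ranked = sorted(titles, key=lambda t: t[0], reverse=True)
    spent, threshold = 0.0, None
    for uses, band in groupby(ranked, key=lambda t: t[0]):
        cost = sum(price for _, price in band)
        if spent + cost > deposit:
            break
        spent += cost
        threshold = uses
    return threshold

# Invented example: a 300 deposit covers every title used 25+ times.
plan = [(30, 120.0), (25, 95.0), (25, 80.0), (9, 60.0), (7, 150.0), (3, 40.0)]
print(usage_threshold(300.0, plan))  # 25
```

Running the same function over different plans shows why the threshold varied so widely from plan to plan: it lands wherever the money happens to run out.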
The other strand of the YBMB project was to offer a much quicker ILL service, satisfying requests for items not held by using whatever route was quickest. This included next-day British Library supply, Amazon Prime next-day delivery and short-term eBook rentals. Again, at the end of the project we analysed and reported on what had worked well and what could have worked better. As part of this analysis, we found that a significant number of the short-term rental eBook links we had sent to our users had been used for less than 5 minutes and had therefore cost us nothing. Again, we couldn’t work out whether this was a good or a bad thing. It’s good that we hadn’t paid, but what would be the verdict of the customers who had hardly used the links we had sent them? Were they really unhappy that the book wasn’t what they needed? Were they OK about it, because they could tick the book off the list of titles they needed to look at and move on to the next item? Or were they thrilled because they had found the quote or references they needed in no time at all? We simply had no idea.
This started us thinking about what sort of metrics and data WOULD be helpful when trying to make decisions about how well our services are being used and how satisfied our users might be with the services we had provided. So, would it be helpful to understand more about how long a customer used an item for, rather than how many times an item has been viewed? Are there some types of measurable usage which really would be meaningful and provide us with better evidence and data on which to base our decision making and service delivery? Around this time, Alexander Street approached us with an offer to work with us on developing a suite of user engagement analytics, and Andrea is going to talk to you now about what is possible from the publisher’s point of view.
You’ve been hearing about engagement and impact stats. We are in the early days of developing standards in this area, much as usage stats were before COUNTER.
Our responsibility as publishers is to share the data we have so libraries can make informed decisions on use. Push/pull
For over a year, we’ve thought about what really matters—ROI balance and meeting learning outcomes. The “so what” of measuring use.
To compare and evaluate, it is critical as video usage evolves to have standard definitions and comparison points.
It is also not the full picture. It indicates which collections were used and how much they were used each month. But it does not reveal what titles were used or how they were used.
We’ve been tracking usage at the item level to support our demand-driven models. Which titles, publishers, subjects were used.
Its main limitation is not revealing what the user does with the content beyond just viewing it.
The challenge is over-reliance on cost-per-use as a sole measure: did the user find significant meaning in what they viewed?
As publishers we have more data to reveal.
First off, the key to understanding user engagement is that the platform must provide ways of engaging the user. Variations in these features from platform to platform might limit what is knowable about engagement.
Users can isolate sections of a video and make a clip. The clip can have a title and a description. This clip is then discoverable by other users. For many of our videos our indexers make ’learning segments’ designed to isolate teachable moments within a longer video.
NTU now has just over three months’ worth of these engagement analytics, and we have found them both fascinating and useful. I’m going to pick out some examples for you that we believe are helpful from an acquisitions point of view (POV), a more technical point of view, and finally a strategic or policy point of view.
Starting with the Acquisitions POV, the first thing to say is that the quality and range of data on offer is much, much richer than we find in the normal COUNTER usage reports. Many libraries use standard metrics for evaluating the value for money they get from a resource by calculating the cost per use (CPU). NTU certainly does this, and it is a metric which underpins and drives many of our renewal and serials review processes. However, we felt this metric doesn’t sit very happily with EBA packages of books or video content, where we really wanted a better understanding of how (e.g. in what ways) the resources are being used and for how long.

At its heart, we wanted data we could interrogate to see whether the decisions made on content not acquired at the end of an EBA plan were going to come back and bite us: e.g. was there content which may have only been used 6 times, but had been used actively and engaged with, by comparison with other items which had been used 7 times but had been opened and quickly closed? Like most libraries, we would naturally lean towards acquiring the content which had been most used, but we were keen to see whether other data and metrics might be better indicators of how our users have engaged with the content. So, is there any data showing very active involvement with the content provided which might give us some sense of how much it was VALUED by our users?

In this slide showing title views, we can see the almost conventional data of most-used items in descending order, but we can also see, on average, how much of the content was played. It is interesting that the item with the largest average % played is not the most viewed title. So if the decision was made to acquire only the top 3 most viewed titles, would we be in danger of rejecting items which had been viewed for longer, or of which more of the content had been looked at, albeit not as many times?
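The dilemma just described, that ranking by raw views and ranking by average % played can surface different titles, can be shown with a tiny sketch. All titles and figures below are invented for illustration:

```python
# Hypothetical engagement figures: views vs. average % of the item played.
titles = {
    "Title A": {"views": 40, "avg_pct_played": 12},
    "Title B": {"views": 31, "avg_pct_played": 18},
    "Title C": {"views": 28, "avg_pct_played": 85},
    "Title D": {"views": 6,  "avg_pct_played": 91},
}

by_views = sorted(titles, key=lambda t: titles[t]["views"], reverse=True)
by_played = sorted(titles, key=lambda t: titles[t]["avg_pct_played"], reverse=True)

print(by_views[:3])   # ['Title A', 'Title B', 'Title C']
print(by_played[:3])  # ['Title D', 'Title C', 'Title B']
```

Acquiring only the top three by views would discard Title D, the title whose content was watched most completely, which is exactly the risk raised above.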
I’ll come back to this point later in the presentation, as it is certainly an area which needs more thought.
One final point from an Acquisitions POV, and for evidence-based acquisitions in particular, is the final bullet point here. Libraries can of course use the metrics showing most-used content simply to allocate the funds lodged with the publisher at the start of the plan to purchase the content which has been used most, but if this usage is spread widely across the EBA collection it may be more cost-effective to re-subscribe than to use the deposited funds to purchase only some of the content which has been used. The spending power of the initial deposit will often vary from plan to plan, and in some cases the line where the money runs out can mean that finishing the plan results in the library losing access to content which appears to have been well used. So a different type of data or engagement analytic which would be very useful from an acquisition decision POV would be data showing the breadth of content for which there are signs of positive engagement, which might make it more effective to re-subscribe to an EBA plan for another year, especially if this would be cheaper than trying to acquire all the content which had been used x many times.
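The breadth argument above reduces to a simple comparison, sketched here with invented figures (not real NTU prices):

```python
def cheaper_to_resubscribe(used_title_prices, subscription_price):
    """used_title_prices: purchase price of every title showing use.
    True if buying all used titles would cost more than re-subscribing."""
    return sum(used_title_prices) > subscription_price

# 40 titles each lightly used, at 75 apiece, vs. a 1500 subscription:
print(cheaper_to_resubscribe([75.0] * 40, 1500.0))  # True: 3000 > 1500
print(cheaper_to_resubscribe([75.0] * 10, 1500.0))  # False: 750 < 1500
```

When usage is spread widely, the first case applies: a further year’s subscription keeps access to everything that was used.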
As Andrea has already mentioned, these are the types of reporting we can see in the engagement analytics, and from an Acquisitions point of view we feel that all of them have value. An item which has been cited suggests both importance and value. Someone liking content so much that they shared it, added it to a playlist, watched it on their mobile, clipped bits of it and perhaps added them to a resource list or embedded them in some sort of learning content, or emailed, saved or printed it, all suggest some level of engagement and interest in the content we have made available. We feel that all of these types of metrics have the potential to give us useful insight on which to base our decisions: particularly on what acquisition decisions we should make, but also on what we might call technical, strategic and policy decision making as well.
Moving on to some of this “technical” data that we find useful and interesting:
Firstly, we were thrilled about this data on the top referring URLs showing our Library Discovery system as the most used route into Alexander Street content. It’s almost received wisdom that librarians assume most of our users have Google as their starting point, and it has also been questioned whether it is really worth all the hassle of loading MARC records, or whether we should just switch content on and off in our link resolver. It is only a couple of years since there was a session at a UKSG conference on whether libraries still need an OPAC/catalogue or should just depend on Google. We feel this data firmly backs up our practice of loading MARC records as being key to driving both discovery and usage, and this metric alone was very pleasing and encouraging.
The other important piece of data in this slide is the 2722 referrals from our resource list system. Going back to our acquisition decision making again, this is critical data in that process, especially for EBA plans: if we don’t spot that some of the content is on a resource list and it falls outside of the usage levels which would normally drive what gets onto the final list of items to be acquired, then we risk links being broken in our resource lists, EBA content which our academics wish to use in their teaching being discarded, and very dissatisfied users. This can be a critical issue if content only gets added to a resource list just before the EBA plan ends and before any usage has occurred. Depending on the reporting from your resource list system, you may be prompted by data from there, but if not, this type of data and intelligence is gold dust.
This may seem like an odd piece of data to highlight, but it has often proved difficult to get support for Apple products onto the list of formally supported devices. Only this year, the library has been considering whether it should bid for the funds to set up a laptop loan locker full of Apple MacBooks. These sorts of data, showing which browsers and operating systems are being used to access Alexander Street content, show that nearly a third of all usage comes from an Apple device. This data and intelligence could help support a capital bid, as evidence of how much these devices are being used for scholarly purposes.
Curated views give both technical insight and insight informing acquisitions decisions. Curated views detail content that our users have found via VLEs, learning management systems, social media, citation services or embeds. This curated data gives insight into user behaviour that promotes awareness of Alexander Street content. Curated views are calculated from referrer data showing what kind of link a user followed and from where, and all give evidence of the effort gone to in drawing users’ attention to Alexander Street content. We feel this is pretty strongly suggestive of the value placed on the content to which our customers are being pointed.
The on- and off-campus data has been less clear, and it has been more of a struggle to work out what, if anything, it is telling us, or whether it is misleading. On the whole, in its current form, we think this information is misleading and may well be confused by the use of proxy servers, amongst other things. NTU and Alexander Street have discussed this, and we both think it would be better simply to report on the facts rather than try to extrapolate what the data may be suggesting: e.g. to what extent our users access the Alexander Street platform from on campus via IP authentication, or by using EZproxy, Shibboleth, etc. As it stands, we believe the graphic shown here to be misleading.
Moving on to strategic decision making and influencing: as mentioned before, we love the top referring URL data and the fact that Primo seems to be beating Google hands down, a fact that seems to fly in the face of the conventional received wisdom that most of our users start their searches in Google. NTU is currently looking again at our Digital Library and Resource Discovery policies and strategies, and we would definitely like to include some of these new data types in those reviews. It’s also worth saying that we see the same pattern of URL referrals in the Kanopy data and some data from JSTOR: every time, Primo is out in front.
We know that there is some tidying up of these data to do, as our technical experts feel that in all likelihood the Shib2IDP referrals and the “no session referrer host” value are also counting other access via Primo, but once this bug is fixed, the evidence will be even more compelling that our discovery systems are working well and are leading our users to the content we are providing.
As previously mentioned, we also feel the data on devices could be useful in a variety of fora, in discussion with our Information Systems department, Capital funding bids and so on.
Issues around marketing and promotion of resources are becoming more important to libraries, and NTU is no different. We have only relatively recently started to promote our collection resources through social media, and the granularity of the Alexander Street data gives us the chance to try out different marketing tools and routes, watch for the impact in the user engagement analytics, and see what this might tell us about the most effective communication routes. This is something we want to try in the coming months, once we have identified robust sampling techniques. So we are not quite there yet, but we hope to be soon: gaining some insight into whether Facebook promotion is more effective than Twitter, and whether word-of-mouth promotion by our library liaison teams is more effective still, would be very useful.
I hope this slide is fairly self-explanatory, but for the non-librarians in the room: libraries regularly get asked for data to support course or programme accreditation visits and reporting.
This information on the subject spread of usage may help inform the previously mentioned acquisition decisions about whether to re-subscribe or conclude the EBA plan. From the list above, we definitely supply data for NTU accreditation submissions for Education, Architecture, Business and Psychology, and I doubt that video content is currently included in that data. These data definitely supplement and add to our knowledge and understanding of who may be actively engaging with Alexander Street content.
The temptation to try to find out more and more about what our customers like and value is almost overwhelming, but we do recognise the possible tension between this desire and seamless, non-intrusive access to content. You only have to think of those websites where, as soon as the page opens, you are asked to complete a survey, to get an idea of how annoying this can be.
Is a given
Publisher view on lessons learned so far.
Caliper is IMS’s Learning Analytics Framework (student engagement and learning metrics)
a playback (for COUNTER) and an aggregated playback (for engagement reports and pda reporting).
Curated views are a count by title of views received via “user promotion”: a Tweet, Facebook post or blog post, or a LibGuide embed, LMS embed, Wikipedia citation, etc.
Why interesting? It indicates an investment on behalf of the person posting the link or embed—deeper engagement. Unlike AltMetrics it counts the ’return users’ and it measures how successful these posts are at generating views. Demonstrates community found value in the video.
Among all measures we’re capturing now, we are hearing it is perhaps the most revealing about the material that’s impacting the user community.
This is a tactic where you group referring URLs into ‘Channels’. It’s a way to measure your investment in discovery alternatives.
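A minimal sketch of that channel-grouping tactic, assuming hypothetical host names (the Primo and reading-list hosts below are invented, not real NTU endpoints):

```python
from urllib.parse import urlparse

# Illustrative channel map; a real deployment would maintain fuller lists.
CHANNELS = {
    "Discovery": {"ntu.primo.example.com"},        # hypothetical Primo host
    "Search engine": {"www.google.com", "www.bing.com"},
    "Social": {"twitter.com", "www.facebook.com"},
    "Resource list": {"lists.example.ac.uk"},      # hypothetical reading-list host
}

def channel_for(referrer):
    """Map a referring URL to a named channel, defaulting to 'Other / direct'."""
    host = urlparse(referrer).netloc
    for channel, hosts in CHANNELS.items():
        if host in hosts:
            return channel
    return "Other / direct"

def channel_counts(referrers):
    """Tally referring URLs by channel."""
    counts = {}
    for url in referrers:
        channel = channel_for(url)
        counts[channel] = counts.get(channel, 0) + 1
    return counts

refs = [
    "https://ntu.primo.example.com/discovery/search?q=opera",
    "https://www.google.com/search?q=alexander+street",
    "https://twitter.com/NTULibraries/status/1",
]
print(channel_counts(refs))  # {'Discovery': 1, 'Search engine': 1, 'Social': 1}
```

Grouping this way turns raw referrer logs into a per-channel measure of each discovery investment.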
We report the different “sources” of traffic to your Alexander Street material, and take a deep dive on a special category called “Curated Views”.