ALPSP conference on discoverability: Lettie Conrad presentation, July 2013



Presentation from SAGE's Lettie Conrad as part of the ALPSP training session on discoverability. The presentation looks at how the landscape of content discoverability is evolving, exploring the current challenges and progress that has been made.

  • Ok, so just some quick facts about SAGE, as a foundation for my talk. We are a nearly 50-year-old independent scholarly publisher with editorial offices in the US, UK, India and Singapore. We publish print and digital books, journals, reference and database products. We have a number of imprints and subsidiaries – *cue animation* such as Corwin, CQ Press, and Adam Matthew. For the purposes of this talk, I'll be focusing on the online products and databases we sell into higher education, corporate and government libraries, and various special markets. *cue animation* Specifically, I'll draw on discovery channels for SAGE Journals, SAGE Knowledge, SAGE Research Methods and CQ Researcher.
  • Now, with that foundation, let me outline my session. Essentially, I'm here to share some research, performance metrics and strategies used at SAGE, as a representative publisher perspective on discoverability. Specifically, I'm going to outline 4 key channels of user discovery of SAGE content and who uses them, why they matter, and how we monitor their activity. And then, if we have time, we can have a little discussion or I'll take questions. And – just to be clear – I should also tell you what I'm *not* talking about *cue animation* – I'm not talking about THE Discovery Channel *cue animation* (hehe)
  • (seriously though) What I DO mean to illuminate is the channels by which readers find and use SAGE's online databases and platforms. Understanding what these channels are and why they matter depends on a functional knowledge of our users. SAGE looks to two primary sources of information here: market research and data analysis. Market research *cue animation* includes usability testing and observational studies; library advisory boards that span both specialty and territory; focus groups, surveys and other methods of gaining insights on our product performance and development strategies from students, practitioners, faculty, or other relevant readers; and monitoring information-seeking behavioral research in the scholarly literature. Data analysis *cue animation* draws on a number of tools and resources, including COUNTER usage reports, Google Analytics, Moz (previously SEOMoz, focused on open-web performance metrics), and a MasterVision tool we've developed with Data Salon to illuminate data about turnaways (specifically, unauthenticated users within institutional networks).
  • There are 4 primary channels of online usage to SAGE's online databases and platforms. There are multiple channels within each of these 4 categories, but I'll mention each briefly just now; then, throughout this talk, I'll go into more detail about each channel. *cue animation* Open web search: any mainstream search tool (Google), excluding social search / social media referrals. *cue animation* Library search: any referrals from academic / institutional websites, proxies, OpenURL, LibGuides, etc. *cue animation* Academic / A&I search: any abstracting / indexing database referrals and mainstream academic search tools like Google Scholar or Microsoft Academic Search. *cue animation* Marketing efforts: this includes SAGE-generated social media outlets and campaigns, house banner ads, and a range of email campaigns to promote new content, enhanced products, industry initiatives, events, etc. You can probably lump together different types of traffic sources or discovery resources in any number of ways, and different channel types are more or less important depending on your mission, the nature of your content, etc. Simon and his colleague Tracy Gardner created one classification for discovery channels in their longitudinal survey on how readers discover academic journals. But this is the schema that we most often use internally when discussing how users find our content.
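As a rough illustration of how a four-channel schema like this can be applied to raw referral data, here is a minimal referrer classifier. The domain lists and matching rules are assumptions for demonstration only, not SAGE's actual analytics configuration:

```python
# Hypothetical sketch: bucketing referrer hostnames into the four
# discovery channels described above. The domain sets below are
# illustrative examples, not a real production list.
from urllib.parse import urlparse

OPEN_WEB = {"www.google.com", "www.bing.com", "en.wikipedia.org"}
ACADEMIC = {"scholar.google.com", "academic.research.microsoft.com"}
MARKETING = {"twitter.com", "facebook.com"}  # campaign / social sources

def classify_referrer(url: str) -> str:
    """Map a referring URL to one of the four channel categories."""
    host = urlparse(url).netloc.lower()
    if host in ACADEMIC:
        return "academic/A&I search"
    if host in OPEN_WEB:
        return "open web search"
    if host in MARKETING:
        return "marketing"
    # Library proxies and institutional domains often carry markers
    # like these (again, an assumption for the sketch)
    if "ezproxy" in host or ".lib." in host or host.endswith(".edu"):
        return "library search"
    return "other"
```

Note that Google Scholar is checked before the open-web set, mirroring the decision above to count it as an academic tool rather than mainstream search.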
  • So, for each channel, I'm going to try to answer for you 3 questions. Who uses it? This is where user knowledge is critical, to understand the types of readers of our content, such as undergraduate students or scientists. Why does it matter to SAGE? I'll share some high-level performance metrics and the evidence-based approach we take to prioritizing our efforts on certain discovery channels based on product or content type. How do we monitor? I'll explain a bit about how we keep tabs on the performance of each channel by product.
  • Ok, let's start with open web search – who uses it? Well, everyone uses mainstream search tools, whether they like to admit it or not. Key players in this channel are *cue animation* Google and Wikipedia, obviously. (I should note that we do not include Google Scholar in this channel, as we consider it an academic tool.) The reasons for their popularity vary from user to user, but mainly it's a matter of providing readers with a simple and convenient tool. A single search box. A free service. Reliable results, with answers to your questions every time. Most academic products cannot boast these traits. The fact is, an increasing number of students and readers value speed, simplicity and convenience over quality information. An important element to keep in mind with this channel is that usage numbers from open web search tools are often high – but that does not mean it is all relevant, quality traffic. Being part of the Google soup means SAGE products see higher bounce rates, where unauthenticated visitors quickly retreat when they find themselves in an unknown database or platform, very often irrelevant for their needs. There are variations in HOW different readers leverage these tools for content discovery. For example, a case study at the University of Baltimore showed that undergraduate students are familiar with a variety of search needs and modify their techniques as needed. Many in this study reported going to Google or other mainstream tools when doing simple keyword searches, often at the start of a project or when they're unfamiliar with a topic. More complex searches focused on a specific topic, where users have deeper knowledge, might be better served by other channels, with advanced Boolean searches in a specialty database or library catalog. Ultimately, knowledge of these trends is critical to understanding our usage data.
  • So why does this usage channel matter? This one's pretty obvious: everyone uses it. The high volume of open-web referrals, and market research evidence of the prevalent use of mainstream search tools within academia, makes this one a no-brainer. Ultimately, competitive SEO can make or break a product. Since so many students and faculty start with Google or Wikipedia, library holdings must be discoverable via these channels. If they're not, cost / use will quickly decline and result in cancellations or lack of sales. Also, the Googles of the world are common 'starter' channels for discovery as well. Our research shows that many readers will shift to GScholar or other academic database tools after conducting an initial open-web search. For example, *cue animation* this diagram shows how a single Wikipedia entry page can push traffic to numerous resources. It's important for publishers to be creative about ensuring their authoritative content is one of those optional destinations.
  • So, the final question for this channel is 'how do we monitor?' Mainly, at the moment, we're recording and analyzing mainstream search traffic via Google Analytics. We also use Moz, as it is focused on SEO specifically, so it allows us much greater insight into usage patterns related to the open web. SAGE teams have developed performance dashboards for all primary product lines. Product and Marketing Managers analyze this data on a regular basis, to determine trends in open-web activity. *cue animation* For example, CQ Researcher is one where we're very focused on open-web traffic at the moment, as it's notably lower than other products. As you can see here with data from May 2013, we have strong performance from library referrals, but we're adjusting site design and authentication routines to achieve greater indexing in Google and Google Scholar. *cue animation* In contrast, during the same month, we see very different patterns for journals, where more than 50% of our traffic comes from mainstream search tools. Currently, we feel this is a good balance for journals, and all channels are performing well for us. We try to leverage these experiences to apply to other product strategies.
  • Ok, onto the second key discovery channel for SAGE products – library search. Again, this encompasses both library search tools as well as browse, research guides, widgets, faculty sites, or any other traffic engine within a customer's institution. For the most part, our research shows that library search and browse tools are most commonly used by more advanced students – either upper-level undergrads or graduate students – as well as faculty, staff and librarians. While single-search-box discovery services, like Summon, are driving younger students to the library website, SAGE readers and customers commonly report that they look to the library website for specific use cases. For example, a humanities grad student recently told us that she looks to the library website when she's looking for full-text access to a specific publication. Observational research we conducted earlier this year made it clear that library search tools are relied on once a user has a deeper knowledge of their topic or specialty.
  • Ok, so why does the library search channel matter to SAGE? To start, we want to be sure we're visible and available for those use cases I just mentioned – we want to capture faculty and advanced students when they turn to the library for help. And our investment in SAGE content being discoverable via library search and browse resources is a win-win: we are both securing strong performance for our products and securing our customers' investment in SAGE products. SAGE's investment in library search has multiple prongs. *cue animation* We deliver full-text metadata for indexing in all major discovery services. *cue animation* We invest notable resource in KBART-compliant monthly feeds of product data for ERM services. And we develop portable tools for librarians to adopt and install on their websites – *cue animation* such as LibGuides and search widgets. This is our latest LibGuide, which will go live on the SpringShare platform very soon!
  • Here again, we rely on Google Analytics to track library referrals. We set up a custom advanced segment that filters for common library website attributes (for those GA geeks in the room). We're also looking at cost / use and other metrics from COUNTER reports that our customers regularly monitor. Understanding their benchmarks helps us develop KPIs for each product or content type. For example, the LibGuide I showed you a moment ago (*back browse to previous screen*) showcases content in SAGE Research Methods – a platform that hosts the lion's share of SAGE's content in research methodology, mixed methods, research design, and the research process. During its first full year in the market, SAGE Research Methods struggled for a competitive usage ratio. Over the last few years, we've been working to improve traffic via all channels – specifically library search, since that's a favorite among advanced readers who are more likely to be conducting social science research studies. *cue animation* Comparing the fourth quarter of 2011 with the fourth quarter of 2012, we can see a steady climb in library referrals (that's the orange line). This is moving in the right direction, improving cost / use metrics. But it's not quite where we'd like it to be and, therefore, we're releasing that lovely LibGuide I mentioned very soon.
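The quarterly trend comparison just described boils down to each channel's share of total referrals per period. A minimal sketch of that computation follows; the channel labels and counts are invented for demonstration and are not SAGE's actual dashboard code or figures:

```python
# Illustrative sketch: compute each channel's fraction of quarterly
# referral traffic, the kind of trend line shown on a dashboard.
from collections import Counter

def channel_shares(referrals: list[str]) -> dict[str, float]:
    """Return each channel's fraction of total referral traffic."""
    counts = Counter(referrals)
    total = sum(counts.values())
    return {channel: n / total for channel, n in counts.items()}

# Made-up quarter: 420 library, 300 open-web, 80 academic referrals
q4 = ["library"] * 420 + ["open web"] * 300 + ["academic"] * 80
shares = channel_shares(q4)  # library share: 420 / 800 = 0.525
```

Comparing these shares across quarters (Q4 2011 vs. Q4 2012, say) is what turns raw referral logs into the "steady climb" trend line mentioned above.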
  • Ok, onto the 3rd important discovery channel for SAGE – academic or A&I search. Like library search, we find that advanced SAGE readers – graduate students, faculty, practitioners, and various experts in their fields – seek out academic and discipline-specialty tools, like A&I databases. Academic search providers *cue animation* (of which there are dozens; these logos are just representative) often find they have loyal users who rely on advanced features, like Boolean searches and filters. For publishers, delivering high-quality, enriched metadata is a must in order to be discoverable by the right power users of academic databases and A&I products. Building brand loyalty with these experts is key for primary and secondary publishers in this space.
  • Ok – why does this one matter? Well, it depends on the type of channel within this category. For A&I and classic academic databases, it's about reaching advanced researchers and harvesting usage from a refined group of users. Within the larger scholarly ecosystem, it's about branding and profile among those expert power users, who are leaders in their disciplines. 'Why it matters' also differs across the disciplines. For example, getting journal content into A&I products is a much higher priority for humanities publications, while the hard sciences are focused on many more channels. But you noticed Microsoft Academic Search and Google Scholar on that previous slide (*back browse*). These two constitute a game-changer for publishers in the academic search channel. As open-web / academic hybrids, a great deal of care goes into comprehensive scholarly indexing of relevant, authoritative content, with unique search algorithms and a very clean, simple user experience.
  • Ok, so how do we monitor these two elements of the academic search channel? We continually monitor what students, faculty, librarians, and other readers tell us about their academic search habits and preferences. Of course, we're also leveraging Google Analytics reports with custom advanced segments that filter for elements of referring addresses that are germane to A&I and academic traffic. Generally, the activity is low, relative to our total usage. Here, *cue animation* usage stats from classic A&I databases and hybrid open-web academic search tools are lumped together. We're seeing very low overall numbers – even for journals and CQ Researcher (a serial product), which are often more easily indexed. SAGE Journals is the only product where we are indexed in GScholar and Microsoft Academic Search. And SRM and SK are largely HSS ebooks, which are typically not indexed in A&I databases. If we have time, I can go into a bit more detail about why that is… But the other important element of this channel is the classic A&I products, which, as I noted, have a specialty audience and therefore do not constitute high volumes of traffic. However, they are important users and constitute valuable traffic – often paid traffic (vs. the often large numbers of unauthenticated, unpaid traffic from open web tools).
  • Ok, last channel – marketing campaigns. This one is quite broad and covers a number of different sub-channels, which I'll go over in more detail. To start, who are we targeting here? That depends on whether we're marketing journals or other library database products. For journals, *cue animation* we're looking at some large groups important to our business: faculty (who are also often SAGE authors), scientists, society members, practitioners, experts in their fields – as well as the traditional buyers, librarians. Journals marketing takes many forms, and the use cases are both passive, such as receiving print or email promotions, and active, such as signing up for content or "TOC" alerts, attending conferences or following a tweet to a SAGE press release. For all other online products sold to the library market, we're primarily targeting faculty and librarians for product sales campaigns, catalogs, etc. We're also using emails and newsletters to promote user training and post-sales support for new customers, as well as helping to drive usage and product awareness for established customers.
  • Ok – so I'll split out library marketing from journals marketing for the next few slides, as the objectives and ROI are unique. For journals marketing, as a discovery channel, we've developed a range of cross-promotional banner ads and email campaigns to keep SAGE journals top of mind with researchers and librarians. *cue animation* SAGE-related house ads appear across all our online products and databases, where relevant, to drive awareness about journal society news, industry events, and our publishing program. More advanced scholars – faculty, members of affiliate societies / associations, practitioners (some of whom are also our authors), as well as some librarians – all use our email alerts *cue animation* to keep abreast of new article and issue releases. Obviously, the use case here is that these folks have elected to be "pushed" these alerts – really a passive sort of opportunity for discovery of new content. We're also using this same list to promote new journals, association conferences, title changes, and other journal-specific communications. We invest in these campaigns in order to capture those advanced researchers, faculty, practitioners, etc. and take good care of that valuable user and customer base. And journals promotions are key author services. For a publisher like SAGE, where two-thirds of our journals are published in partnership with an affiliate society or association, email alerts and other marketing efforts are part of the service we provide those members – who, not coincidentally, are part of that advanced reader group. These are qualified leads and represent a valuable element of our business.
  • However, as a discovery channel, journals marketing efforts are tricky to measure. Attending conferences, conducting research activities, and engaging with the market on various levels helps us gauge how we're doing in promoting SAGE journals and our authors. Using platform feature and email usage data, as well as Google Analytics, we can see that marketing campaigns – which are resource-heavy for SAGE – yield relatively little overall web traffic to SAGE's products. For example, *cue animation* campaigns in the first half of 2013 produced so little activity on the SAGE Journals platform that it's not reportable in this pie chart in Google Analytics. Now, this metric doesn't include banner ads or email alerts for content releases. However, if we look at email activity only – this is data related to the alerts at publication of new articles or issues, which readers elect to receive – the usage is still relatively low. In the first 5 months of the year, we sent nearly 5 million content alerts, saw a click-through of nearly 2%, and total related usage on the SAGE Journals platform of 1.3%. Ultimately, we're juggling two primary objectives with journals marketing campaigns: we're providing services to our society partners, and we're also driving usage from many different kinds of users. The tricky thing for this channel is that usage is a relatively minor portion of our overall product usage stats, but the society services are mission-critical to SAGE. So, the motivations are complex and largely about branding / perception.
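A quick back-of-the-envelope check of the alert figures quoted above. The alert volume and click-through rate are the approximate numbers from the note; the computation itself is purely illustrative:

```python
# Approximate figures quoted in the note: ~5 million content alerts
# over 5 months, with a click-through rate of nearly 2%.
alerts_sent = 5_000_000
click_through_rate = 0.02  # ~2% of alerts clicked

clicks = alerts_sent * click_through_rate
print(f"{clicks:,.0f} alert-driven visits")  # prints "100,000 alert-driven visits"
```

Even at that volume, roughly 100,000 alert-driven visits is a small slice of a major platform's total traffic, which is consistent with email alerts accounting for only 1.3% of related platform usage.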
  • Similarly, library marketing is focused on broad segments of our market. The ways we engage with them vary, so let's break this one down by user group. For librarians or other purchasing agents, promotions about new products, services and pricing are vital for making new sales and securing renewals. For faculty, direct marketing (like flyers and emails) and indirect marketing (like facilitating discussion forums on key issues in academia) are effective in promoting our brand or specialization within a given field, as well as encouraging their influence over librarians / buyers for new or renewed sales. For students and other readers, banner ads and social media are the key tools for engaging with SAGE and our content. So, *cue animation* library marketing comes in the form of hosting open-access networking forums on research methods, criminology, communication, and other key fields in which we publish; these often link into our content platforms and databases. *cue animation* Of course, we use emails and newsletters for library marketing efforts, to keep our customers informed of product development. And we're *cue animation* heavily engaged in many initiatives that promote the social sciences, *cue animation* which is a core competency for SAGE.
  • Since library marketing is largely focused on sales and editorial objectives, we use a range of tactics for monitoring how campaigns drive discovery. Again, ongoing research is critical to listening to buyers and users within the academic marketplace. We’re also using various datasets to monitor the effectiveness of library marketing efforts to drive usage of online products. For example, *cue animation* we can measure visits to the SAGE journals platform by territory following the 2012 free trial campaign. We’re also able to look at the bounce rate for that trial usage compared to how many were new visitors in our key territories. That helps inform future trial campaigns and sales efforts by region. We’ve worked with Data Salon to analyze unauthenticated usage for up-sell campaigns to existing customers. We can also compare referral traffic during trial campaigns to other periods of time. For example, during last fall’s global trial promotion *cue animation* we saw an 84% increase in traffic from other SAGE sites, demonstrating the effectiveness of cross-promotional marketing activity during that campaign.
  • Ok, now that we've gone through the 4 main discovery channels for SAGE, I want to unpack a bit of detail on measuring discovery channels. First, as you saw in this talk, we rely quite a bit on Google Analytics. If you're not familiar, that's the free analytics program made possible by Google, out of the goodness of their hearts (and a sly move to aggregate a vast amount of intelligence about web usage trends). It's no surprise, really, that GA comes with some limitations. Because it's a free service, you really get what you pay for – we find that GA is best used for trend analysis, as it is not a reliable source of actual usage counts. We also cannot influence the GA roadmap to, say, add new reports like unauthenticated institutional usage or turn-away reports. Clearly, Google and COUNTER are two very different organizations and have uniquely different agendas or purposes for usage reporting. GA is geared toward a broad understanding of user engagement with a website, whereas COUNTER is designed to support institutional investments in online products. Therefore, GA does not include basics like counting PDF downloads, which is quite central to COUNTER reports. Conversely, COUNTER does not dictate that certain datasets be captured for all types of products. For example, book reports include data such as numbers of searches and sessions, which is not required of journal reports. And in the latest release, COUNTER 4, we're just now counting mobile activity. COUNTER is focused on ensuring reporting standards so that libraries can measure their return on investment in electronic media / databases. However, raw usage data is not enough. Publishers must know their users in order to truly know which discovery channels they should prioritize. That requires a range of activities – from attending conferences in the areas of study in which you publish, to holding focus groups and other market research activities, to conducting usability testing and other observational research.
  • And on that last point, let me give you an example from my work at SAGE. Much of the data I've shared about these channels has been hard ROI metrics and usage stats. But, as you can tell from the overview of user behaviors and trends, SAGE invests a great deal of resource into qualitative studies of how our readers and customers conduct their work and locate our content. So, I wanted to share a bit of detail about a research study we conducted and presented at this year's annual meeting of ALA's Association of College & Research Libraries…. Purpose: to graphically represent typical research workflows of advanced students in the social sciences during their literature review; our goal was to depict activity beyond the walls of a single academic database or library website. Participant profile: … Method: we conducted interviews and observational research with 11 social science masters or PhD candidates in the US and UK.
  • Lettie – to provide some context, Mary and I reviewed sample user data from the UC Denver library and SAGE databases, respectively. Google Analytics is a free web analytics service commonly installed and used by many publishers and libraries. The data is used to inform the development and maintenance of library and publisher websites, to ensure high-functioning online products used by researchers. For example, *cue animation* traffic source reports tell us how many students enter a library site from social media or open web searches. Bounce rates *cue animation* help us understand the level of user engagement on an ebook platform. Visitor flow reports *cue animation* are incredibly helpful in painting a picture of how users navigate a library or publisher website. However, this data is limited to a single website or domain name, where only entry points capture any activity beyond the walls of our online products and services.
  • Lettie – therefore, with the goal of generating a visual representation of a common research workflow beyond the scope of a single website or action, we conducted original research with our advanced social science student participants. As noted earlier, we conducted interviews and gathered observational data. Observation was mostly drawn from recorded sessions: participants used screen-capture software (typically Camtasia or Screencast-o-Matic, viewed with either Camtasia or VLC Media Player *cue animation*), and each student recorded a minimum of one hour of their online research session while searching for content during their literature reviews. The result was 13 hours of observational data. *cue animation* Data was captured in field notes. *cue animation* Each step of their process was recorded in a custom field notes template, with a new square used to capture the URL or description of each site visited during the recorded sessions. This data was then categorized. *cue animation* With slight modification, we applied the same discovery resource categories used in Tracy Gardner and Simon Inger's series of student surveys that track discovery starting points during the research process. Following quality assurance routines to ensure cross-coder reliability, these categories were entered into Excel. This allowed us to generate strings of codes that represent all workflows, from the beginning of a new query to the point where participants captured citations, full text or other evidence that new knowledge was being incorporated into their literature review.
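The coding step described above – turning each recorded session into a sequence of discovery-resource categories, then tallying movements between them – can be sketched roughly as follows. The category names and sample workflows are invented for illustration; the resulting transition counts are exactly the source→target link weights a Sankey diagram consumes:

```python
# Sketch of the workflow-coding step: each recorded session becomes a
# list of category codes, and we count transitions between adjacent
# steps. Categories and paths below are illustrative only.
from collections import Counter

workflows = [
    ["open web", "publisher site", "library site"],
    ["academic search", "publisher site"],
    ["open web", "A&I database", "publisher site"],
]

def transition_counts(paths):
    """Count source -> target links across all recorded workflows."""
    links = Counter()
    for path in paths:
        for src, dst in zip(path, path[1:]):
            links[(src, dst)] += 1
    return links

links = transition_counts(workflows)
# Note how "open web" fans out to two different resources here,
# mirroring the fan-out pattern described in the findings.
```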
  • This is the resulting user pathway chart. Here, we measured activity on many more specific discovery channels than the five just outlined for you earlier. We modeled our categories after the Inger / Gardner survey study and wanted a greater level of granularity than we typically measure in product dashboards. For example, here we measured A&I and mainstream academic search tools separately. We used the Sankey diagram tool developed by Mike Bostock – I encourage you to take a look at the tools available there! Again, the purpose of this diagram is to demonstrate dominant trends in how social science students navigate the web in search of scholarly material, and there are some obvious ones that popped out right away: aggregator and publisher sites were the dominant resources that led students to capturing knowledge into their PDLs (personal digital libraries). We also found trends in connections between certain resources. For example, open-web or mainstream search activity led participants to a wide range of resources – from primary source materials and key research groups to academic materials. In contrast, dedicated resources, like A&I (Scopus) or academic search (GScholar), clearly served their dedicated purpose of driving users to content. We found that these resources often did not open up new pathways for students to discover other associated material. Also, this chart shows how many participants leveraged library sites for authentication or recovery of full text they may have found on the open web.
  • Some additional findings of note, from both interview and observational data, that are not available via your standard web analytics. Trends in search behavior: many participants reported beginning their searches with academic search engines, and this was borne out during observation. However, academic services of all kinds were given higher credence during interviews than was actually demonstrated during observation – specifically, 9% of participants reported starting with mainstream search tools (like Google), whereas we calculated approximately 20% of searches beginning with this type of open-web search product. Most subjects reported being aware of risks in mainstream search results and noted routines for validating authenticity. One subject shared his process: "Google searches I think are helpful as long as they lead to reliable information." Some participants noted that they would use Chrome more often, but IE was the most common browser on library computer terminals and institutionally issued laptops. Some were observed using more than one browser, due to lack of support for some services in Chrome, which was reported to be the most popular browser among these participants. Trends in retrieval behavior: a striking 100% of participants reported and demonstrated a fully manual process for capturing citations and managing their saved documents. Almost 80% noted that they had tried applications, such as EndNote or RefWorks, but found they did not integrate easily into their workflows. A few students mentioned that these applications are tied to their libraries and require institutional logins, which they found inconvenient. Nearly all read articles in the downloaded PDF format, which has clear benefits in usability and builds researchers' personal digital libraries.
However, article PDFs do not contain the advanced functionality often found within the HTML versions of articles hosted on publication websites, which in many cases would be time- and labor-efficient for these students. Many utilized citation metrics, if found easily 'above the fold' near the top of the article, often alongside abstracts. However, no participants made use of features found within the text of the article as hosted online. For example, at least one extra step, and as many as five extra steps, are required to manually retype bibliographic information when participants searched for the full text of works cited in recently discovered articles. However, if they had spent time browsing the HTML version of the article, they could have utilized many programmatic features offered on publisher and aggregator websites – such as hyperlinks to query local holdings in just one click. While 80% of participants noted how quickly concepts develop in their fields, most did not perform any sort of 'double check' to see whether changes were made in subsequent versions after they originally downloaded a PDF. Now, to draw the necessary connections, my research partner, Dr. Mary Somerville, will provide us with conclusions… *hand over*
  • And that's it – thank you for listening! Here's my email and citations for a few resources I used for this presentation, in case you're interested in reading more. Any questions?

    1. 1. Los Angeles | London | New Delhi Singapore | Washington DC Why it matters and what difference it can make ALPSP Seminar It's all about discoverability, stupid! How to get your content seen by the right people July 10, 2013 Lettie Y. Conrad, MA SAGE Publications
    2. 2. Los Angeles | London | New Delhi Singapore | Washington DC SAGE overview ● Independent, global scholarly publisher ● Books, journals, reference, databases
    3. 3. Los Angeles | London | New Delhi Singapore | Washington DC ● SAGE overview ● Discovery channels • Who uses it? • Why does it matter? • How do we monitor? ● Q/A Session Outline
    4. 4. Los Angeles | London | New Delhi Singapore | Washington DC User knowledge >> channel knowledge ● Market research • Usability testing & observation • Librarian advisory boards • End-user focus groups, surveys, etc. • Info-seeking behavior research studies ● Data analysis • COUNTER reports • Google Analytics • Moz (previously SEOMoz) • Data Salon
    5. 5. Los Angeles | London | New Delhi Singapore | Washington DC Discovery channels – what are they? 1. Open web search 2. Library search 3. Academic / A&I search 4. Marketing efforts
    6. 6. Los Angeles | London | New Delhi Singapore | Washington DC Discovery channels – 3 questions 1. Who uses it? (reader / customer persona) 2. Why does it matter to SAGE? 3. How do we monitor?
    7. 7. Los Angeles | London | New Delhi Singapore | Washington DC 1. Open Web Search – who uses it? ● Everyone! (despite what they may say) ● Simple and user friendly ● Quantity vs. quality traffic ● Use case: quick search, new topic
    8. 8. Los Angeles | London | New Delhi Singapore | Washington DC Open web search – why does it matter? ● Everyone uses it (remember?) ● SEO = ROI ● Common 'starter' channel
    9. 9. Los Angeles | London | New Delhi Singapore | Washington DC Open web search – how do we monitor? ● Google Analytics ● Moz ● Market research [Charts: CQ Researcher and SAGE Journals traffic sources – open web search, library referrals, social media, academic, N/A]
    10. 10. Los Angeles | London | New Delhi Singapore | Washington DC 2. Library search – who uses it? ● Advanced students, faculty ● Advanced search / browse ● Use case: narrow queries, “known searches”
    11. 11. Los Angeles | London | New Delhi Singapore | Washington DC Library search – why does it matter? ● Capture advanced readers ● Win-win strategy • Discovery services • ERM feeds • LibGuides, widgets and more!
    12. 12. Los Angeles | London | New Delhi Singapore | Washington DC Library search – how do we monitor? ● Google Analytics ● COUNTER – cost / use ● Usability testing
    13. 13. Los Angeles | London | New Delhi Singapore | Washington DC 3. Academic / A&I search – who uses it? ● Advanced students, faculty, practitioners ● “Power” users ● Use case: deep research, building expertise
    14. 14. Los Angeles | London | New Delhi Singapore | Washington DC Academic search – why does it matter? ● A&I • reach experts, power users • branding, profile, scholarly ecosystem ● Mainstream academic search • hybrid, emerging technology • reach wider audience, including advanced readers
    15. 15. Los Angeles | London | New Delhi Singapore | Washington DC Academic search – how do we monitor? ● Market research ● Google Analytics % Total Usage (Sep-Dec 2012) SAGE Journals 2.6% SAGE Knowledge 0.6% SAGE Research Methods 0.4% CQ Researcher 1.5%
    16. 16. Los Angeles | London | New Delhi Singapore | Washington DC 4. Marketing campaigns – who uses it? ● Journals marketing • Faculty, authors, society members, practitioners, librarians • Use cases: receive flyers, emails; registering for content alerts; attend conferences; social search ● Library marketing • Faculty, librarians • Use case: receive flyers, catalogs, emails, newsletters
    17. 17. Los Angeles | London | New Delhi Singapore | Washington DC Journals marketing – why does it matter? ● Faculty, society members, practitioners, librarians, authors ● Use case: follow links in banners, emails
    18. 18. Los Angeles | London | New Delhi Singapore | Washington DC Journals marketing – how do we monitor? ● Market research ● Email usage ● Google Analytics ● Platform feature usage Jan-May 2013 Usage metric Total emails sent 4.9 M Email click-through rate 12.9% % total usage from emails 1.3%
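The slide's email figures imply a rough click volume: total sends multiplied by the click-through rate. A minimal sketch of that arithmetic, assuming a simple definition of click-through rate as clicks divided by emails sent (the exact definitions behind SAGE's metrics are not stated in the deck):

```python
# Sketch of the arithmetic behind the Jan-May 2013 email metrics on this
# slide. The CTR definition (clicks / emails sent) is an assumption here.

def estimated_clicks(total_sent: int, click_through_rate: float) -> float:
    """Approximate click volume implied by a send count and a CTR."""
    return total_sent * click_through_rate

# Figures from the slide: 4.9 M emails sent, 12.9% click-through rate.
clicks = estimated_clicks(4_900_000, 0.129)
print(f"Implied clicks: {clicks:,.0f}")  # roughly 632,100
```

This is only an order-of-magnitude check; it says nothing about how many of those clicks converted into the 1.3% of total usage attributed to emails.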
    19. 19. Los Angeles | London | New Delhi Singapore | Washington DC Library marketing – why does it matter? ● Librarians: New sales, renewals, news ● Faculty / authors: Brand, influence, engagement ● Students/readers: Social media / networking
    20. 20. Los Angeles | London | New Delhi Singapore | Washington DC Library marketing – how do we monitor? ● Market research ● COUNTER reports ● Google Analytics ● Data Salon [Chart: visits, % new visits, and bounce rate by country – United States, Indonesia, United Kingdom, India, China, Romania, Iran, Mexico, Portugal, Germany]
    21. 21. Los Angeles | London | New Delhi Singapore | Washington DC Discovery channels – metrics & research ● Google Analytics: get what you pay for ● COUNTER: library ROI ● Data is not enough: know your users!
    22. 22. Los Angeles | London | New Delhi Singapore | Washington DC Observational study ● Purpose: graphic representation of social science scholar workflow, beyond single site ● Participants: 3 MA & 8 PhD students, during literature review phase ● Methods: observation, interview, data analysis
    23. 23. Los Angeles | London | New Delhi Singapore | Washington DC Findings: Web analytics ● User data available to most libraries and publishers ● Data limited to single website / domain • Traffic sources • Bounce rates • Visitor flows
    24. 24. Los Angeles | London | New Delhi Singapore | Washington DC ● Observational data (recorded research sessions) ● Field notes ● Discovery resource categories Charting Researcher Behavior
    25. 25. Los Angeles | London | New Delhi Singapore | Washington DC Mike Bostock, "Sankey Diagrams from Excel," accessed on February 9, 2013:
    26. 26. Los Angeles | London | New Delhi Singapore | Washington DC Findings: Participant workflows ● Query trends • Higher use of open-web search than reported • Validating authenticity • Browser trends ● Retrieval trends • 100% manually managed citations • Low usage of hyperlinked references • Low rate of checking for 'version of record'
    27. 27. Los Angeles | London | New Delhi Singapore | Washington DC Thank you! ● Cardwell, C. et al. (2012). "Beyond simple, easy and fast." College & Research Libraries News, 73(6), 344-347. ● Haines, L. et al. (2010). Information-seeking behavior of basic science researchers: Implications for library services. Journal of the Medical Library Association, 98(1), 73-81. ● Holman, L. (2011). Millennial students' mental models of search: Implications for academic librarians and database developers. Journal of Academic Librarianship, 37(1), 19-27.