Start with stuff about me… background, etc. We’re going to concentrate on the library web site, which is separate from the catalog (although catalog results are available in the library site). Confusing. To our users, too.
Our users told us they wanted the mystery of where to start removed from their lives.
I’ll talk about each of these in the following slides.
When I talk about user data, note that it’s limited and anonymized.
Bento-box approach to results. Like a Japanese bento box, the search presents results by category: some of this, some of that, with each section labeled. Sections are arranged by usage patterns (last reorganized about 15 months ago): the most-used resources are at the top, with less-used ones to the right and below. Subject specialists are dynamically presented based on the query; “Ask a Librarian” is always there. The staff directory is below the Research Help section; journals are below the catalog.
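The layout rule described above, most-used sections first, can be sketched as a small data structure. The section names mirror the talk, but the weights and the ordering helper are purely illustrative, not the library’s actual implementation.

```python
# Hypothetical sketch of a bento-box layout: each section is a labeled
# category of results with a relative usage weight. The weights here are
# made up for illustration; only the section names come from the talk.

sections = [
    ("Catalog", 92),
    ("Articles", 45),
    ("Databases", 30),
    ("Research Help", 20),
    ("Journals", 15),        # displayed below Catalog
    ("Staff Directory", 5),  # displayed below Research Help
]

def layout(sections):
    """Order sections so the most-used appear first (top of the page)."""
    return [label for label, weight in sorted(sections, key=lambda s: -s[1])]

print(layout(sections))
```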
Not necessarily a good idea. We’ve received a lot of commentary about the search boxes. At the same time, our users have figured it out.
This is searches entered from the “find bar” only. MLibrary is, no surprise, the default search. ArticlesPlus launched in September 2010; the thing it replaced had 3.7% of searches in Summer 2010 (MLibrary was 92.2%, the catalog was 4.1%).
But to get back to the site search…. What do users do when they get here?
Not all searches result in clicks, and some searches result in multiple clicks, so the numbers don’t add up to total searches: there are 200,000 “clicks” against (from the earlier slide) just over 400,000 searches. Interesting… people are using the “one search” the way we intended; they’re not going to the catalog tab, but doing their searches here. Of course, some people are looking for databases & journals, finding the Mirlyn entry, going there, and then clicking into the database. Individual database titles & creative misspellings are the most frequent searches by raw numbers; Mirlyn is the most frequently clicked result (databases & journals appear there). The long tail is VERY long: a few hundred searches make up the top queries, and after that it’s lots of one-off queries. Articles are NOT included in site search; frankly, we don’t know where to put them. Just added ILL & Uborrow to the list (after the semester ended).
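The long-tail pattern described above can be checked with a simple tally: count each distinct query, then see how many occur only once. The log sample below is hypothetical; only the shape of the analysis is the point.

```python
from collections import Counter

# Hypothetical site-search log: one query string per search performed.
searches = [
    "pubmed", "pubmed", "jstor", "mirlyn", "mirlyn", "mirlyn",
    "web of sceince",   # creative misspellings show up constantly
    "history of the great lakes",
    "drupal module for summon",
]

counts = Counter(searches)

# Head of the distribution: a few strings account for many searches.
top = counts.most_common(3)

# The long tail: distinct queries that appear exactly once.
one_offs = [q for q, n in counts.items() if n == 1]

print(top)
print(f"{len(one_offs)} of {len(counts)} distinct queries are one-offs")
```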
Searches conducted from the MLibrary tab. We started recording data in April 2010 (note where the blue line starts). Remarkably consistent.
The library catalog is also remarkably consistent. The outlier in early 2012: a single IP address grabbed a LOT of ISSNs from our catalog. Who knows why?
This is a sign of a true success story. We had data for the last month or so of the old product in 2010. In 2011, usage started out higher than the old product’s; in 2012, it was consistently higher. Yet the peaks & troughs are remarkably similar.
Replicates the functionality of the Summon interface. It’s a Drupal module, available to any library that wants to use it: Drupal 6 for now, but a French library has started a Drupal 7 version. It gives us some added goodness:
Favorites: persistent citations (Summon’s are session-only). Coming this summer: save & organize favorites across catalog, journals, databases, and articles.
Problem reporting.
Data independent of the vendor; collected since March 1. All of these are proxies for utility to the user, matched with query & user information: favoriting, problem links. Do they use problem reporting? Boy, do they ever: 1,070 times in the winter semester, from 310,000 searches (these numbers are different from Google’s; it’s what was searched).
Proxy: the resource isn’t proxied at all, or a new “hop” was put in place between the original URL and the target.
User account: all sorts of odd things. Expired users, alumni who don’t know they can’t have access, registrar problems, you name it.
Resource: the database doesn’t work, it’s no longer licensed, the catalog is wrong.
Privacy concerns: once users click the link… they’re gone. Extras include problem reporting, putting a friendly face on a query, and getting people to better resources. We know what people look for & access from our site; we’re exploring how that differs from the vendor’s site. We’ve noticed (as has anyone else using web-scale discovery tools) that full-text use is up and native database searching in broad aggregators is down. That poses a challenge for future contracts.
Transcript of "Don't Go There! Providing Discovery Services Locally, not at a Vendor's Site"
Don’t Go There! Providing Discovery Services Locally, not at a Vendor’s Site
Ken Varnum
Web Systems Manager
University of Michigan Library
Overview
• University of Michigan Library’s web site
• Why we built it the way we did
• What we’ve changed/added since then
• How discovery happens
[Dis]Integrated Discovery
• We recognized that we did not know enough to build an integrated search interface
• Our old sites were a hot mess
• Where to start?
• Information, not Location
Benefits of Homegrown
• We can track full-text clicks
• Provide problem reporting mechanism
• For authenticated users – we can explore resource use
• Once users leave the library, they’re on their own
Discovery
• Discovery is the heart of the library site
• It is, broadly, what people come to do
Now, Articles
• Article discovery looks different:
 – Different level of data needed by users
 – Different functionality (facets, full-text linking)
• Found it hard to integrate into site search
• Users seem to understand it as a separate thing
Article Clicking
• Article full-text links clicked: 134,095
• There were 310,171 article searches
• Problem reporting is built into our interface
Results of Error Reporting
• Reports go to our online reference service, Ask a Librarian
• Each one reviewed & responded to (phew!)
• Frequent classes of problems:
 – User account problems
 – Proxy problems
 – Resource problems
Problem Reporting
Before “Direct Linking”
• 548 problem reports (112,138 searches)
• Reports per 1,000 searches: 4.9
After turning on “Direct Linking”
• 656 problem reports (198,033 searches)
• Reports per 1,000 searches: 2.6
• Reports per 1,000 MGet It clicks: 3.8
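The per-1,000 rates on this slide come from a simple normalization: reports divided by searches, scaled to 1,000. A quick check of the pre-direct-linking figure (the helper function is mine, not part of the talk):

```python
def reports_per_thousand(reports: int, searches: int) -> float:
    """Problem reports normalized per 1,000 searches."""
    return round(reports / searches * 1000, 1)

# Before "Direct Linking": 548 reports across 112,138 searches.
print(reports_per_thousand(548, 112_138))  # 4.9, matching the slide
```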
After Direct Linking (n=656)
Problem reports by source:
• LOGICAL: 42.1%
• proquest_dll: 17.8%
• webofscience_primary: 13.4%
• 36 more sources: 11.0%
• pubmed_primary: 4.4%
• crossref_primary: 4.4%
• gale_primary: 2.6%
• doaj_primary_oai: 1.5%
• webofscience_primary_A: 1.4%
• eric_primary_EJ: 1.4%
Favorites
• Have “silos” of favorites for articles, catalog, and the database & journal finder
• Launching an integrated tool with folders
• Beginning to analyze data from 9,000 favorite items / 1,000 patrons
• Have longer-term goal of dynamically connecting resources & classes
What We’ve Learned
• We’re learning lots about our users
• Interesting that most of our site visitors don’t authenticate
• Provide help in context
Tools are (Mostly) Open Source
• Site is Drupal
• Exhibits tool (launching soon) is Omeka
• Library catalog is VuFind
• But…
 – ILS is Aleph
 – Research Guides are SpringShare’s LibGuides
Apron Strings
• We don’t want our users to leave home without making sure they know where they’re going
• We want to provide extras when we can
• Can compare our usage with vendor reports