User experience of Encore: a mental models approach


This talk was given at the Innovative Users Group conference (IUG) in Chicago, IL on 16 April 2012.

I spoke about research into user experience and understanding of next-generation library catalogues from a perspective of mental models theory at Senate House Library, University of London.

  • Format of the talk, and when questions should be asked.
  • My name is Andrew Preater, and I am the Systems Manager at the University of London. About my employer: I work for the central University of London. This is the central administrative body of the federal university of 18 colleges, which includes ones you may have heard of, like University College London and King's College London. This is our current home, Senate House, built in 1937. The University got its charter in 1836, so it was 175 years old last year. http://bit.ly/uol175 is an online exhibition for our 175th anniversary, which includes this picture from our archives.
  • The context is an academic one. Senate House Libraries contains Senate House Library itself, a large research library of about 3 million items concentrated in the arts, humanities and social sciences, plus the libraries of the School of Advanced Study, adding another 1 million items. These are specialist libraries of the institutes of the central university. Our Millennium system also includes the libraries of one of the smaller UoL colleges, the library of a nearby art museum, and our own institute in Paris. We support research and teaching at our federal university, as well as researchers from about 1,000 universities around the world.
  • I will summarize what I'll talk about today. First, I'll discuss our readers' (or customers', or patrons', or whatever you like) understanding of Encore versus "the old catalogue", the WebPAC. This work formed the basis of my master's dissertation at library school (University of Northumbria, Newcastle). Then I'll follow on by explaining what this means for the library, and what steps we took following this investigation.
  • We have Millennium R2011 and Encore Discovery; we don't have Encore Synergy. We went live with the Encore catalog on 1 June 2011 and have positioned it as the default, or main, catalog we direct our readers to. We are a beta test partner for release 4.2 and went live with that about four weeks ago. For Q1 2012 so far, Encore usage is about 52% of total Encore and WebPAC use.
  • To say a bit more about how we "positioned" Encore: briefly, wherever we used keyword search previously, we are now driving traffic to Encore. This includes sending searches from the WebPAC front page. At the time we launched, we judged from other Encore sites that this was reasonably aggressive; nowadays less so, as more libraries are launching a next-gen catalog as their only, or preferred, catalog. Why do this? It was about ensuring take-up at launch: it is rather easy for readers (and especially for staff) to ignore a new catalogue if it's just "there". This was based on the experience of other Encore sites in the UK that launched alongside the WebPAC and found weak positioning led to little take-up. Related to that is Marshall Breeding's argument that this positioning has a marketing function: the preferred catalog should have top billing, to emphasize its use and ensure we get feedback on it. Breeding, M. (2010) Next-gen library catalogs. London: Facet.
  • This is the library homepage at http://www.senatehouselibrary.ac.uk. Please don't judge this too harshly; it does deserve it, but we are launching a completely reworked site from May. I accept "my library website stinks and it's [my] fault", to paraphrase the title of a talk by Matthew Reidsma, which I would recommend you listen to: http://matthew.reidsrow.com/articles/16. Of course we have a search box positioned prominently on the front page, and quite a lot of traffic goes via this. It uses Encore only and doesn't offer many options. However, lots of our readers understand they need to go "somewhere else" to search, so vast numbers of them follow this link in the top left. It's impressive to see, as I never think to use it myself.
  • This takes us to an opacmenu page that provides a scoped search. Loads of search traffic comes this way. What's happening in this box up here: if you keep our default keyword search (called "Quick Search"), you are directed to Encore, and the search is pre-faceted by location (or by material type, if you choose something else from that right-hand drop-down menu). If you select a different index, you go to the WebPAC. http://catalogue.ulrls.lon.ac.uk/search~S1
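The routing described in this note can be sketched roughly as follows. This is an illustrative reconstruction, not the actual opacmenu code: the function name, parameter names, and URL shapes are invented for the example.

```python
from urllib.parse import quote

# Hypothetical sketch only -- not the real opacmenu implementation.
ENCORE_BASE = "http://encore.ulrls.lon.ac.uk/iii/encore/search/"
WEBPAC_BASE = "http://catalogue.ulrls.lon.ac.uk/search~S1/"

def route_search(query, index="keyword", location=None):
    """Send the default 'Quick Search' (keyword) to Encore, pre-faceted
    by location when one is chosen; any other index goes to the WebPAC."""
    if index == "keyword":
        url = ENCORE_BASE + "C__S" + quote(query)
        if location:
            # Pre-apply a location facet, mirroring the Encore URL
            # structure shown for the resulting search page.
            url += "__Ff:facetlocations:ull:ull:" + quote(location) + "::"
        return url
    # Non-keyword indexes fall through to the traditional WebPAC search.
    return WEBPAC_BASE + "?searchtype=" + index + "&searcharg=" + quote(query)
```

The point of the sketch is only the dispatch: one default path that lands readers in Encore with a facet already applied, and an escape hatch to the WebPAC for every other index.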
  • Here's the resulting search page in Encore; you can see my location facet has been pre-applied to my search. http://encore.ulrls.lon.ac.uk/iii/encore/search/C__Sparanormal__Ff:facetlocations:ull:ull:SENATE%20HOUSE%20LIBRARY::?lang=eng&searchtype=Y&searcharg=paranormal&searchscope=1&SORT=D
  • You need to know a little bit about mental models theory. Mental models are internal mental representations of external reality that allow us to understand the world, including how things like Web sites and library catalogues work. They help us make predictions and simplify a "too complex reality" in our minds (Johnson-Laird, 1983). What are the characteristics of these models? What are they like? Johnson-Laird, P.N. (1983) Mental models. Cambridge: Cambridge University Press.
  • Well, they are simplified from actual reality and may not faithfully match what is really going on. They're dynamic, and they can be incomplete or just plain wrong! They change over time and lack firm boundaries: people can confuse different systems or actions, and can be superstitious. Keep in mind that confusion between different systems is relevant when we come to look at the catalog. This idea goes back to the 1940s originally, but it really took off in the 1980s when people started applying it to human interface design. On which note…
  • I want to show you a simple example. This was made for the Web site canyoudrawtheinternet.com, created in 2010 for "Internet Week Europe", with the idea of gathering various attempts to draw the internet and finding out whether kids or design creatives were more imaginative. Obviously the children were better, as they are more imaginative! (But also they weren't trying to show off, or draw something as a free advert for their design agency...) This is a representation created by a girl at a primary school in East London. Think about this: hosts on the network are small balls of Play-Doh; combine them into a big ball of the same color. Larger networks are represented by more Play-Doh. Then mash together those balls of Play-Doh to create a single interconnected internet. So I think a "ball of modelling clay" model is pretty good.
  • Don Norman popularized mental models in his book The Design of Everyday Things. It's a great pop-science book and I'd recommend it. Norman explains there that "people form mental models through experience, training and instruction. The mental model of a device is formed largely by interpreting its perceived actions and its visible structure." That's really important: Norman concentrates on what the user sees and understands of the structure. It doesn't really matter what the designer thinks or knows about it! So, think about that in a library catalog context: you could easily substitute "librarian" or "cataloger" as a "designer" of metadata and its presentation. (It's not just about how Innovative have designed Encore.) Norman, D.A. (1988) The design of everyday things. 2nd edn. New York, NY: Doubleday.
  • So what does mental models theory mean for us? Here's how we design for users according to a mental models approach.
  • Don Norman had this idea that designers should develop systems based around a consistent and intelligible conceptual model. That model doesn't actually need to reflect how the system really works; that doesn't matter.
  • Having done that, this should lead our users to develop a consistent and intelligible mental model of the system, and therefore enjoy a consistent user experience.
  • Even better, this means that your manuals, user instruction, and training efforts can just teach the underlying model; you don't need to teach how the system actually works. Then everyone would go away with a well-formed mental model, able to use the system. The idea itself is sound: we should be teaching concepts rather than procedures. However…
  • Think about how this does or doesn't relate to traditional library catalogues. How consistent and intelligible do they seem to you? I don't blame vendors for this: I think what they give us generally reflects what we ask for, and especially our conservatism as a profession. As I'll explain, I think next-generation catalogues like Encore are actually a huge step forward, as they present a real conceptual break with what went before.
  • There was research on mental models of previous generations of library catalogs in the 1980s and early 1990s. These two papers are the key ones, I think. Christine Borgman's research on an old second-generation OPAC, like "telnet" Innopac, demonstrated that user training based on mental models was better than procedural training. Alexandra Dimitroff showed quantitatively that users with a more complete mental model did better at searching a library catalogue. What would I need to form a good mental model of how a library catalog works? Borgman, C. (1986) 'The user's mental model of an information retrieval system: an experiment on a prototype online catalog', International Journal of Man-Machine Studies, 51 (2), pp. 435-452. Dimitroff, A. (1992) 'Mental models theory and search outcome in a bibliographic retrieval system', Library and Information Science Research, 14 (2), pp. 141-155.
  • Alexandra Dimitroff identified these eight components that were required for users to form a complete mental model of a library catalog. Pretty horrifying stuff! In many ways this still applies to the older Web-based catalogs like the WebPAC still in use, though less so to Encore and other discovery interfaces. Both authors wrote papers later in the 1990s complaining about how little library catalogs had improved since then! These papers are pretty old now, and I wanted to apply some of their ideas and concepts to next-generation catalogues. Dimitroff, A. (1992) 'Mental models theory and search outcome in a bibliographic retrieval system', Library and Information Science Research, 14 (2), pp. 141-155.
  • Pause for a second to consider how much current Web catalogues inherited from the first- and second-generation OPACs, which were heavily based on concepts from card catalogues. For a contemporary user base this really doesn't make sense. I argue that next-gen catalogues represent the first serious attempt to break the ties to the card catalogue still present in the WebPAC. 'Clements Library Card' by David Fulmer, license CC-BY: http://flic.kr/p/7Cs5gQ
  • So this is what I did for my master's dissertation: how I investigated mental models of Encore.
  • I recruited nine users of Senate House Libraries, all master's or PhD students. I did some usability-test-style cognitive walkthroughs, using the good old "think aloud" protocol, to get them to start thinking about Encore and how they would use it. This was a warm-up and not intended as the main focus of the exercise. That said, as usability testing it was pretty useful: I got some results that allowed me to make changes to the interface.
  • I did semi-structured interviews using a method called the repertory grid technique. I will tell you more about that in a minute.
  • Alongside these more formal methods there was a lot of conversation about the catalogues and the readers' experience with them. This is where a lot of the "close questioning" and deeper exploration of the readers' understanding happened. This process generated a lot of qualitative data that needed coding up in Atlas.ti and analysing.
  • So, the repertory grid technique, or RGT. This method is based around the idea of describing the ways in which things are alike or different from other things. The idea is that if we compare different aspects of Encore and the WebPAC, we'll eventually get a good picture of how someone understands them. RGT has been used by the researchers Crudge and Johnson in Manchester to evaluate mental models of search engines; in many ways theirs was the model for my work. Crudge, S.E. and Johnson, F.C. (2007) 'Using the repertory grid and laddering technique to determine the user's evaluative model of search engines', Journal of Documentation, 63 (2), pp. 259-280.
  • Using RGT we compare two or more catalogs by defining a set of ideas called "constructs", which allow you to rate each catalogue on a scale. By repeating this we build up a "grid" that allows comparisons to be made. A construct is a short description of the ways in which the catalogues are similar or different. These are elicited from the user by questioning them, but they are generated by the user; this helps limit interviewer bias. It's a good method because you develop a good understanding of what people really think about things. Here are some examples...
  • For example, I ask you: "Tell me a way in which Encore and your ideal catalogue are similar, but the old catalogue is different." You might say: Encore and the ideal catalog have a "clear user interface", whereas the WebPAC is "busy, or has a cluttered user interface". Some further examples...
  • Then you rate all three on a scale from 1 to 5. In this example, Encore and the ideal catalog are rated as closer to a "clear" UI, and the WebPAC closer to a "cluttered" UI. What's the ideal catalogue about? I introduced the concept of an "ideal catalogue" to ensure there were always three points of comparison. Some participants volunteered other catalogues as well, including catalogues of national libraries like the British Library, union catalogues including Copac, the catalogues of their home universities, and even, in one case, Google! http://www.copac.ac.uk http://www.bl.uk
  • You carry on and do this for all your constructs and all your interviewees. After rating all your constructs you can re-sort them using a technique called FOCUS: it reorders the grid so you can see similarly rated constructs and how they group together. We'll see one in a minute. You can then create pretty dendrograms, or tree diagrams, to show how the catalogs and constructs relate to each other. You repeat this 10-12 times per person, then for 10 people, and you have some qualitative repertory grid data. Shaw, M.L.G. and Thomas, L.F. (1978) 'FOCUS on education: an interactive computer system for the development and analysis of repertory grids', International Journal of Man-Machine Studies, 10 (2), pp. 139-173.
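As a rough illustration of the FOCUS re-sort: reorder the constructs so that rows with similar rating patterns sit next to each other. The grid data below are made up, and the greedy ordering is a simplification; real FOCUS (Shaw and Thomas, 1978) is more sophisticated, for example it also matches constructs with their poles reversed.

```python
# Hypothetical repertory grid: construct -> ratings on the 1-5 scale,
# one rating per element, in ELEMENTS order. Invented data.
ELEMENTS = ["Encore", "WebPAC", "Ideal"]

grid = {
    "clear UI vs cluttered UI": [1, 4, 2],
    "finds relevant results vs not": [2, 4, 1],
    "specialist vs general tool": [2, 1, 3],
    "plain terms vs stuffy jargon": [1, 5, 1],
}

def distance(a, b):
    """City-block distance between two rows of ratings."""
    return sum(abs(x - y) for x, y in zip(a, b))

def focus_order(grid):
    """Greedy nearest-neighbour ordering of constructs, so similarly
    rated constructs end up adjacent in the re-sorted grid."""
    remaining = dict(grid)
    ordered = [min(remaining)]  # arbitrary but deterministic start
    del remaining[ordered[0]]
    while remaining:
        nxt = min(remaining, key=lambda c: distance(grid[ordered[-1]], remaining[c]))
        ordered.append(nxt)
        del remaining[nxt]
    return ordered
```

Similarly rated constructs now sit next to each other, which is what makes the clusters visible in a sorted grid and its dendrogram.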
  • This is a finished grid that has been sorted, with a tree diagram constructed. How do we interpret what we see here? Look at the bottom right first. By looking at how elements cluster (bottom right) we find similarities in what the user says about them; by looking at how constructs cluster (centre, highlighted) we find similarities in how the user describes the elements. Participant 9 (a master's student of Latin American development) is an example of someone who strongly preferred Encore to the WebPAC, as a much closer model to how she views the information search process. Encore is closer to the ideal than the WebPAC is (72% versus 40% agreement). Note the agreement between the WebPAC and the ideal isn't at the point where they join on the dendrogram, in the high 50s; it's the "distance" represented by the bar joining the WebPAC to the point at which Encore and the ideal catalog join. Now look in the middle. The participant has clustered procedural and evaluative aspects of the catalogue: "scannability of results", "finds relevant results", and "manageable set [of results]" cluster together, followed by "suggestions for additional searches" and "features to refine search" slightly further across the dendrogram. On these points she is particularly critical of the WebPAC, and took the view that her approach to search matched what Encore could do: she started with a very general search without specifying a search index, then used Encore's faceting and limiting features to prune the results down.
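The agreement percentages quoted for this grid can be illustrated with a small sketch. The formula below is one common way of scoring how closely two elements are rated across all constructs; the grid data are invented for the example, not Participant 9's actual ratings.

```python
# Hypothetical grid: construct -> ratings on the 1-5 scale, one per
# element, in ELEMENTS order. Invented data for illustration.
ELEMENTS = ["Encore", "WebPAC", "Ideal"]

grid = {
    "clear UI vs cluttered UI": [1, 4, 2],
    "finds relevant results vs not": [2, 4, 1],
    "plain terms vs stuffy jargon": [2, 5, 1],
}

def agreement(grid, a, b, scale=5):
    """Percentage agreement between two elements: 100 when they are
    rated identically on every construct, 0 at maximum disagreement."""
    i, j = ELEMENTS.index(a), ELEMENTS.index(b)
    diff = sum(abs(row[i] - row[j]) for row in grid.values())
    max_diff = (scale - 1) * len(grid)  # worst case per construct is 4
    return 100.0 * (1 - diff / max_diff)
```

With these made-up ratings, Encore scores a higher agreement with the ideal catalogue than the WebPAC does, echoing the 72% versus 40% pattern described for Participant 9.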
  • This is a really high-level summary of results from analysis of grid data.
  • Almost everyone rated Encore closer to the ideal than they rated the WebPAC to the ideal. Because of the RGT method used, I'm more confident in concluding this than if I had just asked them. It's fair to say Encore is a step forward.
  • However, there were strongly contrasting views of the catalogues. Some users overall preferred the WebPAC, or some other OPAC, or nothing at all. Catalog user experience is very subjective: it was interesting to see the extent to which two people would rate Encore and the WebPAC as exact opposites when looking at the same things. One example is the idea of the interface being "clear" or "cluttered". At the time of the study I was using the three-column "citrus" skin, which may be why some users rated Encore "cluttered".
  • I will summarize findings from the cognitive walkthroughs and interviews. This is the "conversation" I was talking about earlier.
  • Qualitative data from interviews and testing found extensive "Web-like" behavior in Encore versus the WebPAC. By "Web-like" we mean behaviors associated with using Web search engines and browsing Web sites, such as:
    - Scanning Web pages, concentrating on titles, and skim-reading
    - Iterative searching based on skim-reading across multiple reworked search queries
    - Short queries, characterised by use of a few keywords
    - A tendency not to look beyond the first page of search results
    - Trust in search relevancy ranking
    - A query seen as part of an ongoing process
    - An expectation of tolerance for small errors or typos, based on "Did you mean...?" suggestions
    - "Satisficing" behavior: a tendency to make do with results or information that seems good enough, rather than search exhaustively

    By "library catalogue-like" we mean behaviours associated with traditional information retrieval systems, including:
    - More complex search queries, including use of Boolean operators
    - Formulation of queries to match an "approved" format of the library bibliographic record, such as searching by author's last name first
    - A query seen as a form to be submitted to get a desired correct result, rather than as a process
    - Use of pre-limits, such as an index or a limit to part of the library collection, to control what is searched
    - Browsing the catalogue using links generated in catalogue records, such as subject headings
    - A requirement to avoid or correct typos and other errors, due to the inherent intolerance of the system

    This is summarised from: Ahmed, S.M.Z., McKnight, C. and Oppenheim, C. (2009) 'A review of research on human-computer interfaces for online information retrieval systems', The Electronic Library, 27 (1), pp. 96-116. Craven, J., Johnson, F. and Butters, G. (2010) 'The usability and functionality of an online catalogue', Aslib Proceedings, 62 (1), pp. 70-84. Jansen, B.J. and Spink, A. (2006) 'How are we searching the world wide web?: a comparison of nine search engine transaction logs', Information Processing and Management, 41 (1), pp. 248-263. Lau, E.P. and Goh, D.H. (2006) 'In search of query patterns: a case study of a university OPAC', Information Processing and Management, 42 (5), pp. 1316-1329. Lauridsen, H. and Law, J. (2009) 'How do you follow Google? Providing a high quality library search experience', IATUL Conference, Katholieke Universiteit Leuven, Belgium, 1-4 June. Nielsen, J. (1997) 'How users read on the Web', Jakob Nielsen's Alertbox, 1 October. Available at: http://www.useit.com/alertbox/9710a.html (Accessed: 9 April 2012). Novotny, E. (2004) 'I don't think I click: a protocol analysis study of use of a library online catalog in the Internet age', College & Research Libraries, 65 (6), pp. 525-537. Tunkelang, D. (2009) 'Reconsidering relevance and embracing interaction', Bulletin of the American Society for Information Science and Technology, 36 (1), pp. 20-23. Zhang, X. and Chignell, M. (2001) 'Assessment of the effects of user characteristics on mental models of information retrieval systems', Journal of the American Society for Information Science and Technology, 52 (6), pp. 445-459.
  • What am I suggesting here? If you give the reader something that looks like Google, it encourages a style of search and behavior that reflects what we'd expect on a search engine.
  • It surprised me how much what was presented to the end user seemed to lead or influence their behavior. That is, putting readers in front of a Google-style search interface tended to lead on to "Web-like" behaviour. This was developed inductively from the behavior observed; I had not formulated the idea ahead of time and attempted to prove it. Bear in mind, though, that this is based on coding qualitative data from nine people.
  • The emotional experience of using the catalogue is an important part of the overall experience of our library. This is interesting: I found there was a strongly emotional, or affective, response to the catalogue, beyond what you'd expect from a mere lookup tool. It was about much more than just being "nice to use" or "familiar": a catalogue can be "a joy to use". There is no reason for it to be a painful experience where you have to pay your dues and work through a painful process before you are judged deserving of a good outcome. And why not? I think we often overlook this.
  • Having discussed what we found I will address how this influenced our approach to going live with Encore at Senate House Libraries.
  • I wanted our staff to have a good understanding of how Encore works so they could explain it to readers and encourage the development of a better model of Encore in their minds.
  • To do this I did training in small groups of two to three, going quite in-depth. This can be scary if you were expecting to hide at the back of the room in a big session. What I was attempting to do here was improve staff understanding of readers' experience of the catalogue. Actually, I don't think we understand very well the difference the next-gen catalogue makes.
  • I mentioned that I found strong contrasts in what readers wanted or expected from an OPAC. I was going to put in a slide saying "Don't throw out your WebPAC just yet", but this wasn't quite right. Nor did I want to imply it's OK to dismiss concerns about Encore functionality as resistance to change for its own sake. Our view is that we trust our readers and we trust our staff. I always emphasize Encore as the best starting point, but I don't insist on it being used, particularly by more experienced readers with a strong traditional view of what a catalog is and how it should behave. (Incidentally, we still have "telnet" Innopac in use at one site…)
  • Feedback on our catalogue is generally positive. "BL" here means the British Library, our national copyright library, which is about a 15-minute walk from where we're located. This is the first time I have seen a catalogue called sexy. We did retweet this from @SenateHouseLib, by the way! The British Library have since launched Primo as their only catalog. http://twitter.com/#!/faceometer/status/79145225074909184 http://twitter.com/senatehouselib http://search.bl.uk/
  • Next steps: we're carrying on with qualitative testing of Encore as part of our review of our Web presence. I think some kind of testing is the only way to get useful information about how you're doing, but I want to do this "in the field" rather than in the more artificial environment of a usability testing lab. We have made sure to create opportunities for conversation with our readers around Encore, and to be seen to act on feedback. The more rapid releases of Encore help with this process, especially as Innovative are going in the right direction with their development work.
  • Thanks, everyone. You can contact me on Twitter @preater or by email at andrew.preater@london.ac.uk. My blog, 'Ginformation Systems', is at http://www.preater.com, where I write about libraries, next-gen catalogs, and so on. My dissertation is available here: http://bit.ly/encoremm
  • Transcript

    • 1. User experience of Encore: a mental models approach. Andrew Preater, Senate House Libraries, University of London
    • 2. bit.ly/uol175
    • 3. What I'll talk about: our readers' understanding of Encore; what this means for us
    • 4. Encore live June 2011. Positioned as default catalog
    • 5. Positioning Encore. Breeding, M. (2010). Next-gen library catalogs. London: Facet
    • 6. Mental models
    • 7. 1. Incomplete 2. Unstable 3. Unscientific 4. Lack firm boundaries
    • 8. Maya (age 10), primary school pupil. http://www.canyoudrawtheinternet.com
    • 9. "The mental model of a device is formed largely by interpreting its perceived actions and its visible structure." Norman, D.A. (1988) The design of everyday things. 2nd edn. New York, NY: Doubleday.
    • 10. Designing for library users
    • 11. 1. A consistent conceptual model...
    • 12. 2. A consistent and intelligible user experience...
    • 13. 3. You only need train users on the conceptual model
    • 14. Does that match your experience?
    • 15. Catalog mental models. Borgman, C. (1986) 'The user's mental model of an information retrieval system: an experiment on a prototype online catalog', International Journal of Man-Machine Studies, 51 (2), pp. 435-452. Dimitroff, A. (1992) 'Mental models theory and search outcome in a bibliographic retrieval system', Library and Information Science Research, 14 (2), pp. 141-155.
    • 16. 1. Contents of the database 2. Interactive nature of the system 3. Existence of multiple files 4. Multiple fields within each record 5. Multiple indexes and/or inverted indexes 6. Boolean search capability 7. Keyword search capability 8. Use of controlled vocabulary. Dimitroff, A. (1992) 'Mental models theory and search outcome in a bibliographic retrieval system', Library and Information Science Research, 14 (2), pp. 141-155.
    • 17. 'Clements Library Card' by David Fulmer, license CC-BY: http://flic.kr/p/7Cs5gQ
    • 18. Investigating mental models
    • 19. 1. Cognitive walkthroughs
    • 20. 2. Structured interview using Repertory Grid Technique
    • 21. 3. Stories, anecdotes, and conversations
    • 22. Repertory Grid Technique? Crudge, S.E. and Johnson, F.C. (2007) 'Using the repertory grid and laddering technique to determine the user's evaluative model of search engines', Journal of Documentation, 63 (2), pp. 259-280
    • 23. Constructs
    • 24. 'Clear user interface' versus 'Cluttered user interface'. 'A specialist tool' versus 'A general search tool'. 'Terms in catalog are easy to understand' versus 'Stuffy or out-dated jargon'.
    • 25. Rate your catalogs on a five-point scale, from 'Clear user interface' (1) to 'Cluttered user interface' (5): Encore 1, Ideal catalog 2, WebPAC 4
    • 26. Repeat: 10 constructs, 10 participants. Sort grids using 'FOCUS'. Can now represent it visually. Shaw, M.L.G. and Thomas, L.F. (1978) 'FOCUS on education: an interactive computer system for the development and analysis of repertory grids', International Journal of Man-Machine Studies, 10 (2), pp. 139-173.
    • 27. A completed grid
    • 28. 1. Summary of grid data
    • 29. Encore closer to ideal than the WebPAC to the ideal
    • 30. Strongly contrasting views of Encore
    • 31. 2. Summary of qualitative data
    • 32. Readers behave like they're using web search
    • 33. Encore encourages this behavior?
    • 34. Affective aspects of catalog use
    • 35. Implications for the library
    • 36. Staff training focused on explaining the system
    • 37. Small groups trained in-depth on Encore
    • 38. One approach won't suit all readers
    • 39. Feedback is generally positive
    • 40. How we followed up on this
    • 41. Thank you. Me: @preater, andrew.preater@london.ac.uk, www.preater.com, bit.ly/encoremm