Usability and User Research (UX2 UCD Day)
This presentation introduces user research methods and highlights examples from case studies to demonstrate how these methods can be used. A basic guide to usability test techniques is provided including best practice and practical advice.

Speaker Notes
  • Just going to quickly cover some of the basics so everyone is on the same page. Then I'll provide some reasons why it's important, which will hopefully help you when trying to make a positive change in your own institutions. I'll cover usefulness and usability, and why usefulness is just as important. Then I'll go through some of the methods used during our project, the impact they had and the lessons we learned. I'll be skipping pretty quickly through a lot of information, so if I'm going too fast or you want to ask a question, please ask as we go. If there's time we'll have more questions at the end.
  • This is the type of thing I've often heard from people over the years. It's commonly what many service managers believe when they don't feel there is a problem, but as we all know, lots of 'little' problems can quickly add up and create a bad user experience.
  • So why is usability important to all digital services? People will go elsewhere if they have difficulties using a service. Our research, and that of others, points to the fact that students use a variety of resources to fulfil their needs, frequently turning to external resources such as Google Books and Google Scholar, among others. Often students are unaware of the benefits or the full features of their university services, including the digital library, because of usability issues such as poor signposting, complicated forms and hidden features.
  • Steve Krug provides a great example of why it's important by illustrating the typical reservoir of goodwill each person has inside them.
  • Small problems can quickly add up
  • These apply across the board, regardless of the type of digital service you provide.
  • Comforts – breadcrumb links, clear navigational buttons.
  • This is the definition of usability from ISO, key things being effectiveness, efficiency and satisfaction.
  • These are all questions you should ask when evaluating your own services and systems. Throughout user research these questions should be addressed.
  • Testing can be conducted with around six users on average, or even as few as three if conducted regularly on a monthly basis. With three users and just one or two tasks you can evaluate a system in a couple of hours once a month. You also don't need an expensive test lab to do user research: a laptop with internet access plus paper and pencil is sufficient to run tasks and take notes. There are also cheap recording software packages (and free trials) available to record each test and share your findings with others. I'm hoping I might be able to convince you of all these things so that you might go out and conduct your own user research!
  • In addition to learnability, efficiency and satisfaction, utility is also an important attribute to be aware of when creating and evaluating digital services. Usefulness is just as important as usability.
  • When we started the UX2 project we looked at theoretical frameworks which might help us to evaluate an information system in the most holistic way, ensuring that the research was thorough and robust. TAM was one model we examined; it has been around for many years and can reveal a person's intention to adopt an information system.
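As a rough schematic (my own summary of Davis's model, not from the slides), TAM's core causal chain can be sketched as:

```latex
% Technology Acceptance Model (Davis, 1989): perceived ease of use (PEOU)
% and perceived usefulness (PU) shape attitude (A), which drives
% behavioural intention (BI) and, ultimately, actual use.
\mathrm{PEOU} \longrightarrow \mathrm{PU}, \qquad
\{\mathrm{PEOU},\ \mathrm{PU}\} \longrightarrow A \longrightarrow \mathrm{BI} \longrightarrow \text{actual use}
```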
  • The Interactive Triptych Framework (ITF) was another framework we researched. It is based on TAM among other theoretical frameworks. The interaction between its three components provides three evaluation approaches; interaction is affected equally by content and system characteristics; and usefulness is examined separately. This gives the ITF the ability to thoroughly evaluate the success of digital libraries in a holistic manner. So when we began to carry out user research, we tried to evaluate not only the usability of a system but also its usefulness to its users. Next are some methods which you can apply at different stages of your research to help design information systems and evaluate existing ones.
  • Personas are part of the data gathering phase of research; before you test anything you ideally want to have an idea of who your typical users are. This is especially important when you are approaching a new service. This is an example of the personas we created during our project.
  • Personas are a data gathering system where a wide array of data and information is gathered and funnelled down into several personas which represent all the individuals involved in the data gathering. This can be done using several methods, including interviews and usability testing. The primary aim is to gather data on users' behaviour, goals and attitudes.
  • There are a number of steps involved in creating personas: 1. Collect qualitative data – we chose to interview a number of students and librarians. 2. Collect all the notes on each interview and present them on a single page under different headings – we chose goals and behaviour, attitudes, and personal traits. 3. Take each individual's interview notes and try to summarise that person in a couple of lines or several bullet points, separating each aspect onto different coloured post-it notes; these must provide an efficient summary of that person. 4. Next slide...
  • 4. Conduct two-by-two comparisons of the interviewees and plot each person on a scale for every distinction identified – next slide...
  • We then created a scale for each distinction identified during the two-by-two comparisons and determined its end points. Doing so allowed us to place each participant on the scale and directly compare them. Most variables can be represented as ranges with two ends. It doesn't matter whether a participant is a 7 or a 7.5 on the scale; what matters is where they appear relative to other participants. The next slide provides an example of our 12 scales mapped for each of our 17 participants.
  • We used coloured pens and stickers to plot each individual, as well as labelling each individual with a number – this made it easier to 'eyeball' each scale and spot individuals who were close on a number of scales. Creating segments from these scales was by far the trickiest part; it's not an exact science and is open to interpretation. We initially had six personas, which we were able to narrow down further once we started designing each persona. At that stage the similarities between two personas became apparent and the argument to merge them was strong enough to reduce the number further.
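To make the 'eyeballing' step concrete, here is a minimal sketch in Python; all participant names, scales and scores are hypothetical, not the project data. Each interviewee is scored on every distinction scale, and small pairwise distances flag people who sit close together on many scales and are therefore candidates for a single segment:

```python
# Minimal sketch of the scale-plotting/segmentation step (hypothetical data).
scales = ["attitude_to_library", "digital_vs_physical", "simple_vs_boolean"]

participants = {  # scores align positionally with `scales`, 0-10 each
    "P1": [8, 9, 2],
    "P2": [7, 8, 3],
    "P3": [2, 3, 9],
}

def distance(a, b):
    """Mean absolute gap across all scales (lower = more similar)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

names = list(participants)
pairs = sorted(
    (distance(participants[p], participants[q]), p, q)
    for i, p in enumerate(names)
    for q in names[i + 1:]
)
for gap, p, q in pairs:
    print(f"{p} vs {q}: mean scale gap {gap:.1f}")
# P1 and P2 land close on every scale -> candidates for one segment.
```

As on the slides, it is the relative positions that matter, not the absolute scores.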
  • When designing personas, include a name, a photo, and personal information such as their background, interests and hobbies, as well as their attitudes to information seeking, digital services, etc. You can also include some of the more important scales to provide a quick overview in addition to the more detailed description. You'll get the chance to see a basic persona if you're attending the prototyping workshop this afternoon. We were unable to recruit staff and therefore did not create a persona to represent them.
  • If it's the first time you've undertaken persona development, factor in extra time to read up on the process and get familiar with what you are required to do. Don't underestimate the quantity of notes you'll need to take during the interviews for later use. During each interview keep in mind the information you are looking for and ensure that you ask the right questions to gain an understanding of it, i.e. information seeking behaviour.
  • One-on-one observations of work practice in its naturally occurring context. [Switch to Boon's slides]
  • Let's go back to our definition and remind ourselves what we are looking for when we conduct a usability evaluation.
  • Scavenger hunt: typical task scenarios which are useful when testing a particular area of the interface. Reverse scavenger hunt: show users the answer and ask them to find and purchase it, e.g. an image of clothing to source online. Self-generated tasks: use interview questions beforehand to help participants identify their requirements. Part self-generated: ask users to bring data with them to the session and then use it to construct an ad-hoc scenario. 'Skin in the game' tasks: give users real cash to purchase something as part of the task, making it as realistic as possible. Troubleshooting tasks: ask the user to solve a problem.
  • This is an example of a scenario which we designed for our own usability tests. By adding context you provide a sense of realism. Wherever possible, try to ensure the subject matter is relevant to the user (this isn't always possible, in which case emphasise that the participant should try to imagine themselves in the situation). Adding specific details focuses the task on the area you want to evaluate. In this example we added the detail of the type of information required, which forced the participant to think about how to screen for recently published information – hopefully using the faceted navigation to do so.
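To show the anatomy of such a scenario concretely, here is a hypothetical test-script entry; the field names and success criterion are mine, not from the UX2 scripts (the scenario text is the one on slide 30):

```python
# Hypothetical scavenger-hunt entry: context for realism, a detail that
# focuses the task, and an agreed completion criterion.
scenario = {
    "context": ("As part of your coursework your lecturer has asked you to "
                "read a recent presentation on developments in fusion."),
    "task": ("Using the prototype, can you find a suitable presentation "
             "published in the last 2 years?"),
    "area_under_test": "faceted navigation / date filtering",
    "success": "participant opens a presentation dated within the last 2 years",
}

def facilitator_script(s: dict) -> str:
    """Render the scenario as it would be read aloud to the participant."""
    return f'{s["context"]} {s["task"]}'

print(facilitator_script(scenario))
```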
  • However, sometimes scavenger hunt tasks have limitations – particularly when testing search services and digital libraries.
  • This is what a self-generated task might look like in a test script, with an example of the type of task a student might come up with.
  • Self-generated tasks can provide the greatest realism and therefore surface findings which scavenger hunt tasks might not reveal. However, it also means you can't directly compare tasks between participants, so they are not ideal if you are looking to gather quantitative data (although they can be used in conjunction with other tasks which specifically collect quantitative data).
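For contrast, here is a minimal sketch (hypothetical data and field layout, my own) of the quantitative read-out you get when every participant attempts the same scavenger-hunt task; with self-generated tasks each row would describe a different task, so these aggregates would not be comparable across participants:

```python
# Hypothetical results for one shared scavenger-hunt task:
# (participant, completed?, seconds on task)
from statistics import mean, median

sessions = [
    ("P1", True, 95),
    ("P2", True, 140),
    ("P3", False, 300),  # gave up / timed out
    ("P4", True, 110),
]

completion_rate = mean(1 if done else 0 for _, done, _ in sessions)
success_times = [t for _, done, t in sessions if done]

print(f"Completion rate: {completion_rate:.0%}")                     # 75%
print(f"Median time on task (successes): {median(success_times)}s")  # 110s
```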
  • The word choice questionnaire was a list of standard adjectives, both positive and negative. Participants were asked to tick all the words which summarised their experience and to circle their top three. Morae is usability testing software typically used by professionals to record tests, and it is good at helping you edit together a highlights video to share with others. We have several edited highlights from the testing which are available to see as an example.
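As a hedged sketch of how word choice responses can be summarised (the adjectives and responses below are invented for illustration):

```python
# Tally hypothetical word-choice questionnaire ticks to surface themes.
from collections import Counter

responses = {
    "P1": ["clean", "slow", "useful"],
    "P2": ["cluttered", "useful"],
    "P3": ["useful", "slow"],
}

tally = Counter(word for words in responses.values() for word in words)
print(tally.most_common(3))  # e.g. [('useful', 3), ('slow', 2), ('clean', 1)]
```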
  • Try not to interrupt participants and give them lots of time to speak – sometimes you find they say something unexpected. It's hard to stay quiet and it takes practice, but it's certainly an important skill. Ask open questions which aren't closed (inviting one-word answers) or too leading. Clarify a participant's point at the end to ensure you've understood their feedback. Remember that what participants say is often different to what they do; make a note of any discrepancies, but it's not necessary to pick participants up on this.
  • Too much help will only affect the results of the testing and its robustness. Tell participants that if they need help or have a question you will be able to help them, but only at the end of the task. I find that once a participant has demonstrated that they cannot complete the task and has stated they would give up, you can provide pointers. This is helpful as you can gather feedback on users' expectations.
  • Guerilla testing is similar to 'discount' usability testing and makes usability testing cheaper and more accessible for everyone to undertake. Characteristics include real environments, loose recruitment (often on the spot), shorter test times and smaller incentives (if any), e.g. a free coffee or voucher. It is not as rigorous as traditional testing and is not always conducted with representative users.
  • Has anyone attended the IS usability testing training days here at Edinburgh? If so, you may remember our tests at the end of the class.
  • This is a useful and accurate metaphor for mobile internet use compared to desktop use. It is important to bear in mind when designing mobile web services, as it has an impact on the type of services users are likely to want and find useful.
  • During the UX2 project we created a mobile-friendly version of the desktop prototype digital library. We wanted to understand how students searched for information on smaller screens and whether their needs changed under different circumstances. Here we can see an example of the test environment – a student searching on their phone, which is hooked up to the recording equipment. A web cam positioned above the phone allowed us to capture what was happening on screen; the image provides an example of the mobile site.
  • Other mobile testing equipment includes the ELMO-cam, which is quite expensive and commonly found in speciality test laboratories. We wanted a lower-tech and cheaper alternative which could do the job equally well, if not better. We used a 'sled' system which has been used and documented by others (see resources on the last slide). Using this as guidance we built our own sled with the following materials. For more detailed information please read our blog post titled Mobile Usability Testing: http://lorrainepaterson.wordpress.com/2011/03/31/ux2-mobile-usability-testing/
  • A close-up of the set-up, showing the camera clipped onto the Perspex and the phone sitting in the case. A second web cam in the background was used to capture body language in picture-in-picture (PiP) format.
  • To conclude, when undertaking mobile testing we considered two things in particular: should we develop a website or an app first, and should we make testing as realistic as possible or test in a controlled environment such as a lab or office room? Research revealed a consensus that prioritising mobile websites is best. Research also revealed that realistic mobile testing does not necessarily reveal additional usability issues. Examples of realistic testing include making participants walk around an obstacle course while performing tasks, or having users go out while conducting tests, e.g. travelling on a tube train. As this was a preliminary test intended to be the first of many, the scope of the project meant that realistic testing would be too time-consuming to be worthwhile at this stage; it might have been more useful closer to launch or post-launch.
  • Here are some resources mentioned directly or indirectly during the presentation – very worthwhile reading! I have copies of some of the books mentioned; if you would like to take a look at them, feel free to come up at the end or during lunch.

Transcript

  • 1. USABILITY AND USER RESEARCH: CASE STUDIES AND CURRENT PRACTICE Lorraine Paterson, Boon Low NeSC, University of Edinburgh User Centred Design Day 17 May 2011
  • 2. Agenda
    • Why is user research important?
    • What is usability?
    • Usefulness and usability
    • Methods
      • Personas
      • Contextual Enquiry (Boon)
      • ‘Traditional’ Usability Testing
      • Guerilla Testing
      • Mobile Usability Testing
  • 3. “It’s only a little problem...they’ll get over it.”
  • 4.
    • If it’s difficult to use, people leave
    • If users get lost, they leave
    • If users don’t understand how to use a service, they leave
    • If users do not understand the value of using a service, they leave
    • If it’s hard to read or doesn’t address their key questions, people leave
    Why is usability necessary?
  • 5. Why is usability important ?
    • Websites are full of little usability problems (and sometimes big ones). No website is ever perfect!
    • Each problem frustrates users a little more reducing their ‘Reservoir of goodwill’ – Steve Krug, Don’t Make Me Think (2000).
  • 6. Reservoir of goodwill
    • Every little frustration gradually erodes our goodwill.
      • Different people have different size reservoirs.
      • Sometimes you have more goodwill than others.
      • You can refill it even if you make mistakes.
      • Sometimes a single mistake can empty it.
    • Therefore every small improvement can have an impact.
    http://www.adobe.com/designcenter/dialogbox/usability/index.html
  • 7. Things that increase good will
    • Know the main things people want to do on your site and make them obvious and easy
      • Top 3 goals important to the success of the site
    • Tell me what I want to know
      • Be upfront about things. Explain registration procedure clearly
    • Save me steps wherever you can
      • Registration, short-cut links
    • Put effort into it
      • The more effort you put into presenting information clearly and organising it so that it can be found quickly, the more users will be able to answer their own questions
  • 8. Things that increase good will
    • Anticipate the types of questions that are likely to be asked
      • FAQs are useful when kept up to date and easy to navigate
    • Provide me with creature comforts like printer-friendly pages
    • Make it easy to recover from errors
      • E.g. Suggestions and advice when no search results returned
    • When in doubt, apologise
      • Let them know you are aware of the issue
  • 9. Things that diminish good will
    • Hiding information I want:
      • Contact details, shipping details or prices
    • Punishing me for not doing things your way:
      • Formatting data in forms, forgetting to complete a field, visiting the wrong page or conducting a search with no results
    • Asking me for information you don’t really need:
      • Personal details, marketing questions
    • Putting ‘sizzle’ in my way:
      • Long Flash intros, bloated pages, marketing or adverts
    • Providing an amateur looking site:
      • Sloppy, disorganised and unprofessional
  • 10. Definition
    • Usability definition from International Organisation for Standardisation (ISO) 9241-11:
      • “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”
    http://en.wikipedia.org/wiki/Usability
  • 11. Definition
    • Usability components:
      • Learnability: How easy is it for users to accomplish tasks the first time?
      • Efficiency: Once users have learned the design, how quickly can they perform tasks?
      • Memorability: When users return to the design after a period, how easily can they re-establish proficiency?
      • Error prevention: How many errors do users make, and how easily can they recover?
      • Satisfaction: How pleasant is it to use the design?
    http://www.useit.com/alertbox/20030825.html
  • 12. Ways to improve usability
    • User testing is the best way to get feedback on your website and will always uncover something you weren’t aware of
    • Usability professionals mantra:
      • “Test early and often!”
    • Contrary to popular belief,
      • Usability testing can be done quickly
      • Can be done cheaply
      • Can be done by anyone
  • 13.
    • Another key attribute is utility
    • Does it do what the users need?
    • Is it useful while also being usable ?
  • 14. Usefulness and Usability
    • Technology Acceptance Model (TAM)
      • A person’s intention to adopt an information system is affected by two beliefs: perceived ease of use and perceived usefulness
    • However, TAM can only identify that a system is not likely to be accepted by users but not offer feedback on how it can be improved (Dillon & Morris, 1999)
    Dillon, A. and Morris, M. (1999). Power, Perception and Performance: From Usability Engineering to Technology Acceptance with the P3 Model of User Response. Presented at the 43rd Annual Conference of the Human Factors and Ergonomics Society, Santa Monica, CA: HFES.
  • 15. Interactive Triptych Framework
    • “Unless users perceive an information system as being useful at first, its ease of use has no effect in the formation of intention” ~ Szajna, 1996
    Szajna, B. (1996). Empirical Evaluation of the Revised Technology Acceptance Model. Management Science, 42(1).
    Tsakonas, G. and Papatheodorou, C. (2006). Analysing and evaluating usefulness and usability in electronic information services. Journal of Information Science, 32(5).
  • 16. Personas
  • 17. Personas
    • A persona is a fictional person who represents a major user group for your site
    • Personas help you visualise your users when developing user interfaces
    • Enables library stakeholders to better judge user goals and needs, and to overcome incorrect system assumptions
    • How many personas?
      • No definitive answer but if you have a lot it’s likely that some will overlap and can be combined
      • You don’t always need one persona per demographic group. It’s the behaviour and attitudes that are important, and these might be very similar or the same for, say, librarians and academic staff
      • Normally 4-6 is a good guide
  • 18. AquaBrowserUX Personas
    • Created a set of library personas for JISC AquaBrowserUX
    • Planned to create qualitative personas with quantitative validation
    • Only able to create qualitative personas in the end
    Image: Mulder, S. & Yaar, Z. (2007). The User Is Always Right: A Practical Guide to Creating and Using Personas for the Web
  • 19. AquaBrowserUX Personas
      • Conduct qualitative research to reveal insights: interviews with 17 participants
      • Summarise participants using coloured post-its:
      • Goals
      • Information seeking behaviour
      • How they relate to library services
      • Skills, abilities and interests
  • 20. AquaBrowserUX Personas
    • Two-by-two comparisons:
      • Selecting random pairs of participants to identify distinctions and plot them on a distinction scale
      • Keep selecting two participants until all have been compared
  • 21. Plot each participant on every distinction scale. Example scales (end points): positive attitude to library ↔ negative attitude to library; preference for digital resources (e-books) ↔ preference for physical resources (books); uses digital library a lot ↔ uses digital library little; uses physical library a lot ↔ uses physical library little; use of Classic ↔ use of AquaBrowser; use of internal resources ↔ use of external resources; positive attitude to AquaBrowser ↔ negative attitude to AquaBrowser; positive attitude to Classic ↔ negative attitude to Classic; simple search ↔ Boolean search
  • 22. AquaBrowserUX Personas Identify segments, looking for groups with similar traits
  • 23. AquaBrowserUX Personas
    • Write a persona for each segment you have
    • We created 3 student personas and 1 library persona
    • Eve the e-book reader: “I like to find excerpts of books online which sometimes can be enough. It saves me from having to buy or borrow the book.”
    • Sandra the search specialist: “In a quick-fire environment like ours we need answers quickly”
    • Pete the progressive browser: “Aquabrowser and Classic, it’s like night and day”
    • Baadal the search butterfly: “Classic is simple and direct but Aquabrowser’s innovative way of browsing is also good for getting inspiration.”
  • 24. AquaBrowserUX Personas
    • Most difficult step is getting from two-by-two comparisons to identifying segments
    • You may find you create more segments than you eventually use
    • Once you begin writing the persona you may realise there are two personas who are very similar or not different enough to justify two individual personas
    • It is a time-consuming process: 17 hours of interviews followed by a week of analysis and persona writing
    • It is worth thinking about individual characteristics, e.g. goals, interests, skills, information seeking behaviour, attitude to the library, before you begin interviews so you are thinking about them throughout
  • 25. Contextual Enquiry
  • 26. Usability Testing
    • Usability components:
      • Learnability: How easy is it for users to accomplish tasks the first time?
      • Efficiency: Once users have learned the design, how quickly can they perform tasks?
      • Memorability: When users return to the design after a period, how easily can they re-establish proficiency?
      • Error prevention: How many errors do users make, and how easily can they recover?
      • Satisfaction: How pleasant is it to use the design?
    http://www.useit.com/alertbox/20030825.html
  • 27. Usability Testing
    • Recruit representative users of your service
    • Ask them to perform representative tasks using the service interface
    • Observe what the users do, where they succeed, and where they have difficulties with the user interface.
  • 28. Characteristics
    • Ask participants to verbalise their thoughts by ‘thinking aloud’
    • One-to-one tests, unlike focus groups
    • Normally small numbers of participants (5 considered adequate in most situations*)
    • Most effective when used iteratively throughout development
    • Provides information on how people use an interface
    *Research by Jakob Nielsen found that 5 participants uncover around 85% of usability issues: http://www.useit.com/alertbox/20000319.html
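For reference, the 85% figure comes from Nielsen and Landauer’s problem-discovery model, where λ ≈ 0.31 is the average proportion of problems a single test user uncovers:

```latex
% Expected share of the N usability problems found by n test users:
\mathrm{found}(n) = N\bigl(1 - (1 - \lambda)^n\bigr), \qquad \lambda \approx 0.31
% For n = 5: 1 - 0.69^5 \approx 0.84, i.e. roughly 85\% of the problems.
```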
  • 29. Representative Task Scenarios
    • Task scenarios must be realistic to the user
    • Write them in the participant’s own words, keep them unambiguous, and ask the participant to explore a particular area of interest
    • Types of task scenarios:
      • Scavenger hunt: tasks with one clear, ideal answer.
      • Reverse scavenger hunt: show users the answer and ask them to find and purchase it, e.g. an image of clothing to source online.
      • Self-generated tasks: using interview questions beforehand to help participants identify their requirements
      • Part self-generated: ask users to bring data with them to the session and then use it to construct an ad-hoc scenario.
      • ‘Skin in the game’ tasks: give users real cash to purchase something as part of the task, making it as realistic as possible
      • Troubleshooting tasks: ask the user to solve a problem
  • 30. Scavenger Hunt
    • “As part of your coursework your lecturer has asked you to read a recent presentation on the developments in fusion. Using the prototype can you find a suitable presentation published in the last 2 years?”
    • Subject matter: where possible, relevant to the user’s knowledge base
    • Task restriction: forces the user to think about narrowing their search; also acts as a task completion indicator
    • Realism: make the scenario realistic so the user can relate to it
  • 31. Self Generated Task
    • Interview-based tasks are often more suitable for search interfaces than pre-defined tasks
    • Pre-defined tasks won’t feel real to the participant and this may affect how they search for information
    • Many search interfaces, such as digital library catalogues, don’t have one or two specific tasks that matter more than others
    • Here you create a task with the input of the participant and agree what task success would be beforehand
    • Provides a valuable insight into information seeking behaviour
    AquaBrowserUX blog post: Realism In Testing Search Interfaces by David Hamill http://lorrainepaterson.wordpress.com/2010/10/05/realism-in-testing-search-interfaces/
  • 32. Self Generated Task
    • ‘What do you study at the University?’
    • ‘Describe the last time you had to search for information as part of your course. What were you looking for?’
    • ‘Let’s try and see if we can search for this information today using.... Can you use the web to show me how you would search for this information?’
    • Example: a task scenario created based on an essay the participant had to write on national identity in the work of Robert Louis Stevenson.
      • Led to searches on the architecture in Jekyll and Hyde, Edinburgh's architecture in Scottish literature, opinion on architecture in Stevenson's work and opinion on architecture in national identity
    AquaBrowserUX blog post: Realism In Testing Search Interfaces by David Hamill http://lorrainepaterson.wordpress.com/2010/10/05/realism-in-testing-search-interfaces/
  • 33. Self Generated Task
    • Downside to such tasks is that it’s difficult to report useful measurements
      • Each participant undertakes a different task
      • Inappropriate method to provide an overall measure of usability
      • Perhaps useful in conjunction with other types of task scenario, e.g. scavenger hunt tasks.
  • 34. AquaBrowserUX Testing
    • Recruited students based on established Personas
    • Mixture of undergraduate and postgraduates
    • 12 participants, £15 book voucher incentive
    • 1 hour sessions carried out over 3 days with an external freelance consultant
    • Post-test interview to sum up experience
    • Word choice questionnaire
    • Ran pilot with member of staff, also tested Morae set-up
    • Video highlights from AquaBrowserUX tests: http://vimeo.com/15765406
  • 35. Facilitating user research
    • Do
    • Put participants at ease (avoid the word ‘test’)
    • Emphasise you’re not testing them, just the design
    • Encourage them to think aloud
    • Allow them to stop and leave at any time
    • Make notes, even when you are recording
    • Most importantly, learn to listen and not lead participants, as leading can influence their answers
  • 36. Facilitating user research
    • Don’t
    • Help participants with hints or leading questions
    • Justify the design or content to them:
      • Stay objective and encourage constructive criticism
    • Be afraid to improvise:
      • Have some questions ready to get started, but don’t stick strictly to a script; everyone’s experience will differ
  • 37. Informal (Guerilla) Testing
    • Quick and easy to perform
    • Conducted in a ‘real’ environment, e.g. library or cafe
    • Tests are much shorter (15 mins max)
    • Useful when testing a design or resolving a design conflict
    • Relatively inexpensive
      • shorter sessions mean you can test half a dozen participants within an afternoon.
    • However, bias can be an issue if you are closely involved in the project
    • Less statistical rigour on quantitative results, but appropriate as a user-centred design method
    • Difficult to recruit representative users
  • 38. UX2 Guerilla Testing
    • Piggybacked onto a workshop, run for university staff every 6 months, which teaches usability testing methods
    • Asked attendees to participate for 10-15 mins at end of workshop
    • Tested two different interfaces using Blacklight technology
    • Aim: to get initial feedback on both interfaces and identify successful aspects of each for future design changes
    • Recruited 7 participants in total over 2 workshops (2 phases)
    • Alternated design exposure to reduce the familiarity factor (see the sketch after this list)
    • Able to make changes from findings in phase 1 to test in phase 2
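A minimal sketch of the alternation mentioned above (participant IDs and interface labels are hypothetical): interface order is assigned alternately, so that familiarity and order effects average out across the group:

```python
# Counterbalance which interface each participant sees first.
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7"]

orders = [("A", "B") if i % 2 == 0 else ("B", "A")
          for i in range(len(participants))]

for person, order in zip(participants, orders):
    print(person, "->", " then ".join(order))  # P1 -> A then B, ...
```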
  • 39. Scuba Diving vs. Snorkelling
    • “Desktop internet is like scuba diving, where the search can be immersive and invites exploration and discovery. Mobile internet use is closer to snorkelling, where shallow dipping in and dipping out of content for quick checking of key content is desired.” ~ Hinman et al., 2008
    Hinman, R., Spasojevic, M. and Isomursu, P. (2008). They call it surfing for a reason: identifying mobile internet needs through PC internet deprivation. In CHI ’08 Extended Abstracts on Human Factors in Computing Systems (pp. 2195-2208). Florence, Italy: ACM. doi:10.1145/1358628.1358652
  • 40. UX2 Mobile Usability Testing
    • Testing UX2 prototype digital library
    • Aimed at understanding how participants find searching for items on a smaller screen
    • How easy is the website to navigate on a mobile?
    • How useful are the features/services provided?
  • 41. UX2 Mobile Usability Testing
    • Various methods for testing mobile interfaces
    • UX2 used the ‘sled’ method, which is tried and tested by other researchers (see resources slide).
    • Easy ‘do-it-yourself’ method and relatively cheap
    • Requirements:
      • Perspex (30cm x 11cm)
      • Web cam (x2 where possible)
      • Carry case for each type of phone being tested (e.g. iPhone 3G and 4)
      • Velcro to attach phone case to Perspex
      • Computer with adequate recording software (Morae)
    • Total cost (excluding Morae software): approx £80
  • 42. [Image: close-up photo of the sled set-up]
  • 43. UX2 Mobile Usability Testing
    • Mobile website vs. Native applications (Apps)
      • General consensus that mobile websites should be prioritised before apps for a number of reasons:
        • Fewer compatibility issues, don’t have to design for individual mobile platforms or handsets
        • Users don’t have to download anything
        • Don’t need approval from an app store (e.g. Apple’s) to publish
    • Realistic mobile testing vs. Laboratory testing
      • Review of existing literature revealed that no additional usability issues are revealed when testing in realistic conditions ( http://lorrainepaterson.wordpress.com/2011/02/22/mobile-user-research-methods/ )
  • 44. Further Reading & Resources
    • JISC Research Project - Usability Foundation Study and Investigation of Usability in JISC Services (2003): http://www.jisc.ac.uk/whatwedo/programmes/presentation/usability.aspx
    • Book - Rocket Surgery Made Easy: The Do-It Yourself Guide to Finding and Fixing Usability Problems, Steve Krug, New Riders, 2010
    • Book - Don’t Make Me Think (2nd Edition), Steve Krug, New Riders 2006
    • Book - User-Centred Library Websites: Usability Evaluation Methods, Carole A. George, Chandos Publishing, 2008
    • Blog - Usability Ed: Website usability and content management in higher education, by Neil Allison http://usability-ed.blogspot.com/
    • Blog – 90 Percent of Everything by Harry Brignull. UX2 test sled blog post: http://lorrainepaterson.wordpress.com/2011/03/31/ux2-mobile-usability-testing/
  • 45. Thank You!
    • UX2.0
    • Website: http://ux2.nesc.ed.ac.uk
    • Wiki: https://www.wiki.ed.ac.uk/display/UX2
    • Twitter ID: @ux2
    • Lorraine Paterson
    • Email: [email_address]
    • Blog: http://lorrainepaterson.wordpress.com
    • Twitter ID: @lorraine_p