UX research


Published in: Design, Technology
  • Business question: Is anyone working on a major (large-scale) site launch or redesign that your company depends on for survival?
  • Audience question: “How many of you have a project at the point where it is ready for a major commitment (round A funding, new release, major new upgrade)?” I have a web site, software product, or app, and I am about to commit major funding or resources to the next phase of development. Do I have usability problems with the user experience that are basically show-stoppers? Users cannot download the application, cannot log in, cannot set up a profile page, or cannot navigate to critical content. Typical scope: 1-3 critical tasks in 60 minutes.
  • Business question: How are users actually viewing your content (in what order, for how long, and in what specific pattern or pathways)? Audience question: Have you wondered whether critical links, buttons, or content messaging is being viewed on a critical page? Description: this methodology is very useful when trying to determine why certain homepage metrics from analytics programs are of concern (users not clicking on a value-proposition element, etc.). The respondent sits at a specialized computer screen and undergoes a simple calibration sequence. The respondent is given a stimulus question or task, active or passive (e.g., show the homepage for a set period of 15 seconds). The system tracks eye pathways and fixations and produces a data file from that task. Important things to know about eye-tracking: Tobii was not designed for web sites or changing visual stimuli, which makes actual testing of web navigation (changing from page to page) very complex to analyze and not accurate. It is very effective for single-stimulus presentations of fixed duration, excellent for detailed analysis of home pages or critical landing pages and forms, and very insightful for assessing the impact of advertising on homepage visual scanning.
  • Business problem: How do I organize information (content, navigation, overall IA) so that users understand it? Description: this is an automated version of the classic card-sorting study, where you give users a pile of index cards with your content descriptions on them and ask them to sort the cards into groups according to how they relate to each other. Example: if I have a set of content categories, how do I determine the groupings and the high-level navigation labels? Say you have a site selling women’s underwear and you want to create a navigation structure that matches the users’ mental model. Do you organize the top-level navigation by type of underwear, then by style, color, and price, or by lifestyle (athletic, everyday, intimate), then by type of article, color, and price? Respondents are invited to an online study via email. When they agree, they see a screen with a list of labels or terms in one column and are asked to sort the terms into groups they find organizationally relevant. When they are finished, you can give them another card sort or just finish the study. When the required number of respondents have finished the card sort, you can view the data. Card-sorting data is analyzed through cluster analysis (not that easy to understand, but very useful).
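The cluster analysis mentioned in this note starts from a card-by-card co-occurrence matrix: for each pair of cards, the fraction of participants who placed them in the same group. Hierarchical clustering is then run on that matrix. A minimal sketch in Python; the card names and sort data are invented for illustration, echoing the underwear-site example:

```python
from itertools import combinations

# Each participant's sort: a list of groups (sets of card labels).
# Hypothetical data for three participants and four cards.
sorts = [
    [{"bras", "panties"}, {"sports bras", "athletic shorts"}],
    [{"bras", "sports bras"}, {"panties", "athletic shorts"}],
    [{"bras", "panties"}, {"sports bras", "athletic shorts"}],
]

def cooccurrence(sorts):
    """Fraction of participants who put each pair of cards in the same group."""
    counts = {}
    for groups in sorts:
        cards = sorted(c for g in groups for c in g)
        for a, b in combinations(cards, 2):
            same = any(a in g and b in g for g in groups)
            counts[(a, b)] = counts.get((a, b), 0) + same
    return {pair: n / len(sorts) for pair, n in counts.items()}

matrix = cooccurrence(sorts)
print(matrix[("bras", "panties")])  # 2 of 3 participants grouped these together
```

Tools like WebSort compute this matrix for you; the point of the sketch is that the cluster dendrograms in their reports are just a visualization of these pairwise agreement scores.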
  • Business question: Do any of you have a new development team with minimal UX/usability experience? Is your team employing best practices, and is it aware of the key UX and usability performance issues that an effective solution must meet? Description: a highly experienced usability/UI design expert conducts a structured audit of your system or product and rates it on best practices and estimated performance. Interview and select an expert who has direct experience in your product category and sector. The expert gathers information from your development team and conducts a structured audit based on predetermined best practices, then presents findings to your team (sometimes not a happy experience for UX design teams without knowledge of formal UCD methods). Very effective early in development, and it can be repeated with updates at lower cost.
  • Pattern Name: A/B Testing. Classification: Continuous Improvement. Intent: can be valuable in refining elements on a web page; altering the size, placement, or color of a single element, or the wording of a single phrase, can have dramatic effects, and A/B testing measures the results of these changes. Template fields: Also Known As (other names for the pattern); Motivation/Forces (a scenario consisting of a problem and a context in which this pattern can be used); Applicability (situations in which this pattern is usable; the context for the pattern); Structure (a graphical representation of the pattern; class diagrams and interaction diagrams may be used); Participants (a listing of the classes and objects used in the pattern and their roles in the design); Collaboration (how classes and objects used in the pattern interact with each other); Consequences (the results, side effects, and trade-offs caused by using the pattern); Implementation (the solution part of the pattern); Sample Code (an illustration of how the pattern can be used in a programming language); Known Uses (examples of real usages of the pattern); Related Patterns (other patterns that have some relationship with this one, and their differences).
  • Pattern Name: Kano Analysis. Classification: Business Requirements Management. Intent: allows quantitative analysis of feature priority to guide development efforts and specifications; ensures that the organization understands what is valued by users; less effective for new product categories. Also Known As: Kano Model. Motivation (Forces): you need to categorize features into basic must-haves, features that create user satisfaction, and features that delight. Applicability: you have a list of business requirements, but you know that in the current phase of the project you will not be able to get everything done. You are going to use a cycle methodology, and you need to know which features users will want as basic must-haves, which features will excite them, and which are low-impact. In any given release you will want to include at least one delightful/exciting feature; in your first release you will probably want to include as many basic/must-have features as possible. Use Kano analysis to identify which features are which. Participants: potential users, surveyor. Consequences: this tool tells you about user perceptions; remember this limitation, as you might want to measure something else. Implementation: a survey method that determines how people value features and attributes in a known product domain; shows which features are basic must-haves, which create user satisfaction, and which delight.
  • Pattern Name: Six Thinking Hats. Classification: Business Requirements Management. Intent: can enable better decisions by encouraging individuals or teams to abandon old habits and think in new or unfamiliar ways; can provide insight into the full complexity of a decision and highlight issues or opportunities that might otherwise go unnoticed.
  • Note: give an example here
  • Usability tests are really not such a big deal. Here’s a quick overview of the steps: come up with a set of 3-5 tasks that you’ll ask users to perform; round up 5-10 volunteers to act as test participants, and bring them one at a time into a testing area where you’ll observe them as they perform the predetermined tasks. After you’ve observed all the test participants, you’ll have a pretty good idea of which things need to be fixed and which seem to be working OK. After you make the easiest 2-3 fixes, go back and do another round of testing and tweaking, and so on.
  • OK, so now you have an idea about what service or resource you’re going to test; next, think about what actual tasks you want your test participants to do. You’ll want to pick tasks that are going to reveal useful information. One obvious place to look for tasks is the pages or services that you and your colleagues already know need work, such as your interlibrary loan form or the way library hours are displayed. Another strategy is to think about the most common activities among patrons in your library: take a look at your site statistics to see which pages are most popular; maybe that’s where you want to do your testing. Or maybe you’re about to launch a new page or service. Those are great opportunities for testing.
  • OK. The gear you need is not too complicated. You’ll need a computer; a desktop or a laptop will do. Last year, I had test participants use my smartphone when I was testing a mobile web site. If you really want to get serious about user-centered design, you may want to do usability testing on paper sketches that precede any actual website coding. This is perfectly acceptable and commonly done; it’s a great way to run tests that help you catch basic page-layout and site-architecture problems. You’ll also want to install screen-recording software on the computer your test participants use. That way, you can capture as a movie all the mouse movements, page clicks, and characters typed; this is really rich data to return to when the tests are done and you are writing up your report. I’ll talk in a minute about software options. Another option that has worked for me is simply to have a second person on hand helping with the test; that person’s sole responsibility is to closely observe the test participant and take detailed notes. Finally, if you have screen-recording software, you might as well get a USB microphone to capture the conversation between the test participant and the facilitator. You’ll want to encourage the participant to think aloud as much as possible while performing tasks.
  • Here are five options for screen-recording software. I’ve used CamStudio a lot, mostly because it is free and can be installed on any machine. With the others, you’ll get a much richer feature set but will be limited in the number of machines you can install them on.
  • OK, so if you are doing the tests all by your lonesome (not the best situation, but certainly still doable), you’ll be in charge of recruiting test participants, running the test, recording the test (you’ll definitely need screen-recording software and a mic), and prepping the test environment.
  • If you can get another person to help you out with the testing, you can break up the tasks in rational ways.
  • It’s essential that you ask the participant to speak aloud so you can hear them express any frustrations or surprises they’ve had.
  • Saves time: very fast, with thousands of respondents on panels. Saves money: the essence of quick and dirty. There are techniques for dealing with noise, and it is unrealistic to keep participants in the lab that long. Combines qualitative and quantitative data, and attitudes and behavior.
  • All the flexibility you need to set up a study and analyze the data, with significant support in designing the study and the analysis. Pricing is all project-based and typically very expensive; a good choice for a large benchmark study.
  • - Sort into groups
  • OPEN SORT: good for getting ideas on groups of content. CLOSED SORT: useful to see where people would put the content.
  • Card sorting as a method in HCI largely took off during the internet boom of the late 1990s with the proliferation of website navigation.
  • Today it’s one of the most popular methods UX professionals use. In fact, practitioners report using card sorting as frequently as task-oriented, lab-based usability testing.
  • This effect can be used to direct attention, for example on an ad. Here two different versions of an ad were eye-tracked. In this case, the model is looking directly at the viewer.
  • And in this version, the model looks at the product, forming a straight line between her eye and the product name on the package.
  • Using the cards post-task or post-test: the participant walks the table and chooses cards, then returns to discuss their meaning. Log comments for later analysis.

    1. UX Research | 2013.4.14 | InnoUX CEO 최병호 | InnoUX@InnoUX.com, @ILOVEHCI
    2. © 2013 InnoUX & Innodesign All rights reserved. UX Research: Table of Contents • Definition • Case Study • Methodology Overview (Contextual Inquiry, Diary Study, Field Study, Card Sorting, Usability Test, Remote Test, Eye Tracking, Heuristic Evaluation, Cognitive Walkthrough) • References
    3. Definition
    4. 4 Common Biases in Customer Research • Confirmation Bias • Framing Effect • Observer-expectancy Effect • Recency Bias
    5. Confirmation Bias: your tendency to search for or interpret information in a way that confirms your preconceptions or hypotheses.
    6. Framing Effect: when you and your team draw different conclusions from the same data based on your own preconceptions.
    7. Observer-expectancy Effect: when you expect a given result from your research, which makes you unconsciously manipulate your experiments to give you that result.
    8. Recency Bias: disproportionate salience attributed to recent observations (your very last interview), or the tendency to weigh more recent information over earlier observations.
    9. http://www.nngroup.com/articles/return-on-investment-for-usability/
    10. Methodology: Overview
    11. You need to gather: • factual information • behavior • pain points • goals. You can document this on the persona validation board, as well as photos, video, audio, journals: document everything.
    12. http://www.nngroup.com/articles/how-many-test-users/
    13. http://www.nngroup.com/articles/how-many-test-users/
    14. http://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/
    15. http://www.nngroup.com/articles/guerrilla-hci/
    16. http://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
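The “five users” guidance in the NN/g article linked above rests on Nielsen’s problem-discovery model, found(n) = 1 − (1 − L)^n, where L is the average share of usability problems a single test user uncovers. A quick sketch of the curve; L = 0.31 is the average detection rate Nielsen reports, taken here as an assumption:

```python
# Problem-discovery model behind the "test with 5 users" guidance.
# detection_rate = 0.31 is Nielsen's reported average, assumed here.
def proportion_found(n_users, detection_rate=0.31):
    """Expected share of usability problems uncovered by n test users."""
    return 1 - (1 - detection_rate) ** n_users

for n in (1, 3, 5, 15):
    print(n, f"{proportion_found(n):.0%}")
```

With these numbers, five users find roughly 84% of problems, which is why iterating several small tests beats one large one.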
    17. http://www.nngroup.com/articles/how-many-test-users/ http://www.nngroup.com/articles/quantitative-studies-how-many-users/ http://www.nngroup.com/articles/outliers-and-luck-in-user-performance/ http://www.nngroup.com/articles/card-sorting-how-many-users-to-test/
    18. 38 Types of Research Methods (chart: quantitative vs. qualitative, generative vs. evaluative; methods shown include fMRI Brain Imaging, Eye Tracking, Lab-Based Testing, Professional Heuristics, Contextual Observation, Remote Ethnography, Online UX Concept Surveys, Online Card Sorting, Large-Sample Online Behavior Testing, Ergonomic Observation, Focus Groups)
    19. http://www.nngroup.com/articles/which-ux-research-methods/
    20. Methodology: Contextual Observation / Ethnography
        ► Business problem: How are people actually using products versus how they were designed?
        ► Description: In-depth, in-person observation of tasks and activities at work or home. Observations are recorded.
        ► Benefits: Access to the full dimensions of the user experience (e.g. information flow, physical environment, social interactions, interruptions, etc.)
        ► Limitations: Time-consuming research with travel involved; smaller sample size does not provide statistical significance; data analysis can be time-consuming
        ► Data: Patterns of observed behavior and verbatims based on participant response, transcripts and video recordings
        ► Tools: LiveScribe (for combining audio recording with note-taking)
        Cost / respondent: Low – Moderate – High; Statistical validity: None – Some – Extensive
    21. Methodology: Remote Ethnography
        ► Business problem: How are people actually using products in their environment in real time?
        ► Description: Participants self-record activities over days or weeks with pocket video cameras or mobile devices, based on tasks provided by the researcher.
        ► Benefits: Allows participants to capture activities as they happen and where they happen (away from the computer), without the presence of observers. Useful for longitudinal research and geographically spread participants.
        ► Limitations: Dependence on participants’ ability to articulate and record activities; relatively high ratio of data analysis to small sample size
        ► Data: Patterns based on participant response, transcripts and video recordings
        ► Tools: Qualvu.com
        Cost / respondent: Low – Moderate – High; Statistical validity: None – Some – Extensive
    22. Methodology: Large-Sample Online Behavior Tracking
        ► Business problem: Major redesign of a large, complex site that is business-critical?
        ► Description: 200-10,000+ respondents do tasks using online tracking / survey tools
        ► Benefits: Large sample size, low cost per respondent, extensive data possible
        ► Limitations: No direct observation of users; survey design is complex; other issues
        ► Data: You name it (data exports to professional analysis tools).
        ► Tools of choice: Keynote WebEffective, UserZoom
        Cost / respondent: Low – Moderate – High; Statistical validity: None – Some – Extensive
    23. Methodology: Lab-based UX Testing
        ► Business problem: Are there show-stopper (CI) usability problems with your user experience?
        ► Description: 12-24 respondents undertake structured tasks in a controlled setting (lab)
        ► Benefits: Relatively fast, moderate cost, very graphic display of major issues
        ► Limitations: Small sample, study design, recruiting good respondents
        ► Data: Summary data in tabular and chart format, plus video out-takes
        ► Tools: Leased testing room, recruiting service and Morae (industry standard)
        Cost / respondent: Low – Moderate – High; Statistical validity: None – Some – Extensive
    24. Methodology: Eye-Tracking
        ► Business problem: Do users see critical content, and in what order?
        ► Description: Respondents view content on a specialized workstation or glasses.
        ► Benefits: Very accurate tracking of eye fixations and pathways.
        ► Limitations: Relatively high cost; analysis is complex; data can be deceiving.
        ► Data: Live eye fixations, heat maps, etc.
        ► Tools of choice: Tobii, SMI
        Cost / respondent: Low – Moderate – High; Statistical validity: None – Some – Extensive
    25. Methodology: Automated Online Card Sorting
        ► Business problem: Users cannot understand where the content they want is located?
        ► Description: Online card sorting based on terms you provide (or users create)
        ► Benefits: Large sample size, low cost, easy to field
        ► Limitations: Sorting tools can confuse users; data hard to understand
        ► Data: Standard cluster-analysis charts and more
        ► Tools of choice: WebSort and others
        Cost / respondent: Low – Moderate – High; Statistical validity: None – Some – Extensive
    26. Methodology: fMRI (Brain Imaging)
        ► Business problem: What areas of the brain are being activated by UX design?
        ► Description: Respondents are given visual stimuli while in an fMRI scanner
        ► Benefits: Maps design variables to core functions of the human brain
        ► Limitations: Expensive, and data can be highly misleading
        ► Data: Brain scans
        ► Tools: Major medical centers and research services (some consultants)
        Cost / respondent: Low – Moderate – High; Statistical validity: None – Some – Extensive
    27. Methodology: Professional Heuristics
        ► Business problem: Rapid feedback on UX design based on best practices or opinions
        ► Definition: “A heuristic is a simple procedure that helps find adequate, though often imperfect, answers to difficult questions (same root as: eureka)”
        ► Benefits: Fast, low cost; can be very effective in some applications
        ► Limitations: No actual user data; analysis only as good as the expert doing the audit
        ► Data: Ranging from verbal direction to highly detailed recommendations
        ► Tools of choice: Written or verbal descriptions and custom tools by each expert
        Cost / respondent: NA; Statistical validity: None – Some – Extensive
    28. Methodology: Focus Groups
        ► Business problem: What are perceptions and ideas around products/concepts?
        ► Description: Moderated discussion group to gain concept/product feedback and inputs; can include screens, physical models and other artifacts
        ► Benefits: Efficient method for understanding end-user preferences and for getting early feedback on concepts, particularly for physical or complex products that benefit from hands-on exposure and explanation
        ► Limitations: Lacks realistic context of use; influence of participants on each other
        ► Data: Combination of qualitative observations (like ethnographic research) with quantitative data (e.g. ratings, surveys)
        ► Tools: See qualitative data analysis
        Cost / respondent: Low – Moderate – High; Statistical validity: None – Some – Extensive
    29. A/B Testing
        What: A testing procedure in which two (or more) different designs are evaluated in order to see which one is the most effective. Alternate designs are served to different users on the live website.
        Why: Can be valuable in refining elements on a web page. Altering the size, placement, or color of a single element, or the wording of a single phrase, can have dramatic effects. A/B testing measures the results of these changes.
        Resources: A/B testing is covered in depth in the book Always Be Testing: The Complete Guide to Google Website Optimizer by Bryan Eisenberg and John Quarto-von Tivadar. http://www.testingtoolbox.com/ You can also check out the free A/B testing tool Google Optimizer. https://www.google.com/analytics/siteopt/preview
        Photo: http://www.flickr.com/photos/danielwaisberg/
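A common way to judge whether an A/B difference is real rather than noise is a two-proportion z-test on the conversion counts. A self-contained sketch; the visitor and conversion numbers are hypothetical:

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Return z statistic and two-sided p-value for B's conversion rate vs. A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(pool * (1 - pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical result: 5.0% vs 6.25% conversion on 2,400 visitors each
z, p = ab_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In this made-up example p comes out just above 0.05, illustrating why a seemingly "dramatic" lift can still warrant collecting more traffic before shipping the variant.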
    30. Kano Analysis
        What: Survey method that determines how people value features and attributes in a known product domain. Shows which features are basic must-haves, which create user satisfaction, and which delight.
        Why: Allows quantitative analysis of feature priority to guide development efforts and specifications. Ensures that the organization understands what is valued by users. Less effective for new product categories.
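In practice, Kano responses are collected as a functional/dysfunctional question pair per feature (“How would you feel if the product had X?” / “…did not have X?”) and mapped through the standard Kano evaluation table. A minimal sketch, assuming a five-point answer scale; the answer labels and survey data are illustrative, not from the slides:

```python
from collections import Counter

# Assumed five-point answer scale, most positive to most negative.
ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

def kano_category(functional, dysfunctional):
    """Classify one respondent's answer pair per the standard Kano evaluation table."""
    f = ANSWERS.index(functional)     # reaction when the feature is present
    d = ANSWERS.index(dysfunctional)  # reaction when the feature is absent
    if f == d and f in (0, 4):
        return "Questionable"         # contradictory answers
    if f > d:
        return "Reverse"              # respondent prefers the feature absent
    if f == 0:
        return "Performance" if d == 4 else "Attractive"
    if d == 4:
        return "Must-be"
    return "Indifferent"

# Hypothetical responses for one feature: (functional, dysfunctional) pairs
responses = [("like", "dislike"), ("like", "neutral"), ("like", "dislike")]
tally = Counter(kano_category(f, d) for f, d in responses)
print(tally.most_common(1)[0][0])  # modal category drives prioritization
```

The modal category across respondents is what feeds the release planning described in the speaker notes: ship the must-bes first, then mix in at least one attractive/delightful feature per release.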
    31. Six Thinking Hats
        What: A tactic that helps you look at decisions from a number of different perspectives. The white hat focuses on data; the red on emotion; the black on caution; the yellow on optimism; the green on creativity; and the blue on process.
        Why: Can enable better decisions by encouraging individuals or teams to abandon old habits and think in new or unfamiliar ways. Can provide insight into the full complexity of a decision, and highlight issues or opportunities which might otherwise go unnoticed.
        Resources: Lateral thinking pioneer Edward de Bono created the Six Thinking Hats method. http://www.edwdebono.com/ An explanation from Mind Tools: http://www.mindtools.com/pages/article/newTED_07.htm
        Photo: http://www.flickr.com/photos/daijihirata/
    32. Methodology: Contextual Inquiry
    33. What is Contextual Inquiry?
    34. What is Ethnography?
        • Defined as: a method of observing human interactions in social settings and activities (Burke & Kirk, 2001); the observation of people in their ‘cultural context’; the study and systematic recording of human cultures, and also a descriptive work produced from such research (Merriam-Webster Online)
        • Rather than studying people from the outside, you learn from people from the inside
    35. Who Invented Ethnography? (Anderson, 1997; Malinowski, 1967; 1987; Kuper 1983)
        • Invented by Bronislaw Malinowski in 1915: he spent three years on the Trobriand Islands (New Guinea) and invented the modern form of fieldwork, with ethnography as its analytic component
    36. Traditional vs. Design Ethnography (Salvador & Mateas, 1997)
        Traditional: describes cultures; uses local language; objective; compares general principles of society; non-interference; duration: several years
        Design: describes domains; uses local language; subjective; compares general principles of design; intervention; duration: several weeks/months
    37. What is contextual inquiry? “Contextual inquiry is a field data-gathering technique that studies a few carefully selected individuals in depth to arrive at a fuller understanding of the work practice across all customers. Through inquiry and interpretation, it reveals commonalities across a product’s customer base.” ~ Beyer & Holtzblatt
    38. Contextual Inquiry: When to do it
        Every ideation and design cycle should start with a contextual inquiry into the full experience of a customer. Contextual inquiry clarifies and focuses the problems a customer is experiencing by discovering:
        • The precise situation in which the problems occur
        • What the problem entails
        • How customers go about solving them
    39. Contextual Inquiry: How to do it
        What is your focus? Who is your audience? Recruit and schedule participants. Learn what your users do. Develop scenarios. Conduct the inquiry. Interpret the results. Evangelize the findings. Rinse, repeat (at least monthly).
    40. Dos & Don’ts (Nielsen, 2002)
        Don’t: ask simple yes/no questions; ask leading questions; use unfamiliar jargon; lead/guide the ‘user’
        Do: ask open-ended questions; phrase questions properly to avoid bias; speak their language; let the user notice things on his/her own
    41. Analyzing the results: “The output from customer research is not a neat hierarchy; rather, it is narratives of successes and breakdowns, examples of use that entail context, and messy use artifacts” ~ Dave Hendry
    42. Research Analysis
        What are people’s values? People are driven by their social and cultural contexts as much as by their rational decision-making processes.
        What are the mental models people build? When the operation of a process isn’t apparent, people create their own models of it.
        What are the tools people use? It is important to know what tools people use, since you are building new tools to replace the current ones.
        What terminology do people use to describe what they do? Words reveal aspects of people’s mental models and thought processes.
        What methods do people use? Flow of work is crucial to understanding what people’s needs are and where existing tools are failing them.
        What are people’s goals? Understanding why people perform certain actions reveals an underlying structure of their work that they may not be aware of themselves.
    43. Affinity Diagrams: “People from different teams engaged in affinity diagramming is as valuable as tequila shots and karaoke. Everyone develops a shared understanding of customer needs, without the hangover or walk of shame”
    44. Research Analysis: Affinity Diagrams
        Creates a hierarchy of all observations, clustering them into themes. From the video observations, 50-100 singular observations are written on post-its (observations ranging from tools, sequences, interactions, work-arounds, mental models, etc.). With the entire team, notes are categorized by relations into themes and trends.
    45. Methodology: Diary Study
    46. Customer Validation: Diary Studies help here
    47. Users record thoughts, comments, etc. over time. Interview users. Gather feedback, data. Organise and analyse (affinity maps, analytics).
        Photos: http://www.flickr.com/photos/vanessabertozzi/877910821 http://www.flickr.com/photos/yourdon/3599753183/ http://www.flickr.com/photos/stevendepolo/3020452399/ http://www.flickr.com/photos/jevnin/390234217/
    48. Participants keep a record of:
        “When” data: date & time; duration; activity / task
        “What” data: activity / task; feelings / mood; environment / setting
    49. No one right way to collect data
        Structured: yes/no; select a category; date & time; multiple choice
        Unstructured: open-ended; opinions / thoughts / feelings; notes / comments
        Combine / mix & match
        Photos: http://www.flickr.com/photos/roboppy/9625780/ http://www.flickr.com/photos/vanessabertozzi/877910821
    50. “Hygiene” aspects
        At the beginning: introduction / get-to-know-you; demographics & psychographics, profiling; instructions / setting expectations
        At the end: follow-up; thanks / token gift; reflection
    51. Pitfalls • Belief bias • Behavior adjustment • Ramp-up time • Failure to recall
    52. Methodology: Field Study
    53. Methodology: Usability Test
    54. Usability Tests (cycle): Start new test → Identify 3-5 tasks to test → Observe test participants performing tasks → Identify the 2-3 easiest things to fix → Make changes to site
    55. Identify Tasks for the Test: known problem areas; most common activities; popular pages; new pages or services
    56. Recommended Gear: computer (or paper prototype); screen recording software (or observer); microphone (or observer)
    57. Screen Recording Software (chart: price vs. features)
    58. Staff of One: recruits test participants; runs the test; records the test (screen recording software & mic); preps test environment before & after the test
    59. Staff of Two: #1 recruits test participants and runs the test; #2 observes the test and preps the test environment before & after each test
    60. Working Through Tasks: stick to the script; encourage the participant to speak aloud; don’t help the participant
    61. Methodology: Remote Test
    62. Irrelevance of Place: new technologies and techniques allow for remote moderated testing, remote unmoderated testing, and remote observation
    63. Remote Moderated Testing: products like GoToMeeting allow connections to the test (or observation) computer over the Internet, and VoIP can carry voice cheaply. For screen: LiveMeeting, WebEx, GoToMeeting. VoIP audio: Skype, GoogleTalk. Roles: translator, moderator, participant, observers.
    64. Remote Unmoderated Testing: a robust set of services
        ‘Task-based’ surveys: online/remote usability studies (unmoderated); benchmarking (competitive / comparison); UX dashboards (measure ROI)
        Online card sorting: open or closed; stand-alone or integrated with task-based studies & surveys
        Online surveys: ad hoc research; Voice of Customer studies; integrated with web analytics data
        User recruiting tool: intercept real visitors (tab or layer); create your own private panel; use a panel provider
    65. Why Should You Care?
        • Saves time: a lab study takes 2-4 weeks from start to finish; unmoderated typically takes hours to a few days
        • Saves money: participant compensation is typically a lot less ($10 vs. $100); tools are becoming very inexpensive
        • Reliable metrics: the only (reasonable) way to collect UX data from large sample sizes
        • Geography is not a limitation: collect feedback from customers all over the world
        • Greater customer insight: the richest dataset about the customer experience
    66. Overview
        Common research questions: What are the usability issues, and how big? Which design is better, and by how much? How do customer segments differ? What are user design preferences? Is the new design better than the old design? Where are users most likely to abandon a transaction?
        Types of studies: comprehensive evaluation; UX benchmark; competitive evaluation; live site vs. prototype comparison; feature/function test; discovery
        Typical metrics: task success; task time; self-report ratings such as ease of use, confidence, satisfaction; click paths; abandonment
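For small-sample metrics like task success, it helps to report a confidence interval rather than a bare rate; the adjusted Wald (Agresti-Coull) interval is a common recommendation for the sample sizes typical of usability tests. A sketch with hypothetical numbers (8 of 10 participants succeeding):

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted Wald (Agresti-Coull) 95% interval for a binomial success rate."""
    n_adj = n + z * z                      # inflate n by z^2 ...
    p_adj = (successes + z * z / 2) / n_adj  # ... and add z^2/2 pseudo-successes
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# Hypothetical result: 8 of 10 participants completed the task
lo, hi = adjusted_wald_ci(successes=8, n=10)
print(f"success rate 0.80, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The wide interval (roughly 0.48 to 0.95 here) is a useful reminder of why lab studies with 12-24 respondents surface issues well but do not establish statistically precise success rates.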
    67. Full-Service Tools – Online Usability Testing
    68. Card Sorting / IA Tools
    69. Online Surveys
    70. Click and Mouse Tools
    71. Video Tools
    72. Report Tools
    73. Expert Review
    74. Methodology: Card Sorting
    75. STEPS IN A CARD SORT
1. Decide what you want to learn
2. Select the type of card sort (open vs. closed)
3. Choose suitable content
4. Choose and invite participants
5. Conduct the sort (online or in person)
6. Analyze results
7. Integrate results
    76. WHAT DO YOU WANT TO LEARN?
• New intranet vs. existing?
• A section of the intranet?
• Whole organization vs. a single department?
• For a project? For a team?
    77. OPEN VS CLOSED
[Diagram: the same set of cards (Product Targets, CRM Project Review, CRM Organization Chart, Christmas Party, Walkathon Results, Year in Review Meeting, Vacation Policy, Pay Days, Vacation Request Form) sorted two ways. In the OPEN SORT, participants create their own groups and name them (e.g., Company News, Departments, Human Resources, Projects). In the CLOSED SORT, participants place the cards into predefined categories (Company News, Events, Human Resources, Projects).]
    78. SELECTING CONTENT
Do's
• 30-100 cards
• Select content that can be grouped
• Select terms and concepts that mean something to users
Don'ts
• More than 100 cards
• Mixing functionality and content
• Including both detailed and broad content
    79. ANALYSIS
    80. LOOK AT
• What groups were created
• Where the cards were placed
• What terms were used for labels
• The organization scheme used
• Whether people created accurate or inaccurate groups
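A common first step when looking at where cards were placed is a co-occurrence count: how often did two cards land in the same group across participants? A minimal sketch, independent of any particular card-sorting tool; the card names and groupings below are invented for illustration:

```python
# Hypothetical sketch of analyzing open-sort data with a co-occurrence
# count. Cards frequently grouped together suggest candidate categories.
from itertools import combinations
from collections import Counter

# One entry per participant: a list of groups, each group a list of cards
sorts = [
    [["Vacation Policy", "Pay Days"], ["Christmas Party", "Walkathon Results"]],
    [["Vacation Policy", "Pay Days", "Walkathon Results"], ["Christmas Party"]],
    [["Vacation Policy", "Pay Days"], ["Christmas Party", "Walkathon Results"]],
]

pairs = Counter()
for participant in sorts:
    for group in participant:
        # sort each pair so (a, b) and (b, a) count as the same pair
        for a, b in combinations(sorted(group), 2):
            pairs[(a, b)] += 1

print(pairs.most_common(2))  # the strongest pairings across participants
```

The same counts feed the similarity matrices and dendrograms that most online card-sorting tools produce automatically.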
    81. INTEGRATE RESULTS: CREATE YOUR IA
[Sitemap diagram:]
• Our Company: Executive Blog; New York; Vancouver; Mission and Values
• Projects: Project Name 1; Project Name 2; Project Name 3; Project Name 4
• Departments: Executive; Operations; Operations Support (Vessel Planning, Yard Planning, Rail Planning); Finance & Administration; Human Resources; Corporate Communications; IT
• Community & Groups: Events; Charitable Campaigns; Vancouver Carpool
• Employee Resources: Vacation & Holidays; Expenses; Travel; Health & Safety; Wellness; Benefits; Facilities; Payroll; Communication Tools
• Centers of Excellence: Project Management Professionals; Engineering; Terminal Technologies; NAVIS; Lawson; IT; Yard Planning
    82. Traditional Card Sort
    83. Online Card Sorting
    84. Card sorting is as common as lab-based usability testing. (Source: 2011 UxPA Salary Survey)
    85. Terms & Concepts
• Open Sort: users sort items into groups and give the groups a name.
• Closed Sort: users sort items into previously defined category names.
• Reverse Card Sort (Tree Test): users are asked to locate items in a hierarchy (no design).
• Most users start by browsing, not searching: across 9 websites and 25 tasks, we found that on average 86% start by browsing.
http://www.measuringusability.com/blog/card-sorting.php
http://www.measuringusability.com/blog/search-browse.php
    86. Methodology: Eye Tracking
    87. Set-up of an eye tracking test
User tests are often run in 45- to 60-minute sessions with 6 to 15 participants:
1. Participants are given a number of typical tasks to complete, using the website, design or product you want to test.
2. The user's intuitive interaction is observed; comments and reactions are recorded.
3. The participant's impressions are captured in an interview at the end of the test.
    88. Eye tracking results: Heatmaps
Heatmaps show what participants focus on.
In this example, 'hot spots' are the picture of the shoes, the central entry field and the two right-hand tiles underneath.
The data of all participants is averaged in this map.
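Conceptually, the averaging behind a heatmap is simple: pool every participant's fixations, bin them into a grid over the page, and weight each fixation by its duration. A minimal sketch, not the algorithm of any particular eye-tracking product; the coordinates and durations below are invented for illustration:

```python
# Hypothetical sketch of heatmap aggregation: fixations from all
# participants are binned into a coarse pixel grid, weighted by duration.
CELL = 100  # grid cell size in pixels

# (x, y, duration_ms) fixations pooled across participants (invented data)
fixations = [
    (120, 80, 300), (130, 90, 450), (510, 85, 200),
    (125, 95, 250), (520, 300, 600),
]

heat = {}
for x, y, ms in fixations:
    cell = (x // CELL, y // CELL)        # which grid cell the fixation hits
    heat[cell] = heat.get(cell, 0) + ms  # accumulate fixation duration

hottest = max(heat, key=heat.get)
print(hottest, heat[hottest])  # the 'hot spot' and its total dwell time
```

Real tools additionally smooth the grid (e.g., with a Gaussian kernel) before rendering it as a color overlay.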
    89. Eye tracking results: Gazeplot
Gaze plots show the 'visual path' of individual participants. Each bubble represents a fixation.
The bubble size denotes the length or intensity of the fixation.
Additional results are available in table format for more detailed analysis.
    90. The key visual and a box at the bottom
The key visual got lots of attention. The main navigation and its options got almost no attention.
Surprising: this box got heaps of attention. It reads: "If you are having trouble getting through to us on the phone, please click here to email us, we'll get back to you within 2 business days." Participants got the impression that Telstra Clear has trouble with their customer service.
Note: Telstra Clear have since re-designed their homepage.
    91. The Face effect – an example
Yep, there's attention on certain… areas. The face, however, is the strongest point of focus!
(bunnyfoot)
    92. Using the Face effect
Eye tracking results for ad Version A:
• We see a face effect: the model's face draws a lot of attention.
• The slogan is the other hot spot of the design. Participants will likely have read it.
• The product and its name get some, but not a lot of, attention.
(humanfactors.com)
    93. Using the Face effect
Eye tracking results for ad Version B:
• Again, we see a strong face effect. BUT: in this version, the model's gaze is in line with the product and its name.
• The product image and name get considerably more attention!
• Additionally, even the product name at the bottom is noticed by a number of participants.
(humanfactors.com)
    94. Ways to focus attention
Same effect: if the baby faces you, you'll look at the baby. But if the baby faces the ad message, you pay attention to the message. You basically follow the baby's gaze.
(usableworld.com.au)
    95. Banner blindness… or are they?
In this test, participants were given a task: find the nearest ATM.
Participants focused on the main navigation and the footer navigation – this is where they found the 'ATM locator'.
So, when visiting a site with a task in mind – as you normally do – the central banner can be ignored!
    96. Compare the visual paths: Task versus browse
One participant was asked just to look at the homepage; the other was given a task ('Find the nearest ATM').
When browsing, the central banner gets lots of attention. But how often do you visit a bank website just to browse?
    97. Main focus: Navigation options
Tasks: 'What concerts are happening in Auckland this month?' / 'You want to send an email to customer service'
Eye tracking results show: when looking for something on a website, the main focus of attention is the navigation options.
Maybe users have learned that they're unlikely to find what they're looking for in a central banner image.
    98. When do users look at banners?
Tasks: one participant was asked just to look at the homepage; the other, 'You want to get in touch with customer service'.
In this example, participants looked at the banner even though they were looking for something specific. What's different?
    99. Methodology: Heuristic Evaluation
    100. Nielsen's 10 heuristics
1. Visibility of system status
2. Match between system and real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
J. Nielsen and R. Mack, eds., Usability Inspection Methods, 1994
    101. http://www.slideshare.net/AbbyCovert/information-architecture-heuristics
    102. HE output
• A list of usability problems
• Tied to a heuristic or rule of practice
• A ranking of findings by severity
• Recommendations for fixing problems
• Oh, and the positive findings, too
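The HE output above is, in effect, a small structured dataset: each finding ties a problem to a heuristic, a severity, and a recommendation, and the report is ranked by severity. A minimal sketch of that structure; the findings and ratings below are invented for illustration:

```python
# Hypothetical sketch of structuring heuristic-evaluation output:
# each finding is tied to a heuristic and a severity, then ranked.
from dataclasses import dataclass

@dataclass
class Finding:
    problem: str
    heuristic: str      # which of Nielsen's heuristics it violates
    severity: int       # e.g. 0 = not a problem … 4 = usability catastrophe
    recommendation: str

findings = [
    Finding("No feedback after form submit", "Visibility of system status", 3,
            "Show a confirmation message"),
    Finding("Jargon in menu labels", "Match between system and real world", 2,
            "Use users' own vocabulary"),
    Finding("No undo for delete", "User control and freedom", 4,
            "Add undo or a confirmation step"),
]

# Rank the report so the most severe problems come first
report = sorted(findings, key=lambda f: f.severity, reverse=True)
print([f.problem for f in report])
```

The 0-4 severity scale shown in the comments follows Nielsen's commonly used rating convention.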
    103. HE sample findings page (usability.spsu.edu, UPA 2011)
    104. HE sample finding (table excerpt)
Finding description: Objectives/goals for the modules – reason content is being presented; conciseness of presentation; definitions required to work with the module/content; evaluation criteria and methods; direct tie between content and assessment measure; sequence of presentation follows logically from introduction; quizzes challenge users.
Recommendation: Develop a consistent structure that defines what's noted in the bulleted points above. Avoid generic statements that don't focus users on what they will be accomplishing. Advise that there is an assessment used for evaluation and indicate if it's at the end or interspersed in the module. Connect ideas in the goals and objectives with outcomes in the assessment. Follow the order of presentation defined at the beginning. Develop interesting and challenging questions. Re-frame goals/objectives at the end of the module.
Severity rating: 3 (applies to H, C, S)
Hyperspace, Shock, and Cardiac Arrest all require more clearly defined goals and objectives. (H = Hyperspace; C = Cardiac Arrest; S = Shock)
    105. Methodology: Cognitive Walkthrough
    106. References
    107. UX Research – Cited and Referenced Works
© 2013 InnoUX & Innodesign. All rights reserved.
• Ethnography (Santosh Bhandari, Mar 29, 2013)
• Cultural probes in real life (gerrygaffney, Jun 11, 2012)
• UX of User Stories Workshop (Anders Ramsay, Aug 14, 2012)
• Usability behaviors: Usability and the SDLC (Ted Tschopp, Nov 04, 2012)
• Know Thy User: The Role of Research in Great Interactive Design (frog, Sep 2012)
• The Mobile Frontier (Rachel Hinman, Feb 2012)
• Introduction to AgileUX: Fundamentals of Customer Research (Will Evans, Jan 2012)
• Customer Research & Persona Development (Will Evans, Oct 2012)
• Introduction to UX Research: Conducting Focus Groups (Will Evans, Jan 2012)
• Midwest UX 12: Mapping the Experience (Chris Risdon, Jun 2012)
• Eye Tracking & User Research (Optimal Usability, Apr 2012)
• Taking it to the streets: Investigating mobile and web UX through field studies (Emma Rose, Jun 2012)
• NYTECH "Measuring Your User Experience Design" (New York Technology Council, Mar 2012)
• How to Conduct UX Benchmarking (UserZoom, May 2012)
• Customer validation with Diary Studies (Boon Chew, Jan 2012)
• The Science of Great Site Navigation: Online Card Sorting + Tree Testing (UserZoom, Jul 2012)
• Introduction to Card Sorting (ThoughtFarmer, Sep 2012)
• Usability Testing Basics (Stephen Francoeur, Mar 2012)
• Storytelling: Rhetoric of heuristic evaluation (Southern Polytechnic State University, Mar 2012)
• Cognitive and pluralistic (Aarushi Mishra, Oct 2012)
• How to Quantitatively Measure Your User Experience (Richard Dalton, May 2012)
• Information Architecture Heuristics (Abby Covert, Jul 24, 2011)
• Diary Studies in HCI & Psychology (UPABoston, Jul 13, 2011)
• Remote Testing Methods & Tools Webinar (UserZoom, Dec 2011)
• Beyond User Research (Louis Rosenfeld, Mar 2011)
• Using Ethnographic User Research to Drive Knowledge Management and Intranet Strategy (NavigationArts, Dec 01, 2010)
• Remote Research, The Talk (Bolt Peters, May 27, 2010)
• User Interview Techniques (Liz Danzico, May 2010)
• The new digital ethnographer's toolkit: Capturing a participant's lifestream (Christopher Khalil, Sep 04, 2009)
• Design Research For Everyday Projects – UX London (Leisa Reichelt, Jun 2009)
• Contextual Inquiry V1 (Rajesh Jha, Sep 11, 2008)
    108. Thank you for listening!