IMLS WebWise 2014 Introduction to User‐Centered Design and Usability Testing



Whether you are brand new to usability testing or it's already part of your business-as-usual, this workshop will
provide a closer look at the art and science of prototyping digital products with users at the center of design. This
crash course will increase participants' understanding of usability testing's massive return on investment and will
outline the necessary steps to plan for and successfully conduct usability testing in the museum or library
environment. Key concepts covered include benchmarking, task analysis, error types, use of personas, and card sorting.

Mini-workshop given in Baltimore Feb 11, 2014

  • EYM is a small (ok, it's just me) evaluation consulting business; I help museums and libraries make meaning from complex, and often muddy, data so they can make changes for the better, and I help organizations build their evaluation capacity internally. Museum and library projects with digital products as the focus often require a different kind of evaluation. Creative, innovative approaches in our cultural institutions require creative, innovative evaluation. The methods can be incredibly informative to the projects, and they are also really fun. These are some of my favorite projects. Who of you was in my morning session? (Wow! Back for more, huh?)
  • GROUP: Why are you here? JUST SHOUT OUT TO ME. GROUP: On a scale of 1-5, how much do you know about this topic? RAISE THE NUMBER OF FINGERS. Ok, well this will be a fast-paced, jam-packed session. It is a crash course, so please know this is a taster, and I highly encourage all of you to dig deeper post-WebWise. Take notes, tweet, ask questions throughout, etc. Make sure you have the Twitter info down.
  • Usability is all about how well users can learn and use a product to achieve their goals. It also refers to how satisfied users are with that process. I love this simple user experience cake as a way to sum up what we’re talking about when we’re thinking about usability.
  • More and more you'll see "user-centered" change to "human-centered." This isn't so different from why many museums are moving away from using terms like "visitor" or "guest" and instead saying "audience" or "community." We are interested in people more holistically – beyond, or even without, their distinct involvement with us. Human-centered design focuses on people's needs and preferences, plus how we can effectively design products to meet those needs while fulfilling our organizational objectives. It also takes into account the limitations and barriers along the way.
  • People visit your website, or use your digital product, to find information or accomplish tasks. If they don't find it helpful or interesting, you risk them leaving. By focusing on the human beings using your products you: are more likely to satisfy them with a more efficient and user-friendly experience; increase loyalty and return visits or use; establish a more relevant and valuable product or site; and create an experience that supports rather than frustrates people. Everyone's a winner!
  • Digital projects tend to work in phases. Discovery: the "What & Why." Teams think about what hole their product or service plans to fill, come together to agree on organizational goals, and establish success metrics. In the discovery phase, it's time to analyze strengths and weaknesses of your existing products. It's also time to explore the competition and see what else is out there. User Research: the "Who & Why." This is about doing your homework to understand people's needs and expectations. Strategy & Structural Design: the "How." This is about creating a solid plan for the product or, if you have a product already, for product improvements. Launch & Assessment: the Release and Optimize phase. You'll be looking to see if what you designed works – and how – gathering feedback, and creating roadmaps for further improvements or refinement. Ideally this phase feeds into another turn of the entire cycle, beginning once again with the discovery phase. The good news: given that one phase will ultimately run into the next, no matter where you are in your product cycle, it is smart to do evaluation and learn from your end users.
  • Today there are two methods we're going to focus on: the first is usability testing; the second is card sorting. We'll spend more time on the first because, well, it's in the title of this workshop! Again, this is a crash course. I will provide resources at the end so you can continue to reflect on and practice what we talk about today.
  • GROUP: How many of you have done UT before? Usability testing is one part of the human-centered design process – a method, really. Today when I give examples you may find me automatically defaulting to websites. That's just to keep examples concrete. Everything I say can apply not just to websites, but to apps, digital media experiences, and tech-based exhibits. So with usability testing, we test to identify areas where people – ideally people representing our site visitors – struggle with the site, and we make recommendations for improvement. We watch, we listen, we take notes, we ask questions. Ultimately, we learn. You can use usability testing to show return on investment. In fact, there's strong evidence that the benefits of usability testing outweigh the costs and the resources you'd invest. The earlier problems are found and fixed, the less expensive the fixes are. Clare-Marie Karat of IBM quite famously demonstrated a 100-fold return on investment for a particular software product. In that case, spending $60,000 on usability engineering throughout development resulted in savings of $6,000,000 in the first year alone. Tangible benefits are: increased productivity and customer satisfaction; increased sales and revenues; reduced development time, costs, and maintenance costs; and decreased training and support costs.
  • So what are the objectives of usability testing? To figure out where inconsistencies and usability problem areas are, and to identify people's common errors so that you can fix them. Sources of error can include: navigation errors – failure to locate things, excessive keystrokes to complete a function, failure to follow recommended screen flow; presentation errors – failure to locate and properly act upon desired information in screens, selection errors because of unclear labels; and control usage problems – improper toolbar or entry field usage. In usability testing, all of these things will be tested under controlled test conditions with representative users, so that the data we collect can help us understand if our usability goals regarding an effective, efficient, and well-received interface have been achieved.
  • To find your representative users, typically a screener is used. You can do this in person, by phone, or by online survey. General questions: Are you male or female? [Recruit a mix of participants.] Have you participated in a usability test in the past six months? Professional demographics: Which of the following best describes your work environment? [e.g., commercial business, nonprofit, government agency, self-employed, etc.] Computer expertise: Do you use a computer/tablet? What are typical activities you do on the computer? About how many hours per week do you spend on the computer? Domain knowledge: Do you visit the museum/library? What apps related to science have you explored? Contact information. Incentives are an option to entice participation, but are not needed.
  • This is a time-consuming, but rewarding, process. The first thing is to block out your time, and know that typically you will need no fewer than 3-4 staff for each test. You'll likely have the tech already – your digital product, either finished or under development. You will want to make sure it's all working (glitches worked out, no 'internet is down' fiascos on test day!). Your training crash course is happening today, but training should be ongoing. Typically when I work with clients on this I do a one-day training, then sit in on the first handful of tests. The more you practice, the better you get. It's more art than science. Having a really clear protocol is essential. You want to ask all the participants in a uniform way to ensure reliability and validity in your results. Precision matters in UT. At the end of the workshop, I will link you to some awesome resources to help with these.
  • This is a general procedure; it should be customized for your needs. The participant's interaction with the site or digital experience will be monitored by the facilitator, who will be in the same room. Note takers and data logger(s) will monitor the sessions – either in the same room or in an observation room, connected by one-way glass or a video camera feed. (We are usually not so fancy!) The test sessions will be video recorded so you can review them as a team later and make sure you captured everything you were after. There's debate about whether you want to see their eyes or their screen. You could have observers see the face and have the video recorder capture the screen. Participants will sign an informed consent that acknowledges: participation is voluntary; participation can cease at any time; the session will be videotaped but their privacy of identification will be safeguarded. The facilitator will ask the participant if they have any questions. At the start of each task, the participant will read aloud the task description from the printed copy and begin the task. Time-on-task measurement begins when the participant starts the task. The facilitator will instruct the participant to 'think aloud' so that a verbal record exists of their interaction with the site. The facilitator and observers will observe and record user behavior, user comments, and system actions. After each task, the participant will complete post-task questions and elaborate on the task session with the facilitator. After all task scenarios are attempted, the participant will complete the post-test satisfaction questionnaire.
  • So you can imagine from that, you need quite a few people involved. Trainer: provides a training overview prior to usability testing, preps the protocol, and finalizes the questions (work with a local user-centered design or user experience consultant, or an experienced evaluator). Facilitator: provides an overview of the study to participants; defines usability and the purpose of usability testing to participants; assists in participant and observer debriefing sessions; responds to participants' requests for assistance. Data logger: records participants' actions and comments. Test observers: silent observers who assist the data logger in identifying problems, concerns, coding bugs, and procedural errors, and who serve as note takers. And, of course, the test participants themselves.
  • You don't need a lab. You can use a conference room, meeting room, office – whatever. It's best if the space is consistent, to rule out variability. Make sure the space is reliable – that you won't be cut off from your tech or interrupted. It's also important that the space is comfortable. Heat and light matter. Coffee, tea, and water are good. This person is a guest and doing us a massive favor!
  • I mentioned consent (and assent if they're younger), but definitely go beyond just saying it. Follow through! All persons involved with the usability test need to follow ethical guidelines: the performance of any test participant should not ever be individually attributable or linked back to that person, and participants' names should not be used in reference outside the testing session.
  • I've been speeding you through a lot, so I want to show you a short video to put it all together for you. [4 min]
  • To determine your site's usability, you'll need to create measurable usability goals, which you can then measure performance against and use to benchmark. Typical usability goals include time, accuracy, overall success, and satisfaction measures. Sometimes it's a bit of a guess the first go-round! Even a guess is a fine place to start. This will help you benchmark so you have something to compare to as you continue to develop, launch, and refine. I get asked a lot, "Well, how long SHOULD it take for them to find our hours?" There is no answer to that question; it is so site-dependent. Try to resist Googling 'benchmarks.' They should be unique to you and your usability goals. If you don't know where to start, I often recommend starting with any analytics and existing data logs you might have. The data can give you a pretty good starting point.
  • I want to take you through some typical usability goals in a bit more detail, so you can think about how they might apply to your project. Time: set a goal for the overall time the user will need to carry out a task on your site. You can also break that down into separate goals for time to: get to the right page; understand the information; recover from an error.
  • Accuracy: Set a goal for the accuracy with which the user carries out the task. You can also break it down into separate goals for the number of: Unproductive navigation choices or searches Errors in use Misunderstandings of information
  • Success: Set a goal to measure users’ success with your site. For example:Identify if new users can look for help if they need it, find the help they need, then get back to their original task within 2 minutesSet a goal that repeat visitors be able to successfully complete a task without using the help feature
  • Satisfaction: Set a goal that users are happy with their experience on your site. You can also set separate satisfaction goals for: Navigation Search Content detail and language
  • When you set usability goals, you cannot say, "the system response time is going to be very slow, so we will set our time goal to account for that slow response." Users will leave your site if it is too slow. Set goals that match users' needs and expectations, and find a design solution to improve system response time if that is going to keep you from meeting the usability goal. If users give the site low ratings, you need to fix your site. However, if users give the site high ratings, you may not be getting a true picture.
  • Be skeptical! Especially of satisfaction ratings. Users often give high satisfaction ratings even when they have problems using a site. They may: be blaming themselves for the problems; not want to hurt your feelings; be being polite rather than saying what they really think. GROUP: What can you do to mediate the pleaser?
  • Scenarios are the main vehicle for usability testing, and the goal is always completion. Each scenario will request that the participant obtain or input specific data that would be used in the course of a typical task. The scenario is over when the participant says so, regardless of whether the goal has been successfully met. The scenario can also end when a participant requests and receives sufficient help – such that they would not have been able to figure it out on their own.
  • A typical way to measure task completion is by timing them.
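The timing measure above can be sketched in code. This is a minimal illustration, not part of the workshop materials: the task name, the 120-second goal, and the `TaskTimer` helper are all invented for the example.

```python
# Hypothetical sketch: recording time-on-task for each scenario in a session.
# The task name and goal_seconds benchmark below are made-up examples.
import time

class TaskTimer:
    """Records time-on-task and completion for each scenario."""
    def __init__(self):
        self.results = {}

    def run(self, task_name, goal_seconds, perform_task):
        start = time.monotonic()
        # perform_task() returns True once the participant says the task is done
        completed = perform_task()
        elapsed = time.monotonic() - start
        self.results[task_name] = {
            "seconds": round(elapsed, 1),
            "completed": completed,
            "met_goal": completed and elapsed <= goal_seconds,
        }
        return self.results[task_name]

timer = TaskTimer()
result = timer.run("find museum hours", goal_seconds=120, perform_task=lambda: True)
```

In a real session, `perform_task` would be the observed interaction rather than a stub, and the per-task results would feed the benchmark comparisons discussed above.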
  • There are a few different types of task you can give during the usability test. Verb-based tasks ask users to accomplish a specific action with the product. Verb-based tasks are most commonly used to test software, hardware, and web applications. All of the tasks begin with a verb and ask users to complete a specific action: leave feedback on the blog post; click to the "location" tab; copy the text of this page to a Word document; close the window. Verb-based tasks help us look at the product's functionality and allow us to test multiple users on the same tasks. Before the birth of the web, almost all tasks for usability testing were verb-based tasks.
  • Unlike verb-based tasks, we don't use scavenger hunt tasks to evaluate functionality. Instead, scavenger hunt tasks help us look at content-rich systems such as information-rich websites. With scavenger hunt tasks, we ask users to find a specific piece of information. These tasks help design teams evaluate whether users can find and understand the product's content. The tasks almost always begin with the verb "find." It's a challenge with both verb-based and scavenger hunt tasks to know if we have picked realistic tasks for people to accomplish. We risk giving people tasks to complete that aren't related to what they would actually do in real life. Pre-testing or piloting tasks with friends, family, and co-workers who don't know the site well is never a bad idea to combat this.
  • To address the limitations of verb-based and scavenger hunt tasks, we use interview-based tasks. With interview-based tasks, we interview users before and during the test to uncover users' real goals with a product. During the recruitment phase, we screen candidates to ensure they have interests compatible with what we're looking to test before they come in. With interview-based tasks, when people first arrive for the test session, we don't actually know what we'll specifically be asking them to do. Instead, at the beginning, we interview participants to get a better idea of how they use a product. Based on what they say, we work with them during the session to create tasks on the spot that are relevant to their specific needs. While we won't ask all users to complete the same tasks, we get a very good sense of how the product works for people in the real world.
  • [ACTIVITY: 10 min]Pair up with someone near you; ideally someone you don’t know or who isn’t on your projectEach of you come up with one of each task type for your particular projectFor the “interview based” one, think of what you might ask to get participants to start giving you ideas. For example, if you’re working on an app: “Tell me about some of your favorite apps.” What might elicit some good paths?GROUP DISCUSSION:Do you see plusses and minuses?What do you think makes the most sense for your site?Does it depend on the user’s demographics at all?
  • There are a few different error types that can happen when people attempt to complete tasks. A critical error means not reaching the target or end point of the scenario. Participants may or may not be aware that the task outcome is incorrect or incomplete. Independent completion of the scenario is a universal goal, so if participants get help from the facilitator or other people in the room, then the scenario has a critical error. Critical errors can also happen when the participant initiates (or attempts to initiate) an action that will result in the goal becoming unobtainable. In general, critical errors are unresolved errors during the process of completing the task, or errors that produce an incorrect outcome.
  • Non-critical errors are errors that are recovered from or, if not detected, do not result in processing problems or unexpected results. Although non-critical errors can go undetected by the participant, when they are detected they are generally frustrating. These errors may be procedural, in which the participant does not complete a scenario using the most optimal or easy means (e.g., excessive steps and keystrokes). They may also be errors of confusion (e.g., initially selecting the wrong function, attempting to type in an un-editable field). Exploratory behavior, such as opening the wrong menu while searching for a function, may be coded as a non-critical error. Non-critical errors can always be recovered from.
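Once errors have been coded as critical or non-critical, tallying them per task is straightforward. A minimal sketch, using an invented error log (the task names and entries are examples, not real data):

```python
# Hypothetical sketch: tallying coded errors from usability test sessions.
from collections import Counter

# Each logged error is (task, severity); severity is "critical" or "non-critical".
# This log is invented illustration data.
error_log = [
    ("find hours", "non-critical"),
    ("find hours", "critical"),
    ("leave feedback", "non-critical"),
    ("leave feedback", "non-critical"),
]

def summarize_errors(log):
    """Count errors per task, split by severity."""
    summary = {}
    for task, severity in log:
        summary.setdefault(task, Counter())[severity] += 1
    return summary

summary = summarize_errors(error_log)
```

A per-task summary like this makes it easy to spot which scenarios produced critical errors and should be prioritized for fixes.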
  • In addition to all the quantitative stuff, like time on task and counts of errors, subjective evaluation often happens in the course of a usability test– mostly in the form of questions after a task or at the end of the whole test. It helps us understand ease of use and satisfaction with the site or digital product. You can do this using questionnaires, and during debriefing at the conclusion of the session.  The questionnaires are often open-ended, but can use multiple choice or use rating scales– whatever meets your needs.
  • This slide is super important! 1. We are testing the site/product, NOT the users. We try hard to ensure that participants do not think that we are testing them. We help them understand that they are helping us test the prototype or site. 2. Performance vs. subjective measures. We measure both performance and subjective (preference) metrics. Performance measures include success, time, errors, etc. Subjective measures include users' self-reported satisfaction and comfort ratings. People's performance and preference do not always match. Often users will perform poorly but their subjective ratings are very high. Conversely, they may perform well but give very low subjective ratings. 3. Make use of what you learn. This is true of all evaluation: only ever ask what you intend to use. Usability testing is not just a milestone to be checked off on the project schedule. The team must consider the findings, set priorities, and change the prototype or site based on what happened in the usability test. 4. Find the best solution. Most projects, including designing or revising websites, have to deal with constraints of time, budget, and resources. Balancing all those is one of the major challenges of most projects.
  • To give you a sense of what it looks like in practice, can I please have a volunteer?This is a super sped-up version, but you will get the idea.[ACTIVITY: 10 min]The rest of you please observe!If you want to, you can time the tasks! I usually would, but won’t for the purpose of this demo.Think about what data you might record, what you observe and notice, what you would do differently.HELPER: How did it feel to you?GROUP DISCUSSION:What did you notice?What types of tasks did I ask her/him to do?Did you see any critical errors? Non-critical?What did you hear in the subjective evaluation at the end?What worked/didn’t?What data would you collect/note? (Time, pathway, success)
  • So maybe this seems totally like something you would ROCK at, or maybe facilitating a usability test seems a bit frightening at this point. How do you become a great moderator? Simple – with practice. Facilitating sessions is a learned skill that improves the more you do it. There are some simple tricks and techniques behind it. Once you learn those, and have a chance to practice them, you too can become a top-notch moderator. An important trick to moderating is mastering the multiple personalities involved. Jared M. Spool has written up a cool explanation of what it takes to be a great usability test moderator. He says you need to be part flight attendant, part sportscaster, and part scientist. There's a link to his article on this in the resources I'll give you at the end.
  • From the moment the participant walks in the door, the moderator helps them feel at home. They get them coffee, explain the procedure, and answer questions. (The best moderators start before the participant arrives, by working with the recruiters to set the right expectations and answer any questions.) During the session, they smile a lot, keeping the session relaxed. They watch diligently for any signs of stress, offering reassurance like: "This is helping a lot." "You're helping us discover problems we didn't realize we had." Safety and comfort: that's the flight attendant's focus.
  • The sportscaster personality's job is to make sure every observer in the session catches all of the action. When we're facilitating usability tests and we have observers, it is super helpful to set up a projector in the room, so it's easy for them to see what's on the participant's screen. We encourage the participant to "think out loud," letting us know what's going through their head as they use the design. For those participants who are naturally quiet, we engage in "color commentary," where we repeat and narrate the activity. The sportscaster kicks in to ask questions to better understand the participant's viewpoint. The sportscaster knows her audience. She caters the session to the folks who are watching. Catching all the action: that's the sportscaster's focus.
  • The scientist personality looks for the data. Since the goal of any user research is to help the team make better design decisions, the scientist is there to collect the data and help the team analyze it. Like the other roles, this starts long before the participant shows up. The scientist puts together the test plans, deciding the tasks the participants will try. The scientist creates questionnaires and interview scripts to learn more about the participant's background and experience. Everything the scientist does is to make sure the team collects every piece of data they'll need. Part of the preparation involves how the findings are used once the sessions are completed. How will the team analyze and synthesize this information? Guiding the data collection: that's the scientist's focus. Once you master these priorities, you'll find it easy to get the team excited about testing.
  • A great way you can incorporate some of the principles of usability testing internally with your teams, rather than doing a full-scale usability test where you bring participants in, is by using personas. You can also use these to help focus you when coming up with your scenarios and tasks for user testing. GROUP: Have any of you used these in your work? A persona is a fictional person who represents a major user group for your site or product. It can be a current user group, or one you WANT to cultivate. You select the characteristics you feel best represent those groups and turn them into a persona. For example, in this picture maybe this guy is Dave Parikh – a 31-year-old, Indian-American, hipster parent who collects art from local, up-and-coming artists, enjoys exclusive gallery events, and distils his own spirits at home. A persona usually includes a name and a picture. It helps to add some demographics such as age, education, ethnicity, or family status. Give the persona a job title and include their major responsibilities. Include the goals and tasks they are trying to complete using the site and their environment (i.e., physical, social, and technological). You can also include a quote that sums up what matters most to the persona as it relates to your site. Maybe Dave Parikh's would be, "I am looking for things to do as a family, sure, but I also want to take my wife out and engage with art like we used to before we became experts at getting spit-up out of our clothes." You make up the persona's name; select one that represents that user group. Be relevant and serious; humor usually is not appropriate here. Use licensed or stock photos – no one needs copyright infringement issues! Using personas helps the team focus on the users' goals and needs. The team can concentrate on designing for a manageable set of personas, knowing they represent the needs of many users. By always asking, "Would Dave use this?" the team can avoid the trap of building what users ask for rather than what they will actually use. Designs can be constantly evaluated against the personas, and disagreements over design decisions can be sorted by referring back to the personas. Big companies, like Microsoft and Staples, swear by these. [ACTIVITY: 10 min] In new pairs, come up with 2-3 personas you think would work well to test your current and/or future digital project. Be as vivid in the details as you can. Describe the person, or try to find a picture on the web that could 'be' them. Tweet your ideas if you like!
  • Not even a prototype, ok, ok… Well you can still definitely start using personas as you get started in your design process…But I have another cool method to talk about that you can use too.
  • A great technique to use when you are in the discovery, user research, or early design phases of your project. GROUP: How many of you have done this before? Participants in a card sorting session organize the content from your website or digital experience in a way that makes sense to them. Card sorting is a method used to help design or evaluate the information architecture of a site or product – even a hypothetical one! Card sorting can involve physical cards or pieces of paper, or you can use an online card-sorting software tool. Card sorting will help you understand your users' expectations and understanding of how your content links together. Knowing how your users group information can help you: build the structure; decide what to put on a main page or screen; label categories and navigation.
  • There are a few ways to do card sorts. In an open card sort, participants are asked to: organize topics into groups that make sense to them, and then name each group they created in a way that they feel accurately describes the content. Use an open card sort to learn how users group content and the terms or labels they give each category.
  • In a closed card sort, participants are asked to sort topics into pre-defined categories. A closed card sort works best when you are working with a pre-defined set of categories and you want to learn how users sort content items into each category. Another option is a combination card sort: first conduct an open card sort to identify content categories, then use a closed card sort to see how well the category labels work.
  • Similar to usability testing, you will ask people to think aloud while sorting, giving a clearer picture of their reactions and thought processes.
  • You can, in theory, have more than one participant at a time. Participants each sort their own set of cards independently. The facilitator may brief the participants at the beginning and debrief them at the end, but each participant works alone for most of the session. Because of the limited interaction, you can run many sessions at the same time with one facilitator. You must have as many sets of cards as concurrent sessions. You won't be able to do think-aloud this way, so you miss out on some of the more nuanced thinking.
  • This method allows you to have many participants in many locations. But, like concurrent sessions, you do not get information on why participants sort the cards the way they do, because you cannot see the participants or hear them thinking out loud. Participants sort the cards independently on their own computers. You can do open or closed card sorts remotely. There are several software programs out there to help you with large-scale remote card-sorting studies. A benefit of software is that it analyzes the data for you.
  • Recruitment is similar to usability testing: you want people new to the site or product; you can screen for what you want; and incentives are an option to entice participation, but are not needed.
  • Create your list of content topics. Topics can be phrases, words, etc., and can be very specific or more general. It might be tempting to have a card for every topic on your site, but in this case, more might not be better. Consider the cognitive load on the participant – you want them attentive! I would limit it to 50-60 cards, and I would recommend 40-50. Cards need to be neat, legible, and consistent. Number the cards in the bottom corner (small) or on the back; this helps you when you analyze the cards. Have blank cards available for participants to add topics and to name the groups they make. Consider using different colored cards for participants to use when naming the groups. Shuffle the deck. For paper card sorts, ensure the participant has enough room to spread the cards out on a table or tack/tape them up on a wall. Again, no lab setting is needed – a conference room works well. Plan to have the facilitator or another usability team member take notes as the participant works and thinks aloud. This isn't as complicated as usability testing, but more eyes and ears help. Again, as in usability testing, video recordings are fantastic so you can review card sorts later. Let the participant work. Minimize interruptions but encourage the participant to think aloud. Encourage the participant to: add cards, for example, to indicate additional topics; put cards aside to indicate topics the participant would not want on the site; name any categories they create.
  • If you used physical cards for the test, it's a great idea to photograph the sort. Also, use the numbers on the cards to record what the participant did in a document or spreadsheet. Once you've collected all your data, analyze the quantitative information based on: which cards appeared together most often; how often cards appeared in specific categories; which cards were left out or not used most often. Make sure you include think-aloud comments or things mentioned in the debrief. The qualitative information is helpful!
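The "which cards appeared together most often" analysis can be sketched as a simple pair count. This is an illustrative sketch only; the card labels and the two participant sorts below are invented examples.

```python
# Hypothetical sketch: counting how often pairs of cards were grouped together
# across participants in an open card sort. Labels are invented examples.
from collections import Counter
from itertools import combinations

# Each participant's sort: a list of groups, each group a list of card labels.
sorts = [
    [["hours", "directions"], ["exhibits", "events"]],
    [["hours", "directions", "tickets"], ["exhibits"]],
]

def co_occurrence(sorts):
    """For every pair of cards, count how many participants grouped them together."""
    pairs = Counter()
    for participant in sorts:
        for group in participant:
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

pairs = co_occurrence(sorts)
```

Pairs with high counts (here, "hours" and "directions" grouped together by both participants) are strong candidates to live in the same category in your information architecture. Dedicated card-sorting software performs this kind of analysis, plus clustering, for you.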
  • (at 2:35) [DISCUSS] What's wrong with this example? Would you change anything? What worked well? What did you like? So now you have been introduced to three methods: usability testing with scenarios and tasks, personas, and card sorts.
  • So to wrap up: as mentioned early on, you should be doing usability testing throughout the process – from baseline testing on the old site, to tests with partial and low-fidelity prototypes, to testing both navigation and content as you fill out the site. I showed you a few methods today, but there are many more. If you learn a ton, which you will, and feel you can't implement all the recommendations, develop priorities based on fixing the most global and serious problems. As you prioritize, push to get the changes that users need. Again, remind senior leadership of the 100-fold ROI! The cost of supporting users of a poorly designed site is much greater than the cost of fixing the site while it is still being developed. The iterative design process – in which you develop a partial prototype, test early, fix and expand the prototype, then test again (and repeat) – is the most successful way to develop a digital product.
  • And hey – believe that YOU can become great at human-centered design and usability testing. With practice! It's a learned skill that improves the more you do it and put it to USE. And it's super fun.
  • I can't wait to see what you are all coming up with and how you test and evaluate throughout the process! This stuff matters, so it's great to see so many of you here! To paraphrase something that really resonated for me in Nick Poole's keynote yesterday: we live in an attention economy where relationships matter. Building relationships with people where we elevate their voices, truly listen, and act on what we hear so that they have a direct role in our product development is essential. This takes us from a place of possession to a place of inclusion and shared ownership. Thanks for spending some of your afternoon with me.
  • QUESTIONS? The resources include lots of usability testing tool kits, articles, videos, and other good stuff. The slides are on there too.
  • ×