Ispi Presentation Script 920


  • Michael: Welcome to our presentation: Altered State of Consciousness: Confessions of Instructional Designers Who Conduct Usability Testing. My name is Michael Rukstelis, and this is my colleague Yun Zhou. We are both instructional designers from the Learning Strategy Group at Wachovia Bank. During the past few years we’ve adopted usability testing as part of our design process. Today we’d like to tell you a story about how our perception of usability testing evolved through practicing it. In particular, this story reflects our own journey as instructional designers, which is why we titled this presentation “Confessions.” It is a development from simple attempts to get feedback about the interface, such as the font, color, type size, general visual appeal, and basic navigation, to a more mature attempt to sort out the value and meaning of usability as it applies to instructional design. Now let’s look at what you’ll get out of this session and why you should stay.
  • Michael: In this session, we’ll look at a case study with three iterations of a web-based training program we built a year ago. We’ll role-play and put you in the driver’s seat as the user, so that you’ll experience “live” tests in these different phases. Through these very practical, down-to-earth activities, you will understand the key roles of usability testing and learn key skills that you can easily apply to your own instructional design. We’ll also share our test findings, reflections, and lessons learned, which you can also take away. To make it even easier, you’ll receive our test tool template: a test plan documenting the entire process, from planning to executing, with all the instruments attached. We’ll distribute this after the second test. Last but not least, you’ll have a chance to participate in highly interactive discussions, share your feedback, and learn from each other, and together we’ll make this a very engaging experience for all. With that, I hope you have a clearer picture of what you’ll get out of this session today. Now let me turn to Yun to get us started.
  • Yun: Before we dive into the case study, we want to examine our reality. Take a look at these four graphics and think about whether things like this have ever happened in your world. Does anyone want to share an experience? I have a basketful myself. (Audience response) What we can see from these graphics and your experience is that our world is never ideal. For example, between the first graphic and the second, what the user really needs is a) often filtered and distorted through layers of communication, b) or maybe a needs assessment was not done with the right people, c) or maybe training objectives were based on previous training, and so on. And between the second and the third, an instructional designer often gets a) so preoccupied with the subject matter and learning objectives, to the exclusion of how that learning happens in the users’ world, that they see only the trees, not the forest; b) opinions from subject matter experts or program sponsors sometimes reinforce this exclusion. So when the product is delivered, the user looks at it and tries to make sense of it. It’s not exactly what they need, so they end up exerting a tremendous amount of effort to make it work, such as putting a sofa on a swing to bring it closer to what they originally needed. Although the graphics are just illustrative, the point is that a gap is almost unavoidable between the user’s true need and what they actually receive. In your experience, what can we do to address this gap? (Audience response) You have pointed out many good practices. Today we’re going to focus on usability testing as the practice that we find helpful in closing this gap.
  • Yun: Usability testing can be defined differently depending on the discipline and context. In our group, we generally apply the following criteria: 1) It engages the real end user, the person who will actually be using your training. 2) It tests not a full-fledged program but a representative portion of it during the early instructional design phase; in e-learning it could be a couple of screens with limited functionality. 3) It is an end-user try-out, or test drive, of this early design. 4) The purpose of such tests is to validate the design and to identify areas for design improvement. Now let’s reflect for a moment on the role of usability testing in instructional design. Is this something optional in your world? How many of you are doing it? How are you doing it? (Audience response) Thank you for your responses. Now let me turn to Michael for our case study, and get ready to give our approach a “test drive.”
  • Michael: This case study is about the design of a two-hour WBT program that educates new managers to: 1) Identify situations where HR policies should be consulted or assistance should be engaged. 2) Locate, understand, and apply HR policies/assistance. Most policies are on our internal website, and there are also HR assistance teams such as our HR support center, HR advisors, etc. 3) In applying the policies, recognize and address the challenges they might face, such as the challenges in coaching and negotiating. 4) Master a problem-solving strategy that always encourages users to think through the situation, identify the main issue, assess risks/impacts, conduct research, etc., to develop a solution to any situation. Even though we were able to summarize the learning objectives into these four brief bullets, there is a lot of information to absorb and a lot to learn in two hours. So here is the challenge:
  • Michael: 1) How do you design a WBT that has a high degree of complexity, while making the learning program as meaningful and as easy to use as possible? 2) Next we are going to look at the first quick stab the designer took. This is what we call a “low-fidelity prototype”: a quick, cost-effective way to pull together a web design and get feedback. In this case it’s only a couple of screens that give you the look and feel of the interface, with limited functionality. We often turn this around in the very early design stage. 3) By the way, my role in this project was project manager, and Yun’s role was usability testing lead. So what you’ll look at next was not designed by either of us…
  • Michael: Now let’s have our first fun activity. We’ll look at this prototype together and put you in the driver’s seat as the users. Be yourself and react to what you see. Since we are doing this as a group and the prototype is not fully functional, Yun and I will be your drivers. We will facilitate by asking you some questions, and you tell us how you think this thing should work: what you would click, what path you would take if you were a real user of this program. We ask everyone to think out loud and share your thoughts; don’t edit them. Be candid and critical. Don’t be concerned at all about saying anything wrong, because this is a test of how well the system helps you learn, not a test of your individual knowledge or intelligence. When we test drive this prototype in a second, Yun will capture your reactions on a flipchart to show you later how your experience compares with the designer’s intention. We’ll then also look at our usability test findings and the lessons we learned last year, and how they helped shape the next iteration. Are you ready? Next steps: View the 1st case study. Ask the audience to take a minute to look at the interface. Restate the learning purpose. Ask the audience what they would do first and focus on expectations. Note: remember, the main emphasis here for learnability is the problem/solution methodology, while for the interface it is the visibility issues.
  • Yun: The designer’s intention is: The user would read the scenario fully; in this case you need to scroll to read it all. After that, they would read the question in the Your Response box and click on Enter/Edit My Response. While developing this response, users should: a) consult the background of the team members involved, by clicking on Virtual Team to read their profiles; b) click on Information to go to our internal HR policy website and consult the policies; c) click on Tools to utilize any checklist, quick reference guide, or job aid provided in the program to resolve the situation; d) click on Ask the Expert to see if there are any tips or best practices they can learn in developing or applying their solution. When they are satisfied with their response, they will click the I’m Finished button. Their response will display in this Your Response box, under the Submit My Response button. If they are not yet happy with this response, they will click Enter/Edit My Response. If they are satisfied, they will click Submit My Response. When they submit their response, some text will display in this gray area on the right. This is the solution or feedback recommended by the program. The user is supposed to compare this recommendation with their own response and take away the learning. Along the way: If they run into questions, they should click on Help. If they want to go to the next or previous screen, they use Next or Back. Menu takes them to the main menu, where they can select another scenario. This is how the designer intends for things to work. You probably have your own judgment by now on whether there are any gaps between that and your way of approaching it. When we tested this prototype last year, we visited five end-user representatives in their cubicles, and observed and probed with them, just like what we did with you, except it was one-on-one. 1) First we found out about users’ general reactions: they did not like the color, thought the screen layout was cluttered, and felt the space wasn’t used wisely. 2) What we found more important, though, is the pattern of their behaviors. At the first screen, they took off in different directions, not knowing what was expected of them. Some people read the Your Response box first. Some read the scenario. Some glanced over the entire interface and clicked on Help or Ask the Expert. Some practically clicked on everything in order to see how it worked, before they really worked on the scenario. So there was no sense of a clear path and sequence, and users couldn’t tell what was going on just by looking at this interface. 3) When people clicked on I’m Finished after developing their answer, they considered that submitted and expected feedback. When the program didn’t give them feedback, and displayed their response in a hardly visible box, they were confused and experienced a loss of control. 4) We also observed that hardly anyone had a sense of when and how to use the buttons hidden in the lower right, such as Ask the Expert, Virtual Team, Information, Tools, etc. They seemed encrypted. And users had very little structure in developing their response to the question (what do you do to resolve the situation); they just typed a few words. Then, when they did get very extensive feedback, the concept of “compare to learn” did not make sense, because the difference was just too drastic. Although there were so many problems, positive things came out of the test: we concluded from it why the design wasn’t successful, and the direction of our improvements for the next iteration. Let me turn to Michael to share these with you.
  • Michael: From the usability tests on this first prototype, we concluded that, beyond the general reactions (not much fondness for the color, the volume of text, the scrolling to read, etc.), this design interface wasn’t successful mainly due to the following design elements. I’m summarizing these concepts in language partially borrowed from Donald Norman’s The Design of Everyday Things: 1) Visibility: by looking, the user can tell with very little thinking what the interface is for and how to interact with it. To put it another way, users know what is expected of them. In the example Yun just gave, by looking at this prototype’s interface, users did not know what was going on and took off in different directions; that is an indication of poor visibility. Because the screen was overly complex and cluttered, and what users were supposed to do wasn’t showing, the problem-solving design was not visible. 2) Feedback: the user should receive full, continuous, and meaningful responses to their actions, which reinforce their sense of control. In this prototype, Yun gave the example that when our users clicked on I’m Finished, they expected feedback from the program. But the program did not consider that submitted yet, and didn’t give them the expected feedback. This confused the users and impaired their sense of control. 3) Predictability: the user can consistently and accurately predict the outcome of their actions. In this prototype, because of the failures in visibility and feedback, it was very hard for users to predict anything. Yun pointed out that users didn’t know when and how to utilize the buttons in the lower right, such as Information, Tools, and Ask the Expert, and some users clicked on everything to figure out how the program worked before actually working on it. These are all indications of people failing to predict the program’s behavior. So the first prototype didn’t seem to be such a hit. As a result, (transition to next slide)
  • Michael: We felt that the design wasn’t executing many of the learning objectives, particularly the problem-solving methodology that encourages users to analyze, assess, research, etc., to develop a solution to any given situation. We also found that the problem-solving methodology was more important than originally conceived. Because this methodology wasn’t apparent in this design, users brought their own world view to solving the problems, and developed responses randomly, based on their personal experience only. Because they didn’t solve the problem in a structured way, when they compared their response with the program’s feedback, the difference was too drastic for them to generate any meaningful learning. So, making the problem-solving methodology apparent became critical, because it provided a structure that end users could use to think through and develop a solution that would meet the program’s objectives. To remedy the first prototype, we decided to do the following: 1) Improve the visual appeal, including the layout, color, way of displaying text, etc. 2) Simplify the interface; make it more visible what to do first, second, and third. 3) Accentuate the problem-solving strategy by making the learning sequence apparent. We incorporated these findings and quickly turned around a second prototype. Yun will share that with you in a second. But before we move on, do you have any questions on the first prototype? (Audience response)
  • Yun: A second prototype was quickly designed to simplify the interface and accentuate the problem-solving model. Here we’ll briefly repeat the activity from the first prototype, but in a more accelerated fashion. We’ll 1) figure out what you, as a user, would do first, second, and third, and 2) how you think of this design compared to the first one; then 3) I’ll walk you through the designer’s intention and point out the key improvements that were intended, so you can compare that against your perception. Michael will share our test findings after that. So now let me ask you first: what would you do when you come to this screen? (Audience response) Key improvements: Let’s look at the key improvements and the design intention now. Obviously there is better usage of screen space, color, etc., but more importantly: There are now two main areas of navigation clearly laid out. The tabs on top provide the tasks and the content that you need to work with in this scenario. The buttons in the lower right provide global navigation for exit, help, and access to the main menu. The problem-solving model is broken down into piecemeal steps, including Situation, Issue, Goal, and Risks, and users are given feedback continuously along the way. I’ll show this as I explain the design intentions. Design intention: (Click on the Situation tab) Read the situation thoroughly. After that, the user goes from left to right, clicking the tabs in order to proceed. (Click on the Issue tab) When you come to the Issue tab, recall the situation you read and type your answer to this question: What is the main issue/problem that you need to address in this situation? While you develop your response, there is a virtual advisor function offering optional tips. Submit your response when finished. The program then displays your response and gives you the program’s recommendation. The designer intends for you to compare these and learn, before proceeding to the Goal tab. (Click on the Goal and Risks tabs) The Goal tab and Risks tab work the same way. You will be asked questions to respond to, and you submit your response to compare. (Click on the Resources tab) The designer intends for you to form a better understanding of the situation, issue, risks, and goal before you develop a solution. One more thing you need to do within the problem-solving model is research the HR policies that apply. When you come to the Resources tab, you will see your directions for conducting research. When users are done with their research, the designer intends for them to come back to this screen and continue by clicking Solution. (Click on the Solution tab) After the policies are researched, the Solution tab is activated, and the designer intends for the user to develop a solution to the situation and then compare their solution with the program’s recommendation. This works the same way as the other three tabs: Issue, Goal, and Risks. (Click on the Best Practices tab) The last tab is activated only after you have submitted a solution. After comparing the solutions, you can learn about some of the best management practices associated with such a situation. There is a sub-menu of three to four optional questions/concerns that apply most to managers who encounter situations like the one they have worked through, and each question opens a new screen within the tab. When you reach this tab, the Next button lights up and you can click on it to proceed to the next scenario. So, does this design intention connect better with the way you would approach it? (Audience response)
  • Yun: The second prototype was an almost fully functional prototype, and we conducted much more systematic, formal usability tests with six user representatives identified by our client from various business units. The entire test process is documented in your handout, as are the instruments we used, so you can refer to that for details. Here is the layout of the test setup: The test is conducted one-on-one in a test lab, which includes a test room and an observer room separated by a one-way mirror. The facilitator and the user sit in the test room. Up to six people can sit in the observer room to see and hear what’s happening in the test room. The computer monitors and audio equipment in the two rooms are connected, but the user can’t hear what’s going on in the observer room. Users are observed going through the prototype with no intervention beyond occasional probing. This is very important. You’ve probably noticed by now that in our activities we have never asked a closed-ended question such as “do you like this color?” We always ask open-ended questions and let the user lead our way to see their reality. We never express our own opinion of the design, which is why we don’t have the designer or project manager conduct the tests. Anyone who is too close to the subject is not going to be objective. And when you work with the users, the minute you open your mouth to comment on the design, you are shutting theirs. So observe, and probe only when needed: that is the key to conducting these tests. After the user goes through the prototype, I’d have them complete a questionnaire to rate various aspects of the course on a 1-5 scale. And we’d debrief with additional open-ended questions to draw out most of the users’ experience and reactions. The questionnaires and the debriefing guides are both in your handouts on page X.
  • Michael: What we found from the second prototype test: The improvements in visual appeal and navigation were received much more positively. The layout is intuitive, with the labels on top going from left to right and the global navigation in the lower right. Very clear. Most people understood that left to right indicates a sequence. However, the labels weren’t so intuitive to some users. While the designer tried to break the problem-solving model down into piecemeal steps, with Situation, Issue, Risks, Goal, etc., many people think of problem solving in a more holistic fashion and did not recognize that there was a sequence through these labels. Also, it’s a folder metaphor, and on the web, or in real life, a bunch of folders does not indicate a sequence either. Consequently, in the test we found: People jumped around without realizing there was an intended problem-solving model. They viewed the tabs as unrelated to one another and challenged the intended sequence. Because of such jumping around and exploring, progress through the scenario slowed down, negatively affecting the pace. There was one general HR policy and service overview module plus eight additional scenarios to be completed within the two hours, and in our test this single scenario was taking 30 minutes. As it took the users longer to figure things out, they became distracted and exhausted, and were able to learn less. At this point of the project, Yun and I (transition to next slide)
  • Michael: … came to the realization that when usability is applied to instructional design, it has two aspects: Ease of use. This relates to the visual appeal, layout, navigation, and all the things we were able to take care of after the first prototype test. As a result, we made the prototype much easier to use after the first test. Ease of learning. This relates to a deeper layer in the interface: the things we realized after the first prototype but had not yet been able to completely take care of in the second prototype design. For example, although the visual appeal and navigation were improved, users still encountered obstacles in mastering the problem-solving model. After the first prototype test, we recognized that a more structured problem-solving model was critical, so our conception of the interface’s structure, the layout, and the process flow also changed. So we tweaked the interface according to the conclusion that a problem-solving strategy would drive how the objectives, content, and activities would be created. Then we recognized that even after clarifying the structure, another level of interaction needed to be thought through. This level was more narrowly defined by the users’ perception of the interface. The issue is more about how users think they should use the interface, as opposed to the intent of the design (i.e., use the tab folders in any order vs. in a required sequence). What happened next was more of an intervention by the designer: the test showed that the user’s perception and tendency to explore the interface needed to be constrained in order for the intended instruction to take place successfully. What’s important is that the difference between the two levels is now clear, and it’s also clear how the instructional designer is behaving.
At the first level, we behaved more like usability engineers (the screen is too cluttered, there’s not enough feedback); but at the second level, we were behaving as ID practitioners, making decisions about how a learning strategy would be structured and sequenced through the interface, according to the needs of the audience. Next, Yun will walk us through the 3rd prototype to give you a sense of how we worked through the challenge at the level of learning. Before we do that, are there any questions on the second prototype? (Audience response)
  • Yun: Taking what we had learned from the second prototype usability tests, we fine-tuned it to turn around the third prototype. Now it probably took you less than a second to figure out what is expected of you here. So can you tell me, or can I have a volunteer? (Audience response) There were a lot of minor fine-tunings, but one of the major changes is that we re-labeled the tabs as Steps 1 through 6 and added more action-oriented and goal-oriented instruction within each tab. For example, Step 1 is “Analyze the Situation.” The instruction explicitly tells the user to read the situation and think about the issue and impact. This clearly directs users’ actions, gives their reading more purpose, decreases their guessing around, and enables them to focus on the content. In the second prototype, when the tabs were labeled Issue, Goals, and Risks, although the layout indicated a sequence, people challenged that sequence, because the labels didn’t suggest a sequence to them intuitively, and they didn’t feel compelled to respect it. When the tabs were labeled as Steps 1, 2, 3, 4, and 5, people no longer challenged the sequence, because numbered steps indicate such an intuitive sequence, and users followed it without distraction. (transition to next slide)
  • Yun: We ran our third round of usability tests quickly with another six users after making these changes. The tests were done in a similar format to the second round, except we were targeting the key problem areas we had identified in the second round. We specifically timed how long users spent on each tab, and checked whether they followed the suggested sequence and whether they were able to learn better through this structure. Our test findings showed that people recognized the designer’s intention much sooner and followed it with ease. They had a clear sense of direction, read with more purpose, had fewer distractions, and were able to focus better. The pace improved, and they were better able to achieve the program’s learning objectives. So this is the story of the three iterations of our case study design, our three rounds of tests, and what we learned from them. (transition to next slide)
  • Michael: So now let’s reflect for a moment on what we have learned from the first prototype to the third (transition to next slide).
  • Michael: After looking at these three prototypes and hearing about our experience with the three usability tests, I hope you agree with Yun and me that in e-learning programs, usability testing is a big deal in bridging the gap between the designer’s intention and the user’s perception. Usability testing helps us increase the visibility of the design and decrease distraction. When people take training at work, there is already more distraction than we want, and the e-learning system image should never become one more barrier to learning. We should design system images that speak a common language with the users, so that they only have to battle with the subject matter, not with us. And since we have different users speaking different languages, the only way for us to learn their perception is to work with them. This is why usability testing that engages your true end user is an extremely important and powerful practice to help us design the system image and fine-tune the e-learning program until it’s successful.
  • Michael : Read slide.

    1. Altered State of Consciousness: Confessions of Instructional Designers Who Conduct Usability Testing – Wachovia Human Resources – Michael Rukstelis & Yun Zhou
    2. What’s in it for me?
       • Steer “live” tests in three different phases
       • Learn the key skills to apply to your job
       • Hear about our test findings and lessons learned
       • Receive helpful templates and tools
       • Participate in interactive discussions
    3. Designer vs. User
       • What the user really needs
       • What the designer thinks the user needs
       • How the design looks to the user
       • How the user thinks it is intended to work
       Source:
    4. Usability Testing
       • Real end user engagement
       • Representative prototype
       • “Try out” or “test drive”
       • Design validation/improvement
    5. Our Case Study
       A two-hour web-based training program that educates new managers at Wachovia Bank to:
       • Identify situations that involve HR policies/assistance
       • Locate, understand, and apply HR policies/assistance
       • Recognize and address the challenges in applying policies
       • Master a problem-solving strategy
    6. The Challenge
       How do you design a web-based training that has a high degree of complexity, while making the learning program as meaningful and as easy to use as possible?
    7. You are in the driver’s seat!
       • Put yourself in the role of the user.
       • Be yourself and react to what you see.
       • Think out loud as you go.
       • Be candid and critical.
       • Don’t be concerned about getting it wrong!
    8. First Prototype
    9. First Prototype Analysis
       • Visibility
       • Feedback
       • Predictability
    10. First Test Findings
       Key findings:
       • The design interfered with the learning objectives.
       • The problem-solving methodology was more important than originally conceived.
       Remedies:
       • Improve visual appeal.
       • Make more visible what is expected of users.
       • Accentuate the problem-solving methodology through a more apparent learning sequence.
    11. Second Prototype
    12. Second Prototype Test (lab diagram: test room and observer room separated by a one-way mirror; user and facilitator in the test room, observers behind the mirror)
    13. Second Test Findings
       • Improved visual appeal and navigation
       • Intuitive layout
       • Unintuitive labels and metaphor
       Consequences:
       • Users challenged the intended sequence, jumping around.
       • Progress through the scenario slowed.
       • Users became distracted, exhausted, and learned less.
    14. Implications: The Two Lenses
       • Ease of Use
       • Ease of Learning
    15. Third Prototype
    16. Third Test Findings
       • Clearer direction of users’ actions
       • More purpose to users’ reading
       • Decreased distraction/guessing around
       • Better focus on content
       • Improved pace
       • Improved achievement of learning objectives
    17. What Have We Learned?
    18. What Have We Learned?
       Why adopt usability practices to design e-learning?
       • Bridges the gap between the designer and user
       • Teaches us a common language with the user
       • Increases the visibility of design
       • Decreases unintended distraction
       • Shapes the learning methodology
    19. What Have We Learned?
       Usability testing engages your true end user. It is a powerful practice to help design and fine-tune the e-learning program until it functions to support learning successfully.
    20. Questions and Comments?
    21. Thank You for Being Here!