Usab_in_SW_Design.ppt

  • I could call this session “Usability in Software Development” because I’m mostly going to talk about software development, since that’s my background. But I’ll refer to hardware and other types of products occasionally, so I’m calling this session “Usability in Product Development.” I could also call it “Usability in the Design Process.”
  • You test functionality using unit testing and regression testing to make sure that the product works according to specifications. You test performance using performance benchmarks to make sure the product meets performance specifications. You test usability using usability requirements and benchmarks to make sure that people can work with the product.
  • Not always the best method to use - in terms of cost, time, and accuracy
  • Low-fidelity prototypes (paper). The user walks through the prototype and comments on the screens.
  • The user performs tasks rather than walking through them. Quantitative measures are collected.
  • Takes place close to the release of the product. Need to determine how “adherence to a standard” will be measured. “Disaster” insurance: assess the risk of releasing; you would rather know about a problem. Based on the findings, you might decide to slip the schedule, put test findings in the release notes, or announce a bug fix or point release at release time.
  • Forces the design team to stretch their conceptions of what will work. Forces the test participant to really contemplate why one design is better.
  • Backgrounds include human information processing, cognitive psychology, experimental psychology, and user-centered design. These specialists know which problems can be generalized to the population as a whole and which are more trivial or specific to the test participants. They can eliminate the need to test for problems that are well known in their field, such as inappropriate use of color or ineffective layout schemes.
  • TAKE A BREAK AFTER THIS SLIDE. NEXT SECTION: Conducting a Usability Test
  • Purpose: high level, usually tied to business goals within the organization.
    Problem statement: measurable statements; these replace hypotheses. Bad problem statements are incomplete and vague: “Is the current product usable?” “Is the product ready for release?” Good problem statements: “Are users able to move freely between the two major modules?” “Is help easier to access via a ‘hot key’ or via a mouse selection?” “Is the response time a cause of frustration or errors for experienced users?”
    User profile: work with marketing to determine the target population.
    Method (test design): a detailed description of how you are going to carry out the test and how the test session will unfold, with a summary of each part of the test. I recommend Jeffrey Rubin’s Handbook of Usability Testing for examples of test design.
    Evaluation measures: listing them lets interested parties scan the test plan to make sure that they will be getting the type of data they expect from the test.
  • LCU: least competent user, defined as the least skilled person who could potentially use your product. No computer experience, never even used a word processor, highest level of education is high school. Need not fall at the bottom of ALL scales, but should fall at the bottom of the majority of them.
  • Consistency among different test administrators and also for one test administrator in terms of how he or she interacts with different test participants. For example, if a single test administrator is running tests all day, the fatigue factor can enter into the picture and result in a test administrator cutting corners, or getting sloppy, or interrupting a test participant who is struggling and giving them a hint rather than observing and recording their attempts to work through a task.
  • LOOK AT SAMPLE NONDISCLOSURE FORM
  • LOOK AT SAMPLE NONDISCLOSURE FORM Purpose: To obtain written consent and agreement for nondisclosure
  • LOOK AT SAMPLE QUESTIONNAIRE
  • LOOK AT SAMPLE observer log
  • LOOK AT SAMPLE TASK SCENARIO
  • Exec summary: a brief synopsis of the test logistics, major findings and recommendations, and overall benefits of the test. Method: if your test plan was comprehensive, paste it in here; revise for any deviations from the plan that occurred during the test. Results: summarize performance and preference data; you don’t have to include raw data. Findings and recommendations: discuss each finding and then the recommendation based on that finding. Appendices: sample data collection sheet, sample questionnaire; optionally, tables or charts with raw data.

Transcript

  • 1. Usability Testing Nancy L. Bayer Veritas Software Corporation [email_address]
  • 2. Overview
    • What is Usability?
    • Types of Usability Tests
    • Conducting a Usability Test
    • Anatomy of a Usability Test
    • References
  • 3. What is Usability?
    • Definition
    • Usability means that the people who use the product can use it quickly and easily to accomplish their own tasks
    • Usability is an attribute of a product, as are functionality and performance.
      • Functionality - what the product can do
      • Performance - how fast it can do it
      • Usability - how easy it is to do it
  • 4. The Cost of Bad Design Observed during a usability test: because there was no visual confirmation that an action had been performed, users checked the database to see that the action had completed successfully. The cost: 10 times / day @ 6 sec / check = 60 sec / day per user × 25,000 users = 416.7 hrs / day → 92,917 hrs / year → $5,110,417 / year
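The slide’s arithmetic can be checked directly. Note that the working-day count (~223 days/year) and loaded labor cost (~$55/hour) are assumptions needed to reproduce the yearly figures; the slide does not state them.

```python
# Reproduces the cost-of-bad-design arithmetic from the slide.
# Assumed (not stated on the slide): ~223 working days/year and
# a loaded labor cost of ~$55/hour.
checks_per_day = 10        # redundant database checks per user per day
seconds_per_check = 6
users = 25_000
working_days = 223         # assumption
cost_per_hour = 55         # assumption, in dollars

seconds_per_user_day = checks_per_day * seconds_per_check  # 60 s/day per user
hours_per_day = users * seconds_per_user_day / 3600        # ~416.7 h/day
hours_per_year = hours_per_day * working_days              # ~92,917 h/year
cost_per_year = hours_per_year * cost_per_hour             # ~$5,110,417/year

print(f"{hours_per_day:.1f} h/day, {hours_per_year:,.0f} h/year, "
      f"${cost_per_year:,.0f}/year")
```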
  • 5. Usability Testing
    • Definition
    • A process that employs participants who are representative of the target population to evaluate the degree to which a product meets specific usability criteria
    • A research tool that has its roots in classical experimental methodology
  • 6. Usability Testing Goal To identify usability deficiencies in software applications and their supporting materials for the purpose of correcting them prior to release
  • 7. Usability Testing
    • Benefits
    • Minimize cost of customer support
    • Increase probability of repeat sales
    • Create a historical record of usability benchmarks for future releases
    • Acquire a competitive edge since usability has become a market separator for products
    • Minimize risk at release time
  • 8. Usability Testing
    • Limitations
    • Testing is always an artificial situation
    • Test results do not prove that a product works
    • Participants are rarely representative of the target population
    • Usability testing is not always the best usability method to use
    • It’s better to test than not to test.
  • 9. Formal Test Methodology
    • Formulate a hypothesis
    • Example: The screen layout in Design A for creating users in the widget system will improve the speed and error rate of experienced users more than the screen layout for Design B
    • Assign randomly chosen participants to experimental conditions
    • Tightly control variables and the test environment to ensure validity
    • Use a control group
    • Use a population sample of sufficient size
  • 10. Formal Test Methodology Might Be Inappropriate
    • Purpose is to improve products, not formulate hypotheses
    • Lack of time
    • Lack of knowledge of experimental method and statistics
    • Difficult to get a sample of typical population and to randomly assign test conditions
    • Difficult to get large enough sample size to achieve generalizable results
    • Doesn’t capture qualitative information
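As a rough illustration of the sample-size point above (an aside, not from the slides): the standard formula for estimating a proportion, n = z²·p(1−p)/e², shows why statistically generalizable results require far more participants than a typical usability test recruits.

```python
import math

# Sample size needed to estimate a proportion (e.g. "fraction of users
# who complete a task") to within a given margin of error.
# z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the
# worst case. Illustrative only.
def sample_size(margin_of_error, p=0.5, z=1.96):
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))  # ±5% margin: ~385 participants
print(sample_size(0.10))  # ±10% margin: ~97 participants
```

A typical usability test runs five to ten participants, which is why the slides frame formal experimental methodology as often inappropriate for product work.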
  • 11. Types of Usability Tests
    • Exploratory Test
      • Conducted early in the development cycle
      • User profile and use cases have been defined
      • Evaluates effectiveness of preliminary design concepts
      • Explores user’s mental model of the product
      • Verifies assumptions about users
      • Tests a prototype of a subset of the product, with limited functionality
      • Can save a lot of design and development time
  • 12. Types of Usability Tests
    • Assessment Test
      • Conducted early or midway into the development cycle
      • High-level design has been established
      • Evaluates usability of lower-level operations and aspects of the product
      • Expands findings of the exploratory test
      • Assumes the conceptual model is sound
  • 13. Types of Usability Tests
    • Validation Test
      • Conducted late in the development cycle
      • Benchmarks have been identified
      • Certifies the product’s usability
      • Evaluates how the product compares to some predetermined usability standard or benchmark
      • Evaluates how all the components of the product work together
      • “Disaster” or “catastrophe” insurance
  • 14. Types of Usability Tests
    • Comparison Test
      • Not associated with any particular point in the development cycle
      • Can be used in conjunction with any of the other three types of tests
      • Side-by-side comparison
      • Can compare two or more alternative designs
      • Can compare your product with a competitor’s product
      • Used to determine which design is better and to understand the pros and cons of each design
  • 15. Usability Test Environment
    • Must be able to control lighting, noise level, access, traffic through or near test area
    • Have test screens and data loaded and ready to go
    • If possible, carry out all aspects of the test in the same room
    • Optional:
      • One-way mirror for unobtrusive observation
      • Video camera on monitor, keyboard, and/or participant
  • 16. Usability Testing - Roles
    • Test administrator
    • Data logger
    • Timers
    • Video recording operator
    • Product / technical experts
    • Observers
      • Developers
      • Marketing staff
      • Technical writers
      • Managers
  • 17. Characteristics of a Test Administrator
    • Knowledge of usability engineering theory and methods
    • Good rapport with participants
    • Good memory
    • Good listening skills
    • Comfortable with ambiguity
    • Flexibility
    • Long attention span
    • Good organizer and coordinator
  • 18. Typical Problems
    • Unintentionally providing cues
      • Tone of voice, facial expression, nod of head
    • Too involved with collecting data and not involved enough in observing
    • Acting too knowledgeable
    • Too inflexible
    • Jumping to conclusions
  • 19. Conducting a Usability Test
    • Develop a test plan
    • Select participants
    • Prepare test materials
    • Conduct the test
    • Turn data into findings and recommendations
  • 20. Developing a Test Plan
    • Blueprint for the entire test
    • Addresses how, when, where, who, why and what
    • Main communication vehicle among test administrators and the rest of the development team
    • Describes required resources
    • Provides a focal point for the test and a milestone for the product being tested
    • Key deliverable for the usability engineer
  • 21. Contents of a Test Plan
    • The test plan should include:
    • The information you want to get from the participants
    • How you plan to elicit the information
    • How you plan to record the information
    • A process for incorporating what you learn from the test into revisions of the product
  • 22. Anatomy of a Test Plan
    • Purpose
    • Problem statement / test objectives
    • User profile
    • Method
    • Task list
    • Test environment / equipment
    • Role of test administrators
    • Evaluation measures
    • Test report contents and presentation
  • 23. Selecting Participants
    • Work with technical marketing to determine your target population
    • Consider including at least one LCU
    • Use internal participants only to pilot the test and conduct early exploratory tests
    • Beware of inadvertently testing only the “best” people
  • 24. Sources of Participants
    • Employment agencies (if you are doing testing on an ongoing basis)
    • Market research firms
    • Existing customers from in-house lists
    • Existing customers through sales reps
    • College campuses
    • Newspaper ads
    • User groups
    • Qualified friends
  • 25. Test Materials
    • Screening questionnaire
    • Orientation script
    • Demographic questionnaire
    • Data collection instruments
    • Nondisclosure agreement and consent form
    • Task scenarios
    • Posttest questionnaire
    • Debriefing topics guide
  • 26. Orientation Script
    • Purpose: To ensure that all participants receive exactly the same information in the briefing session
    • Key factor in maintaining intercoder reliability
  • 27. Intercoder Reliability
    • Consistency among test administrators in terms of how they interact with test participants
    • Intercoder reliability is attained when all test administrators, or coders, are consistent in administering the usability test.
      • Initial briefing
      • Coding or scoring
      • Comments and answers to questions
      • Help given to the participants when they get stuck
  • 28. Anatomy of an Orientation Script
    • Introduction of test administrators
    • Purpose of test
    • Participants aren’t being tested, the product is
    • What participants will be asked to do
    • What test administrators will be doing
    • Policy on answering questions during test
    • Confidentiality assurance
    • Are there any questions?
    • Sign nondisclosure and consent form
  • 29. Orientation Script Example
  • 30. Anatomy of a Nondisclosure Form
    • Thanks for participating
    • Purpose of the usability test
    • Terms of nondisclosure
    • Permission to take notes, audio- or videotape
    • Participation is voluntary
    • Participation is confidential
    • Signature and date of participant and test administrator
  • 31. Nondisclosure Form Example
  • 32. Demographic Questionnaire
    • Background information on participant
      • Domain expertise
      • Tasks frequently performed
      • Experience with the software
      • Hardware or software platforms used
      • Other software frequently used
      • Size of company
      • Number of software application users
  • 33. Data Collection Instruments
    • Time and accuracy measures
    • Verbal protocol (thinking aloud)
    • Visual protocol (observation)
    • Questionnaires (demographic and preference)
    • Posttest interview (debriefing)
  • 34. Sample Evaluation Measures
    • Sample performance measures
      • Time to complete each task
      • Number of tasks completed correctly with and without assistance
      • Count of incorrect menu selections
      • Count of number of uses of user manual
    • Sample preference measures
      • Usefulness of the product
      • How well the product matched expectations
      • Overall ease of use
      • Overall ease of setup and installation
      • Preference of one prototype over another prototype
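A minimal sketch of how the performance measures above might be summarized after a session; the task records and field names here are invented for illustration.

```python
# Hypothetical per-task observations from one test session, matching
# the performance measures on the slide (time per task, completion
# with and without assistance).
records = [
    {"task": "create user", "seconds": 95,  "completed": True,  "assisted": False},
    {"task": "create user", "seconds": 210, "completed": True,  "assisted": True},
    {"task": "run report",  "seconds": 180, "completed": False, "assisted": False},
]

completed = [r for r in records if r["completed"]]
unassisted = [r for r in completed if not r["assisted"]]
mean_time = sum(r["seconds"] for r in completed) / len(completed)

print(f"completed: {len(completed)}/{len(records)}")
print(f"completed without assistance: {len(unassisted)}/{len(records)}")
print(f"mean time on completed tasks: {mean_time:.1f} s")
```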
  • 35. Observer Log Example
  • 36. Task Scenarios
    • Task scenarios should describe:
    • The end results that the participant should try to achieve
    • Motives for performing the work
    • Actual data and names (can be dummy data)
    • The state of the system when a task is initiated
  • 37. Guidelines for Task Scenarios
    • Provide realistic scenarios, complete with motivation to perform
    • Sequence the task scenarios in the order in which they’re most likely to be performed
    • Match the task scenarios to the experience of the participants
    • Avoid cues that serve as giveaways to the correct results
    • Provide a substantial amount of work in each scenario
  • 38. Task Scenario Example
  • 39. Posttest Questionnaire
    • Preference information
      • Ease of use
      • Perceived performance
      • Usefulness of the product
      • Ease of accessibility
      • Usefulness of specific parts of the product (menus, toolbar, icons, etc.)
    • Open- and close-ended questions
    • Likert scales
      • Participants rate their agreement or disagreement with a statement on a 5- or 7-point scale.
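A small sketch of summarizing 5-point Likert responses as described above; the ratings are invented for illustration.

```python
# Hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) to "Overall, the product was easy to use".
ratings = [4, 5, 3, 4, 2, 5, 4]

mean = sum(ratings) / len(ratings)
# Conventional "top-two-box" agreement: fraction who rated 4 or 5.
agree = sum(1 for r in ratings if r >= 4) / len(ratings)

print(f"mean rating: {mean:.2f}")  # 3.86
print(f"% agree: {agree:.0%}")     # 71%
```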
  • 40. Posttest Interview (Debriefing)
    • Use a posttest debriefing guide to provide structure for the debriefing session
    • Clarify anything that was confusing or you have questions about from your observations
    • Ask participants to expand on interesting or puzzling remarks they made while thinking aloud
  • 41. Usability Test Report
    • Summarize data
    • Analyze data
      • Identify and focus on those parts that were unsuccessful or surprising
    • Identify user errors
    • Provide recommendations
  • 42. Anatomy of a Usability Test Report
    • Executive summary
    • Method
    • Results
    • Findings and recommendations
    • Appendices
  • 43. Usability Resources
  • 44. Usability Resources