Consistency in labeling, screen layout, and visual coding of information makes an interface much easier to learn and use. If users feel confident that similar actions will bring similar results — that they can predict, to some degree, how the system will behave — they will be more willing to try new things, and their exploration will be more productive. Screens that are similar in function or content should be visually consistent, and the format of information within screens should be consistent, to facilitate recognition and cognitive processing. Use the screen format to support the information architecture of the site.
Typically, you want to keep essential information on a single screen, with auxiliary information on separate screens. Extra information slows users down, so you have to strike a balance between including useful information and keeping essential information accessible. Too much information, or too many features, reduces usability by making desired features harder to find and by placing too high a burden on user memory and decision-making. Faced with too many features, users are likely to restrict themselves to a tiny subset of the available options in self-defense, possibly missing crucial features that would not have been hard to use or understand had they not been obscured by a forest of possibilities.

Does the layout make effective use of gestalt principles of grouping, part/whole relationships, and figure/ground separation?
Assessment tests
- Involve full prototypes
- Focus on tasks
- May involve quantitative measurements

Validation tests
- Objectives and goals are typically quantitative
- Results are used to set standards and performance criteria
- The test involves reduced interaction between test participant and moderator
- Test conditions need to be fairly well controlled
You can also test how many features the user can remember during a debriefing session. The problem with metrics-based testing is that individual users vary widely: measurements of speed, memory, or anything else may depend more on the characteristics of a particular user than on the site or interface you are testing. One solution is to test enough users that you can apply statistical tests of validity to the results. This can obviously be expensive, and it also requires some statistical expertise.
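To make the statistics point concrete, here is a minimal sketch of one such test: Welch's t-test comparing task-completion times from two design variants, written with only the Python standard library. The sample data and the `welch_t` helper are hypothetical, invented for illustration; a real study would also need a p-value (from a t-distribution table or a stats package) and, above all, enough participants.

```python
# Sketch: Welch's t-test on task-completion times for two designs.
# All data below is made up for illustration only.
from statistics import mean, variance
from math import sqrt

def welch_t(sample_a, sample_b):
    """Return Welch's t statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = sqrt(va / na + vb / nb)                     # standard error of the difference
    t = (mean(sample_a) - mean(sample_b)) / se
    # Welch-Satterthwaite approximation for degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df

# Hypothetical task times in seconds for two versions of a form
old_design = [48, 55, 62, 51, 70, 66, 58]
new_design = [41, 45, 52, 39, 49, 56, 44]
t, df = welch_t(old_design, new_design)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The point of the sketch is not the arithmetic but the caveat in the text: with only a handful of participants per condition, a large t value can still reflect individual differences rather than the interface.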
Rubin says testing is not always the best technique to use; he sometimes recommends an expert evaluation instead. This is fine in the very early stages of a project — it's pointless to bring in users to tell you that you have no global navigation, or inconsistent navigation, or no error recovery. But you must never substitute expert evaluation for usability testing in the later stages of a project. Nielsen claims five usability experts will find 80% of the problems (DR 67), and I don't believe it.
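For what it's worth, Nielsen's 80% figure comes from a simple cumulative-discovery model: if each evaluator independently finds some fraction λ of the problems, then n evaluators together find 1 − (1 − λ)ⁿ of them. A minimal sketch of that model follows; the λ = 0.3 default is roughly Nielsen's published estimate for a single evaluator and is an assumption, not a property of any particular site.

```python
# Nielsen's problem-discovery model: proportion of usability problems
# found by n independent evaluators, each finding fraction lam alone.
def proportion_found(n, lam=0.3):  # lam = 0.3 is Nielsen's estimate (an assumption)
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 10):
    print(f"{n} evaluators: {proportion_found(n):.0%}")
```

With λ = 0.3, five evaluators come out at about 83% — which shows where the claim comes from, though the model's assumption that every problem is equally findable by every evaluator is exactly what the skepticism above is about.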
Tasks are series of actions performed to achieve a goal — e.g., "request an article from Interlibrary Loan using the online form."
Scenarios are tasks put into a short narrative — e.g., "Your instructor recommends an article and gives you a citation: 'Usability Tests Made Simple,' The Journal of Usability, 2(34), 11-23. You realize that Cook Library doesn't have that journal, and you would like to request it from Interlibrary Loan."
They are meant to take some of the artificiality out of the task.
(e.g., "You have the aforementioned article citation. Go to the Interlibrary Loan link located in the list of services on the left. Complete the online form, including author, article title, journal title, volume, issue, and page number.")
A scenario should test the user's ability to complete the task in a natural way, without step-by-step instructions.
Goldberg, J. H., Stimson, M. J., Lewenstein, M., Scott, N., & Wichansky, A. M. (2002). Eye tracking in web search tasks: Design implications. In Proceedings of the 2002 symposium on Eye tracking research & applications (pp. 51-58). Retrieved March 28, 2009 from ACM Digital Library.
Hackos, J. T., & Redish, J. C. (1998). User and task analysis for interface design. New York: John Wiley & Sons.
Jeffries, R., Miller, J. R., Wharton, C., & Uyeda, K. M. (1991). User interface evaluation in the real world: Comparison of four techniques. In Proceedings of the SIGCHI conference on human factors in computing systems: Reaching through technology (pp. 119-124). Retrieved March 28, 2009 from ACM Digital Library.
Molich, R., Ede, M. R., Kaasgaard, K., & Karyukin, B. (2004). Comparative usability evaluation. Behaviour & Information Technology, 23(1), 65-74. Retrieved March 28, 2009 from Academic Search Premier database.
Rubin, J. (1994). Handbook of usability testing: How to plan, design, and conduct effective tests. New York: John Wiley & Sons.
Snyder, C. (2003). Paper prototyping: The fast and easy way to design and refine user interfaces. San Francisco: Morgan Kaufmann.