Game Design 2: Expert Evaluation of User Interfaces


  • Sorry I couldn’t be there. I’ll be back on Wednesday and at the classes on Thursday, but Romana kindly agreed to help out, so thanks Rom!

    Today we are going to have two short lectures, but before we do, a word on your coursework.

    This week we are again going to spend the tutorials looking at Solium Infernum. I might move on from this as of next week, so please try to get to the point where you REALLY know this game.

    You can play a single-player game in half an hour or an hour, and you need only play multiplayer in order to appreciate how it works. Mostly the experience of playing single-player is comparable.
  • Here at GCU, we always harp on about the value of user testing. We talk about how you are not designing for yourself; you’re designing for a particular type of person. And of course, that is true, but there are some areas where it makes sense to involve experts.

    So we’re going to talk about why that might be. We’re then going to look at a couple of techniques that you can use to perform an expert evaluation, and consider some heuristics.
  • As I said, we are believers in user-centred design here. After all, you look at the game very differently from the user; you can’t help it. If you ever sit a newbie in front of one of your IP projects or a game-jam game, you’ll wince when you see how they struggle to use it, so we recognise the importance and value of involving users.

    However, there are problems with user testing:
    – It is expensive.
    – Some users are funny about using something that is not finished.
    – It is hard to get non-hardcore players interested.
    – It takes a great deal of time to recruit and then host participants.
    – “Kleenex” testing is a lot of work and is expensive (and carries risks).
    – The industry is paranoid about leaks.

    Expert evaluation saves you waiting until THE VERY END to see if the design is any good.
  • You have to decide about your user’s level of experience. Use the example of the crates with e-Bug: kids vs scientists.

    The expert requires:
    – A description of the user: similar to, but different from, personas; you need to know what they know.
    – A description of the system: a menu plan, a paper prototype, etc., or a working system!
    – A description of the task.
    – A list of the actions required.
  • Do they know the word “heuristic”? It’s a rule of thumb.

    Note that heuristic evaluation doesn’t define how you GET the list of heuristics; it is about comparing the interface to the heuristics.

    There are no really solid game UI heuristics yet, so we have to adapt from general software design.
  • Feedback: the Tomb Raider puzzles.

    Speak the user’s language: no jargon.

    User control and freedom: see that this isn’t necessarily game-specific; in Solium Infernum, you can cancel orders before submitting.

    Standards: a first year mentioned HUDs not changing in 15 years; it is easier to go with standards than to retrain users.
  • Memory load: stem and leaf; seven items plus or minus two.

    Shortcuts: SI has one for opening the main menu; think of RTS games, WoW, etc. On Windows you can barely touch the mouse; on the Mac, not so much.

    Minimalist design: colour; 1 + 1 = 3; white space can be distracting.

    Error messages: ActionScript! SI gives an error but doesn’t say HOW to get past it.

    Documentation: no manuals any more.
  • Norman talks about affordances: you should be able to LOOK at a thing and KNOW what it does.

    1: The QuickTime logo is bad; you don’t necessarily have that knowledge. An email logo with a letter is better. You get to DECIDE what the user knows, but you had better be right!

    2: Multiplayer mode for SI means emailing files around, which is kind of necessary. But why not double-click to copy the file to the appropriate directory? Why not email from inside the app?

    3: Think about The Force Unleashed: you are not aware of the possible moves.

    4: Like consistency and standards; you always expect things in the same place. Is hot on the left all the time?

    5: Understand that you will be more effective if you have constraints. APB is a good example! Browser plug-ins that show your bookmarks in a 3D world are a good example too.

    6: Be defensive: assume error and help.

    7: Learn from other products: the L4D / Fallout example, the jump button in Left 4 Dead vs Fallout.
  • There isn’t ONE WAY of evaluating, and no guaranteed set of heuristics. You need to find a structure or set of rules you believe in, but there is no ‘official set’.

    4 is interesting; I like this in Mass Effect. Dialogues should get out of the way: give authoritative verbs such as ‘Save file’ / ‘Don’t save file’, not ‘Yes’ / ‘Close’. Also allow larger hit areas where possible: the e-Bug / Nintendo style of dialogue.

    7: Locus of control refers to whether YOU think that YOU CAUSE actions or whether they happen TO you. You want the user to feel in control; support their agency. Fate vs power!

    You’ll see that there is much similarity between these heuristics; there is a lot of wisdom in these simple rules.
  • Illustration showing which evaluators found which usability problems in a heuristic evaluation of a banking system. Each row represents one of the 19 evaluators and each column represents one of the 16 usability problems. Each square shows whether the evaluator represented by the row found the usability problem represented by the column: the square is black if so, and white if the evaluator did not find the problem. The rows are sorted so that the most successful evaluators are at the bottom and the least successful at the top; the columns are sorted so that the easiest-to-find problems are to the right and the most difficult to the left.
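    The row and column sorting described above can be sketched in a few lines of Python. The tiny matrix here is invented for illustration (Nielsen’s real data had 19 evaluators and 16 problems):

    ```python
    # found[i][j] is True if evaluator i found usability problem j.
    # A made-up 3-evaluator, 3-problem example.
    found = [
        [True, False, True],
        [False, False, True],
        [True, True, True],
    ]

    # Least successful evaluators first (top), most successful last (bottom).
    rows = sorted(found, key=sum)

    # Hardest-to-find problems first (left), easiest last (right).
    cols = sorted(range(len(found[0])), key=lambda j: sum(r[j] for r in found))

    # Rebuild the matrix in sorted row and column order.
    matrix = [[r[j] for j in cols] for r in rows]
    ```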
  • In principle, individual evaluators can perform a heuristic evaluation of a user interface on their own, but the experience from several projects indicates that fairly poor results are achieved when relying on single evaluators. Averaged over six of Nielsen’s projects, single evaluators found only 35 percent of the usability problems in the interfaces. However, since different evaluators tend to find different problems, it is possible to achieve substantially better performance by aggregating the evaluations from several evaluators. Figure 2 shows the proportion of usability problems found as more and more evaluators are added, and clearly shows that there is a nice payoff from using more than one evaluator. It seems reasonable to recommend the use of about five evaluators, and certainly at least three.

    So, again, you should read the recommended reading from the blog.
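    The payoff from adding evaluators can be roughly modelled if we assume each evaluator independently finds a fixed fraction p of the problems (p = 0.35 is the average quoted above). This independence assumption is a simplification, not Nielsen’s fitted curve, but it reproduces the same shape:

    ```python
    # Expected proportion of usability problems found by n evaluators,
    # assuming each independently finds a fraction p on their own.
    # p = 0.35 matches the single-evaluator average reported above;
    # independence is a simplifying assumption.
    def proportion_found(n, p=0.35):
        return 1 - (1 - p) ** n

    for n in (1, 3, 5):
        print(n, round(proportion_found(n), 2))
    # 1 evaluator finds 0.35, 3 find about 0.73, 5 find about 0.88
    ```

    This is why roughly five evaluators is a sensible recommendation: beyond that, each extra evaluator adds little.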
    1. 2011 Game Design 2 Lecture 6: Expert Evaluation
    2. Expert Evaluations & Design / Usability Heuristics. Will look at: • Need for alternatives to user evaluation • Methods of evaluating without end users (using expert evaluators) • Some heuristics / guidelines offered by experts
    3. End User Evaluations • End-user evaluations can be expensive – The methods are very time consuming – Users may not be willing – To get truly ‘fresh’ eyes, so-called “kleenex” testing requires different players each time • Concerns about leaks – Few external play testers at early stages – Friends & family play testers may be too kind
    4. Expert Evaluations • As an alternative to some user testing, expert evaluators / testers can be used • Faulkner details 10 inspection methods; we will look at two: – Cognitive Walkthrough – Heuristic Evaluation
    5. Cognitive Walkthrough • In this approach experts imitate users – Relatively quick and cheap – Expert needs to be skilled and requires: • A description of users (e.g. level of experience) • A description of system (or an operational system) • A description of the task to be carried out • A list of the actions required to complete the task
    6. Cognitive Walkthrough • Expert addresses questions such as: – Is the goal clear at this stage? – Is the appropriate action obvious? – Is it clear that the appropriate action leads to the goal? – What problems (or potential problems) are there in performing the action? • Essential that the expert tries to think like the end user and not like themselves.
    7. Consider Solium Infernum • A scenario: – Goal is to go to war and capture building – Is the appropriate action obvious? • What if no vendetta in place? – Is it clear that the appropriate action leads to the goal? • Does the game help you? – What problems (or potential problems) are there in performing the action? • If you haven’t got vendetta status, how do you know? • I know how to do this but thinking as a ‘user’ it’s not so easy.
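    Because the same four walkthrough questions are applied to every action in the task list, the method is easy to drive from data. A minimal sketch (the task and actions below are illustrative, not the lecture’s exact scenario):

    ```python
    # The four cognitive-walkthrough questions from the slide above.
    QUESTIONS = [
        "Is the goal clear at this stage?",
        "Is the appropriate action obvious?",
        "Is it clear that the appropriate action leads to the goal?",
        "What problems (or potential problems) are there in performing the action?",
    ]

    def walkthrough(task, actions):
        """Yield (action, question) pairs the expert must answer, in order."""
        for action in actions:
            for question in QUESTIONS:
                yield action, question

    # Hypothetical task and action list for a Solium Infernum scenario.
    steps = list(walkthrough("declare a vendetta",
                             ["open the diplomacy menu", "select a rival"]))
    ```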
    8. Heuristic Evaluation • Involves assessing how closely an interface or system conforms to a predefined set of guidelines or heuristics. • Examples: – Nielsen’s usability heuristics – Shneiderman’s eight golden rules – Norman’s seven principles
    9. Nielsen’s Usability Heuristics • Give feedback – keep users informed about what is happening • Speak the user’s language – dialogs should be expressed clearly using terms familiar to the user • User control and freedom – clearly marked exits and undo/redo • Consistency and standards • Prevent errors – even better than having good error messages
    10. Nielsen’s Usability Heuristics • Minimise memory load – recognition rather than recall • Shortcuts – accelerators (unseen by novices) speed up interactions for experts • Aesthetic and minimalist design – don’t have irrelevant or rarely needed information • Good error messages – should indicate the problem and explain how to recover • Help and documentation – should be concise and easy to search
    11. Norman’s 7 Principles: 1. Use both knowledge in the world and knowledge in the head. 2. Simplify the structure of tasks. 3. Make things visible. 4. Get the mappings right. 5. Exploit the power of constraints. 6. Design for error. 7. When all else fails, standardise.
    12. Shneiderman’s heuristics (8 Golden Rules): 1. Strive for consistency 2. Enable frequent users to use shortcuts 3. Offer informative feedback 4. Design dialogues to yield closure 5. Offer error prevention & simple error handling 6. Permit easy reversal of actions 7. Support internal locus of control 8. Reduce short-term memory load (Faulkner, Chapter 7)
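    Whichever set of heuristics you adopt, the output of a heuristic evaluation is a list of findings, each tied to the guideline it violates; Nielsen also recommends a 0–4 severity rating for triage. A hypothetical record structure (the example findings below are invented):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Finding:
        heuristic: str   # which guideline is violated
        location: str    # where in the interface the problem occurs
        severity: int    # 0 = not a problem ... 4 = usability catastrophe
                         # (Nielsen's severity scale)

    # Invented findings for a Solium Infernum-style evaluation.
    findings = [
        Finding("Visibility of system status", "vendetta status indicator", 2),
        Finding("Good error messages", "order-submission dialog", 3),
    ]

    # Triage: fix the most severe problems first.
    findings.sort(key=lambda f: -f.severity)
    ```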
    13. How Many Evaluators? Different people find different problems.
    14. How Many Evaluators?