"NUIs”… newway of interacting with technology, based on natural human inputsReason - on the riseDesigning a NUI is unlike designing any other Talking about:Applications, challenge, case study
Hard to find a definition – the candidates are either long or disputed. The main points relevant for this talk are that NUIs are… E.g. iPhone = NUI device – iOS and touch controls = NUI. [CLICK] All devices require some acclimatisation. NUI devices minimise this, and with good design we can reduce it further.
Where are we seeing these interfaces? … Smart TVs with NUIs built into them. We could soon even be using them to replace our trusty mouse, although for now they work best as a way to complement it.
[CLICK] While I'm going to be talking specifically about my experiences from working with Kinect's speech and gesture recognition, a lot of these points are applicable to any natural user interface and hopefully beyond.
Kinect broke the record for the world's fastest-selling consumer device ever – and is now in over 19 million homes. What is Kinect exactly? It's a camera that works with your Xbox 360 and sits above or below your TV. It looks like this… [CLICK] It has infrared 3D depth sensors that allow skeletal tracking (TODO: add 3D depth sensor skeleton picture) and gesture recognition, as well as facial recognition – auto sign-in. The multi-array mic allows voice recognition, which means that when you're watching a movie on your 360 you can pause it without even picking up the remote, just by saying "Xbox, pause". It's great – feels like you're in the future already!
It's not just entertainment that this technology can be used for, though – it opens up enormous opportunities in healthcare and education, and provides greater access for the disabled. There are loads of brilliant non-games applications of Kinect. Some examples from the Microsoft Accelerator programme include: [CLICK] Letting users place virtual items of furniture in their homes so that they can get a sense of whether the real-world item would suit their space, and a virtual fitting room that lets users try different sizes of clothing on an avatar which matches their body shape and movements, so they can make sure the items fit. [CLICK] Turning any surface into a multi-touch screen, allowing projected images to become interactive displays. [CLICK] All sorts of healthcare applications, including rehabilitating stroke patients by encouraging them to carry out exercises while interacting with a virtual environment, and monitoring elderly relatives without having to stream images or video. [CLICK] Allowing surgeons to navigate patient MRI and CT scans in the operating room while staying sterile. Pasted from <http://www.bbc.com/news/technology-18643205>
But for me, it's ALL about the games! Who here plays video games? *Raise hand* I've loved games since I was little – four years old, playing on our Commodore 64. My favourite game was Donald Duck's Playground. Perhaps it's these fond early memories that have reserved games such a special place in my heart, but I think it's more than that. I think it's their potential. Games have the capacity to tell compelling stories, to immerse us in rich worlds, to teach us about the world around us and even to teach us about ourselves. As I grew up, though, my Dad stopped playing games so much. When one day I asked him why, one of the main reasons he gave was the gradual move from this: [CLICK] … to this…
Controllers these days are actually pretty complicated. Each new console generation adds another button or feature, and gamers gradually get used to it. But if you don't play games, starting now is pretty inaccessible – you get handed a pad with a million buttons on it, you don't know what to do, and you spend half your time looking at the controller.
Motion control is one answer to this increase in complexity. Covering: expanding appeal, challenges for developers, Kinect-specific issues, and how it turned game design on its head.
Feedback is vital – there are no haptics. Solutions: displaying the Kinect feed; audio; in-game feedback (racing car wheel, horse hands etc.) plus the Heads-Up Display (HUD) (e.g. the limbs in Dance Central).
The next challenge is choosing the right gestures. Gestures need to be distinct, and as different from each other as possible, without too many active at one time – the more you're looking for, the more likely you are to get false positives. [CLICK] The best advice is to just experiment – find what fits your requirements: what's fun, what feels good and what's easy to use. Tools like the Visual Gesture Builder let you easily record new gestures and try them out. (Micro story about the team experimenting with gestures.)
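The "fewer active gestures means fewer false positives" idea can be sketched in a few lines. This is an illustrative sketch, not the Kinect SDK's API: the function names and the 0.8 confidence threshold are assumptions for the example.

```python
def pick_gesture(confidences, active, threshold=0.8):
    """Return the best currently-active gesture above threshold, or None.

    confidences: dict mapping gesture name -> detector confidence (0..1)
    active: set of gesture names valid in the current game state
    """
    # Only gestures that are both active AND confident can fire.
    candidates = {g: c for g, c in confidences.items()
                  if g in active and c >= threshold}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

# Only "swipe_left" and "push" are active here, so a confident but
# inactive "jump" detection cannot cause a false positive.
result = pick_gesture({"swipe_left": 0.85, "jump": 0.95, "push": 0.4},
                      active={"swipe_left", "push"})
```

Shrinking the `active` set per game state is the cheap way to cut false positives without touching the detectors themselves.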
Another challenge is ensuring your GUI works well with this kind of input method. Menus – no consensus yet, and still no standards. Avoid the upper and lower extremes of the screen – reaching there isn't comfortable for the user.
Safe area. Another good reason to avoid putting too much at the top or bottom of the screen is the safe area – on a lot of TVs content gets cropped off, so you don't want any of your buttons going missing.
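In practice this means laying out all interactive elements inside a centred inner rectangle. A minimal sketch; the 90% fraction is a common title-safe convention from the CRT era, but the exact value is an assumption – check your platform's certification requirements.

```python
def safe_area(width, height, fraction=0.9):
    """Return (x, y, w, h) of a centred title-safe rectangle.

    fraction=0.9 (90% of the screen) is an assumed convention,
    not a Kinect or Xbox requirement.
    """
    w, h = int(width * fraction), int(height * fraction)
    x, y = (width - w) // 2, (height - h) // 2
    return x, y, w, h

# On a 1280x720 screen, keep every button inside this rectangle.
print(safe_area(1280, 720))  # (64, 36, 1152, 648)
```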
[CLICK] Let's now take a look at the two currently most popular menu navigation methods for Kinect GUIs to illustrate their different pros and cons. I'm not saying either is better, because it's all about the execution, and there are probably better methods yet to be discovered!
Pros
- Instantaneous and constant feedback
- Familiarity with the paradigm
Cons
- Tiredness
- (Jitter) Smoothing/damping causes lag (10–100–1000 rule)
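The jitter-vs-lag trade-off behind that last con is easy to demonstrate with an exponential moving average – a common cursor-smoothing technique, used here as an illustrative sketch rather than what any particular Kinect title ships.

```python
def smooth(samples, alpha):
    """Exponentially smooth a sequence of 1-D cursor positions.

    alpha close to 1.0 -> responsive but jittery;
    alpha close to 0.0 -> smooth but the cursor lags the hand.
    """
    out = [samples[0]]
    for x in samples[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

hand = [0, 10, 20, 30, 40, 50]     # hand moving steadily right
light = smooth(hand, alpha=0.8)    # light smoothing: tracks closely
heavy = smooth(hand, alpha=0.2)    # heavy smoothing: trails well behind

# The heavily smoothed cursor ends up much further from the true hand
# position (50) than the lightly smoothed one does.
```

This is why you can't just crank up the damping to kill jitter: every bit of smoothing you add is lag the player feels.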
The swiping system involves having a list of items that the user navigates by moving their hand up or down, selects by swiping in one direction, and moves back by swiping in the other. (PICTURE) Pros: [CLICK] It's very responsive, although only when you do the correct movements – you get little feedback otherwise. [CLICK] You always have something selected. Cons: [CLICK] No continuous feedback like you get with a cursor. [CLICK] Some people can struggle to get to grips with a control scheme that's different from anything they've used before.
[CLICK] You want distinct phrases – the more varied they are, the less chance there is for error. [CLICK] You want to avoid phrases that are too long (the player waits too long for feedback) or too short (the recogniser has too little to work on, causing false positives). Sensitivity settings also affect false positives. [CLICK] To set up voice control you just add text to a grammar file and Kinect will figure out the pronunciation – you don't need to train it. You can use a program called SpeechLab on the Kinect developer dashboard to adjust it if you need to, though. [CLICK] Localisation can be a problem: if you've chosen phrases that are distinct and the right length in English, they probably won't be so suitable in other languages. You can also get homophones. Therefore careful localisation that bears this in mind is needed. (EXAMPLES?)
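For context, speech grammar files of this kind generally follow the W3C SRGS XML format. A minimal sketch of what "adding text to a grammar file" looks like – the specific phrases here are illustrative, not taken from any shipped title:

```xml
<?xml version="1.0" encoding="utf-8"?>
<grammar version="1.0" xml:lang="en-US" root="commands"
         xmlns="http://www.w3.org/2001/06/grammar">
  <!-- Each <item> is one recognisable phrase; the engine derives
       pronunciations automatically, with no per-user training. -->
  <rule id="commands" scope="public">
    <one-of>
      <item>xbox pause</item>
      <item>xbox play</item>
      <item>cast spell</item>
    </one-of>
  </rule>
</grammar>
```

Note how the phrases are short but multi-syllable and mutually distinct – exactly the properties argued for above.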
Our Kinect title Fable: The Journey threw up lots of challenges, so I want to take a look at how we dealt with them and at a few things we could perhaps have improved.
The game was Kinect-based from day one – we didn't try to shoehorn Kinect into a game that it didn't work with. It's a single-player narrative experience, not a party game or series of mini-games like quite a lot of the other motion control titles. We wanted to show that you can make a hardcore game with Kinect. [CLICK] The main mechanics are combat with magic and travelling through the environment on your trusty horse and cart. [CLICK] The game allows you to sit as well as stand, which is unusual for Kinect games. I will go into more detail on that shortly.
With feedback being so important, let's take a look at how we tried to provide it in Fable: The Journey. Travelling (describe gestures for cracking the reins and steering). [CLICK] In reality there is lag – but reality isn't always good!
Spell casting: priming and crosses. [CLICK] The bolt appears as soon as your hand is primed. [PIC] Crosses appear instead if you've raised your hands when you can't cast magic. [PIC]
When the player is standing too near or too far, or off to the side, we display a warning and this window to give feedback – it's important that the player can see what they're doing wrong. (Compare our tutorials with Dance Central, which combines the feedback into the game.) We chose to only display this when needed, but some games show it all the time, and that works pretty well – that way you can always see if it's tracking the wrong person, if your hands are obstructed from its view, or if you're standing in the wrong place.
[CLICK] We opted for a cursor system as it's the one most people are familiar with from using a mouse. Because of the mouse paradigm, people assume they need to 'press' the buttons. We tried button presses, but it didn't feel good and people struggled with it, so instead we used a gradual charge meter on the cursor to indicate that you're selecting a button – that gives you time to cancel if you didn't intend to select that icon. [video] [CLICK] One trick to make selecting items easier is that the longer your cursor is on a button, the 'stickier' it becomes (the button's radius expands the longer you hover on it), so it gradually becomes harder to deselect – this helps prevent players from accidentally cancelling their selection. [CLICK] We animate the buttons when you move the cursor over them to communicate that you're hovering, because this cursor isn't a tiny dot like a mouse pointer – it's large, to make it easier to see on your TV. (TODO PICTURE) Stage 1 is the button animating and stage 2 is the cursor charging. I've spoken to other devs who deliberately don't animate their buttons, though, as they find it panics users to have every button reacting as the cursor drags across the screen – the downside of that approach is that it feels a little unresponsive when a button doesn't react immediately. [CLICK] Another decision is how fast the cursor should charge when selecting a button. Too slow and power users will get frustrated with your menus; too fast and novices will accidentally select items. We have an option for users to change the selection speed. (from what to what? Dave) [CLICK] We tried to be consistent across all our menus, although some occasions do require you to deviate in some ways.
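The charge meter and sticky radius combine naturally into one small piece of state. This is a sketch of the ideas described above, not Lionhead's actual code – the charge time and growth rate are made-up values.

```python
class HoverButton:
    """Hover-to-select button with a charge meter and a sticky radius."""

    def __init__(self, x, y, radius, charge_time=1.5, sticky_growth=0.3):
        self.x, self.y = x, y
        self.base_radius = radius
        self.charge_time = charge_time      # seconds of hover to activate
        self.sticky_growth = sticky_growth  # extra radius per hovered second
        self.hover_seconds = 0.0

    def radius(self):
        # The longer you hover, the larger (stickier) the hit area,
        # so small hand wobbles can't cancel a selection in progress.
        return self.base_radius + self.sticky_growth * self.hover_seconds

    def update(self, cursor_x, cursor_y, dt):
        """Advance by dt seconds; return True once the button activates."""
        dist = ((cursor_x - self.x) ** 2 + (cursor_y - self.y) ** 2) ** 0.5
        if dist <= self.radius():
            self.hover_seconds += dt        # charge meter fills
        else:
            self.hover_seconds = 0.0        # leaving resets the charge
        return self.hover_seconds >= self.charge_time
```

The charge meter is what you'd render as stage 2 on the cursor, and `radius()` growing over time is the stickiness: a cursor slightly outside the original radius still counts as hovering once the button has been held for a while.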
We disable button activation for one second when entering a new screen, to prevent accidental selections. Otherwise, if you hit the Back button, for example, you land on the previous screen, where the Back button sits in the same place, and it would instantly start to charge again. The one-second delay gives you time to move the cursor if that's not the button you want.
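That grace period is a one-liner of state per screen. A minimal sketch; the one-second value is the one mentioned above, everything else is illustrative.

```python
ACTIVATION_DELAY = 1.0  # seconds; the grace period described above

class Screen:
    """Ignores button activation until the screen has been visible long
    enough for the player to reposition the cursor."""

    def __init__(self):
        self.age = 0.0  # seconds since this screen was shown

    def tick(self, dt):
        self.age += dt

    def accepts_input(self):
        # Buttons only start charging once the delay has elapsed.
        return self.age >= ACTIVATION_DELAY
```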
Teaching is especially hard when gestures are involved. When you say to someone "press X" there's no ambiguity, but when you say "push your hand forwards to cast a spell", everyone will have their own way of doing it. We opted to show users the gesture repeatedly and from multiple angles.
Some of the things we learnt when putting new users in front of the game:
- It's important to show people what to do before telling them, because otherwise they'll do what feels right to them, which is different for each person. [CLICK] Once they've done this and 'learned' the incorrect way, it's extremely hard to get them to unlearn the gesture – if they've had the opportunity to do what they think is right, they'll often keep doing it no matter what else you tell them.
- Our solution was to provide reminders when the user is doing the wrong gesture, although when the user has multiple gestures available it can be hard to tell which gesture they are trying to do. Our User Research taught us some valuable lessons about teaching players different gestures.
- Kinect makes you create a new mental model of how to interact with your Xbox – just waving your hand to control something feels foreign and strange at first, but you get used to it quickly.
- Unless it's the most basic of gestures, you can't reliably teach it in one still frame – it's hard to show a gesture from just one angle, so you really need an animation or video. We used motion videos. [Show tutorial video with audio] Dance Central integrates the tutorial well into the game – video?
- The audio was important, as people often just ignored the text, even when there was nothing else to do other than read it.
- It's a good idea to show people what they're actually doing vs. what they need to do (we only realised this too late – it wasn't really the approach we wanted, but in hindsight it would probably have helped). Swimmers recording. The running machine at the sportswear shop. How can we address this? Audio. Visuals (green reins).
- [CLICK] People can be taught the right thing and asked to do it when not playing, but then, in the pressure of the moment, revert back to the incorrect gesture. Story of the UR subject having to be shown the gesture multiple times.
[CLICK] Even if people do get the gesture right, sometimes over time they will 'drift' and start doing the gesture in a new, incorrect way – people's gestures can evolve over time. [CLICK] They also often needed repeated reminders to show them what they should be doing when they were getting it wrong, although the more gestures you are looking for at any one time, the harder it is to tell which gesture they are trying and failing to do. Reinforce lessons if needed: is the player repeatedly doing something wrong? Have they been stuck on a particular section for an unusually long time? If so, display a text hint or visual aid. The best solution is to show them what they're actually doing vs. what they should be doing, but this is a hard and quite laboured approach.
Too early vs. too late. Speech/gesture-specific issues (people can over- or under-exaggerate). Therefore users who are unfamiliar with the technology are important. User Research is especially important for NUI projects because there is much more focus than usual on the physical means by which the player is playing the game. [CLICK] It seems to me like there are only ever two times to do UR – too early or too late. The window of 'the right time' seems so fleeting I wonder if it even exists. So I would suggest starting UR even when it seems too early – you may just have some prototypes, but getting people unfamiliar with the product or the hardware in front of them will help right from the start. If you wait until you have a product that feels ready for people to use, you're probably so far along that what you can still change is already limited. [CLICK] You need users who haven't got used to the tech: who don't know the gestures, who don't know how clearly or slowly you might need to speak. As you and the rest of the development team become familiar with every nuance of the game, it becomes impossible to gauge things objectively – you get more and more skilled at playing games without a controller – so User Research is even more important than usual for controller-less games. The only way to truly assess usability is to let people who are completely unfamiliar with the game play-test it. Usability needs to be encompassed in pretty much everything we do as game designers, but most calls, even from the most experienced designers, are simply educated guesses until the game is put in front of lots of people and it's proven what does and doesn't work. With speech and gesture interfaces being so new, good UR is even more important. [CLICK] What Kinect-specific UR issues did we face? Speech/gesture-specific ones – people overcompensating, doing what they think it is looking for.
I’ve talked about what NUI devices are, what they can be used for and why they are important. We’ve looked at some of the challenges they provide and examined how Lionhead tackled some of these issues. So, what can we draw from all of this?
As a game designer or an interface designer working with NUI devices, your safety nets will be gone. You could find yourself wishing for your old hardware back. There are so many unexpected issues, and rules turned on their heads. But don't panic – you will get along fine as long as you aren't afraid to question your assumptions. Prototyping will help you realise which assumptions hold true and which don't apply to a natural user interface.
[CLICK] Today, NUI devices are everywhere, and they are only going to improve (with likely future additions such as eye tracking and emotion recognition). Kinect has only just scratched the surface of what's possible. These devices have massive potential to reduce the barriers to entry for technology and to make our interactions with computers more natural and intuitive – but they need great interfaces to do this. Great interfaces that will be designed by people like you.