JChueke_BCS_Mar 2012_PRINT

Published on Thursday 15th March 2012
Title: Beyond Mouse and Keyboard: Post-WIMP and Novel Forms of Interaction
Speaker: Dr George Buchanan and Jacques Chueke, Centre for HCI Design, City University, London
Venue: BCS, Southampton Street, London; arrive 18:00 for an 18:30 start
SCIENCE WEEK EVENT

The introduction of novel hardware for computing and gaming during the last decade is changing the way we control everyday devices because it provides, for instance, haptic, gesture-based, voice-activated and eye-tracking interactions. Dr Buchanan will describe the work being done at the Centre for HCI Design on new types of interaction, and Jacques will report on his PhD project investigating the cognitive issues that these new technologies present to the user, and how the user explores interfaces that are new and visually unfamiliar.

Dr Buchanan is a Senior Lecturer in the Faculty of Informatics, Centre for HCI Design. His research interests encompass information interaction: from web search, through browsing digital libraries, to accessing information on a mobile phone. His main current interest is to discover how people interact with newly found documents, and how computer technology can assist users to make better informed and relevant decisions.

Jacques worked for 10 years on internet and software projects for large companies in Brazil, and has taught at PUC-Rio. He has been a PhD student in the Centre for HCI Design since October 2010.

  • Welcome to my session, entitled ‘Beyond Mouse and Keyboard: Post-WIMP and Novel Forms of Interaction’. My name is Jacques. I’m a PhD researcher at the Centre for HCI Design, and a teacher in Brazil, where I taught usability for web and software on postgraduate degrees, and graphic design at PUC-Rio university. I started my research in October 2010. I have the pleasure of being accompanied by my first supervisor, Dr George Buchanan, who will talk at the end about the research being developed at the centre and the Masters in Human-Centred Systems. This presentation is about the core subject of my PhD research at the Centre for HCI Design, City University. I’d like to thank Mr Hillmore and the BCS board for this opportunity. It’s my second time here; the first was in June 2010, when I presented my work in a Doctoral Consortium, which was a great experience and where a great sharing of information took place. I’m sure the same will happen tonight. Shall we start? I’ll walk you through new developments in technologies for interaction. I will present four case studies where I could use the technology, and make a few comments about some interface issues I’ve spotted that could be improved. We’ll discuss a protocol for analysis I’m developing with my supervisors, which involves participants with different expertise using NUI technologies and eye-tracking technology. I’ll explain the theories I’m using and updating to better understand how people learn and adapt (or not) to these new technologies. In case you’re wondering what Post-WIMP is (WIMP stands for Window, Icon, Menu and Pointing device): van Dam defines Post-WIMP interfaces as those “containing at least one interaction technique not dependent on classical 2D widgets such as menus and icons”. Ultimately it will involve all senses in parallel, natural language communication and multiple users. Communications of the ACM, 1997.
  • From an early age we’re in contact with ground-breaking technology. PARADIGMS of user interaction are being dissolved on a daily basis.
  • People are developing games for kittens. Indeed, things are changing.
  • After almost thirty years of the desktop metaphor as the dominant visual interface, with mouse and keyboard as input methods, traditional paradigms of user interaction are changing rapidly (Wigdor, 2011: 1-5). The introduction of novel hardware for computing and gaming during the last decade is changing the way we control everyday devices (Dam, 1997) because it provides, for instance, haptic (e.g. iPhone, iPad, MS Surface), gesture-based and voice (e.g. Nintendo Wii, Microsoft Xbox 360 console gaming with Kinect sensor) and eye-tracking interactions (e.g. Tobii P-10). Noticeably, traditional control modes of interaction such as buttons, links, icons and tools, generally activated by a pointer, are not present and no longer support this kind of interaction. Therefore, the challenge is to step outside the GUI paradigm and enable the user to use control technologies in a non-GUI/WIMP interaction, which is mostly physical. As already mentioned, these technologies, although bringing Post-WIMP interfaces (Beaudouin-Lafon, 2000), might not be displaying appropriate visual cues for physical interaction. As you can see in the video on the right-hand side, people from the MIT Media Lab are modifying the Kinect SDK in order to research gestural interactions. In case you have never heard of the Kinect: it is a camera that comes with the MS Xbox 360, whose initial purpose is to allow gaming without controllers; it identifies one’s body structure in a skeleton view. By natural I DO NOT mean what you are about to see:
  • I’ll let you decide what’s wrong with this picture. This is a jest from Google on April Fools’ Day last year.
  • They were mocking the frenzy about NUI and gestural interfaces that was emerging last year. Imagine the need to learn such a vocabulary/language. Does this look natural??? I leave this as a warning: be careful with the new vocabularies you’re introducing and the complex gestures one might need to learn in order to interact...
  • The community is actively changing CONTROL methods. There’s a genuine interest in this. Again: plan properly the gestures you imagine as being “natural” – imagine this in a public space: “excuse me, I have to check my email”.
  • Gestural interaction within regular desktop Windows. It raises one of the issues I’m talking about – the subject of this presentation. Very clever, indeed. Still a conventional desktop with natural interaction. I see a problem in this. I believe the interface should change, to better convey the message of interacting with gestures, voice, touch and eye gaze. This is what I’m researching: how to inform properly about the possible physical interactions available from the system? How to design better visual, audible and tactile cues to inform about NUI interactions? In the video you can see the guy standing up, but now we have a set of lenses (available at Amazon) that allows users to sit in front of the Kinect, interacting at close range.
  • Moving on from Kinect hacks and people fiddling with the SDK, let’s move towards the industry/mainstream companies. Head tracking to control the mouse pointer with the eviGroup Paddle Pro. Again, the regular Windows OS in the background.
  • Again, a new vocabulary of gestures to learn in order to shift channels, control volume…
  • On the right-hand side I tried to decompose the problem into variables of INPUT and OUTPUT. This regards the current configuration of some of the technologies presented, and some that I could experience in different workshops and with the technology we have at the Interaction Lab, if you cross over OLD/NEW OUTPUTS x OLD/NEW INPUTS. INPUT: understand input as a user’s command. OUTPUT: understand output as what the system displays. I left aside the OLD INPUT x OLD OUTPUT situation: there is already too much research on this, aiming to improve the GUI-WIMP desktop WYSIWYG. In some cases, NUI modes of interaction just co-exist as additional features on a traditional GUI, presenting hybrid solutions, which could hamper the user interaction even more, creating control problems with the system. Different technologies from specific manufacturers will be covered in this presentation, in order to exemplify the subject of novel interactions and input methods. By all means the companies quoted here are to receive great admiration for their efforts on R&D of technologies for HCI. Comments made about specific shortcomings or design flaws (as I see them) regarding their modes of interaction do not aim to question the quality of their work. My intention is to point out specific issues that were observed and represent the core issues that this research tackles. I’ve recorded four case studies which exemplify these configurations.
  • Microsoft Surface from 2007 (10-12 K) that we have at the Interaction Lab. This is a playback from my first encounter with the native Media Player, where I was trying to create my own playlist with the albums available. No cues, no warnings, no error messages, no tooltips. One thing that would be helpful is tooltips for the newcomer, the inexperienced user. When one finally becomes experienced, after learning the new languages, new icons and ways to interact with the system, one could turn off any aids such as tooltips, tutorials, etc. But the very first interaction is very important. It could make the difference for one to choose the technology or not. One without experience might never go back to this; might never become a customer or a user.
  • CONSUMER PREVIEW VERSION – TUI and mouse/keyboard. HIDDEN MENUS/INTERACTIONS: APP SWITCHER / CHARMS / CONTEXTUAL MENU. Start screen – one can press the spacebar to unveil the login screen (does a mouse click work?). Metro Dashboard – tiles. The problem of INVISIBLE MENUS. Enter an app – how to exit? No home button? No exit icon. You have to make a swipe gesture from the very bezel, the canvas, to unveil the CHARMS menu. We praise the effort. Jon mentioned he learned because he watched tutorials. I couldn’t do almost anything. Multi-tasking: again the same swipe gesture from the bezel – no VISUAL CUES before any interaction takes place. No hot corners – no response while interacting to inform what is there and how to do it. Very specific moves to rearrange apps/create different views.
  • Hot corners, similar to Mac EXPOSÉ but with visual feedback: corners that are displayed when you activate a window, informing you there’s a connection.
  • I’m very interested in Assistive Technology. Perhaps R&D in this area might benefit not only people with special needs but the entire community, similar to what happened with HTML/XML source code for the web and accessibility issues. Accessibility: the technology conveys opportunities for people with special needs to interface with digital devices (multi-sensory). MENTION: GReAT (Gesture Recognition in Aphasia Therapy); SAM; Microsoft work.
  • I did a workshop at SmartLab, UEL, with Mick Donegan, a specialist in AT with eye gaze for control. Tobii studio with special software with large buttons, icons and bright colours. But when it comes to Windows control it’s a different ball game.
  • I’m not convinced that the visual cues available for control interaction are the best here.
  • TASK: Open software > Print Screen > Open software > Paste > Save file > Select folder > Tried to change the file format (FAIL) > Close software. Change system settings > look down. Change mouse settings > look left.
  • Meeting with Scott Hodgins (Director, Acuity ETS Ltd) and Sara Hyléen (Tobii Corporate Marketing Manager). 1. Change active windows like Alt+Tab: spacebar + gaze to select the active window; you then release the spacebar to activate. 2. The pointer jumps to any spot you’re looking at with a small movement on the track pad. You never lose your pointer from sight (could that be annoying?). Features were tested with users; Sara got used to the pointer jump and misses it a lot now. 3. Zoom/pan large 3D images with the mouse/track pad scroll – really handy for zooming in and out over wherever you’re looking. In another prototype you can use head movements for zooming, spinning, etc. 4. Tobii Media Studio: selecting thumbnails with eye gaze. No significant visual cues for swiping images (left/right gaze) and looking at the bottom to bring back the thumbnail menu. 5. Presentation browser: PPT; spacebar + eye gaze to select/activate slides. 6. Text browser: vertical scrolling and gaze feedback.
  • I reiterate: in my research I am especially interested in the moment when users scan the screen of NUI systems for the very first time. I’ll investigate what happens cognitively when a user comes across visual cues they’re not familiar with, in systems they’ve never used before – the visual cues that indicate the range of controls available through physical modes of interaction. How am I planning to investigate this? A visual perception task, with the eye-tracking technology present at the Interaction Lab, Centre for HCID, City University – perfect for investigating where people look. Small sample, 7 people. Drag and drop was spotted as a hidden/invisible interaction. We want to adapt the protocol to Post-WIMP with NUI and test some of these technologies. In time a prototype will be created – a novel visual metaphor, a reactive interface – which should be tested in order to verify whether its perceptible affordances efficiently convey the available interactions. No verbalizations during the 10 seconds – they could generate false data, as people might have detained their gaze over a spot or feature they were trying to explain. Quantitative data was obtained with the Tobii x60 eye tracker (e.g. saccade plots and fixation times) and was compared with verbalizations (qualitative) in order to produce conclusions about how hidden interactions affected participants with different expertise. Explain what a FIXATION is (it grows the more one attends to a specific spot) and a SACCADE (the path that connects fixations). The inexperienced participant COULD NOT SEE THE DIFFERENCE between a PWP and a regular portal and could NOT SPOT the drag-and-drop interactions. Verbalizations confirmed this.
  • Mention they’re instructed to observe only, after each question, and then with the RTA technique they explain their interpretation of the screen based on the question asked. ‘Q1: What is this website for?’ Recurrence between FIXATIONS could indicate difficulty in understanding some user interface object. There’s a very interesting paper by my 2nd supervisor trying to relate patterns of fixations and saccades to specific usability problems. I’ll not discuss the results in depth – no time for this – but experience is a key feature in this kind of interaction (NO VISUAL CUES, no proper perceptible affordances to inform the newcomer about the CONFIGURATION and drag-and-drop features). People need to compare with what they know to SPOT this kind of interaction. HAD INEXPERIENCED PARTICIPANTS BEEN INFORMED properly – with efficient VISUAL CUES that they could CHANGE the initial CONFIGURATION and RE-ORDER objects (widgets in a dashboard) – would the experience have been different? Would eye-tracking data show this? I’m most certain it would. That’s why I’ll test this with my prototype later.
  • ‘Q1: What is this website for?’
  • ‘Q2: What can you do in this kind of website?’
  • ‘Q2: What can you do in this kind of website?’
  • ‘Q3: Do you think is possible to change your screen the way you like it?’ I was actually leading them towards my main question, even cueing them about this possibility. The inexperienced participants didn’t spot or suspect it, despite my cueing.
  • ‘Q3: Do you think is possible to change your screen the way you like it?’
  • ‘Q4: Is it possible to move anything in there?’ More fixations over the top bar and the top part of the widgets took place. The questions were of great influence over people’s gaze. As Jacques Aumont notes: the introduction of an order affects how a person scrutinizes an image and disrupts expected trends. We’ve seen landings over pictures which were accidental; it is hard to avoid looking at big pictures.
  • ‘Q4: Is it possible to move anything in there?’
  • PERCEIVED AFFORDANCE x (2) CULTURAL CONSTRAINT = CONVENTION + SYMBOLIC COMMUNICATION (SYMBOLIC MEANING IS ARBITRARY – A LEARNED CONVENTION). EXAMPLE OF A PERCEPTIBLE AFFORDANCE: SLIDER/BUTTON. The theory of Perceptible Affordances is used in HCI to better understand how to make a system usable and how to shape the functions that users anticipate a system may have. It teaches us that tools within systems should be identifiable; their use should be obvious, as well as their intended effect. What are the control actions, and what supposedly are the results? The theory of Perceptible Affordances resonates with and complements the very different stages of Norman’s Theory of Action. In particular it relates to the evaluation cycle, where the user is still assessing and trying to make sense of a system. This moment plays a pivotal role in the subsequent interaction between user and system. Fewer mistakes are made if the evaluation cycle is well supported by the interface design. When execution takes place, activation without awareness of the forthcoming results is less prone to happen: users will not be misled so often. The concept of affordance has been used in HCI to solve problems related to the usability of designed systems. The concept was originally coined by Gibson (1986) and introduced to the HCI field by Norman (1988), and was further appropriated by Gaver (1991), Bærentsen & Trettvik (2002), amongst others (Vyas, 2006). The concept of an affordance was coined by the perceptual psychologist James J. Gibson (1979) in his seminal book The Ecological Approach to Visual Perception, and introduced to the HCI community by Donald Norman in his book The Psychology of Everyday Things, from 1988. Donald Arthur Norman (born December 25, 1935) is an academic in the fields of cognitive science, design and usability engineering, and a co-founder of and consultant with the Nielsen Norman Group. He is the author of the book The Design of Everyday Things. Much of Norman’s work involves the advocacy of user-centred design; his books all have the underlying purpose of furthering the field of design, from doors to computers. ‘In today’s screen design sometimes the cursor shape changes to indicate the desired action (e.g., the change from arrow to hand shape in a browser), but this is a convention, not an affordance. After all, the user can still click anywhere, whatever the shape of the cursor. Now if we locked the mouse button when the wrong cursor appeared, that would be a real affordance, although somewhat ponderous. The cursor shape is visual information: it is a learned convention. When you learn not to click unless you have the proper cursor form, you are following a cultural constraint.’ Norman (1999). “Affordance” means what you can do to an object. For example, a checkbox affords turning on and off, and a slider affords moving up or down. “Perceived affordances” are actions you understand just by looking at the object, before you start using it (or feeling it, if it’s a physical device rather than an on-screen UI element). All of this is discussed in Don Norman’s book The Design of Everyday Things (a.k.a. POET: The Psychology of Everyday Things). Jakob Nielsen’s Alertbox, February 19, 2008: Top-10 Application-Design Mistakes, http://www.useit.com/alertbox/application-mistakes.html. We view the affordances of an artefact as the possibilities (for both thinking and doing) that are signified by the users during their interaction with the artefact. Acknowledging the work of Bærentsen & Trettvik, we propose an interaction-centred view of affordance, which we call Affordance in Interaction. From this view, the affordances of an artefact are not properties of the artefact but a relationship that is socially and culturally constructed between the users and the artefact in the lived world. This view strongly suggests that affordance emerges during a user’s interaction with the environment. In addition, the affordance-in-interaction view focuses on the ‘active interpretations’ of the users interacting with the artefact. From this view, users are actively participating in the interaction with the artefact and continuously interpreting the situation and constructing and re-building meanings about the artefact. We suggest that affordances can be better understood as an interpretative relationship between users and the artefact. Vyas et al (2006). SEMIOTICS – SIGN: Charles Morris (Syntactic – Semantic – Pragmatic); Peirce (Representamen – Object – Interpretant x Icon – Index – Symbol).
  • I made a distinction for the purpose of better clarifying how Perceptible Affordances operate in Post-WIMP: as a bridge between the interface layer (visual, acoustic, haptic) and the mode of interaction. The trick is to show what can really be done rather than what is apparently possible. As Norman (1999) argues, the cursor that changes from arrow to hand shape in a browser is visual information, a learned convention – a cultural constraint rather than a real affordance. PERCEIVED AFFORDANCE x (2) CULTURAL CONSTRAINT = CONVENTION + SYMBOLIC COMMUNICATION (SYMBOLIC MEANING IS ARBITRARY – A LEARNED CONVENTION). EXAMPLE OF A PERCEPTIBLE AFFORDANCE: SLIDER/BUTTON.
  • I consider the Evaluation Cycle paramount during user interaction.
  • We propose a view that identifies some fraction of a user interface as based on the Post-WIMP theme (1), plus some other fraction that provides computer-only functionality (2) that is not realistic. As a design approach or metric, the goal would be to make the first category as large as possible and use the second only as necessary, highlighting the tradeoff explicitly (Jacob et al, 2008). A prototype with Post-WIMP characteristics and novel technologies for interaction will be built in order to elicit user exploration of new and visually unfamiliar digital interfaces, to understand how users visually scan such interfaces to obtain the gist of their interactive potential. HCI theories and cognitive psychology will be used to better understand those issues. I believe both Norman’s Theory of Action and the theory of Perceptible Affordances can be adapted and updated to fit the research question and the identified problem. They could also be combined with more recent theories such as Piaget’s theory of INRC (Wigdor, 2011: 137-138) – Identity, Negation, Reciprocal and Commutative – and the scaffolding concept, based on the seminal thinking of the famous psychologist Vygotsky (Lajoie, 2005: 541-557). By developing a methodology for an empirical study which focuses on observation only, prior to any interaction, we aim to identify what elements people will focus on in NUI screens. We will be able to extract from their comments a more consistent understanding of what they misunderstood and even disregarded – specifically visual cues for potential interaction with novel and unfamiliar interfaces. With eye tracking I will be able to analyze how participants scrutinize the screen (yielding quantitative data). I will be able to cross-reference this raw quantitative data with participants’ utterances – which were organized into classes, with a general inductive approach (Thomas, 2006) for qualitative data analysis. Quantitative and qualitative data were then combined to produce conclusions about what kind of information can be obtained with the protocol – and how it can later be adapted to a NUI prototype or system. The way we perceive things is changing. We need to re-interpret the shift we’re living through and review the very language that would better convey the message of Post-WIMP/NUI and encompass the experience of the interaction itself. The interface should change to encompass TUI and NUI, rather than just co-exist with additive features in an already exceeded GUI. Research about the different kinds of (multi-sensory) feedback should take place in order to encompass more efficiently the possibilities of interfacing with eyes, gestures, voice, touch, emotions and the mind itself. ‘GUI additions such as Natural User Interfaces, Microsoft’s Surface Computer, eye-tracking and other haptic interfaces are not transforming the underlying problems created with the GUI.’ Sorensen (2009)
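The INPUT x OUTPUT decomposition described in the notes above can be sketched as a small lookup table. This is my own illustrative encoding, not part of the research; the case-study labels reflect my reading of the slides and are assumptions.

```python
# Illustrative sketch (not from the talk): encoding the INPUT x OUTPUT
# decomposition as a lookup, so each case study can be labelled.
# Category wording follows the slides; the case-study mapping is my reading.

CONFIGS = {
    ("old", "old"): "WIMP-GUI desktop (classic, already well researched)",
    ("new", "old"): "NUI input driving a conventional GUI (hybrid)",
    ("old", "new"): "Post-WIMP interface driven by mouse/keyboard",
    ("new", "new"): "Post-WIMP interface with NUI input",
}

def classify(inp: str, out: str) -> str:
    """Return the configuration label for a given (input, output) pairing."""
    return CONFIGS[(inp, out)]

# Hypothetical labellings of the four case studies, as I read the slides:
cases = {
    "MS Surface Media Player": ("new", "new"),
    "Windows 8 Metro + mouse/keyboard": ("old", "new"),
    "Tobii P-10 controlling Windows": ("new", "old"),
    "KinVi gestures over the desktop": ("new", "old"),
}
for name, (inp, out) in cases.items():
    print(f"{name}: {classify(inp, out)}")
```

The point of the lookup is only to make the slide's cross-over explicit: the interesting research territory is every cell except ("old", "old").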
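The fixation/saccade distinction explained in the notes above is commonly operationalised with a dispersion-threshold algorithm (I-DT): a fixation is a run of gaze samples whose spread stays within a small window for a minimum duration, and the paths between fixations are saccades. The sketch below is a minimal generic implementation for illustration only – it is not the Tobii x60 pipeline used in the study, and the thresholds are invented.

```python
# Minimal dispersion-threshold (I-DT) fixation detection sketch.
# Assumptions: `samples` are (x, y) pixel coordinates at a fixed sampling
# rate; `max_disp` and `min_dur` are illustrative thresholds, not the
# study's actual settings.

def detect_fixations(samples, max_disp=35.0, min_dur=6):
    """Return a list of (start_index, end_index, centroid) fixations."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow the window while its bounding-box dispersion stays small.
        j = i + 1
        while j < n:
            window = samples[i:j + 1]
            xs = [p[0] for p in window]
            ys = [p[1] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
                break
            j += 1
        if j - i >= min_dur:  # long enough to count as a fixation
            window = samples[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((i, j - 1, (cx, cy)))
            i = j
        else:
            i += 1
    return fixations

# A steady cluster followed by a jump reads as one fixation; the jump
# itself would be the saccade connecting it to the next fixation.
gaze = [(100 + k % 3, 200 + k % 2) for k in range(10)] + [(400, 50)] * 2
print(detect_fixations(gaze))  # one fixation around (101, 200)
```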
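The cross-referencing step described above – comparing per-participant gaze metrics against coded verbalizations – could look roughly like this. All participant names, counts and class labels here are invented for illustration; the study's actual classes came from a general inductive analysis of the transcripts (Thomas, 2006).

```python
# Hedged sketch of cross-referencing quantitative gaze data with coded
# verbalizations. Data below is fabricated for illustration only.

fixation_counts = {  # fixations on the drag-and-drop handles (hypothetical)
    "P1": 9, "P2": 0, "P3": 7, "P4": 1,
}
verbal_classes = {  # coded utterance class per participant (hypothetical)
    "P1": "spotted rearrangement", "P2": "saw static page",
    "P3": "spotted rearrangement", "P4": "saw static page",
}
expertise = {"P1": "advanced", "P2": "beginner",
             "P3": "advanced", "P4": "beginner"}

def cross_reference(threshold=3):
    """For each participant, report whether gaze and talk agree that the
    hidden drag-and-drop affordance was noticed."""
    rows = []
    for p in fixation_counts:
        gaze_hit = fixation_counts[p] >= threshold
        verbal_hit = verbal_classes[p] == "spotted rearrangement"
        rows.append((p, expertise[p], gaze_hit, verbal_hit,
                     gaze_hit == verbal_hit))
    return rows

for row in cross_reference():
    print(row)
```

In this toy data the two sources agree for every participant, which is the pattern the notes report: experienced participants both fixated on and verbalized the hidden interaction, while beginners did neither.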
    1. Beyond Mouse and Keyboard: Post-WIMP and Novel Forms of Interaction. Jacques Chueke, London, UK, May 2011. George Buchanan (1st Supervisor), Lecturer, Centre for HCI Design. Stephanie Wilson (2nd Supervisor), Lecturer, Centre for HCI Design. Master in Design, PUC-Rio, RJ, Brazil. PhD Researcher at the Centre for HCI Design, School of Informatics, City University London.
    2. Things are changing… iPad: 1 year old, growing among touch screens and print.
    3. Things are changing… iPad for cats???
    4. Overview • The introduction of novel hardware for computing and gaming during the last decade is changing the way we control everyday devices. • It provides NUI control methods, such as haptic (e.g. iPhone, iPad, MS Surface), gesture-based and voice (e.g. Nintendo Wii, Microsoft Xbox 360 console gaming with Kinect sensor) and eye-tracking interactions (e.g. Tobii P-10). MIT Media Lab: DepthJS – 2011. • One specific impact this has had is on the user’s control of such devices.
    5. It’s not supposed to be like this… Gmail Motion, April 2011.
    6. It’s not supposed to be like this… Gmail Motion, April 2011.
    7. New Modes of Interaction • PrimeSense / MS Kinect: Swim Browser. PrimeSense browser competition winner: Stolarsky ‘SwimBrowser’ – 2011.
    8. New Modes of Interaction • KinVi: Kinect Virtual Interface. PrimeSense browser competition 2nd place: Windows control with gestures – 2011.
    9. New Modes of Interaction • eviGroup Paddle Pro. Front-facing webcam to track head movements for cursor control – 2011.
    10. New Modes of Interaction • Hitachi: Gesture Remote Control TV Prototype. CES 2009: Hitachi’s Gesture Remote Control TV Prototype.
    11. Problem Statement • New command vocabularies have emerged and users do not know how to access or activate them. • This is a timely moment to research how people make sense of these technologies for CONTROL, whilst some Post-WIMP interfaces do not display appropriate visual cues for NUI interactions. NEW CONTROL METHODS (INPUT) x NEW INTERFACES (OUTPUT): OLD OUTPUT x NEW INPUT; NEW OUTPUT x OLD INPUT; NEW OUTPUT x NEW INPUT. WIMP-GUI (DESKTOP) x NUI (PHYSICAL); POST-WIMP x MOUSE/KEYBOARD; POST-WIMP x NUI (PHYSICAL).
    12. Case 1: Microsoft Surface. Media Player, Microsoft Surface, Nov 2010. POST-WIMP + NUI (PHYSICAL).
    13. Case 2: Windows 8 Metro Dashboard • The new Windows 8 with similar features as used in Windows Phone and the Xbox 360 Dashboard. Metro Dashboard: Windows 8 Start Screen, Feb 2012. POST-WIMP + NUI (PHYSICAL) x MOUSE/KEYBOARD.
    14. Example: GNOME 3 – desktop environment for GNU/Linux and UNIX-type operating systems. GNOME 3 hot corners and responsive interface, 2012. POST-WIMP + NUI (PHYSICAL) x MOUSE/KEYBOARD.
    15. Case 3: Tobii P-10 Eye Tracker: Gaze for Control.
    16. Case 3: Tobii P-10 Eye Tracker: Gaze for Control. Tobii P-10 at the SmartLab (UEL), Oct 2010.
    17. Case 3: Tobii P-10 Eye Tracker: Gaze for Control • Assistive Technology: Tobii P-10 at the SmartLab (UEL). Tobii P-10 equipment, Oct 2010. WIMP-GUI (DESKTOP) x NUI (PHYSICAL).
    18. Case 3: Tobii P-10 Eye Tracker: Gaze for Control. Mouse configuration pop-up for Windows control, Tobii P-10.
    19. Case 4: Tobii Lenovo: Gaze for Control • Tobii Lenovo / PCEye / Acuity. Tobii Lenovo, Jun 2011. WIMP-GUI (DESKTOP) x NUI (PHYSICAL).
    20. New Visual Cues/Feedback. UI affordances are shown on tap; they can be triggered by touch or proximity – the hover effect (pg. 153). Just-in-time chrome: applying the principle of scaffolding will lead you to far more successful multi-touch and gesture UIs (pg. 154). Tethers indicate that a size constraint has been reached on an item being scaled (pg. 91). The marking menu system (MS Surface) teaches users to make pen-based gestures (pg. 150). Wigdor, D., Wixon, D. Brave NUI World, 2011.
    21. Methodology. An empirical study with eye tracking (Tobii x60) was conducted to test the perceptible affordances of drag-and-drop interactions within an iGoogle personal web portal. This study served to create a protocol for analysis which focuses on the very first 10 seconds of a participant scrutinizing the screen while trying to respond to specific questions. Participant, 54, beginner expertise. Participant, 22, advanced expertise. Gazeplot comparison: beginner x advanced.
    22. Quantitative Data Analysis: Gazeplot. Resulting gaze plot from seven participants during the first 10 seconds after question 01.
    23. Quantitative Data Analysis: Heatmap. Resulting heat map from seven participants during the first 10 seconds after question 01.
    24. Quantitative Data Analysis: Gazeplot. Resulting gaze plot from seven participants during the first 10 seconds after question 02.
    25. Quantitative Data Analysis: Heatmap. Resulting heat map from seven participants during the first 10 seconds after question 02.
    26. Quantitative Data Analysis: Gazeplot. Resulting gaze plot from seven participants during the first 10 seconds after question 03.
    27. Quantitative Data Analysis: Heatmap. Resulting heat map from seven participants during the first 10 seconds after question 03.
    28. Quantitative Data Analysis: Gazeplot. Resulting gaze plot from seven participants during the first 10 seconds after question 04.
    29. Quantitative Data Analysis: Heatmap. Resulting heat map from seven participants during the first 10 seconds after question 04.
    30. Model 1: Perceptible Affordances • According to Nielsen (2008): "Affordance" means what you can do to an object. For example, a checkbox affords turning on and off, and a slider affords moving up or down. • "Perceived Affordances" are actions you understand just by looking at the object, before you start using it (or feeling it, if it's a physical device rather than an on-screen UI element). • In Gaver’s (1991) words, “…Perceptible Affordances are inter-referential: the attributes of the object relevant for action are available for perception. What is perceived is what is acted upon.”
    31. Perceptible Affordances in Post-WIMP. Post-WIMP GUI [INTERFACE LAYER] = OUTPUT; NUI [MODE OF INTERACTION LAYER] = INPUT; the (USER) PERCEPTIBLE AFFORDANCE bridges the two – LESS SYMBOLIC, MORE INTUITIVE, MORE REACTIVE (COMPUTER).
    32. Model 2: Norman’s Theory of Action. Execution Cycle: Formulation of Intention → Specification of Action Sequence → Execution of Actions → Interaction. Evaluation Cycle: Perception → Interpretation → Evaluation. Preece et al (2009: 121), quoting Norman (1986).
    33. Research Question 1: ACTIVATION. Post-WIMP GUI [INTERFACE LAYER] – OUTPUT – Evaluation Cycle; PERCEPTIBLE AFFORDANCE; NUI [MODE OF INTERACTION LAYER] – INPUT – Execution Cycle.
    34. Conclusions and Future Work • By developing a methodology for an empirical study, which focuses on observation prior to any interaction, we aim to identify what elements people will focus on in NUI screens. • I believe both Norman’s Theory of Action and the Perceptible Affordances theory can be combined with the more recent Piaget theory of INRC and the scaffolding concept, and used in my protocol for analysis. • A prototype with Post-WIMP characteristics and a NUI mode of interaction will be built in order to understand how users visually scan such interfaces to obtain the gist of their interactive potential. • Quantitative (eye tracking) and qualitative (verbalizations) data will be combined to produce conclusions about what kind of information can be obtained with the protocol – and how this data can be adapted to inform better-designed interactions with NUI systems.
    35. Thank you for your attention! Jacques Chueke – Jacques.chueke.1@city.ac.uk
    36. Bibliography. Beaudouin-Lafon, M. (November 2000). "Instrumental Interaction: An Interaction Model for Designing Post-WIMP User Interfaces". CHI '00: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. The Hague, The Netherlands: ACM Press, pp. 446–453. doi:10.1145/332040.332473. ISBN 1-58113-216-6. http://www.daimi.au.dk/CPnets/CPN2000/download/chi2000.pdf. Breeze, J. "Eye Tracking: Best Way to Test Rich App Usability". UX Magazine, accessed 25 November 2010. (http://www.uxmag.com/technology/eye-tracking-the-best-way-to-test-rich-app-usability) Buxton, W. (2001). "Less is More (More or Less)", in P. Denning (Ed.), The Invisible Future: The Seamless Integration of Technology in Everyday Life. New York: McGraw Hill, 145–179. ITU Internet Reports 2005: The Internet of Things – Executive Summary. Dam, A. van (February 1997). "Post-WIMP User Interfaces". Communications of the ACM (ACM Press) 40 (2): pp. 63–67. doi:10.1145/253671.253708. Dourish, P. Where the Action Is: The Foundations of Embodied Interaction. A Bradford Book: The MIT Press, USA, 2004. Ehmke, C. & Wilson, S. (2007). "Identifying Web Usability Problems from Eye-Tracking Data". Published by the British Computer Society. People and Computers XXI – HCI… but not as we know it: Proceedings of HCI 2007. Gaver, W. "Technology Affordances". ACM, 1991. 0-89791-383-3/91/0004/0079. Gentner, D. and Nielsen, J. (August 1996). "The Anti-Mac Interface". Communications of the ACM (ACM Press) 39 (8): pp. 70–82. http://www.useit.com/papers/anti-mac.html. Jacob, R. et al. (2008). "Reality-Based Interaction: A Framework for Post-WIMP Interfaces". CHI '08: Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems. Florence, Italy: ACM, pp. 201–210. doi:10.1145/1357054.1357089. ISBN 978-1-60558-011-1.
    37. Bibliography. McGrenere, J., Ho, W. (2000). "Affordances: Clarifying and Evolving a Concept". Procs. of Graphics Interface 2000, Montreal, May 2000. McNaughton, J. Utilizing Emerging Multi-touch Table Designs. Technology Enhanced Learning Research Group, Durham University. TR-TEL-10-01. Nielsen, J. (April 1993). "Noncommand User Interfaces". Communications of the ACM (ACM Press) 36 (4): pp. 83–99. doi:10.1145/255950.153582. http://www.useit.com/papers/noncommand.html. Norman, D. (1999). "Affordance, Conventions and Design". ACM Interactions (May + June 1999), 38–42. Picard, R. Affective Computing. The MIT Press, Cambridge, Massachusetts; London, England, 1998. Preece, J., Sharp, H., Rogers, Y. Interaction Design: Beyond Human-Computer Interaction (2nd edition). John Wiley & Sons, Ltd. West Sussex, UK, 2009. Ramduny-Ellis, D., Dix, A., Hare, J., Gill, S. "Physicality: Towards a Less-GUI Interface" (Preface). Procs. Third International Workshop on Physicality. Cambridge, England, 2009. Sorensen, M. "Making a Case for Biological and Tangible Interfaces". Proceedings of the Third International Workshop on Physicality. Cambridge, England, 2009. Sternberg, R. Cognitive Psychology. Wadsworth, Cengage Learning. Belmont, CA, USA, 2009. Vyas, D., Chisalita, C., van der Veer, G. "Affordance in Interaction". ECCE '06: Proceedings of the 13th European Conference on Cognitive Ergonomics: Trust and Control in Complex Socio-Technical Systems. ACM, New York, NY, USA, 2006. ISBN 978-3-906509-23-5. Wigdor, D., Wixon, D. Brave NUI World: Designing Natural User Interfaces for Touch and Gesture. Morgan Kaufmann Publishers, USA, 2011.
