Multimodal flexibility CHI2010 paper - Oulasvirta & Bergstrom-Lehtovirta

"A simple index for multimodal flexibility"



  • CLICK The paper is about the notion and method of MULTIMODAL FLEXIBILITY - MF denotes users’ ability to use a computer with whatever sensory capacities they have. CLICK Our goal has been to develop the SIMPLEST possible method that is GENERALLY APPLICABLE and INFORMATIVE for design purposes - you can take any prototype, run a study with a set of chosen modalities, and get an idea of how flexible your users are and WHY
  • 1) The MOTIVATION for the method comes from the FACT that users of mobile devices are not fully able to concentrate on interactive tasks. 2) There is a competition going on between the environment and the UI over the users’ modalities - some of the causes are physiological, such as fingers freezing, some environmental, like loud sounds or trembling, and some cognitive, such as the tasks you adopt and your time-sharing strategies. 3) SO LET’S say you are developing a mobile device, a new application, or a new interaction technique - you would of course like the users to be able to do things more flexibly and attend to their environment - maximally usable across these conditions and, vice versa, MINIMALLY restrictive in terms of the modalities the UI DEMANDS - because such situations are going to lead to slowdowns, disuse, and poor user experience
  • 1) Empirical techniques and analytical techniques. 2) LIMITATIONS: - limited to 1-2 modalities - MFI generalizes to any N - they assume perfect knowledge of tasks and conditions so that these can be replicated - in principle MFI requires no such knowledge - they require special equipment - MFI can also be done with CHEAP equipment. 3) The point is that MFI would be usable in early-stage development to get feedback on interface prototypes
  • SO WHAT IS MFI? 1) MFI is a single quantitative index that quantifies your users’ ability to perform in conditions where modalities cannot be fully allocated to the task. 2) It is 0 when ALL senses under study MUST be allocated to the task, and 1 when none of them MUST be. 3) D SO HOW DO WE GET TO THAT?
  • MFI is based on an extension of a very simple idea: BLOCKING. 2) Let’s see how it works. The following is from the LG Texting Championships organized this year, where the world’s quickest texters were competing in a variety of tasks, one of which was blindfolded texting.
  • 1) The idea is to block or distort a modality to see its importance for performance. 2) You choose a set of modalities for your study, and you choose ways to implement blockings that make sense for you. 3) It’s not a new idea, but it is a generalization of old ideas and an application of them to the HCI and usability testing context. - Already the Gestalt psychologists utilized the paradigm of subtraction, but to my knowledge it was Posner’s book Chronometric Explorations of Mind that systematized this sort of methodology. 4) In the paper you can find ideas we collected. LET’S TAKE A LOOK AT AN EXAMPLE
  • 1) Explain calculations. 2) Explored with multiple alternatives. 3) This formulation has good properties. - First, the index ranges from 0 to 1. - Second, the index is not determined by absolute performance (although, as we will discuss, it can surreptitiously affect it). - Third, statistical testing can be performed on MFI.
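The calculation itself fits in a few lines of code. The following is only an illustrative sketch, not the authors' implementation; the assignment of example scores to blocking conditions is inferred from the two-modality worked example on slides 9 and 10.

```python
def mfi(scores, baseline):
    """Multimodal flexibility index: the mean of the baseline-normalized
    scores over all conditions in which at least one modality is blocked.
    `scores` maps each condition (a frozenset of blocked modalities) to
    the measured performance; `baseline` is the nothing-blocked condition."""
    others = [v / scores[baseline] for k, v in scores.items() if k != baseline]
    return sum(others) / len(others)

# Toy data from the talk's two-modality example:
# baseline 5.0; audition blocked 1.0; vision blocked 2.5; both blocked 0.
scores = {
    frozenset(): 5.0,
    frozenset({"audition"}): 1.0,
    frozenset({"vision"}): 2.5,
    frozenset({"audition", "vision"}): 0.0,
}
print(round(mfi(scores, frozenset()), 2))  # -> 0.23
```

Normalizing by the baseline is what makes the index independent of absolute performance, and dividing by the number of non-baseline conditions (2^n - 1 for n modalities) keeps it in the 0-to-1 range.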
  • THIS IS A DETAIL YOU CAN READ MORE ABOUT IN THE PAPER, BUT the power of the method is that it captures ALL possible interactions between two modalities! It may also happen that two modalities interact in an interesting way. The paper tries to summarize all possible outcomes for two modalities. Let me mention two. For example, intersensory facilitation - the help you get from audition when it is used together with vision, in a task like catching a bumblebee flying in the room, is larger than when audition is used alone. Or mutual distraction can happen when poorly designed audio cues distract spatial attention in a task. * Joanna will now talk about how MFI was used in a real experiment
  • We conducted a comparative study of three mobile text-input interfaces across three modalities - auditory, tactile, and visual - to assess whether MFI works in practice.
  • We took three commonly available and comparable mobile text-input interfaces. These devices are fairly similar; however, slight differences can be found in their use of modalities. For example, the tactile feedback of the touchpad differs from that of the two physical keypads. Three modalities - vision, tactition, and audition - were studied, all found important in previous studies. We had 12 participants, young, with normal vision, who completed all 8 combinations of the 3 modalities for all 3 interfaces. One subject took about 1 hour, with 2 task repetitions.
  • 1. Hearing. 2. Vision - blocked toward the device only - to retain the possibility of orienting glances to maintain balance. 3. In designing the blocking of tactile feedback, the edges of the buttons and button releases were the focus: a thin plastic layer, with the layout printed on it, blocked most of this tactile feedback, as well as the vibrotactile feedback. Experimental task: the performance variable was the number of 80%-correct words transcribed in half a minute, based on Levenshtein distance
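Such a word-level score might be computed as sketched below. This is my own reading of the metric - a word counts if its edit distance to the target word is at most one fifth (20%) of the target's length - and the paper's exact scoring rule may differ.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct_words(typed, target):
    """Count transcribed words that are at least 80% correct, i.e.
    whose edit distance to the target word is at most 1/5 of the
    target word's length (integer comparison avoids float issues)."""
    return sum(1 for t, ref in zip(typed.split(), target.split())
               if 5 * levenshtein(t, ref) <= len(ref))

print(correct_words("helo world", "hello world"))  # -> 2
```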
  • Not very surprisingly, the two QWERTY keyboards performed better than the traditional 12-button keypad in the baseline condition, where no modalities were blocked. So what happened with the blockings? Here we show example data from three modality-blocking conditions for all three phones.
  • Generally, vision was an extremely important modality for both QWERTY keyboards. In the two bottom videos you can see what happens when it is blocked. The score on the right drops from the unblocked five to 1 and 0. So it is virtually impossible to type anything in 30 seconds.
  • The same applied to the touch QWERTY. We were surprised to find that the touch and physical QWERTYs were both equally devastated by the removal of vision.
  • Here’s what happened with ITU-12. This subject was exceptionally quick with ITU, but reflects the observation that almost all users were able to maintain at least some performance in vision-blocked conditions. Their tactic was based on the fact that they could also reliably locate the buttons in the mid-column of the keypad. However, when feedback from the button edges was blocked, this tactic became more difficult and performance suffered.
  • The reason QWERTY performance was worse than this ITU performance when vision is blocked is that most users are totally UNABLE to distinguish keys in the middle part of the keyboard with only their fingers, even with physical buttons. However, we know from the texting champion that finding the keys with only the tactile modality available can be learned with practice. Pay attention to how she sweeps her fingers across the buttons to localize the keys when blindfolded. CLICK
  • The MFI was calculated, as shown earlier, from the performance scores of our mobile text-input study. In contrast to the absolute performance scores, ITU-12 was best in terms of MFI, indicating that reallocating modalities does not decrease text-input performance as much as with the two other tested interfaces.
  • The dependence values represent how much performance decreased when a particular modality was removed. The D-values indicate that all three interfaces were vision-dependent, but the ITU-12 interface showed this effect the least. Audition in general was not influential in these text-input interfaces. Averaging over the three interfaces, we found that vision and tactition are synergistic, meaning that the two modalities together result in better performance than the sum of their separate performances. Now back to Antti and the conclusions
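As a quick illustration of how dependence values fall out of the blocked-condition scores, here is a sketch in the same spirit (my own code, not the authors'; it reuses the normalized scores of the two-modality example on slide 10, with conditions keyed by the set of blocked modalities):

```python
def dependence(norm_scores, modality):
    """Average drop in normalized performance when `modality` is blocked,
    taken over all pairs of conditions that differ only in whether
    `modality` is blocked."""
    drops = []
    for blocked, score in norm_scores.items():
        if modality not in blocked:
            # Compare this condition against the one that additionally
            # blocks `modality`.
            drops.append(score - norm_scores[blocked | {modality}])
    return sum(drops) / len(drops)

# Baseline-normalized scores from the slide-10 example.
norm = {
    frozenset(): 1.0,
    frozenset({"audition"}): 0.2,
    frozenset({"vision"}): 0.5,
    frozenset({"audition", "vision"}): 0.0,
}
print(round(dependence(norm, "audition"), 2))  # -> 0.65
```

A value near 0 means performance is unaffected by losing that modality; a value near 1 means the interface is entirely dependent on it.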
  • 1) Meaningful: it operationalizes a difficult concept, is lab-based and inexpensive, and captures a wide range of phenomena - as the study shows, it can reveal interesting information of potential value. 2) It adds to the traditional toolbox of evaluation methods. - For example, KoneCranes, one of the largest crane manufacturers, asked us to evaluate their new prototype of a crane control interface that added auditory and visual feedback to joystick control, which typically doesn't have it. They expected the prototype to outperform the traditional interface, but it didn't - at least in undistracted conditions. The benefits were only seen in the MFI study, where we blocked the senses and found that the prototype did indeed have a somewhat higher MFI.
  • In the paper there is extensive discussion of the potential pitfalls in applying the method, and of outright limitations. Let me mention TWO of the most significant challenges for developing the method. 1) Blocking is crude - an on/off operationalization that misses several tactical and strategic opportunities users have, such as the timing of when to allocate a modality away and back. On the other hand, there may be interference effects across tasks that significantly worsen performance; MFI may give too optimistic an estimate. How well these estimates predict real-world multitasking ability remains an issue for future study. 2) No free lunch: agnosticism about real-world tasks
  • We envision that the method will eventually find use outside of mobile HCI. Do you have a prototype you want to be usable in exotic circumstances, or a prototype that would impose changes on users’ sensory allocation?
  • We are working hard to make it generally accessible. www.hiit.fi/mfi GIVE IT A SHOT!

Multimodal flexibility CHI2010 paper - Oulasvirta & Bergstrom-Lehtovirta Presentation Transcript

  • 1. Antti Oulasvirta & Joanna Bergström-Lehtovirta A Simple Index for Multimodal Flexibility
  • 2. Noise, fatigue, multitasking, abrupt events, interruptions, lower transduction capacity
  • 3. Related techniques
    • Visual occlusion
    • Secondary tasks that overload cognitive faculties
    • Dual-tasking with natural secondary tasks
    • Changing modality of information display/access
    • Task analysis
    • Cognitive modeling
  • 4. MFI 0 ... 1 D
  • 5. The LG Texting Championships
  • 6. Blocking methodology
  • 7.  
  • 8. Example: 2 modalities Ear protection Cardboard Main task: Text copying, 30s
  • 9. Example scores for the four {Audition, Vision} conditions, from both modalities available (baseline) to both blocked: raw scores 5.0, 1.0, 2.5, 0; normalized by the baseline, 1.0, 0.2, 0.5, 0. MFI = (0.2 + 0.5 + 0)/(4 - 1) = 0.23
  • 10. Same scores; dependence on audition: D a = ((1.0 - 0.2) + (0.5 - 0))/2 = 0.65
  • 11. D a and D b ? Bimodality indices
  • 12. Study: Mobile Input Interfaces A Simple Index for Multimodal Flexibility Oulasvirta & Bergstrom-Lehtovirta
  • 13. Interfaces Physical QWERTY Touchpad QWERTY ITU-12
  • 14. Ear protection Plastic layer Cardboard Main task: Text copying, 30s
  • 15. Baseline performance (80% correct words): 6.8, 7.8, 4.3
  • 16. 5 1 0 Physical QWERTY 80% correct words Audition Vision Tactition Audition Vision Tactition Audition Vision Tactition Data from subject #11 Video Video Video
  • 17. 7 2 1 Touch-QWERTY 80% correct words Audition Vision Tactition Audition Vision Tactition Audition Vision Tactition Data from subject #11 Video Video Video
  • 18. Audition Vision Tactition Audition Vision Tactition Audition Vision Tactition 8 3 2 ITU-12 80% correct words Data from subject #11 Video Video Video
  • 19.  
  • 20. MFI
  • 21. D
  • 22. Conclusion A Simple Index for Multimodal Flexibility Oulasvirta & Bergstrom-Lehtovirta
  • 23. A simple index
    • Important for mobile HCI
    • Meaningful
    • Informative
    • Inexpensive
  • 24.  
  • 25. Running, watching, AND interacting? WHAT!? I can’t hear you! Can you find the snooze button? Surgical gloves block tactile input?
  • 26. www.hiit.fi/mfi
  • 27.
    • ACKNOWLEDGEMENTS
    • This work was funded by the TEKES project Theseus and by the Emil Aaltonen Foundation.
    • We thank Ville Nurmi, Miikka Miettinen, Tuomo Kujala, Lingyi Ma, Pertti Saariluoma, Tero Jokela, Saija Lemmelä, Kimmo Rantala, Jari Laarni, Eve Hoggan, Poika Isokoski, Miika Silfverberg, Céline Coutrix, Heikki Summala, and Johannes Tarkiainen for help and comments.
    A Simple Index for Multimodal Flexibility Oulasvirta & Bergstrom-Lehtovirta www.hiit.fi/mfi