Playing Games without Visual Feedback
Eelke Folmer, Associate Professor
Human Computer Interaction Lab
University of Nevad...
reno

Human Computer Interaction Lab
University of Nevada, Reno
Research Interests
Accessible Computing

Games
Natural

Visual Impairments Spatial Perception

Augmented

Interfaces

Crow...
Overview of Talk
What are some of the barriers that blind people
face when playing video games?
Why should video games b...
How do we play games?

Player-Game Interaction Research
University of Nevada, Reno
How do we play games?

Player-Game Interaction Research
University of Nevada, Reno
How do we play games?

Player-Game Interaction Research
University of Nevada, Reno
Abstraction

1. feedback
2. fire gun
3. pull trigger

1. feedback
2. kick & punch
3. move feet & arm

1. feedback
2. aim f...
Interaction Model

Player-Game Interaction Research
University of Nevada, Reno
Gaming with Visual Impairment

retintis pigmentosa

color blindness

macular degeneration

blindness

Player-Game Interact...
Interaction Model

Player-Game Interaction Research
University of Nevada, Reno
Sensory Substitution Research
closed captioning

Visual
information

Haptic
braille

Audio
speech synthesis

Player-Game I...
Research Challenges

information
Visual

Haptic
information

Audio
information
Player-Game Interaction Research
University...
Why should people with
Visual Impairments be
able to play video
games?
Benefits

Social

Employment

Education

Health Player-Game Interaction Research

University of Nevada, Reno
Visual Impairment

1.3 million people who are blind (60k kids) [US]
6.8 million who have a visual impairment [US]
Popul...
“Extreme Interaction” Design
Design Interfaces for “extreme” players
Games are complex real time simulations
Disability...
Innovation by solving extreme problems

?

?

Player-Game Interaction Research
University of Nevada, Reno
Example accessible interfaces

Natural Language Virtual World Interfaces
»Seek-n-tag
»Text SL
»Syntherella
Non-Visual Na...
TextSL
Natural Language Virtual World Interface

HCC-Small: TextSL: A Virtual World Interface for Visually Impaired, Natio...
Virtual Worlds
Education

Museums

Communities

Player Game Interaction Research
University of Nevada, Reno
Web Accessibility

Screen
reader

text/images

“ The eiffel tower was built in ...”
“alttag:picture of the eiffel tower”

...
Virtual World Accessibility

Screen
reader
pixels

“.............................”

Player Game Interaction Research
Unive...
Inspiration

Zork

Text SL

Command Line Interface web based interface
» screenreader / speech
» Text only
basic comman...
Natural Language Interpreter
walk
go

give
move

my

move

to

flower
the

to

jane

chair

move

natural language

prepos...
SL Content Problems
this object has a really long name

chest

?

car

chair

?
car

table

?

bike
bike

jill

wall

jack...
Navigation
>move north 10
>move to name (object / avatar)
>teleport to Help Island

Basic Obstacle avoidance
Teleport wh...
Communication
>say “Hello”
>whisper to Jack “bla”
>mute Jack

WTF?

Human Computer Interaction Research
University of Neva...
Interaction
>sit on chair

>touch billboard

- no textual output (yet)

Human Computer Interaction Research
University of ...
Iterative query mechanisms
chair
fire
dog
jill

>describe
“you see avatar jill, and the
objects chair, a fire and a dog”
>...
Syntherella: Scene Synthesizer
car

chest

object with a really long name

chest
chest

dog

?

dog

table

chair

?

?

b...
Abstraction
pig

cat
dog

car
truck

car

animals
bike

dog

cat

object
taxonomy
pig

bike

vehicles
car

truck

>describ...
Detailing

>describe
“there is a cat and a dog and a car
>describe
“to your left is a black cat and a brown dog who is bar...
User Studies

slower

slower

same

slower
Player Game Interaction Research
University of Nevada, Reno
Lack of Meta data
?

?
?

?

?

?
?

>describe
“there is an object an object an object an object an object etc...
Player G...
Labeling images vs 3D objects

- Segmentation
- 2D info

- Defined in Isolation
- Solid bodies (prims)
- Efficient discrim...
Construct Classifier

object categories

shape descriptors

unknown object

Player Game Interaction Research
University of...
how to derive object categories?

dog?

Player Game Interaction Research
University of Nevada, Reno
Use Human Computation
find a cat
label

label

label

Player Game Interaction Research
University of Nevada, Reno
Seek-n-Tag
Goal:
Find a Cat
»Player participates in Scavenger
✔ + 5s
hunt
Find a Tree
Rules:
»30 seconds to “tag” an obj...
User study
object
object
object

object

object
object

object
object

object

Manual labeling versus labeling with a gam...
Manual labeling using AMT
90%

1000
500

Player Game Interaction Research
University of Nevada, Reno
Currently Researching
Interaction with interactive Objects
>touch horse

>touch billboard

Collect descriptions using AMT
Semantic analysis of...
Content Creation
>create green cube “block1”
>create green cube “block1”
>link block1 block2

>create brown
d0g

Human Com...
Publications / More info

• Bugra
Oktay,
Eelke
Folmer.
Syntherella: A Feedback Synthesizer for Efficient Exploration of Vi...
VI Fit

Non Visual Natural User Interfaces

HCC: Small: Proprioceptive Displays to Engage Blind Users into Healthy Whole
B...
Gesture Based Interaction

Player-Game Interaction Research
University of Nevada, Reno
Reliance on Visual Cues
dodge

♫

♫

Player-Game Interaction Research
University of Nevada, Reno
visual impairment & obesity

60,000 Blind Children in US

More obese

Player-Game Interaction Research
University of Nevad...
Barriers to physical activity

Rely on sighted guide

Safety

Player-Game Interaction Research
University of Nevada, Reno
Exercise games

★ Moderate to Vigorous PA (MVPA)
★60 min of MVPA daily (CDC)

★Independently
★Safer
Player-Game Interactio...
Research question

Player-Game Interaction Research
University of Nevada, Reno
Gestures w/o visual cues?

?
Player-Game Interaction Research
University of Nevada, Reno
Physical Activity = SpatialTemporal
where?
when?

Player-Game Interaction Research
University of Nevada, Reno
Target Acquisition
Exergames are temporal-spatial

Kinect Sports
★Temporal/Spatial: Jump
when hurlde is close

Eye Toy Kinetic
★ Spatial: Pun...
Sensory Substitution Research
Visual
information

Haptic

?

target acquisition
(directed) gestures

Audio

?
Player-Game ...
Constraints with regard to SS
cool!!

Social Contexts

Audio

Music based

Haptic?
Player-Game Interaction Research
Univer...
Temporal Challenge

VI Tennis

Wii Sports
serve

♫

bounce

♫

Bu
zz

return
Player-Game Interaction Research
University o...
Sensory Substitution
WII Tennis

VI Tennis

Human Computer Interaction Research
University of Nevada, Reno
Spatial Challenge

Wii Bowling

VI Bowling

Human Computer Interaction Research
University of Nevada, Reno
Wii Bowling

VI Bowling
Studies @ Camp Abilities

Vi Tennis/Bowling yield Active energy expenditure
that is considered moderate physical activity...
User studies
vigorous

Upper Body

moderate

light
VI Tennis

Whole Body
Player-Game Interaction Research
University of Ne...
Pet-n-Punch

Player-Game Interaction Research
University of Nevada, Reno
Instrumentation

Player-Game Interaction Research
University of Nevada, Reno
Real time Video Analysis

> <MAXR>999</MAXR>
></SECTION>

<MING>230</MING> <MAXG>999</MAXG> <MINB>0</MINB> <MAXB>100</MAXB...
VI Ski

QuickTime™ and a
decompressor
are needed to see this picture.

Player-Game Interaction Research
University of Neva...
Related Projects
Publications / More info
•Tony

Morelli,
Eelke
Folmer.
Real-time Sensory Substitution to Enable Players who are Blind to P...
Upcoming SlideShare
Loading in …5
×

Game accessibility at hanze hogeschool

595 views
464 views

Published on

Published in: Education, Technology
0 Comments
0 Likes
Statistics
Notes
  • Be the first to comment

  • Be the first to like this

No Downloads
Views
Total views
595
On SlideShare
0
From Embeds
0
Number of Embeds
38
Actions
Shares
0
Downloads
6
Comments
0
Likes
0
Embeds 0
No embeds

No notes for slide
  • {"49":"The way we interact with software and games in particular is increasingly modeled after how we interact with the real world as such interaction is most natural to us. \nAll major console manufacturers currently offer gesture based interaction either using inertial sensing with a handheld controller or using computer vision.\nVideo games really push the envelope with regard to interaction design \nBecause gesture based games are intuitive to play they have successfully appealed to non traditional games such as elderly as well as facilitate more social forms of gaming. \n","38":"So the idea is to take a number of example objects for common object categories such as animals, vehicles or furniture and find a suitable shape descriptor that can recognize objects quickly. the discussion of what type of shape descriptor to use is outside of the scope of this paper. The problem here is how do we get a set of accurate training data? We can’t really use existing objects in Second life as this data is inherently unreliable. \n","27":"We identified two big problems when trying to compile meaningful narratives from object names. . \n1) Second Life is very densily populated with objects. We build a bot that acts like a spider which analyzed large regions of Second Life and we found that on average you can find 13 objects within a 10 meter radius around the user. Some names of objects may be really long you may easily overwhelm the user with feedback when you all put it through a screen reader. Also it makes it very hard to navigate anywhere within second Life without running into things. \n2) Another problem which is actually the opposite problem of the first is that many objects actually lack meta data. When you create an object in second Life you can give it a name but as most content creators figure that you can see what the object is almost 40% of the objects in Second Life are called “object”. This is a problem you see “object object object object” \n","66":"In user studies we found this game to engage blind children into moderate physical activity. This is high enough to be healthy but doesn’t contribute significantly do the daily recommended amount of physical activity for children which is 60 minutes of MVPA with at least 20 minute of vigorous activity (running). Because Wii based games only uses motions of the dominant arm, it makes sense to explore exergames that utilize whole body gestures because that should lead to higher active energy expenditures. \n","55":"how can we engage a blind user in gesture based interaction without using visual cues? \n","33":"We identified two major problems when trying to compile meaningful feedback in textsl. \n1) the first problem is that Second Life is very densily populated with objects. We build a a spider that analyzed large regions of Second Life and we found that on average you can find 13 objects within a 10 meter radius around the user. This is a problem as also some object names may be really long which may easily overwhelm the user with feedback when you feed it all through a screen reader. \n2) Another problem which is actually the opposite problem of the first is that many objects actually lack meta data. When you create an object in second Life you can give it a name but as most content creators figure that you can see what the object they leave the name of their objects to their default name which is “object”. After sampling 433 regions we found that 31% of the objects in virtual worlds are called object. This is a problem for making virtual worlds accessible as similar to images that lack an alt tag, these objects are meaningless to a screen reader user. \n","22":"Virtual worlds such as Second Life or openSim have enjoyed significant popularity over the past years and they offer all sorts of social experiences to their users. \nFor example, many universities and colleges use virtual worlds as virtual classrooms. Companies such as IBM use virtual worlds as collaborative workspaces. \nMuseums like the Tech Museum in San Jose provide artists with space to exhibit their virtual artwork. \nAnd in Second Life one can find hundreds of communities where people with similar interests can meet up and socialize. \n","61":"We used haptic and audio cues to represent events that require a player’s response. \nFor example, the distance of the ball to the player is indicated using vibrotactile feedback provided with a wii remote controller. It is important to notice that haptic feedback is a really good modality of feedback for exergames because it is often difficult to use audio as exergames often feature music or are played in social environments. We found that haptic feedback gives much better performance than audio feedback alone. \n","50":"Gesture based interaction typically relies upon visual cues that indicate what input to provide and when. \nFor example in wii sports boxing if your opponent is punching you you need to dodge or block their moves and vice versa. \nIf you are unable to see these visual cues, you are unable to provide the right input and play the game. \nAlthough audio feedback is being provided, this doesn’t contain any information on what type of input to provide and when. \nSo why is it important for individuals with visual impairments to play gesture based games? \n","56":"Physical activities consist of two types of challenges; namely spatial and temporal challenges: \nTake for example basketball: \n- The spatial challenge involves deciding where to shooting the ball \n- and the temporal challenge deals with when to shoot the ball. \n","34":"We identified two major problems when trying to compile meaningful feedback in textsl. \n1) the first problem is that Second Life is very densily populated with objects. We build a a spider that analyzed large regions of Second Life and we found that on average you can find 13 objects within a 10 meter radius around the user. This is a problem as also some object names may be really long which may easily overwhelm the user with feedback when you feed it all through a screen reader. \n2) Another problem which is actually the opposite problem of the first is that many objects actually lack meta data. When you create an object in second Life you can give it a name but as most content creators figure that you can see what the object they leave the name of their objects to their default name which is “object”. After sampling 433 regions we found that 31% of the objects in virtual worlds are called object. This is a problem for making virtual worlds accessible as similar to images that lack an alt tag, these objects are meaningless to a screen reader user. \n","23":"Despite their popularity, virtual worlds are not accessible to users with visual impairments. \nThis is because virtual worlds are often entirely visual and lack any textual description that can be read with a screenreader or tactile display. To allow users with visual impairments access to virtual worlds we developed a text based natural language interface called textsl, which can extract a textual representation from Second Life that can be read with a screen reader. \n","12":"A technique can be used called sensory substitution. Humans can perceive information in three different modalities, e.g. using visuals, audio and haptic. Sensory substitution is converting feedback from one type of modality into another modality. \nSensory substitution has primarily been developed for users with sensory impairments, Examples include representing audio dialog using closed captioning or representing visual information using braille and speech synthesis.\n","62":"So we identified \n","51":"Latest census results show there are currently about 60,000 children in the US who are blind.\nThese children have much higher levels of obesity than their sighted peers and often exhibit delays in motor development because they have limited opportunities to be physically active. \n","68":"Rather than hacking or reverse engineering the kinect we feed a video stream of the game to an external laptop using a USB video capture unit. \nThough the xbox can provide HD video we just use the NTSC composite video to limit processing time. \nA C# application analyzes a 640x480 video stream and if it detects a visual cue we provide haptic feedback to up to four wii remotes that are connected to the laptop. \n","57":"A spatial challenge involves any physical activity, whether it be throwing, hitting, kicking that uses some form of aiming at a target located in 3D space around the user. Users typically acquire such targets visually, so we need to develop a way to do this using a different modality. \n","24":"Despite their popularity, virtual worlds are not accessible to users with visual impairments. \nThis is because virtual worlds are often entirely visual and lack any textual description that can be read with a screenreader or tactile display. To allow users with visual impairments access to virtual worlds we developed a text based natural language interface called textsl, which can extract a textual representation from Second Life that can be read with a screen reader. \n","13":"A technique can be used called sensory substitution. Humans can perceive information in three different modalities, e.g. using visuals, audio and haptic. Sensory substitution is converting feedback from one type of modality into another modality. \nSensory substitution has primarily been developed for users with sensory impairments, Examples include representing audio dialog using closed captioning or representing visual information using braille and speech synthesis.\n","63":"Many physical activities involve spatial challenges that predominantly rely upon vision, e.g., making a gesture aimed at a target that is acquired visually. VI Bowling is a tactile/audio version of Wii Sports Bowling, where players perform a spatial challenge, e.g., finding the location of the ball using vibrotactile feedback. Players wield their Wii remote like a metal detector, directional vibrotactile feedback indicates when their remote is pointed at the pins. A user study with 6 blind individuals found that players were able to aim their ball with an average error of 9 degrees and players were able to get into light physical activity. \n","52":"Some of the barriers to participate in physical activity they encounter include: \n(1) they rely on others to help them exercise, for example, they can cycle or run but they need a sighted guide, which may not always be available. and \n(2) safety issues as individuals with visual impairments have a much higher chance to get injured when exercising. \n","41":"We defined a game called seek-n-tag which works as follows: \nusers participate in a scavenger hunt game. Users have 30 seconds to tag a certain object. You score 50 points per tagged object\nYou initially start with 900 seconds and the game is over when you run out of time. \n","69":"Using an XML configuration file you can define key sections to look for certain pixels of a specified color. When this test is true an action can be defined. \nThis can either be providing a haptic cue to up to four wii remotes connected to the laptop or playing an audio cue. \nTo aid in creating these configuration files sighted users can make a screenshot using the trigger button on their wii remote. \nWe defined key sections for the javelin throw and the long jump game and tested each game with a legally blind individual and adjusted it until he could play each game without errors. \n","25":"We needed something more flexible to support various interactions and for the design of TextSL we let ourselves inspire by the interaction of multi user dungeon games. Multi used dungeon games are the precursors of virtual worlds and they allow multiple users to interact with each other and the game through a command based text interface. Multiuser dungeon games support all the functionality we need in a virtual world browser namely exploration, navigation, interaction and communication. \nTo make virtual worlds accessible we developed an interface called textsl, this is a standalone application that can extract a textual description from a virtual world that can be read with a screen reader. We went for a screen reader based approach as users who are visually impaired use these a lot and they allow for detailed customizations often more than what can be done with synthetic speech provided by an API. We use the LibSecondLife library which is an API for connecting to the second life servers. We encapsulated this library so we can easily connect to other virtual worlds allowing for Text SL to be used as a Virtual world agnostic research platform. Because we don’t do any rendering it can run on a low end machine, possibly even on a smartphone. \n","64":"So we created our own PC version of Wii Bowling called VI Bowling, which communicates with a wii remote using bluetooth and we implemented the exact same motions as Wii bowling to play the game. \nThis is the sensory substitution graph which maps how particular visual cues have been mapped to either audio or tactile cues. \nThe throwing part only involves secondary cues and since these are encoded in visual as well as audio, VI bowling just implements the same audio cues and the game is still playable, the only thing that we added were speech cues that indicate how many pins had been hit. \nThe aiming part involves primary visual cues, specifically:\n","53":"Recent studies show that gesture based games, specifically exercise games can achieve moderate to vigorous levels of physical activity, the level that is high enough to be considered healthy. The center for Disease Control in the United states recommends children to engage into 60 minutes of MVPA daily. \nWe identified that exergames may could increase existing exercise opportunities for individuals who are blind as they overcome some of the barriers that they face. \nFirst exergames can be played independently as you can play against the computer or against a friend online.\nSecond because they are performed in place the risk of getting injured while exercising is minimal. \n","59":"A technique can be used called sensory substitution. Humans can perceive information in three different modalities, e.g. using visuals, audio and haptic. Sensory substitution is converting feedback from one type of modality into another modality. \nSensory substitution has primarily been developed for users with sensory impairments, Examples include representing audio dialog using closed captioning or representing visual information using braille and speech synthesis.\n","37":"The problem of virtual objects lacking a name is very similar to labeling web images. \nHowever the way virtual objects are defined in virtual worlds allows for an approach using automatic recognition. \n3d object recognition is a bit easier than recognizing 2d images as there are no segmentation problems to solve as virtual world objects are defined in isolation of the scene they are in. In Second Life objects are defined using 7 different solid body entities that are defined analytically which allows for creating suitable shape descriptors that can be discriminiate object categories very efficiently. \n","26":"Users interact with second life using a number of commands for exploration, communication, interaction. WE don’t support content creation yet. We also offer a help command and a tutorial. an interpreter allows us to use natural language. A number of synonyms verbs are mapped onto the same internal command. The interpreter also allows for prepositions and adjectives which allows for efficiently supporting a large number of different interaction with avatars and objects. E.g. give my flower to jane, something which would be hard to do using shortcuts. \n","54":"However as mentioned before because gesture based games rely on visual cues, the research question becomes\n"}
  • Game accessibility at hanze hogeschool

    1. 1. Playing Games without Visual Feedback Eelke Folmer, Associate Professor Human Computer Interaction Lab University of Nevada, Reno
    2. 2. reno Human Computer Interaction Lab University of Nevada, Reno
    3. 3. Research Interests Accessible Computing Games Natural Visual Impairments Spatial Perception Augmented Interfaces CrowdSourcing 3D interfaces Sonification Sensory Automotive Interfaces Reality Substitution Wearable Haptics Human exergaming Virtual Reality Gestures Spatial Video computing Navigation computing Mobile computing Player-Game Interaction Research University of Nevada, Reno
    4. 4. Overview of Talk What are some of the barriers that blind people face when playing video games? Why should video games be accessible to users who are blind? How can virtual world & video games be made accessible? Player-Game Interaction Research University of Nevada, Reno
    5. 5. How do we play games? Player-Game Interaction Research University of Nevada, Reno
    6. 6. How do we play games? Player-Game Interaction Research University of Nevada, Reno
    7. 7. How do we play games? Player-Game Interaction Research University of Nevada, Reno
    8. 8. Abstraction 1. feedback 2. fire gun 3. pull trigger 1. feedback 2. kick & punch 3. move feet & arm 1. feedback 2. aim for point 3. drag with finger
    9. 9. Interaction Model Player-Game Interaction Research University of Nevada, Reno
    10. 10. Gaming with Visual Impairment retintis pigmentosa color blindness macular degeneration blindness Player-Game Interaction Research University of Nevada, Reno
    11. 11. Interaction Model Player-Game Interaction Research University of Nevada, Reno
    12. 12. Sensory Substitution Research closed captioning Visual information Haptic braille Audio speech synthesis Player-Game Interaction Research University of Nevada, Reno
    13. 13. Research Challenges information Visual Haptic information Audio information Player-Game Interaction Research University of Nevada, Reno
    14. 14. Why should people with Visual Impairments be able to play video games?
    15. 15. Benefits Social Employment Education Health Player-Game Interaction Research University of Nevada, Reno
    16. 16. Visual Impairment 1.3 million people who are blind (60k kids) [US] 6.8 million who have a visual impairment [US] Population of VI expected to double [WHO] Obesity --> cataracts --> blindness Player-Game Interaction Research University of Nevada, Reno
    17. 17. “Extreme Interaction” Design Design Interfaces for “extreme” players Games are complex real time simulations Disability as a driver of innovation speech synthesis screen readers speech recognition automatic captions Player-Game Interaction Research University of Nevada, Reno
    18. 18. Innovation by solving extreme problems ? ? Player-Game Interaction Research University of Nevada, Reno
    19. 19. Example accessible interfaces Natural Language Virtual World Interfaces »Seek-n-tag »Text SL »Syntherella Non-Visual Natural User Interfaces (Exergames) »VI Tennis »Pet-n-Punch »VI Bowling »RTSS Human Computer Interaction Research University of Nevada, Reno
    20. 20. TextSL Natural Language Virtual World Interface HCC-Small: TextSL: A Virtual World Interface for Visually Impaired, National Science Foundation Eelke Folmer (PI), George Bebis (Co-Pi). Amount: $499,332. SGER: Developing an Accessible Client for Second Life, National Science Foundation Eelke Folmer (PI), Amount: $90,488.
    21. 21. Virtual Worlds Education Museums Communities Player Game Interaction Research University of Nevada, Reno
    22. 22. Web Accessibility Screen reader text/images “ The eiffel tower was built in ...” “alttag:picture of the eiffel tower” Player Game Interaction Research University of Nevada, Reno
    23. 23. Virtual World Accessibility Screen reader pixels “.............................” Player Game Interaction Research University of Nevada, Reno
    24. 24. Inspiration Zork Text SL Command Line Interface web based interface » screenreader / speech » Text only basic commands » Iterative » navigation » communication » Natural language » exploration » interaction » Screen reader accessible Human Computer Interaction Research University of Nevada, Reno
    25. 25. Natural Language Interpreter walk go give move my move to flower the to jane chair move natural language prepositions adjectives Avoid learning specific commands Intuitive & Easy to Learn Human Computer Interaction Research University of Nevada, Reno
    26. 26. SL Content Problems this object has a really long name chest ? car chair ? car table ? bike bike jill wall jack moe ? tree ? tree dog car tree bike ? tree tree curly ? fire larry bike bike Densely populated with objects » Overwhelm users with feedback » Difficult to navigate collision free Lack of meta data (40%) » underwhelm users with feedback Human Computer Interaction Research University of Nevada, Reno
    27. 27. Navigation >move north 10 >move to name (object / avatar) >teleport to Help Island Basic Obstacle avoidance Teleport when stuck Make avatar appear as normal as possible Human Computer Interaction Research University of Nevada, Reno
    28. 28. Communication >say “Hello” >whisper to Jack “bla” >mute Jack WTF? Human Computer Interaction Research University of Nevada, Reno
    29. 29. Interaction >sit on chair >touch billboard - no textual output (yet) Human Computer Interaction Research University of Nevada, Reno
    30. 30. Iterative query mechanisms chair fire dog jill >describe “you see avatar jill, and the objects chair, a fire and a dog” >describe dog “the dog is a golden retriever and is for Sale for $10” >where dog “the dog is 10 meters to your right” Human Computer Interaction Research University of Nevada, Reno
    31. 31. Syntherella: Scene Synthesizer car chest object with a really long name chest chest dog ? dog table chair ? ? bike dog jill wall jack moe ? tree ? tree dog car tree bike ? tree tree curly ? bike fire larry tree bike - Content within range - Remove non descriptive objects - Based on #worldlimit if #words > #wl {abstraction} else {detailing} bike bike ? bike larry Human Computer Interaction Research University of Nevada, Reno
    32. 32. Abstraction pig cat dog car truck car animals bike dog cat object taxonomy pig bike vehicles car truck >describe “there is a cat a dog, a car, a bike a car, a truck and a pig >describe “there are 3 animals and 3 vehicles Player Game Interaction Research University of Nevada, Reno
    33. 33. Detailing >describe “there is a cat and a dog and a car >describe “to your left is a black cat and a brown dog who is barking. In front of you is a red race car. Player Game Interaction Research University of Nevada, Reno
    34. 34. User Studies slower slower same slower Player Game Interaction Research University of Nevada, Reno
    35. 35. Lack of Meta data ? ? ? ? ? ? ? >describe “there is an object an object an object an object an object etc... Player Game Interaction Research University of Nevada, Reno
    36. 36. Labeling images vs 3D objects - Segmentation - 2D info - Defined in Isolation - Solid bodies (prims) - Efficient discrimination Player Game Interaction Research University of Nevada, Reno
    37. 37. Construct Classifier object categories shape descriptors unknown object Player Game Interaction Research University of Nevada, Reno
    38. 38. how to derive object categories? dog? Player Game Interaction Research University of Nevada, Reno
    39. 39. Use Human Computation find a cat label label label Player Game Interaction Research University of Nevada, Reno
    40. 40. Seek-n-Tag Goal: Find a Cat »Player participates in Scavenger ✔ + 5s hunt Find a Tree Rules: »30 seconds to “tag” an object ✔ + 15s »Score 50 points per tagged object Find a Dog »Initially start with 900 seconds »Game over when time =0 ✘ - 30s »Remaining time is added to clock. Find a Shoe Competition Player Game Interaction Research ✔ + 10s »Leaderboard University of Nevada, Reno
    41. 41. User study object object object object object object object object object Manual labeling versus labeling with a game Seek-n-tag faster Seek-n-tag takes fewer labeling attempts to reach consensus on object name. Player Game Interaction Research University of Nevada, Reno
    42. 42. Manual labeling using AMT 90% 1000 500 Player Game Interaction Research University of Nevada, Reno
    43. 43. Currently Researching
    44. 44. Interaction with interactive Objects >touch horse >touch billboard Collect descriptions using AMT Semantic analysis of descriptions to understand what properties of objects play a role See if properties can be parsed from state Human Computer Interaction Research changes in object University of Nevada, Reno
    45. 45. Content Creation >create green cube “block1” >create green cube “block1” >link block1 block2 >create brown d0g Human Computer Interaction Research University of Nevada, Reno
    46. 46. Publications / More info • Bugra Oktay, Eelke Folmer. Syntherella: A Feedback Synthesizer for Efficient Exploration of Virtual Worlds ,Proceedings of Graphics Interface 2011. Pages 65-70, St John, Canada, May 2011. • Bei Yuan, Manjari Sapre, Eelke Folmer. Seek-n-Tag: A Game for Labeling and Classifying Virtual World Objects, Proceedings of Graphics Interf ace (GI). Pages 201-208, Ottawa, Ontario, June 2010. • Eelke Folmer, Bei Yuan, Dave Carr, Manjari Sapre. TextSL: A Command-Based Virtual World Interface for the Visually Impaired, In Proceedings of the 11th international ACM SIGACCESS conference on Computers and Accessibilit y, Pages 59-66, Pittsburgh, Pennsylvania, October 2009. Try ? Contribute? ear.textsl.org code.google.com/p/textsl Human Computer Interaction Research University of Nevada, Reno
    47. 47. VI Fit Non Visual Natural User Interfaces HCC: Small: Proprioceptive Displays to Engage Blind Users into Healthy Whole Body Interaction, National Science Foundation Eelke Folmer (PI), Amount: $420,320. collaborators: John Foly Lauren Liebermann
    48. 48. Gesture Based Interaction Player-Game Interaction Research University of Nevada, Reno
    49. 49. Reliance on Visual Cues dodge ♫ ♫ Player-Game Interaction Research University of Nevada, Reno
    50. 50. visual impairment & obesity 60,000 Blind Children in US More obese Player-Game Interaction Research University of Nevada, Reno
    51. 51. Barriers to physical activity Rely on sighted guide Safety Player-Game Interaction Research University of Nevada, Reno
    52. 52. Exercise games ★ Moderate to Vigorous PA (MVPA) ★60 min of MVPA daily (CDC) ★Independently ★Safer Player-Game Interaction Research University of Nevada, Reno
    53. 53. Research question Player-Game Interaction Research University of Nevada, Reno
    54. 54. Gestures w/o visual cues? ? Player-Game Interaction Research University of Nevada, Reno
    55. 55. Physical Activity = SpatialTemporal where? when? Player-Game Interaction Research University of Nevada, Reno
    56. 56. Target Acquisition
    57. 57. Exergames are temporal-spatial Kinect Sports ★Temporal/Spatial: Jump when hurlde is close Eye Toy Kinetic ★ Spatial: Punch/ Kick Targets ★ Temporal: Dodge target Player-Game Interaction Research University of Nevada, Reno
    58. 58. Sensory Substitution Research Visual information Haptic ? target acquisition (directed) gestures Audio ? Player-Game Interaction Research University of Nevada, Reno
    59. 59. Constraints with regard to SS cool!! Social Contexts Audio Music based Haptic? Player-Game Interaction Research University of Nevada, Reno
    60. 60. Temporal Challenge VI Tennis Wii Sports serve ♫ bounce ♫ Bu zz return Player-Game Interaction Research University of Nevada, Reno
    61. 61. Sensory Substitution WII Tennis VI Tennis Human Computer Interaction Research University of Nevada, Reno
    62. 62. Spatial Challenge Wii Bowling VI Bowling Human Computer Interaction Research University of Nevada, Reno
    63. 63. Wii Bowling VI Bowling
    64. 64. Studies @ Camp Abilities Vi Tennis/Bowling yield Active energy expenditure that is considered moderate physical activity Kids need 20 minutes of moderate to vigorous PA Player Game Interaction Research University of Nevada, Reno
    65. 65. User studies vigorous Upper Body moderate light VI Tennis Whole Body Player-Game Interaction Research University of Nevada, Reno
    66. 66. Pet-n-Punch Player-Game Interaction Research University of Nevada, Reno
    67. 67. Instrumentation Player-Game Interaction Research University of Nevada, Reno
    68. 68. Real time Video Analysis > <MAXR>999</MAXR> ></SECTION> <MING>230</MING> <MAXG>999</MAXG> <MINB>0</MINB> <MAXB>100</MAXB> Player-Game Interaction Research University of Nevada, Reno
    69. 69. VI Ski QuickTime™ and a decompressor are needed to see this picture. Player-Game Interaction Research University of Nevada, Reno
    70. 70. Related Projects
    71. 71. Publications / More info •Tony Morelli, Eelke Folmer. Real-time Sensory Substitution to Enable Players who are Blind to Play Gesture based Vi , In Proceedings of Foundations of Digital Interactive Games, to appear, Bordeaux France, June 2011 •Tony Morelli, John Foley, Lauren Lieberman, Eelke Folmer. Pet-N-Punch: Upper Body Tactile/Audio Exergame to Engage Children with Visual Impairments into Physical Activity, Proceedings of Graphics Interface (GI) •Tony Morelli, John Foley, Eelke Folmer VI Bowling: A Tactile Spatial Exergame for Individuals with Visual Impairments , In Proceedings of ASSETS, Pages 179-186, Orlando, Florida, October 2010. •Tony Morelli, John Foley, Luis Columna, Lauren Lieberman, Eelke Folmer. VI-Tennis: a Vibrotactile/Audio Exergame for Players who are Visually Impaired, Proceedings of FDG, Pages 147-154, Monterey, California, June 2011 Try ? Contribute? vifit.org code.google.com/p/vifit Human Computer Interaction Research University of Nevada, Reno

    ×