Spatial Gestures using a Tactile-Proprioceptive Display

This presentation describes a novel display technique that appropriates the user's body as a display. The display conveys 2D targets defined in front of the user, which can then be manipulated using spatial gestures.

  • Hi, my name is Eelke Folmer, and I'm here to present the work I did with my grad student Tony Morelli on using a tactile-proprioceptive display to perform spatial gestures.
  • Spatial gestures are an essential feature of natural user interfaces, where a touch or gesture activates or manipulates an on-screen object; for example, dragging a file into a folder. Spatial interaction relies upon being able to visually acquire the position of an object.
  • So you can imagine this is pretty difficult if you are unable to see or if you don't have a display.
  • In recent years, several non-visual NUIs have been developed.

    - Audio-based interfaces allow blind users to scroll lists and select items on touch-screen devices, but those don't use spatial gestures.

    - To increase the input space on mobile devices, screenless mobile interfaces have been developed. Users interact with imaginary objects or shortcuts that are defined in a plane in front of them. Though some of these techniques may allow spatial gestures, no spatial feedback is provided and the user must keep track of the locations of objects, which may be hard if there are many objects.

    To address the shortcomings of existing non-visual NUIs, we present a novel ear- and eye-free display technique that allows users to acquire the location of an object in a 2D display defined in front of them, which they can then manipulate using a spatial gesture.
  • Along the lines of recent work that turns the body into an input space, we explore turning the body into a display. But instead of using your body to communicate information to someone else, we communicate information to the user through their own body. To do that we use a largely unexplored output modality called proprioception: the human ability to sense the orientation of one's limbs, which allows you, for example, to touch your nose with your eyes closed.

    Recent work by my lab and others shows that you can augment haptic feedback with proprioceptive information to create a significantly larger information space, one that can be accessed in an ear- and eye-free manner and that can be used to point out targets around the user. Let me illustrate this with an example.
  • In previous work we created a bowling game for blind users. Users can find the location of the bowling pins by scanning their environment with a handheld, orientation-aware device that is capable of providing haptic feedback. Directional vibrotactile feedback, for example varying the pulse frequency, guides the user to point the device at the pins: the higher the frequency, the closer you are to the target. The target direction is thus conveyed to the user through their own arm, which allows them to perform a gesture towards the target.

    Some preliminary research on tactile-proprioceptive displays has been conducted, but it has only explored 1D target acquisition.
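    As an illustration of the directional feedback described in that note, here is a minimal sketch of how a 1D angular error between the device and the target could be mapped to a vibrotactile pulse frequency. The function name, frequency range, and linear mapping are assumptions for illustration, not the implementation used in the bowling game.

        # Sketch: map a 1D angular error to a vibrotactile pulse frequency.
        # Assumption: pulses speed up as the device points closer to the target.
        MIN_FREQ_HZ = 1.0     # slow pulsing when far off target
        MAX_FREQ_HZ = 10.0    # fast pulsing when aligned with the target
        MAX_ERROR_DEG = 90.0  # beyond this error, no feedback is given

        def pulse_frequency(device_yaw_deg: float, target_yaw_deg: float) -> float:
            """Return a pulse frequency (Hz) for the current pointing error."""
            error = abs(device_yaw_deg - target_yaw_deg)
            if error >= MAX_ERROR_DEG:
                return 0.0  # silent: the device points well away from the target
            proximity = 1.0 - error / MAX_ERROR_DEG  # 0 (far) .. 1 (on target)
            return MIN_FREQ_HZ + proximity * (MAX_FREQ_HZ - MIN_FREQ_HZ)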
  • In the first study we explore 2D target acquisition.

    We implemented a tactile-proprioceptive display using the Sony Move controller, which can provide two different types of directional vibrotactile feedback and which can be tracked with very high accuracy using an external camera.

    The size of the display is constrained by the reach of your arm.

    Two different scanning techniques were defined. In linear scanning, users first find a band defined around the target's X coordinate in which directional vibrotactile feedback is provided; the target's Y coordinate is then found using frequency. In multilinear scanning, directional vibrotactile feedback is provided on both axes simultaneously, using frequency and pulse delay. Preliminary experience showed that multilinear scanning was much harder to perform, so it was of interest which one would yield better performance.
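    To make the two scanning techniques concrete, the sketch below encodes the pixel error to the target either sequentially (linear: Y feedback is only given inside a band around the target's X) or simultaneously (multilinear: frequency encodes the Y error, pulse delay encodes the X error). All constants, names, and ranges are illustrative assumptions, not the parameters used in the study.

        # Sketch of the two 2D scanning feedback schemes (illustrative values only).
        BAND_HALF_WIDTH = 50.0             # px: half-width of the band around the target's X
        MIN_FREQ, MAX_FREQ = 1.0, 10.0     # Hz, encodes the Y error
        MIN_DELAY, MAX_DELAY = 0.05, 0.50  # s between pulses, encodes the X error
        MAX_ERROR = 500.0                  # px, error at which feedback saturates

        def _scale(error: float, far_value: float, near_value: float) -> float:
            """Interpolate from far_value (large error) to near_value (error == 0)."""
            proximity = max(0.0, 1.0 - min(abs(error), MAX_ERROR) / MAX_ERROR)
            return far_value + proximity * (near_value - far_value)

        def linear_feedback(dx: float, dy: float):
            """Linear scanning: frequency encodes dy, but only inside the X band."""
            if abs(dx) > BAND_HALF_WIDTH:
                return None  # outside the band: no vibrotactile feedback at all
            return {"frequency_hz": _scale(dy, MIN_FREQ, MAX_FREQ)}

        def multilinear_feedback(dx: float, dy: float):
            """Multilinear scanning: frequency encodes dy, pulse delay encodes dx."""
            return {"frequency_hz": _scale(dy, MIN_FREQ, MAX_FREQ),
                    "pulse_delay_s": _scale(dx, MAX_DELAY, MIN_DELAY)}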
  • We conducted a between-subjects study with 16 CS students. Subjects played an augmented-reality-like Space Invaders game in which they had to shoot 40 aliens. Results showed a significant difference in search time corrected for distance, with multilinear scanning being significantly faster. No difference in error was found.
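    For clarity, "search time corrected for distance" presumably normalizes the time to acquire a target by how far the hand had to travel, which is why the per-condition results on the slides are reported in ms/pixel (assuming this reading):

        corrected search time (ms/pixel) = acquisition time (ms) / distance to target (pixels)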
  • The second study explored spatial interaction.

    Subjects had to scan to the location of a balloon and pop it using a thrust gesture. A gesture was correct if it ended within 150 pixels of the target and there was less than 5% error in rotation along each axis of the controller. A user study with 8 users found an aiming error of 21 degrees.
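    A minimal sketch of the gesture-correctness check described above; the thresholds come from the talk and slide 20, but the data structures and the exact interpretation of the 5% rotation criterion are assumptions.

        import math

        MAX_TARGET_DIST_PX = 150.0  # gesture must end within 150 px of the target
        MIN_Z_THRUST_CM = 20.0      # required forward (Z) travel for a thrust
        MAX_ROT_ERROR = 0.05        # < 5% error in rotation along each controller axis

        def is_correct_thrust(end_xy, target_xy, z_travel_cm, rot_error_per_axis):
            """Return True if the thrust gesture satisfies all correctness criteria."""
            dist = math.hypot(end_xy[0] - target_xy[0], end_xy[1] - target_xy[1])
            if dist > MAX_TARGET_DIST_PX:
                return False  # gesture ended too far from the target
            if z_travel_cm < MIN_Z_THRUST_CM:
                return False  # not enough forward motion to count as a thrust
            # rot_error_per_axis: fractional rotation error around each axis (pitch, yaw, roll)
            return all(err < MAX_ROT_ERROR for err in rot_error_per_axis)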
  • Potential applications of our technique include navigation, where the Y coordinate can be used to indicate the distance to a target; low-cost motor rehabilitation; and exercise games for users who are blind.
  • Current and future work focuses on extending this to 3D target selection. We are further interested in seeing whether this non-visual pointing task can somehow be modeled.

    1. Spatial Gestures using a Tactile-Proprioceptive Display. Eelke Folmer & Tony Morelli, TEI'12, Kingston. Player-Game Interaction Lab, University of Nevada, Reno
    2-3. Spatial Gestures in NUI's
    4-5. No Display / Unable to see?
    6-8. Non-Visual NUI's: item A, item B, item C (visual impairment, mobile contexts). Limitations: » no spatial gestures » rely on visuospatial memory
    9-10. Tactile-Proprioceptive Display: turn the human body into a display. Proprioception: » human ability to sense the orientation of limbs » augment haptic feedback with proprioceptive information
    11-13. Example (figure: vibration frequency vs. pointing error)
    14-17. Study 1: 2D target acquisition. Linear: Y error: frequency, X error: band. Multilinear: Y error: frequency, X error: pulse delay.
    18-19. Study 1: procedure & results. Space Invaders-like game. Between-subjects study with 16 subjects. Corrected search time: » linear 51.7 ms/pixel » multilinear 40.3 ms/pixel (significant difference). No sig. difference in error.
    20-21. Study 2: Spatial Gesture. Multilinear scanning. Correct gesture if: » within 150 pixels of target » Z-axis decrease of 20 cm » less than 5% error in each axis of rotation. 8 subjects (did not participate in Study 1). Corrected search time: 45.9 ms/pixel. Aiming accuracy: 21.4°.
    22. Potential Applications: navigation, low-cost motor rehabilitation, exergames for users who are blind.
    23. Current/Future Work: 3D scanning (direction of error found along X and Y), 3D target selection, two-handed scanning, extension of Fitts's law / a model for non-visual pointing.
    24. 24. props & questionsThis research supported by NSF Grant IIS-1118074Any opinions, findings, and conclusions or recommendations expressed in thismaterial are those of the author(s) and do not necessarily reflect the views of theNational Science Foundation. Player-Game Interaction Research University of Nevada, Reno
    25. 25. props & questions ?This research supported by NSF Grant IIS-1118074Any opinions, findings, and conclusions or recommendations expressed in thismaterial are those of the author(s) and do not necessarily reflect the views of theNational Science Foundation. Player-Game Interaction Research University of Nevada, Reno
