Gestural interactions with in-vehicle controls like audio and climate could reduce driver distraction compared to buttons. The researchers conducted experiments to determine intuitive gestures for controls. Participants preferred gestures to voice when controls were simple and repetitive. Based on the experiments, the researchers designed a prototype steering wheel with touch pads on each side for gestures, removing buttons. The design followed ergonomic data and common gestures identified in the experiments.
Gestural Interaction With In-Vehicle Audio and Climate Controls

Chongyoon Chung¹ and Esa Rantanen
Rochester Institute of Technology, Rochester, NY

¹ Now at Samsung Electronics, Seoul, South Korea
Among the most distractive in-vehicle interactions are audio and climate controls. If these interactions were as easy and spontaneous as natural language, driving could be much safer. Through this research it was found that drivers preferred gestural language to voice language when the control was simple and repetitive; subsequently, gestural interactions with secondary in-vehicle tasks were investigated. Following the principle of "eyes on the road and hands on the wheel", a steering wheel design with two touch pads on the wheel to recognize gestures was conceived. The physical design of the steering wheel incorporated good ergonomics and anthropometric data, while gesture stereotypes assigned to a number of in-vehicle controls were determined empirically by two experiments. The new steering wheel design does not have any buttons, which may contribute to driver distraction, yet it incorporates 19 functions through natural thumb gestures. This compares favorably with most current steering wheel designs, which have more than 11 buttons and 13 functions on average.
INTRODUCTION

Modern automobiles have a myriad of manual controls for increasingly complex auxiliary systems that have the potential of distracting drivers from their primary task of driving. Car audio, climate control, and navigation systems have steadily increased in sophistication, and CD player and mp3 player interfaces are now common even in the least expensive models. Manual control of all these systems requires drivers to take their eyes off the road and traffic environment, which has obvious safety implications. Pickering (2005) found that the average glance time to control a radio was 1.2 seconds; in that time a car travels over 50 ft at 30 mph. Summala, Lamble, and Laakso (1998) showed that ambient vision was not sufficient for hazard detection: response times increased significantly with increasing eccentric viewing, by up to 2.9 seconds, suggesting that timely hazard detection required some degree of focal visual resources. Driver distraction has recently received long overdue attention as a major contributor in accidents. Adjusting radio, cassette, and CD players has been estimated to cause 11.4% of drivers' distractions (Stutts, Reinfurt, Staplin, & Rodgman, 2001). In addition, it is best for safe driving to keep both hands on the steering wheel in case sudden maneuvers are needed to avoid road hazards.
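As a quick check of the magnitude involved, the distance covered during such a glance follows directly from the cited numbers (a worked conversion, not an additional result):

\[
30\ \text{mph} = \frac{30 \times 5280\ \text{ft}}{3600\ \text{s}} = 44\ \text{ft/s}, \qquad d = v\,t = 44\ \text{ft/s} \times 1.2\ \text{s} \approx 52.8\ \text{ft}.
\]

That is roughly three to four car lengths traveled with the eyes off the road for a single radio adjustment.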
Current automobile interfaces can be very confusing, with too many functions per control. For example, the BMW 7 series driver-controlled systems have over 700 functions (Gilbert, 2004). Many controls are hard to find and some are even invisible. In an attempt to make in-vehicle controls more accessible to drivers and to allow them to keep their hands on the steering wheel most of the time, most recent car models have placed many controls on the steering wheel spokes as push-buttons or toggle switches. Unfortunately, steering wheels can become very crowded, with nonstandard ways of grouping and assigning functionality to the buttons. A survey of eight car models (ranging from luxury to compact cars) showed that modern automobiles have on average 11.62 buttons and 13.86 functions on their steering wheels. These values are undeniably high.

To reduce driver distraction due to operation of in-vehicle systems, their control interfaces should be as intuitive and easy to use as possible. The most natural modes of communication and control are language and gestures. Speech recognition, such as Microsoft's Sync system in new Ford automobiles, has recently been introduced to consumers. Speech recognition has limitations, however, as it is vulnerable to noise, dialects, and individual voice differences. Moreover, current technology recognizes only specific words preprogrammed into the system.

Gestural Control Interfaces

Gesture control has yet to be implemented in automobiles, but gesture interfaces have been very successful in many mobile communications and computing devices with touch-screen interfaces. Similar interfaces could also be designed and implemented for in-vehicle systems control in automobiles. Gesture recognition interaction addresses many of the problems associated with voice control and allows for reduction of both the visual and cognitive load of drivers. Gestures can also be viewed as an integral part of natural language, and therefore they would be easy for drivers to learn.

There are two main techniques for gesture recognition: camera recognition and touch sensor recognition. Camera-based gesture recognition has serious spatial limitations, however. A camera requires a certain distance to recognize a driver's gesture, but most automobile interiors are quite small, making it difficult to install a camera that would accurately detect and recognize gestures. Another problem is the positioning of the camera. As the majority of drivers are right-handed, it seems natural that gestures could be performed by the right hand. The best location for a gesture recognition camera would therefore be to the right side of the driver. However, the camera could be confused by a passenger's gestures, as the right side of the driver could be shared with a front passenger.

Touch sensor-based gesture recognition is a good solution that circumvents the spatial limitations of camera-based systems. Touch-based gesture recognition requires no distance from the driver. There are two feasible locations for touch-sensitive surfaces that could recognize gestures: on the center console or on the steering wheel. If a touch pad is installed on the center console, however, drivers will need to take a hand off the steering wheel to operate the system, defeating the "eyes on the road and hands on the wheel" design principle. Therefore, the best place for touch pads is on the steering wheel.

There are also two kinds of touch recognition systems: a pressure and twist recognition system, and a surface touch recognition system. For automobile applications, pressure points could be incorporated anywhere along the steering wheel rim, while a touch sensor area could be located in the hub or spokes of the steering wheel (Figure 1).

Figure 1. Possible locations of pressure points ("recognizes pressure and twist," on the rim) and touch-sensitive surfaces ("recognizes touch on the surface," in the hub or spokes) on a steering wheel.
Examples of how a pressure and twist recognition system could be used include a twist out to turn climate control on, while two twists out would turn climate control off; a twist in could be air conditioner on and two twists in could be air conditioner off. Available mappings of gestures to pressure and twisting motions are very limited, however. A surface touch-based recognition system would allow for a wider range of gestures to be used, as well as have other benefits as it is incorporated into the steering wheel design. Two touch pads on the wheel would also overcome some of the problems with depth of the menu. For example, one side would be for climate controls and another would be for audio controls. In addition, the best hand position of drivers while driving is the 10–2 position, and therefore touch pads on the steering wheel should be located close to these locations.
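To make concrete just how small the twist vocabulary is, the following sketch (not from the paper; the event encoding and command names are illustrative assumptions) enumerates every command a one- or two-twist scheme can express:

```python
# Illustrative sketch: a pressure/twist vocabulary exhausts itself quickly.
# The (direction, count) keys and command names are assumptions for clarity.
TWIST_COMMANDS = {
    ("out", 1): "climate_on",
    ("out", 2): "climate_off",
    ("in", 1): "ac_on",
    ("in", 2): "ac_off",
}

def interpret_twists(direction: str, count: int):
    """Map a burst of rim twists to a command, or None if unmapped."""
    return TWIST_COMMANDS.get((direction, count))

print(interpret_twists("out", 1))  # climate_on
print(interpret_twists("out", 3))  # None -- no further mappings available
```

Four commands roughly saturate the scheme, which is precisely the limitation the surface-touch approach avoids.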
Purpose of the Research

This paper describes the design process for a gestural vocabulary interface for selected in-vehicle tasks. How people perceive and understand the meaning of certain most commonly used gestures was investigated. This gestural knowledge, or gesture stereotypes, was then applied to controls in an automobile. To simplify this analysis, only audio and ventilation controls were considered. Finally, a prototype steering wheel accommodating touch pads for gesture control was designed. Note that only the initial design process is described here. The steering wheel prototype is not functional, and so no usability testing of how well this design would work in an actual driving environment has yet been conducted.

PRACTICE INNOVATION

There were two main considerations in the design. First, anthropometric principles and measures had to be considered in the physical placement, size, and shape of the touch pads on the steering wheel. Second, the actual control gestures would have to be intuitive to the users. Anthropometric dimensions could be found in the literature, but gesture stereotypes had to be determined empirically.

Anthropometric Considerations

Because the steering wheel-mounted touch pads would necessarily be operated by thumbs, the anthropometric measures of the hand, and in particular the thumbs, were taken as the starting point of the design. The average length of a male thumb from crotch to end is 2.3 in, with a range from 2.1 to 2.9 in, and the average width 0.9 in, ranging from 0.55 to 1.25 in. For female thumbs, the average length is 2.1 in, with a range of 1.7 to 2.5 in. The average width of a female thumb is 0.75 in, ranging from 0.63 to 0.87 in. The lateral movement range of the thumb is from 80 degrees of abduction to 45 degrees of adduction (Tilley, 1993).

Experiment 1

Materials. To research people's gesture stereotypes for certain functions, a mock-up steering wheel with touch pads at the 10 and 2 positions was created (Figure 2). The touch pads were made out of thin white fabric, through which participants' thumb gestures could be videotaped. The length of the sides of the pads was 2.5 in, or the 95th-percentile female thumb length. The angle between the two sides of the touch pad was 80°, which is the radial abduction angle. The participants were also queried about their preferred gestures by a questionnaire. The questions concerned which side would be better for audio controls and asked the participants to draw a preferred gesture for common control actions.

Procedure. Nineteen people volunteered for the experiment, 13 males and 6 females. Everyone had a driver's license. Most participants were young college students, with the exception of one person. The participants sat facing the experimenter and a video camera. The experimenter asked the participants to perform a control action on the steering wheel mock-up, and they complied by making a gesture they felt was the most intuitive for the required control with their thumbs on the "touch pads". The gestures could be videotaped through the fabric mimicking the touch pads. After this the participants drew the gestures they had made on paper.

Two problems with gestures were discovered in this experiment. One is that the participants had trouble imagining gestures in front of a camera and became nervous or rushed to figure them out. Another problem was that they used all imaginable good, easy, and spontaneous gestures for "Turn the controls on and off." Hence, they ran out of ideas for good gestures for the remaining functions.

Figure 2. A mock-up steering wheel with touch pads for the study of gesture stereotypes. Positions of thumbs making the gestures were videotaped through the thin material in the mock touch pads.

Consequently, the order of testing was changed to have participants first draw gestures on paper, with time to imagine the gestures for the controls, and then perform the gestures for the camera. This arrangement helped most participants to make more gestures that were also more variable, but many still ran out of ideas for "defrost" and "air from underneath", for example. Regardless, they were asked to just do something for each control action.

Findings. The results between the drawings of gestures on paper and those performed for video were almost identical. Fourteen (out of 19) people chose the right side for audio control. Over half of the participants offered the same gestures for each function, with the exception of controls for fan intensity and airflows. In the drawing task, 11 out of 19 participants suggested thumb up and down for volume up and down, and 9 drew thumb right and left for changing the station to a higher or lower station. Eleven participants offered a tap for mute, 9 drew a tap to pause the Mp3 player, and 11 a tap for play. Even though these three functions shared the same gesture, it would work because the radio and Mp3 player are different features and play and pause are opposite functions. Twelve people gestured thumb right and left for forward to the next and previous song. One participant suggested using a long tap for turning on and off, like on a cellular phone, which would be easy to learn and perform based on that experience.

Ten participants offered thumb up and down for temperature up and down, and 6 of them did thumb right and left for fan intensity up and down. The most common answers for the last three functions, defrost, air from dashboard, and air from underneath, were a tap or a tap with spatial meaning, such as the top, center, or bottom of the touch pad.

In the videotaped gestures, 10 participants gestured thumb up and down for volume up and down, and 6 of them did thumb right and left for changing the station to a higher or lower station. However, 5 answers had directional meaning: right to left for changing the radio to a higher station and left to right for changing to a lower station. Twelve people performed a tap for mute, 16 did a tap for pausing the Mp3 player, and 14 a tap for play. Eleven people gestured thumb left to right for forward to the next song, and 13 moved their thumbs from right to left for the previous song. Ten participants did thumb up and down for temperature up and down, and 4 of them did thumb right and left for fan intensity up and down; 4 others gestured thumb up and down for fan intensity up and down. Nine participants tapped for defrost and 3 participants tapped for air from dashboard; 2 others used a tap that had spatial meaning, like top and bottom. For example, for air from underneath, 3 participants used a tap at the bottom of a touch pad, one tapped the bottom left of the touch pad, and another just tapped once.

Experiment 2

Since it was possible that the relatively small touch pads used in Experiment 1 (see Figure 2) may have constrained the gestures performed by the participants and limited the variety of gestures, a second experiment was run with a new steering wheel mock-up that had larger "touch pads" and with a new group of participants.

Materials. The length of the new mock-up touch pad was 2.7 in, longer than the average length of the male thumb. The width of the touch pad now fully accommodated thumb movement over the 45° adduction and 80° abduction angles.

Because the participants in the first experiment were predominantly young college students, older participants were recruited for the second experiment. Fourteen people volunteered, 5 males and 9 females. Their ages ranged from the twenties (one participant) to the fifties. All had driver's licenses and drove daily.

The paper-and-pencil questionnaire was also slightly modified for the second experiment: questions about preferences between gestural and voice commands were added, and the boxes for drawing gestures were eliminated so as not to constrain free drawing of gestures in any way. Finally, a rating scale (1-5) was added to gauge how good, easy, and spontaneous the most common gestures were.

Procedure. With the exception of the research materials and questionnaires, the procedure was identical to that in Experiment 1. One participant only answered the questionnaire and was not videotaped.

Findings. The large surfaces of the touch pads confused some participants, who thought that a big surface had spatial meaning; they tried to push or touch certain points as if pushing imaginary buttons in the pad instead of making gestures. This group as a whole could not carry out gestures as commands, having been accustomed to controlling devices with buttons for most of their lives.
All 14 participants chose a conventional blinker over voice command; 8 preferred a conventional wiper control and 6 preferred voice control, suggesting a preference for gesture over voice for controlling a simple and repeated function. Eleven participants selected the right side for audio controls; 2 people selected the left side. Most of the gestures both drawn and performed by the second group of participants corresponded to those discovered in Experiment 1. On average, over half of the participants preferred the same gestures as the first group. Gestures for airflow controls were the most variable, as was the case in Experiment 1.

FINDINGS

There was much commonality in how people imagined gestures to perform common in-vehicle control tasks. However, the number of different gestures offered by the participants was limited, and some less used controls yielded diverse gesture suggestions.

There were also some meaningful differences between the participants in the two experiments. Participants in the first experiment consisted of young college students in their twenties, whereas participants in the second experiment were older adults. Younger people are accustomed to digital devices, especially touch screens and pads, and were very good at making various gestures on a touch pad. The gestures gathered from the second experiment were less variable; some of those participants tried to push imaginary buttons in their minds. All participants seemed to run out of imagination for airflow-control gestures. In both experiments participants sometimes used their thumbs as if drawing a gesture on the touch pad. From these results the most common gestures could nevertheless be identified and used for assigning controls to gestures on a touch pad. The gestures for the right side, or audio controls, were:

Tap and hold: Audio system on/off
Up/down movement: Volume up or down
Left/right movement: Radio station, previous or next
A single tap, or index finger on backside: Radio to Mp3 player switch
Left/right movement: Previous or next song
A single tap: Pause/play

For the left side, or climate controls, the gestures were:

Tap and hold: Climate system on/off
Up/down movement: Temperature up or down
Left/right movement: Fan slower or faster
A single tap: Defrost
A single tap: Air from dashboard
A single tap: Air from underneath

Interaction needs to be consistent to the user. Switching between play and stop or pause is a single tap on the right side for audio control. Therefore, the airflow commands from defrost to air from dashboard, from air from dashboard to air from underneath, and from air from underneath to defrost are a circulating single tap; a sketch of this mapping follows below.
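A minimal sketch (an assumed implementation, not the authors' software) of how this empirically derived vocabulary could be wired to the two pads; gesture and command names are illustrative:

```python
# Dispatch tables for the two steering-wheel pads, following the gesture
# stereotypes listed above. Names are assumptions, not the paper's API.
from itertools import cycle

AUDIO_PAD = {                      # right pad
    "tap_hold":     "audio_system_on_off",
    "swipe_up":     "volume_up",
    "swipe_down":   "volume_down",
    "swipe_left":   "previous_station_or_song",
    "swipe_right":  "next_station_or_song",
    "backside_tap": "radio_mp3_switch",
    "tap":          "pause_play_or_mute",
}

CLIMATE_PAD = {                    # left pad
    "tap_hold":    "climate_system_on_off",
    "swipe_up":    "temperature_up",
    "swipe_down":  "temperature_down",
    "swipe_left":  "fan_slower",
    "swipe_right": "fan_faster",
}

# The single tap on the climate pad circulates through the three airflow
# modes, mirroring the consistent single-tap toggle on the audio pad.
_airflow = cycle(["defrost", "air_from_dashboard", "air_from_underneath"])

def on_climate_single_tap() -> str:
    """Advance airflow: defrost -> dashboard -> underneath -> defrost."""
    return next(_airflow)

print(on_climate_single_tap())  # defrost
print(on_climate_single_tap())  # air_from_dashboard
```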
DISCUSSION: FINAL DESIGN

Based on the anthropometric data, an ergonomically designed touch pad shape that looks similar to a piece of pie was conceived. The sides of the touch pad are 2.3 in long, which corresponds to the average thumb length of males. The sides form an angle of 80°, which is the abduction angle of the thumb. Similar touch pads are placed on both sides of the steering wheel (Figure 3).

Figure 3. A technical drawing of the steering wheel design. The dimensions are in inches and the angle between the two sides of the touch pad in degrees.
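For a sense of the pad's scale, treating it as a circular sector with the stated side length and apex angle (an approximation; the paper specifies only these two dimensions), the width across its outer edge is the chord

\[
w = 2L\sin\frac{\theta}{2} = 2 \times 2.3\ \text{in} \times \sin 40^{\circ} \approx 2.96\ \text{in},
\]

comfortably wider than the 1.25-in upper bound of male thumb width reported earlier.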
The two touch pads on the wheel appeared to surround and wrap the wheel. This appearance was carried over as an overall design concept, which underwent several iterations. The first draft had two spokes on the wheel, which started from the pads. To differentiate the design from others, a one-spoke design was adopted instead of the two-spoke design. This initial steering wheel design was fully circular. However, to prevent the two touch pads, which protruded into the inside of the wheel, from occluding the instrument cluster, the steering wheel was stretched laterally by one inch. The final design is depicted in Figure 4.

Figure 4. Computer rendering of the final steering wheel design.

On their steering wheels, most automobiles have on average 11.6 buttons and 13.7 functions. In this design there are no buttons. Consequently, drivers do not need to look for small, grouped buttons. Audio and climate controls cause the most driver distraction among the secondary in-vehicle tasks. If these control interactions were as easy as our daily language, it would be very easy for drivers to operate in-vehicle tasks. "Eyes on the road and hands on the wheel" is the maxim for safe driving. Automobile interactions must satisfy it, imposing the least possible visual and cognitive load while in-vehicle systems are controlled. The proposed gesture-based interaction satisfies all the requirements stated above, potentially allowing drivers to drive less distracted and more safely.

Note that only the initial design process has been described in this paper, culminating in a non-functional prototype steering wheel. To carry the development of this product further, many engineering problems (e.g., materials of the touch pads, their sensitivity, and gesture-recognition algorithms and software) would have to be solved. An extensive usability study would also need to be conducted to investigate how well drivers could learn the gestures mapped to various control actions and perform them reliably while driving in actual traffic environments. Despite these limitations, however, this study revealed many new and potentially significant aspects of drivers' interactions with ever-increasing in-vehicle technologies and functions. Our research also suggests that gestural control interfaces that are already ubiquitous in many mobile communication and computing devices (e.g., so-called smart phones, Apple's iPhone, iPod, and iPad, and most laptop computers) can find wider applications in automobiles.
ACKNOWLEDGMENTS

This paper is based on the first author's Master of Fine Arts thesis for the graduate Industrial Design program at the Rochester Institute of Technology. Many thanks are due to his chief adviser, Prof. David Morgan, for his help and support throughout this program. Dr. Rantanen served as an associate adviser on the thesis committee. The helpful suggestions of a third thesis committee member, Dr. Michelle Harris, are also gratefully acknowledged.

REFERENCES

Gilbert, R. K. (2004). BMW i-drive. INFSCI 250. Pittsburgh, PA: University of Pittsburgh.
Pickering, C. A. (2005). Interacting with the car. IEE Computing & Engineering, 16(1), 26.
Stutts, J. C., Reinfurt, D. W., Staplin, L., & Rodgman, E. A. (2001). The role of driver distraction in traffic crashes. Washington, DC: AAA Foundation for Traffic Safety. http://www.aaafoundation.org/pdf/distraction.pdf
Summala, H., Nieminen, T., & Punto, M. (1996). Maintaining lane position with peripheral vision during in-vehicle tasks. Human Factors, 38(3), 442-451.
Tilley, A. R. (1993). The measure of man and woman: Human factors in design. New York: Henry Dreyfuss Associates.