Touché: Enhancing Touch Interaction on
Humans, Screens, Liquids, and Everyday Objects
ABSTRACT
Touché proposes a novel Swept Frequency Capacitive Sensing technique that can not only detect a touch event, but also recognize complex configurations of the human hands and body. Such contextual information significantly enhances touch interaction in a broad range of applications, from conventional touchscreens to unique contexts and materials. For example, in our explorations we add touch and gesture sensitivity to the human body and liquids. We demonstrate the rich capabilities of Touché with five example setups from different application domains and conduct experimental studies that show gesture classification accuracies of 99% are achievable with our technology.
Author Keywords: Touch, gestures, ubiquitous interfaces,
sensors, on-body computing, mobile devices.
ACM Classification Keywords
H.5.2 [Information interfaces and presentation]: User Interfaces - Graphical user interfaces; Input devices & strategies.
INTRODUCTION
Touché is a novel capacitive touch sensing technology that provides rich touch and gesture sensitivity to a variety of analogue and digital objects. The technology is scalable, i.e., the same sensor is equally effective for a pencil, a doorknob, a mobile phone or a table. Gesture recognition also scales with objects: a Touché-enhanced doorknob can capture the configuration of fingers touching it, while a table can track the posture of the entire user (Figures 1b, 6 and 7). Sensing with Touché is not limited to inanimate objects – the user's body can also be made touch and gesture sensitive (Figures 1a and 9). In general, Touché makes it very easy to add touch and gesture interactivity to unusual, non-solid objects and materials, such as a body of water. Using Touché we can recognize when users touch the water's surface or dip their fingers into it (Figures 1c and 10).
Notably, instrumenting objects, humans and liquids with Touché is trivial: a single electrode embedded into an object and attached to our sensor controller is sufficient to computationally enhance an object with rich touch and gesture interactivity. Furthermore, in the case of conductive objects, e.g., doorknobs or a body of water, the object itself acts as an intrinsic electrode – no additional instrumentation is necessary. Finally, Touché is inexpensive, safe, low power and compact; it can be easily embedded or temporarily attached anywhere touch and gesture sensitivity is desired.
Touché proposes a novel form of capacitive touch sensing
that we call Swept Frequency Capacitive Sensing (SFCS).
In a typical capacitive touch sensor, a conductive object is
excited by an electrical signal at a fixed frequency. The
sensing circuit monitors the return signal and determines
touch events by identifying changes in this signal caused by
the electrical properties of the human hand touching the
object [46]. In ...
Haptic technology adds the sense of touch to virtual environments through haptic interfaces. This allows users to feel virtual objects on a computer through forces, vibrations, and motions. Haptic interfaces track user movements and apply forces through motors. Haptic rendering algorithms compute interaction forces between virtual objects and the user's movements in real-time. Applications include medical training simulations, remote robotics, virtual prototyping, and assisting those with disabilities.
This document describes the design and implementation of a sensitive artificial prosthetic hand glove. The glove uses various sensors to provide feedback about texture, shape, size, temperature, and weight of objects. Pressure, temperature, and humidity sensors were selected and placed on flexible printed circuits attached to a glove. Testing showed the sensor placements allowed detection of pressure and temperature changes when manipulating different objects. The low-cost smart glove design has potential applications for prosthetic limbs if further developed.
1. The document discusses touchless touch screen technology as an alternative to traditional touchscreens. It works by using sensors and cameras to detect hand motions and gestures in front of the screen rather than requiring physical touch.
2. Examples of companies developing this technology include Elliptic Labs, which uses ultrasound sensors to recognize gestures like hand waves to control devices without touching them. Microsoft is also exploring touchless interfaces using gestures.
3. Potential applications discussed include touchless monitors for medical use when wearing gloves, as well as general control of devices like computers through gestures and hand motions in front of a sensor. The technology aims to avoid issues like screen degradation from frequent physical touching.
Recent progress in tactile sensor technologymakoto shimojo
In this paper, I would like to explain the role of the sense of touch, recent research and development trends, and the desired functions of tactile sensors, focusing on the robotics field.
1.Introduction
2.what are the characteristics of tactile?
1. Confirmation by contact
2. Detection of invisible objects
3. primordial sensation
3.Mechanism and functions of tactile sensors
4.Recent trends in R&D of tactile sensors
1. Use of new methods & materials
2. Modularity
3. Use of camera modules
4. Robot-Assisted Surgery (RAS)
5. Tactile - Proximity sensor
6. Increasing use in machine learning
5.Desired functions of tactile sensors
1. The center position of the load and its total load would be sufficient?
2. What applications require slip detection?
3. Fast Response Is Important
4. Freeform Covering
5. Detecting the shape features of the contact area
6. Integrated tactile - proximity sensor is convenient
7. How does the fingertip cover work?
8. Development of a sensor-integrated hand
6.Summary & References
This document provides an introduction and overview of a project on vision-based hand gesture recognition. It discusses the motivation for the project and how hand gestures can provide a more natural human-computer interaction compared to traditional input devices like keyboards and mice. The document outlines the objectives of the project, which are to develop a system that can identify specific hand gestures using a webcam and interpret them to control mouse operations on a computer. It also provides an overview of the organization of the project report and the topics that will be discussed in subsequent chapters, such as the literature review, proposed methodology, results, and conclusions.
The document discusses haptic technology, which uses tactile feedback to allow users to interact with virtual objects. It provides a brief history of haptics, explaining how robotics and virtual reality helped advance the field. The core concepts covered include haptic feedback, which simulates the sense of touch through tactile and force feedback. Various haptic devices are categorized and applications like gaming, mobile devices, and medicine are explored. The document concludes by discussing the future potential of haptics in fields like telepresence surgery, virtual shopping, and new industries enabled by more advanced haptic interfaces.
Survey on Red Tacton Technology for Human Area NetworkIOSR Journals
This document summarizes research on Red Tacton technology for human area networks (HANs). Red Tacton uses the human body as a communication medium by transmitting data via electric fields induced on the skin's surface. It allows devices to communicate when touching a person. The document discusses Red Tacton's working principles, safety measures, applications in areas like healthcare monitoring and underwater communication, and comparisons to other HAN technologies. It concludes that Red Tacton is well-suited for wireless pervasive environments and its use in such networks will continue to expand.
This document describes a digital vocalizer system that uses a data glove with flex sensors and an accelerometer to detect hand gestures. The sensors detect finger bending and hand tilt/position. The Arduino UNO microcontroller converts these detected gestures into corresponding audio words or visual text displayed on an LCD screen. This system aims to help reduce communication barriers between deaf/mute/blind communities and others by translating gestures into audio and visual outputs.
The document proposes using the Leap Motion controller and a 6-DOF Jaco robotic arm to allow for intuitive and adaptive robotic arm manipulation. An algorithm would allow optimum mapping between a user's hand movements tracked by the Leap Motion controller and movements of the Jaco arm. This would allow a more natural human-computer interaction and smooth manipulation of the robotic arm, adapting to hand tremors or shakes. The goal is to enhance quality of living for people with upper limb problems and support them in performing essential daily tasks.
Similar to Touché Enhancing Touch Interaction on Humans, Screens, Liqu.docx (20)
Deadline 6 PM Friday September 27, 201310 Project Management Que.docxedwardmarivel
Deadline 6 PM Friday September 27, 2013
10 Project Management Questions with sub-questions under each question. A word document is provided with all questions and directions.
Problem 1
The following data were obtained from a project to create a new portable electronic.
Activity
Duration
Predecessors
A
5 Days
---
B
6 Days
---
C
8 Days
---
D
4 Days
A, B
E
3 Days
C
F
5 Days
D
G
5 Days
E, F
H
9 Days
D
I
12 Days
G
Step 1: Construct a network diagram for the project.
Step 2: Answer the following questions:
a)
What is the Scheduled Completion of the Project?
b)
What is the Critical Path of the Project?
c)
What is the ES for Activity D?
d)
What is the LS for Activity G?
e)
What is the EF for Activity B?
f)
What is the LF for Activity H?
g)
What is the float for Activity I?
Problem 2
The following data were obtained from a project to build a pressure vessel:
Activity
Duration
Predecessors
A
6 weeks
---
B
6 weeks
---
C
5 weeks
B
D
4 weeks
A, C
E
5 weeks
B
F
7 weeks
D, E, G
G
4 weeks
B
H
8 weeks
F
I
5 weeks
G
J
3 week
I
Step 1: Construct a network diagram for the project.
Step 2: Answer the following questions:
a)
Calculate the scheduled completion time.
b)
Identify the critical path
c)
What is the slack time (float) for activity A?
d)
What is the slack time (float) for activity D?
e) What is the slack time (float) for activity E?
f) What is the slack time (float) for activity G?
Problem 3
The following data were obtained from a project to design a new software package:
Activity
Duration
Predecessors
A
5 Days
---
B
8 Days
---
C
6 Days
A
D
4 Days
C, B
E
5 Days
A
F
4 Days
D, E, G
G
4 Days
B, C
H
3 Day
G
Step 1: Construct a network diagram for the project.
Step 2: Answer the following questions:
a)
Calculate the scheduled completion time.
b)
Identify the critical path(s)
c)
What is the slack time (float) for activity B?
d)
What is the slack time (float) for activity D?
e) What is the slack time (float) for activity E?
f) What is the slack time (float) for activity G?
Problem 4
The following data were obtained from an in-house MIS project:
Activity
Duration
Predecessors
A
5 Days
---
B
8 Days
---
C
5 Days
A
D
4 Days
B
E
5 Days
B
F
3 Day
C, D
G
7 Days
C, D
H
6 Days
E, F, G
I
9 Days
E, F
Step 1: Construct a network diagram for the project.
Step 2: Answer the following questions:
a)
Calculate the scheduled completion time.
b)
Identify the critical path
c)
What is the slack time (float) for activity A?
d)
What is the slack time (float) for activity D?
e)
What is the slack time (float) for activity E?
f)
What is the slack time (float) for activity F?
PROBLEM 5
Use the network diagram below and the additional information provided to answer the corresponding questions.
a) Give the crash cost per day per activity.
b) Which activities should be crash.
DEADLINE 15 HOURS
6 PAGES
UNDERGRADUATE
COURSEWORK
HARVARD FORMATING
DOUBLE SPACING
INSTRUCTIONS
This assignment seeks to assess your ability to:
• Critically evaluate and discuss the major developments during 2017 in corporate taxation from the perspective of multinational companies and their auditors, governments and other stakeholders.
• Apply appropriate knowledge, analytical techniques and concepts to problems and issues arising from both familiar and unfamiliar situations;
• Think critically, examine problems and issues from a number of perspectives, challenge viewpoints, ideas and concepts and make well-reasoned judgements;
• Present, discuss and defend ideas, concepts and views effectively through formal language.
Background:
In the final weeks of 2017 a leading tax expert suggested that “a whirlwind of international tax changes has swept the globe”. He also went on to say that for companies operating in Europe there is no end in sight to the pace of change. The final recommendations on base erosion and profit shifting (BEPS) from the OECD have been endorsed by the EU. In fact a number of European governments have already implemented large parts of these proposals ahead of schedule.
The third quarter of the year saw the European Commission in the spotlight with its landmark decision that the technology giant Apple must repay no less than €13 billion of taxes to the Irish government. This ruling was based on the view that the favourable tax treatment was effectively state aid and hence the Irish government had broken EU law. At the same time countries across the world continue to compete by reducing the rate of corporate taxes. Many commentators suggest that the UK government will cut the corporate tax rate to 10% if the country fails to negotiate a trade deal with the European Union as part of the Brexit process. In a separate development earlier in the year the government of Hungary announced it would become the tax haven of Central Europe with a plan to reduce corporation tax to a mere 9%.
Required:
You are to write a report for the Board of Directors of a listed global company that has manufacturing and R&D activities across Europe, Asia, Australasia and America. The report should assume that the directors have detailed knowledge of the group activities but are not taxation specialists. However they would be aware of issues relating to corporate governance, transparency and reputational risks.
The report should cover the following aspects:
Evaluate the major developments that occurred in corporate taxation in 2017 and the issues that may arise in the current year.
Discuss the implications for the group in regard to the relationship with its auditors.
Consider how other stakeholders and non-governmental organisations (NGOs) may be affected by changes in the level of corporate taxes and their possible reaction.
The resources below are on Blackboard and provide an introduction to the topic.
“Corpor.
De nada.El gusto es mío.Encantada.Me llamo Pepe.Muy bien, grac.docxedwardmarivel
Este documento presenta varios diálogos y conversaciones cortas que incluyen saludos comunes, preguntas sobre el origen y el nombre de las personas, y despedidas. Los diálogos practican vocabulario y estructuras básicas de conversación en español.
DDL 24 hours reading the article and writing a 1-page doubl.docxedwardmarivel
DDL:
24 hours
reading the article and writing a
1-page double space
annotated bibliography
including:
1.reference
2.specify the concept you will use
3.explain its significance to the course
4.specify how you'll use it in your project
see the article and project inf below
.
*
DCF valuation methodSuper-normal growth modelApplications: single CF, annuity, perpetuity, uneven CFs, bond, stock, etc.
LECTURE 2 Valuation Basics
(Chapters 4, 6, 7)
*
Amount of cash flows expectedRisk of the cash flows Timing of the cash flow stream
Factors that Determine Value
*
DCF Method: General Formula
Finding PVs is discounting. The discount factor i is determined by the cost of capital invested.
*
10%
Single Cash Flow
100
0
1
2
3
PV = ?
What’s the PV of $100 due in 3 years if i = 10%?
*
Financial Calculator Setup
BGN END
P/Y 1
FORMAT: DEC 4 or larger
*
Financial Calculator
Solution
s
N I/YR PV PMTFV
?
N = 3, I/YR = 10, PMT = 0, FV = 100
CPT, PV
-75.13
/
INPUTS
OUTPUT
*
Spreadsheet
.
DDBA 8307 Week 2 Assignment Exemplar
John Doe[footnoteRef:1] [1: Type your name here]
DDBA 8307-6[footnoteRef:2] [2: Type in DDBA section number (e.g. DDBA 8307 – 6) ]
Dr. Jane Doe[footnoteRef:3] [3: Enter faculty name here.]
1
Scales of Measurement
Type text here. Discuss the implications of “scales of measurement” in quantitative research. Be sure to use a minimum of two citations to support your position(s). Be sure to review the “Scales of Measurement” media from Week 1. This section should be no more than two paragraphs.
Research Question
What are the means, standard deviations, frequencies, and percentages of the Lesson 21 Exercise File variables?
Presentation of Findings
I analyzed data from Lesson 21 Exercise File [footnoteRef:4]. In this section, I present descriptive statistics for the study quantitative and qualitative variables. Appropriate APA tables and figures accompany the analysis[footnoteRef:5]. [4: Insert the appropriate file name. ] [5: The tables and figures from your SPSS output will need to be copied and pasted in the appropriate location.]
Descriptive Statistics[footnoteRef:6] [6: Detailed information can be found in Lesson 20, “Univariate Descriptive Statistics for Qualitative Variables,” and Lesson 21, “Univariate Descriptive Statistics for Quantitative Variables,” in the Green and Salkind text.
]
Descriptive statistics were run for the quantitative and qualitative variables in the Week 1 Assignment data set. Table 1 depicts the means and standard deviations for the quantitative data. Figure 1 depicts a histogram for the GPA variable. Table 2 depicts the frequencies and percentages for the qualitative (categorical) data. Figure 2 depicts a pie chart for the ethnic variable. Appendix 1 depicts the SPSS output.
Table 1[footnoteRef:7] [7: This is an example of an APA-formatted descriptive statistics table. Refer to Sections 5.01-5.19, in the APA Manual for detailed information on APA tables. The descriptive statistics table here includes the appropriate information derived from the SPSS output that is to be pasted as an appendix. Do not split tables across pages. Note: The numbers in the SPSS output presented here are fictitious numbers and do not represent correct numbers in the data set you will use for this application.
]
Means (M) and Standard Deviations (SD) for Study
Quantitative Variables (N = 105)
Variable[footnoteRef:8] [8: You would simply add rows to the table to accommodate the variables you have used in the analysis (i.e., variable 3, variable 4, etc.). Hint: Use the Microsoft Word Table feature.
]
M
SD
GPA
2.78
.76
Final
61.48
7.94
Percent
80.34
12.12
Figure 1. Histogram of GPA distribution.
Table 2[footnoteRef:9] [9: Recall from Lesson 20, “Univariate Descriptive Statistics for Qualitative Variables” (Green & Salkind, 2017), frequencies and percentages are reported for qualitative (nominal) variables. Note: Frequency and percentages are the only c.
DBM380 v14Create a DatabaseDBM380 v14Page 2 of 2Create a D.docxedwardmarivel
DBM/380 v14
Create a Database
DBM/380 v14
Page 2 of 2Create a Database
The following assignment is based on the business scenario for which you created both an entity-relationship diagram and a normalized database design in Week 2.
For this assignment, you will create multiple related tables that match your normalized database design. In other words, you will implement a physical design (an actual, usable database) based on a logical design.
Refer to the linked W3Schools.com articles “SQL CREATE TABLE Statement,” “SQL PRIMARY KEY Constraint,” “SQL FOREIGN KEY Constraint,” and “SQL INSERT INTO Statement” for help in completing this assignment.
Note: In the industry, even the most carefully thought out database designs can contain mistakes. Feel free to correct in your tables any mistakes you notice in your normalized database design. Also, note that in Microsoft® Access®, you follow the steps below to launch the SQL editor:
Figure 1. To create a SQL query in Microsoft® Access®, begin by clicking the CREATE tab.
To Complete This Assignment:
1. Use the CREATE TABLE statement to create each table in your design. Note that a table in a RDMS corresponds to an entity in an entity-relationship diagram. Recommended tables for this assignment are CUSTOMER, ORDER, ORDER_DETAIL, PRODUCT, EMPLOYEE, and STORE.
2. As part of each CREATE TABLE statement, define all of the columns, or fields, that you want each particular table to contain. Give them short, meaningful names and include constraints; that is, describe what type of data each column (field) is allowed to hold and any other constraints, such as size, range, or uniqueness.
3. Note that any field you marked as a unique identifier in your normalized database design is a key field. Key fields must be described as both UNIQUE and NOT NULL, which means a value must exist for each record and that value must be unique across all records.
4. After you have created all six tables, including relationships between the tables as appropriate (matching the primary key in one table to a foreign key in another table), use the INSERT INTO statement to insert 10 records into each of your tables. You will need to make up the data you insert into your tables. For example, to insert one record into the CUSTOMER table, you will need to invent a customer number, a customer name, and so on—one value for each of the fields you defined for the CUSTOMER table—to insert into the table.
5. To ensure that your INSERT INTO statements succeeded in populating your tables, use the SELECT statement described in Ch. 7, “Introduction to Structured Query Language,” in Database Systems: Design, Implementation, and Management.to retrieve the records you inserted. For example, to see all 10 records you inserted into the CUSTOMER table, you might apply the following SQL statement: SELECT * FROM CUSTOMER;
After you have created all six tables and populated ten records in each table, submit to the Assignment Files tab the database containin.
DB3.1 Mexico corruptionDiscuss the connection between pol.docxedwardmarivel
DB3.1: Mexico corruption
Discuss the connection between politics, corruption, and criminal organizations in Mexico. How would you go about separating these? Give examples and be specific. Support your ideas on why you would do these specific measures.
DB3.2: Collapse of Soviet Union
How has the collapse of the Soviet Union fostered pirate capitalism and organized crime? Be specific with your answer and support your answer. Do you think that if the Soviet Union did not collapse pirate capitalism and organized crime would still flourish? Support your opinion.
300 words per post
.
DB2Pepsi Co and Coke American beverage giants, must adhere to th.docxedwardmarivel
DB2
Pepsi Co and Coke American beverage giants, must adhere to the U.S Foreign Corruption Act wherever their businesses may take them. Both companies expanded their U.S businesses to India with differing initial results. Coke came home (initially) and Pepsi Co prospered.
Do your research and explain the socio-cultural barriers faced by these two companies? What in your view were the reasons which negatively impacted Coke and positively touched Pepsi Co?
WEEK 3:
Interactive
: Select one company other than the 2 mentioned above, and share this company’s experience in the United Arab Emirates. Comment on another learner’s company experience in a different location of the world.
WEEK 4:
Interactive
: Comment on a different learner’s company experience in a totally different location from those completed earlier. Do you feel that cultural training is an essential pre-requisite for expatriates in any host country? Why/Why not?
Remember to use APA referencing in the body of your posting.
.
DB1 What Ive observedHave you ever experienced a self-managed .docxedwardmarivel
DB1: What I've observed
Have you ever experienced a self-managed team? If so, describe it. If not, why do you think your organization has not embraced self managed teams?
DB2: Case Analysis
Review the case study at the end of Chapter 8, Frederick W. Smith - FedEx. Answer the five questions below:
1. How do the standards set by Fred Smith for FedEx teams improve organizational performance?
2. What motivates the members of FedEx to remain highly engaged in their teams?
3. Describe the role FedEx managers play in facilitating team effectiveness.
4. What types of teams does FedEx use? Provide evidence from the case to support your answer.
5. Leaders play a critical role in building effective teams. Cite evidence from the case that FedEx managers performed some of these roles in developing effective teams.
Image Source Team:
http://www.freedigitalphotos.net/images/gallery-thumbnails.php?id=50143103253525199427035558
.
DB Response 1I agree with the decision to search the house. Ther.docxedwardmarivel
DB Response 1
I agree with the decision to search the house. There was reasonable suspicion to believe the fugitive could have been in the home. The homeowner not only consented to the search of the house but requested it for her safety. Complacency kills. In this situation, the officer is very regretful in his decision to conduct a complacent search of the home, and luckily nobody was killed.
My department does not have body cameras, but I still conduct business as if somebody is recording me. We live in a generation of surveillance. You never know when there are hidden cameras, a camera on a business you did not notice, or a cell phone recording from the top floor of a building. We hire police officers with high amounts of integrity because the definition of integrity is doing the right thing even when nobody is looking. I would be lying if I said my grandmother would approve of everything I do on the job. I am most guilty of foul language and it is something that I am working on not doing that. However, I can emphatically say I work with integrity and honesty without a doubt.
I think setting limits on tolerable behavior in regards to sexual and general harassment is appropriate; however, there are too many situations to make a policy for every behavior one could find inappropriate. When it comes to using force again every situation is different but there should be a pretty well laid out policy at departments for when and how an officer should use a certain amount of force. Officers should be trained on de-escalation tactics and alternatives to using force. Tactical training should include strategies to create time, space, and distance, to reduce the likelihood that force will be necessary and should occur in realistic conditions appropriate to the department’s location (U.S. Commission On Civil Rights, 2018).
Philippians 2 verses 3 – 8 is a pretty straightforward verse with great leadership lessons. Be humble, put others before yourself, and be a servant leader.
From the very beginning of any interrogation, the accused has constitutional rights not to speak to police and also to have an attorney present. The Eighth Amendment to the Constitution prohibits cruel and unusual punishments placed upon any persons in the U.S. With these rights in mind I will only go as far as the Constitution allows when interrogating this suspect even if the suspect admits where the child is if the admission was coerced that admission could get thrown out of court. I would never compromise the investigation. There are other ways to find the abducted girl through detective work than just interrogating the suspect. The cost of illegal interrogations is documented in the number of lost prosecutions. Literally, thousands of cases across the country have had to be dismissed because prosecutors could not trust that the evidence provided by police officers was legitimate or the officer had lost credibility as a witness in all cases because of his or her wrongdoing (P.
DB Response prompt ZAKChapter 7, Q1.Customers are expecting.docxedwardmarivel
DB Response prompt ZAK
Chapter 7, Q1.
Customers are expecting more from their service providers. Rather than traditionally accepting boilerplate offerings from service providers, customers desire that service providers cater to their requests. Organizations providing services must keep up with the customer’s demand or risk losing business to others who will. Many service providers have been adopting lean principles to accommodate the needs of their customers in successful attempts to decrease waste, increase efficiency, improve customer service and satisfaction (Daft, 2016, p. 275). From online music providers, customers expect music tracks personalized for their tastes. From airlines, customers can expect preflight seat and meal selections. Amazon.com provides custom personalization to a customers’ home pages by placing personally directed advertisements and products which the customer is more likely to order from the company. Amazon book recommendations are personalized to the specific customer and are provided based upon previous books read. With customers expecting customized and catered experiences, companies need to keep up with this demand and embrace mass customization in order to obtain and retain customers.
Chapter 7, Q2.
While many facets of businesses may involve craft technology, it is still important for business schools to teach management. Some businesses which only expect their leaders to gain knowledge and expertise from experience, may be creating a bureaucratic and restricted model for their business. Companies which rely only on internal training for their leaders can miss opportunities from potential leaders coming in from the outside. Business schools which teach management can provide potential leaders with a foundation to draw from. Teaching management can expose students to issues and opportunities experienced by others, not just ones restricted to one specific company. Teaching management from a textbook is just one method of conveying information. Just as one would not necessarily be proficient in piloting a boat from reading a book, a textbook about doing so would provide the student with underlying concepts which could dramatically increase the success of the student when they move to an actual boat. This textbook based training would be further enhanced with some practical experience.
Chapter 8, Q1.
Touché: Enhancing Touch Interaction on
Humans, Screens, Liquids, and Everyday Objects
ABSTRACT
Touché proposes a novel Swept Frequency Capacitive Sens-
ing technique that can not only detect a touch event, but also
recognize complex configurations of the human hands and
body. Such contextual information significantly enhances
touch interaction in a broad range of applications, from
conventional touchscreens to unique contexts and materials.
For example, in our explorations we add touch and gesture
sensitivity to the human body and liquids. We demonstrate
the rich capabilities of Touché with five example setups
from different application domains and conduct experi-
mental studies that show gesture classification accuracies of
99% are achievable with our technology.
Author Keywords: Touch, gestures, ubiquitous interfaces,
sensors, on-body computing, mobile devices.
ACM Classification Keywords
H.5.2 [Information interfaces and presentation]: User Inter-
faces - Graphical user interfaces; Input devices & strategies.
INTRODUCTION
Touché is a novel capacitive touch sensing technology that
provides rich touch and gesture sensitivity to a variety of
analogue and digital objects. The technology is scalable,
i.e., the same sensor is equally effective for a pencil, a door-
knob, a mobile phone or a table. Gesture recognition also
scales with objects: a Touché enhanced doorknob can cap-
ture the configuration of fingers touching it, while a table
can track the posture of the entire user (Figures 1b, 6 and 7).
Sensing with Touché is not limited to inanimate objects –
the user’s body can also be made touch and gesture sensi-
tive (Figures 1a and 9). In general, Touché makes it very
easy to add touch and gesture interactivity to unusual, non-
solid objects and materials, such as a body of water. Using
Touché we can recognize when users touch the water’s sur-
face or dip their fingers into it (Figures 1c and 10).
Notably, instrumenting objects, humans and liquids with
Touché is trivial: a single electrode embedded into an object
and attached to our sensor controller is sufficient to compu-
tationally enhance an object with rich touch and gesture
interactivity. Furthermore, in the case of conductive objects,
e.g., doorknobs or a body of water, the object itself acts as
an intrinsic electrode – no additional instrumentation is nec-
essary. Finally, Touché is inexpensive, safe, low power and
compact; it can be easily embedded or temporarily attached
anywhere touch and gesture sensitivity is desired.
Touché proposes a novel form of capacitive touch sensing
that we call Swept Frequency Capacitive Sensing (SFCS).
In a typical capacitive touch sensor, a conductive object is
excited by an electrical signal at a fixed frequency. The
sensing circuit monitors the return signal and determines
touch events by identifying changes in this signal caused by
the electrical properties of the human hand touching the
object [46]. In SFCS, on the other hand, we monitor the
response to capacitive human touch over a range of fre-
quencies.
Munehiko Sato1,2*, Ivan Poupyrev1, Chris Harrison1,3
1 Disney Research Pittsburgh, 4720 Forbes Avenue, Pittsburgh, PA 15213 USA
{munehiko.sato, ivan.poupyrev}@disneyresearch.com
2 Graduate School of Engineering, The University of Tokyo, Hongo 7-3-1, Tokyo 113-8656 Japan
3 HCI Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 USA
[email protected]
Figure 1: Touché applications: (a) on-body gesture sensing; (b) a smart doorknob with a “gesture password”; (c) interacting with water; (d) hand postures in touch screen interaction.
* This work was conducted over the course of a one-year research internship at Disney Research, Pittsburgh.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CHI 2012, May 5-10, 2012, Austin, TX, USA.
Copyright 2012 ACM 978-1-4503-1015-4/12/05...$10.00.
Objects excited by an electrical signal respond differently at different frequencies; therefore, the changes in
the return signal will also be frequency dependent. Thus,
instead of measuring a single data point for each touch
event, we measure a multitude of data points at different
frequencies. We then use machine learning and classifica-
tion techniques to demonstrate that we can reliably extract
rich interaction context, such as hand or body postures, from
this data. Not only can we determine that a touch event oc-
curred, we can also determine how it occurred. Importantly,
this contextual touch information is captured through a sin-
gle electrode, which could be simply the object itself.
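In code, the contrast with fixed-frequency sensing can be sketched as follows; `measure_amplitude` is a purely hypothetical stand-in for the analog front end, and the sweep range mirrors the parameters given later in the implementation section:

```python
# Conceptual sketch of Swept Frequency Capacitive Sensing (SFCS):
# instead of one reading at a single fixed frequency, record the
# return-signal amplitude at every excitation frequency in a sweep.
def capacitive_profile(measure_amplitude, freqs):
    """Return one data point per excitation frequency."""
    return [measure_amplitude(f) for f in freqs]

# 1 kHz to 3.5 MHz in 17.5 kHz steps -> a 200-point capacitive profile.
freqs = list(range(1_000, 3_500_000, 17_500))
profile = capacitive_profile(lambda f: 1.0, freqs)  # stub measurement
```

A fixed-frequency sensor would yield a "profile" of length one; the 200-point curve is what the classifier later consumes.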
Although electromagnetic signal frequency sweeps have
been used for decades in wireless communication and in-
dustrial proximity sensors [35], we are not aware of any
previous attempt to explore this technique for touch interac-
tion. Our contributions, therefore, are multifold:
1) We propose and develop a novel capacitive touch sensing
technology, called Swept Frequency Capacitive Sensing. It
allows for minimally instrumented objects to capture a
wealth of information about the context of touch interaction.
It also permits novel mediums for capacitive touch and ges-
ture sensing, such as water and the human body.
2) We report a number of innovative applications that
demonstrate the utility of our technology including a) smart
touch interaction on everyday objects, b) tracking human
body postures with a table, c) enhancing touchscreen inter-
action, d) making the human body touch sensitive, and e)
recognizing hand gestures in liquids.
3) We conduct controlled experimental evaluations for each
of the above applications. Results demonstrate recognition
rates approaching 100%. This suggests Touché is immedi-
ately feasible in a variety of real-world applications.
RELATED WORK AND APPROACHES
The importance of touch and gestures has been long appre-
ciated in the research and practice of human-computer in-
teraction (HCI). There is a tremendous body of previous
work related to touch, including the development of touch
sensors and tactile displays, hand gesture tracking and
recognition, designing interaction techniques and applica-
tions for touch, building multitouch, tangible and flexible
devices. See [2, 6, 21, 24, 25, 27, 44] for a subset of previ-
ous work on touch.
The foundation for all touch interaction is touch sensing,
i.e., technologies that capture human touch and gestures.
This includes sensing touch using cameras or arrays of opti-
cal elements [15, 22], laser rangefinders [4], resistance and
pressure sensors [31] and acoustics [16, 17] – to name a
few. The most relevant technology is capacitive touch sens-
ing, a family of sensing techniques based on the same physical
phenomenon – capacitive coupling.
The basic principles of operation in most common capaci-
tive sensing techniques are quite similar: A periodic electri-
cal signal is injected into an electrode forming an oscillating
electrical field. As the user’s hand approaches the electrode,
a weak capacitive link is formed between the electrode and
conductive physiological fluids inside the human hand, al-
tering the signal supplied by the electrode. This happens
because the user’s body introduces an additional path for flow
of charges, acting as a charge “sink” [46]. By measuring the
degree of this signal change, touch events can be detected.
There is a wide variety of capacitive touch sensing tech-
niques. One important design variable is the choice of signal
property that is used to detect touch events, e.g., changes in
signal phase [19] or signal amplitude [1, 25, 30] can be
used for touch detection. The signal excitation technique is
another important design variable. For example, the earliest
capacitive proximity sensors of the 1970s oscillated at a
resonant frequency and detected touch through signal damping,
as additional capacitance shifts the resonant frequency
of the sensing circuit [35].
trode layouts, the materials used for electrodes and sub-
strates and the specifics of signal measurement resulted in a
multitude of capacitive techniques, including charge trans-
fer, surface and projective capacitive, among others [1, 25].
Capacitive sensing is a malleable and inexpensive technolo-
gy – all it requires is a simple conductive element that is
easy to manufacture and integrate into devices or environ-
ments. Consequently, today we find capacitive touch in mil-
lions of consumer device controls and touch screens. It has,
however, a number of limitations. One important limitation
is that capacitive sensing is not particularly expressive – it
can only detect when a finger is touching the device and
sometimes infer finger proximity. To increase the expres-
siveness, matrices of electrodes are scanned to create a 2D
capacitive image [6, 21, 30, 37]. Such space multiplexing
allows the device to capture spatial gestures, hand profiles
[30] or even rough 3D shapes [36]. However, this comes at
the cost of increased engineering complexity, limiting its
applications and precluding ad hoc instrumentation of our
living and working spaces. Current capacitive sensors are
also limited in materials they can be used with. Typically
they cannot be used on the human body or liquids.
In this paper, we advocate a different approach to enhancing
the expressivity of capacitive sensing – by using frequency
multiplexing. Instead of using a single, pre-determined fre-
quency, we sense touch by sweeping through a range of
frequencies. We refer to the resulting curve as a capacitive
profile and demonstrate its ability to expand the vocabulary
of interactive touch without increasing the number of elec-
trodes or the complexity of the sensor itself.
Importantly, our technology is not limited to a single elec-
trode. Sensor matrices can be easily constructed and would
bring many of the unique sensing dimensions described in
this paper. However, in the current work, we focus on a
single electrode solution, as that is the simplest – and yet
allows for surprisingly rich interactions. At the same time, it
is compact, inexpensive and can be easily integrated into a
variety of everyday objects and real world applications.
SWEPT FREQUENCY CAPACITIVE SENSING
The human body is conductive, e.g., the average internal
resistance of a human trunk is ~100 Ω [42]. Skin, on the
other hand, is highly resistive, ~1 MΩ for dry, undamaged
skin [42]. This would block any weak constant electrical
(DC) signal applied to the body. Alternating current (AC)
signal, however, passes through the skin, which forms a
capacitive interface between the electrode and ionic physio-
logic fluids inside the body [10]. The body forms a charge
“sink” with the signal flowing through tissues and bones to
ground, which is also connected to the body through a ca-
pacitive link [25, 46].
The resistive and capacitive properties of the human body
oppose the applied AC signal. This opposition, or electrical
impedance1, changes the phase and amplitude of the AC
signal. Thus, by measuring changes in the applied AC signal
we can 1) detect the presence of a human body and also 2)
learn about the internal composition of the body itself. This
phenomenon, in its many variations, has been used since the
1960s in medical practice to measure the fluid composition
of the human body [10], in electro-impedance tomography
imaging [5] and even to detect the ripeness of nectarine
fruits [13]. More recently, it has been used in a broad varie-
ty of capacitive touch buttons, sliders and touchscreens in
human-computer interaction [6, 19, 30, 46].
The amount of signal change depends on a variety of fac-
tors. It is affected by how a person touches the electrode,
e.g., the surface area of skin touching the electrode. It is
affected by the body’s connection to ground, e.g., wearing
or not wearing shoes or having one or both feet on the
ground. Finally, it strongly depends on signal frequency.
This is because at different frequencies, the AC signal will
flow through different paths inside of the body [10]. Indeed,
just as DC signal flows through the path of least resistance,
the AC signal will always flow through the path of least
impedance. The human body is anatomically complex and
different tissues, e.g., muscle, fat and bones, have different
resistive and capacitive properties. As the frequency of the
AC signal changes, some of the tissues become more opposed
to the flow of charges, while others become less so, thus
changing the path of the signal flow (see [10] for an overview
of the bioelectrical aspects of human body impedance).
1 Impedance is defined as the total opposition of a circuit or
material to an AC signal at a certain frequency. Impedance
consists of resistance and reactance, which, in the case of the
human body, is purely capacitive [10].
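For intuition, consider a simplified series resistor–capacitor model of a single tissue path (an illustrative idealization; real body impedance networks are more complex [10]):

```latex
|Z| = \sqrt{R^2 + X_C^2}, \qquad X_C = \frac{1}{2\pi f C}
```

As the excitation frequency f rises, the capacitive reactance X_C falls, so paths that block low-frequency current begin to conduct; sweeping f therefore probes different paths through the body.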
Therefore, by sweeping through a range of frequencies in
capacitive sensing applications, we obtain a wealth of in-
formation about 1) how the user is touching the object, 2)
how the user is connected to the ground and 3) the current
configuration of the human body and individual body prop-
erties. The challenge here is to reliably capture the data and
then find across-user commonalities – static and temporal
patterns that allow an interactive system to infer user inter-
action with the object, the environment, as well as the con-
text of interaction itself.
SFCS presents an exciting opportunity to significantly ex-
pand the richness of capacitive sensing. We are not aware of
previous attempts to design SFCS touch and gesture inter-
faces, investigate their interactive properties, identify possi-
ble application domains, or rigorously evaluate their feasi-
bility for supporting interactive applications2. All relevant
capacitive touch sensing techniques use a single frequency.
One of the reasons why SFCS techniques have not been
investigated before could be due to computational expense:
instead of sampling a single data point at a single frequency,
SFCS requires a frequency sweep and analysis of hundreds
of data points. Only recently, with the advance of fast and
inexpensive microprocessors, has it become feasible to use
SFCS in touch interfaces. Another challenge in using SFCS
is that it requires high-frequency signals, e.g., ~3 MHz. De-
signing conditioning circuitry for high-frequency signals is
a complex problem. We will discuss these challenges and
solutions in detail in the next section of this paper.
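The computational gap can be made concrete with some illustrative arithmetic, using the sweep parameters reported in the implementation section (200 steps per sweep, one sweep every ~33 ms):

```python
# A fixed-frequency sensor produces one data point per update;
# SFCS must acquire and process an entire sweep each update.
points_per_sweep = 200     # 200 frequency steps per sweep
sweeps_per_second = 33     # one sweep every ~33 ms (~33 Hz update rate)
points_per_second = points_per_sweep * sweeps_per_second
print(points_per_second)   # 6600 data points/s to condition and classify
```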
TOUCHÉ IMPLEMENTATION
The overall architecture of Touché is presented on Figure
2a. The user interacts with an object that is attached to a
Touché sensor board via a single wire. If the object itself is
conductive, the wire can be attached directly to it. Other-
wise, a single electrode has to be embedded into the object
and the wire attached to this electrode.
2 We reported preliminary explorations of SFCS technology in [29].
Figure 2: (a) Touché architecture; (b) Swept Frequency Capacitive Sensing with Touché.
Touché implements SFCS on a compact custom-built board
powered by an ARM Cortex-M3 microprocessor (Figure 3).
The on-board signal generator excites an electrode with
sinusoid sweeps and measures returned signal at each fre-
quency. The resulting sampled signal is a capacitive profile
of the touch interaction. We stress that in the current version
of Touché we do not measure phase changes of the signal in
response to user interaction. We leave this for future work.
Finally, the capacitive profile is sent to a conventional com-
puter over Bluetooth for classification. Recognized gestures
can then be used to trigger different interactive functions.
While it is possible to implement classification directly on
the sensor board, a conventional computer provided more
flexibility in fine-tuning and allowed for rapid prototyping.
Sensor board design
An ARM microprocessor, NXP LPC1759 running at 120
MHz, controls an AD5932 programmable wave generator to
synthesize variable-frequency sinusoidal signal sweeps from
1 kHz to 3.5 MHz in 17.5 kHz steps (i.e., 200 steps in each
sweep, see Figure 2b). The signal is filtered to remove envi-
ronmental noise and undesirable high frequency compo-
nents and is also amplified to 6.6 Vpp (Figure 4a), which is
then used to excite the attached conductive object. In the
current design we tune Touché to sense very small varia-
tions of capacitance at lower excitation frequencies by add-
ing a large bias inductor Lb (~100 mH), a technique used in
impedance measurement. By replacing it with a bias capaci-
tor, we can make Touché sensitive to very small inductive
variations, e.g., copper coil stretching.
The return signal from the object is measured by adding a
small sensing resistor, which converts alternating current
into an alternating voltage signal (Figure 4b). This signal is
then fed into a buffer to isolate sensing and excitation sec-
tions; an envelope detector then converts the AC signal into
a time-varying DC signal (Figure 4c). The microprocessor
samples the signal at a maximum of 200 kHz using a 12-bit
analog-digital converter (ADC). A single sweep takes
~33 ms, translating to a 33 Hz update rate.
Currently, the sampling rate of ADC is a main limiting fac-
tor for speed: a dedicated ADC with a higher sampling rate
would significantly increase the speed of Touché. Sampling
is much slower at low frequencies, as it takes longer for the
analogue circuitry to respond to a slowly varying signal. In
applications where an object does not respond to low fre-
quencies, we swept only in the high frequency range, tri-
pling the sensor update rate to ~100 Hz.
Touché Sensing Configurations
There are two basic sensor configurations. First, the user is
simply touching an object or an electrode (Figures 5a and
5c). This is the classic capacitive sensor configuration that
assumes that both the sensor and the user are sharing com-
mon ground, even through different impedances. For exam-
ple, if the sensor were powered from an electrical outlet, it
would be connected to the ground line of a building. The
user would be naturally coupled to the same ground via a
capacitive link to the floor or building structure. Although
this link may be weak, it is sufficient for Touché.
In the second case, the sensor is touching two different loca-
tions of the user body with its ground and signal electrodes
(Figures 5b and 5d). In this configuration Touché measures
the impedance between two body locations [10].
Communication and Recognition
For classification, we use a Support Vector Machine (SVM)
implementation provided by the Weka Toolkit [12] (SMO,
C=2.0, polynomial kernel, e=1.0) that runs on the aforemen-
tioned conventional computer. Each transmission from the
sensor contains a 200-point capacitive profile, from which
we extract a series of features for classification.
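As a rough scikit-learn analogue of this Weka setup (an assumption: the paper uses Weka's SMO with C=2.0 and a polynomial kernel of exponent 1.0, and the synthetic profiles below are invented purely for illustration):

```python
import numpy as np
from sklearn.svm import SVC  # SMO-style SVM, comparable to Weka's SMO

rng = np.random.default_rng(0)
# Two hypothetical gesture classes, each a tight cluster of 200-point profiles.
grasp = rng.normal(1.0, 0.01, size=(5, 200))
pinch = rng.normal(2.0, 0.01, size=(5, 200))
X = np.vstack([grasp, pinch])
y = [0] * 5 + [1] * 5

clf = SVC(C=2.0, kernel="poly", degree=1)  # polynomial kernel, exponent 1
clf.fit(X, y)
pred = clf.predict(rng.normal(2.0, 0.01, size=(1, 200)))[0]
```

With well-separated clusters like these, the degree-1 polynomial (effectively linear) kernel classifies a held-out profile correctly.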
The raw impedance values from the frequency sweep have a
natural high-order quality. As can be seen in Figures 6-10,
the impedance profiles are highly continuous, distinctive
and temporally stable. Therefore, we use all 200 values as
features without any additional processing. Additionally, we
compute the derivative of the impedance profile at three
different levels of aliasing, by down-sampling profiles into
arrays of 10, 20 and 40 points and applying a [-1, 1] kernel,
yielding another 70 features. This helps to capture shape
features of the profile, independent of amplitude: e.g., it is
easy to see the peak minima in Figures 6-10, but more difficult
to see the visually subtle, yet highly discriminative, peak
slopes. Moreover, using the derivative increases robustness to
global variations in impedance, e.g., an offset of signal
amplitude across all frequencies due to temperature variations.
As a final feature, we include the capacitive profile minimum,
which was found to be highly characteristic in pilot studies
(see Figures 6-10). Once the SVM has been trained, classification
can proceed in real time.
Figure 3: Touché sensing board: 36x36x5.5 mm, 13.8 grams.
Figure 4: Variable-frequency sweep and return signal.
Figure 5: Configurations of Touché applications.
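A sketch of this feature pipeline (the down-sampling method and the way the kernel is applied are assumptions; the paper specifies only the array sizes and the [-1, 1] kernel):

```python
import numpy as np

def touche_features(profile):
    """200 raw values + 70 derivative features + profile minimum = 271."""
    profile = np.asarray(profile, dtype=float)
    feats = [profile]                                    # all 200 raw points
    for n in (10, 20, 40):                               # three aliasing levels
        down = profile.reshape(n, -1).mean(axis=1)       # down-sample to n points
        deriv = np.convolve(down, [-1, 1], mode="same")  # [-1, 1] derivative kernel
        feats.append(deriv)                              # 10 + 20 + 40 = 70 features
    feats.append([profile.min()])                        # capacitive profile minimum
    return np.concatenate(feats)
```

For a 200-point capacitive profile this yields a 271-dimensional feature vector, which is then fed to the SVM.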
EXAMPLE TOUCHÉ APPLICATIONS
The application space of Touché is broad; therefore, at least
some categorization is pertinent to guide the development of
the interfaces based on this technology. We identified five
application areas where we felt that Touché could have the
largest impact – either as a useful enhancement to an estab-
lished application or a novel application, uniquely enabled
by our approach:
• making everyday objects touch gesture sensitive
• sensing human bimanual hand gestures
• sensing human body configuration (e.g., pose)
• enhancing traditional touch interfaces
• sensing interaction with unusual materials (e.g., liquids)
In the rest of this section we propose a single exemplary
application for each category, highlighting the utility and
scope of our sensing approach. We then evaluate these ap-
plications experimentally in the next section of the paper.
Making objects touch and grasp sensitive
If analogue or digital objects can be made aware of how
they are being touched, held or manipulated, they could
configure themselves in meaningful and productive ways
[14, 28, 34, 37, 38]. The canonical example is a mobile
phone which, when held like a phone, operates as a phone.
However, when held like a camera, the mode could switch
to picture-taking automatically.
Touché offers a lightweight, non-invasive sensing approach
that makes it very easy to add touch and gesture sensitivity
to everyday objects. Doorknobs provide an illustrative ex-
ample: they lie in our usual paths and already require touch
to operate. Yet, in general, doorknobs have not been infused
with computational abilities. A smart doorknob that can
sense how a user is touching it could have many useful fea-
tures. For example, closing a door with a tight grasp could
lock it, while closing it with a pinch might set a user’s away
message, e.g., “back in five minutes”. A sequence of grasps
could constitute a “grasp password” that would allow an
authorized user to unlock the door (Figure 1b).
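The "grasp password" idea reduces to matching a sequence of classifier outputs against a stored sequence; a minimal sketch (the gesture labels and the sequence itself are hypothetical):

```python
# Unlock only when the recognized grasp sequence matches the stored one.
GRASP_PASSWORD = ["pinch", "grasp", "one_finger"]  # hypothetical labels

def check_grasp_password(observed, password=GRASP_PASSWORD):
    """Compare the sequence of recognized grasps to the stored password."""
    return list(observed) == list(password)

unlocked = check_grasp_password(["pinch", "grasp", "one_finger"])
```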
Objects such as doorknobs can be easily instrumented with
Touché (Figure 5a). More importantly, existing conductive
structures can be used as sensing electrodes, for example,
the brass surface of a doorknob. Our Touché sensor could be
connected to these elements with a single wire, requiring no
further instrumentation (Figure 6). Contrast this to previous
techniques that generally require a matrix of sensors [20, 30,
37, 38]. We present detailed experimental evaluations of
Touché in this context later in this paper.
Body Configuration Sensing
Touché can be used to sense the configuration of the entire
human body. For example, a door could sense if a person is
simply standing next to it, if they have raised their arm to
knock on it, are pushing the door, or are leaning against it.
Alternatively, a chair or a table could sense the posture of a
seated person – reclined or leaning forward, arms on the
armrests or not, one or two arms operating on the surface, as
well as their configuration (Figure 7). More importantly,
this can occur without instrumenting the user. Similar to
everyday objects, conductive tables can be used as is, just
by connecting a Touché sensor. Non-conductive tables
would require a single flat electrode added to their surface
or could simply be painted with conductive paint.
Sensing the pose of the human body without instrumenting
the user has numerous compelling applications. Posture-
sensing technologies are an active area of research, with
applications in gaming, adaptive environments, smart offic-
es, in-vehicle interaction, rehabilitation and many others [9].
We view Touché as one such enabling technology, with
many exciting applications. To this end, we report an evalu-
ation of body posture sensing with a Touché-enhanced table
in the following Touché Evaluation section.
Figure 6: Capacitive profiles for making objects touch and grasp
sensitive (doorknob example); signal amplitude (V) vs. frequency
(MHz) for no touch, one finger, pinch, circle and grasp.
Figure 7: Capacitive profiles for sensing body postures (table
examples).
Enhancing Touchscreen Interaction
Touché brings new and rich interaction dimensions to con-
ventional touch surfaces by enhancing touch with sensed
hand posture (Figure 1d). For example, Touché could sense
the configuration of fingers holding a device, e.g., if they
are closed into a fist or held open, whether a single finger is
touching, all five fingers, or the entire palm (Figure 8). The
part of the hand touching could also potentially be inferred,
e.g., fingertips or knuckles, a valuable extra dimension of
natural touch input [16].
These rich input dimensions are generally invisible to tradi-
tional capacitive sensing. Diffuse infrared (IR) illumination
can capture touch dimensions such as finger orientation [39]
and hand shape [22]. However, sensing above the surface is
challenging, as image quality and sensing distance are severely
degraded by traditional diffuse projection surfaces ([18]
offers an expensive alternative). An external tracking infra-
structure can also be used. This, however, prohibits the use
of mobile devices and introduces additional cost and com-
plexity [20, 40].
Touché provides a lightweight, yet powerful, solution to
bring hand posture sensing into touchscreen interaction.
There are many possible implementations – one is presented
in Figures 5d and 8. At the very minimum, this would ena-
ble a touch gesture similar to a mouse “right click”. Right
click is a standard and useful feature in desktop GUI inter-
faces. However, it has proved to be elusive in contemporary
touch interfaces, where it is typically implemented as a tap-
and-hold [16]. Additionally, combining hand pose and touch
could lead to many more sophisticated interactions, such as
gesture-based 3D manipulation and navigation (Figure 1d),
advanced 3D sculpting and drawing, music composition and
performance, among others.
In general, Touché could prove particularly useful for mo-
bile touchscreen interaction, where input is constrained due
to small device size. In this context, a few extra bits of input
bandwidth would be a welcome improvement. Detailed
controlled experiments evaluating gesture sensing on a sim-
ulated mobile device are reported subsequently.
On-Body Gesture Sensing
Unlike inanimate physical objects, the human body is highly
variable and uncontrolled, making it a particularly challeng-
ing “input device”. Compounding this problem is that users
are highly sensitive to instrumentation and augmentation of
their bodies. For a sensing technique to be successful, it has
to be minimally invasive. Research has attempted to over-
come these challenges by exploring remote sensing ap-
proaches, including bio-acoustics [17], EMG [33] and com-
puter vision [15], each of which has its own distinct set of
advantages and drawbacks. Touché is able to sidestep much
of this complexity by taking advantage of the conductive
properties of the body, appropriating the skin as a touch-
sensitive surface while remaining minimally invasive.
Because humans are inherently mobile, it is advantageous to
define an on-body signal source and charge sink for Touché.
As our hands serve as our primary means of manipulating
the world, they are the most logical location to augment
with Touché. In this case, the source or sink is placed near
the hands, for example, worn like a wristwatch. The other
electrode can be placed in many possible locations, includ-
ing the opposite wrist (Figures 5b and 9), the waist, collar
area, or lower back [11]. As a user touches different parts of
their body the impedance between the electrodes varies as
the signal flows through slightly different paths on and in
the user’s body. The resulting capacitive profile is different
for each gesture, which allows us to detect a range of hand-
to-hand gestures and touch locations (Figure 9).
It is worth noting the remarkable kinesthetic awareness of
human beings [3], which has important implications in the
design of on-body interfaces [17]. As the colloquialism
"like the back of your hand" suggests, we are intimately
familiar with our bodies. This can be readily demonstrated
by closing one's eyes and touching one's nose or clapping
one's hands together. In addition to our powerful kinesthetic
senses, we have finely-tuned on-body touch sensations and
hand-eye coordination, all of which can be leveraged for
digital tasks.
A wide array of applications can be built on top of the body.
One example is controlling a mobile phone using a set of
Figure 8: Capacitive profiles for enhancing touchscreen
interaction with hand posture sensing.
Figure 9: Capacitive profiles for on-body sensing with wrist-
mounted Touché sensors.
on-body gestures (Figure 1a). For example, making a "shh"
gesture, with the index finger touching the lips, could put the
phone into silent mode. Putting the hands together, forming
a book-like gesture, could replay voicemails. We evaluate
the feasibility of using a simple gesture set in the next section.
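Such a gesture set could drive a phone through a simple dispatch table, sketched below. The gesture names, phone fields, and actions are purely hypothetical examples, not part of the Touché system.

```python
# Hypothetical mapping from recognized on-body gestures to phone
# commands; all names here are illustrative, not from Touché.
ACTIONS = {
    "shh":  lambda phone: phone.update(mode="silent"),
    "book": lambda phone: phone.update(playing="voicemail"),
}

def on_gesture(gesture, phone):
    """Dispatch a classified gesture to its phone action, if any."""
    handler = ACTIONS.get(gesture)
    if handler:
        handler(phone)

phone = {"mode": "ring", "playing": None}
on_gesture("shh", phone)   # the "shh" gesture silences the phone
print(phone["mode"])       # -> silent
```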
Sensing Gestures in Liquids
The real world does not consist only of hard and flat surfac-
es that can be easily enhanced with touch sensitivity. Liq-
uid, viscous, soft and stretchable materials are important
elements of everyday life. Enhancing these materials with
touch sensitivity, however, is challenging. Although there is
a growing body of research on sensing touch for textile, paper
and silicone materials [8, 32, 44], enhancing a body of liquid
with rich touch sensing has mostly remained out of reach,
and is a good example of Touché’s application range.
By interacting with water, we do not mean using touch
screens under water, but touching the water itself. In partic-
ular, our approach can distinguish between a user touching
the water’s surface and dipping their finger into it (Figure
10), which is difficult to accomplish with current capacitive
touch sensing techniques [7]. Resistive touchpads, e.g. [31],
would work under water, but require users to physically
press on the surface, which is not truly interaction with the
liquid, but rather with a submerged touchpad. Mechanical
[41] or optical [26] techniques introduce a large external sens-
ing apparatus, prohibiting ad-hoc and mobile interactions.
Furthermore, optical sensing generally requires controlled
lighting and clear liquids. Water-activated electrical switch-
es [45] can be used to detect the presence of water, but not
the user playing with water. These are just a few of the chal-
lenges of user-liquid interaction.
Touché can easily add touch sensitivity to various amounts
of liquid held in any container (Figure 5c). Simply by plac-
ing the electrode on the bottom of the water vessel, we can
detect a user touching the surface, dipping their fingers in
the water, and so on (Figure 10). The container can be
made of any material, and the electrode can be affixed to the
outside – although putting it inside increases sensitivity.
Applications of water sensing are mostly experiential, such
as games, art and theme park installations and interactive
aquariums. We can also track indirect interactions, i.e.,
when users are touching water via a conductive object. In
this way children’s water toys and eating utensils could be
easily enhanced with sounds and lights (Figure 1c).
TOUCHÉ EVALUATION
In the previous section we described five example applica-
tion domains where Touché could enhance touch interac-
tion. For our evaluation, we selected an exemplary configu-
ration and gesture set from each of these five domains de-
signed specifically to tax our system’s accuracy. Not only
does this minimize the potential for accuracy ceiling effects,
but also enables us to estimate the “sweet spot” in gesture
set size and composition through several post-hoc analyses
that are discussed subsequently.
These studies serve several purposes: 1) to demonstrate the
wide variety of applications and interactions enabled by
Touché, 2) to underscore the immediate feasibility of Tou-
ché, 3) to explore the potential richness of gesture vocabu-
laries our system could support, and 4) to establish the base-
line performance of the recognition engine.
Participants
We used two groups of 12 participants. The first group
completed the first four studies (9 males, 3 females, mean
age 27.6). A second group of 12 completed the final liquid
study created at a later date (10 males, 2 females, mean age
28.6). Each study was run independently, allowing us to
distribute data collection over approximately a seven-day
period. This permitted us to capture natural, real-world vari-
ations in, e.g., humidity, temperature, user hydration and
skin resistance. Although we do not specifically
control for these factors, we show that our system is robust
despite their potential presence. In fact, our “walk-up” gen-
eral classifiers were specifically designed to model these
temporal and inter-participant variances.
Procedure
The five studies followed the same basic structure described
below. Each study was run independently; the entire exper-
iment took approximately 60 minutes to complete.
Training
Participants were shown a small set of gestures pictograph-
ically and asked to perform each one sequentially. While per-
forming gestures, the participants were told to adjust their
gestures slightly, e.g., tighten their grip. This helped to cap-
ture additional variety that would be acquired naturally with
extended use, but impossible in a 60-minute experiment.
While the participants performed each gesture, the experi-
menter recorded 10 gesture instances by hitting the spacebar
and then advanced the participant to the next gesture until
all gestures were performed. This procedure was repeated
three times, providing 30 instances per gesture per user. In
addition to providing three periods of training data useful in
post-hoc analysis, this procedure allowed us to capture vari-
ability in participant gesture performance, obtaining more
gesture variety and improving classification.
Figure 10: Capacitive profiles for interacting with water.
Testing
Following the training phase, collected data were used to
initialize the system for a real-time classification evaluation.
Participants were asked to perform one of the gestures
from the training set, randomly selected and presented on a
display monitor. The system – invisible to both the experi-
menter and participants – made a classification when partic-
ipants performed each gesture. A true positive result was
obtained when the requested gesture matched the classifier’s
guess. The experimenter used the spacebar to advance to the
next trial, with five trials for each gesture.
Accuracy Measures
Our procedure follows a per-user classifier paradigm where
each participant had a custom classifier trained using only
his or her training data. This produces robust classification
since it captures the peculiarities of the user. Per-user classi-
fiers are often ideal for personal objects used by a single
user, as would be the case with, e.g., a mobile phone, desk-
top computer, or car steering wheel.
To assess performance dimensions that were not available
in a real-time accuracy assessment, we ran two additional
experiments post-hoc. Our first post-hoc analysis simulated
the live classification experiment with one fewer gesture
per set. The removed gesture was the one found to have the
lowest accuracy in the full gesture set. For example, in the
case of the grasp-sensing doorknob study, the circle gesture
was removed, leaving no touch, one finger, pinch and grasp
(Figure 6). Accuracy typically improves as the gesture set
contracts. In general, we sought to identify gesture sets that
exceeded the 95% accuracy threshold.
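The reduction procedure can be sketched as a simple loop that drops the worst-recognized gesture until the set clears the threshold. The gesture names and accuracy values below are illustrative, and, unlike the actual analysis, accuracies are held fixed rather than re-estimated after each removal.

```python
def prune_gestures(per_gesture_acc, threshold=0.95):
    """Post-hoc reduction: repeatedly drop the lowest-accuracy gesture
    until the remaining set's mean accuracy meets the threshold.

    per_gesture_acc: dict mapping gesture name -> accuracy in [0, 1].
    Returns the surviving gesture set.
    """
    gestures = dict(per_gesture_acc)
    while len(gestures) > 1:
        mean = sum(gestures.values()) / len(gestures)
        if mean >= threshold:
            break
        worst = min(gestures, key=gestures.get)  # lowest-accuracy gesture
        del gestures[worst]
    return set(gestures)

# Illustrative doorknob-style numbers (not the paper's measurements).
acc = {"no touch": 0.99, "one finger": 0.97, "pinch": 0.96,
       "circle": 0.80, "grasp": 0.98}
print(prune_gestures(acc))  # circle is dropped first
```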
Our second post-hoc analysis estimated performance with
“walk up” users – that is, classification without any training
data from that user, a general classifier. To assess this, we
trained our classifier using data from eleven participants,
and tested using data from a twelfth participant (all combi-
nations, i.e., 12-fold cross validation). This was the most
challenging evaluation because of the natural variability of
how people perform gestures, anatomical differences, as
well as variability in clothes and shoes worn by the partici-
pants. However, this accuracy measure provides the best
insight into potential real-world performance when per-user
training is not feasible, e.g., a museum exhibit or theme
park attraction. Moreover, it serves as an ideal contrast to
our per-user classifier experimental results.
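The walk-up measure above amounts to leave-one-user-out cross-validation: train on eleven participants, test on the twelfth, and repeat for every fold. A minimal sketch follows; `train_fn` and `test_fn` stand in for the actual classifier, which this section does not specify, and the demo data are made up.

```python
from collections import Counter

def walk_up_accuracy(data_by_user, train_fn, test_fn):
    """Leave-one-user-out ("walk up") evaluation: for each participant,
    train a general classifier on all other users and test on the
    held-out user. Returns the per-fold accuracies.

    data_by_user: dict user_id -> (samples, labels).
    """
    accuracies = []
    for held_out in data_by_user:
        train_x, train_y = [], []
        for user, (x, y) in data_by_user.items():
            if user != held_out:
                train_x.extend(x)
                train_y.extend(y)
        model = train_fn(train_x, train_y)
        test_x, test_y = data_by_user[held_out]
        accuracies.append(test_fn(model, test_x, test_y))
    return accuracies

# Toy demo: a "classifier" that always predicts the most common label.
data = {
    "p1": ([[0], [1]], ["a", "a"]),
    "p2": ([[2], [3]], ["a", "b"]),
    "p3": ([[4], [5]], ["a", "a"]),
}
majority_train = lambda x, y: Counter(y).most_common(1)[0][0]
majority_test = lambda m, x, y: sum(lbl == m for lbl in y) / len(y)
print(walk_up_accuracy(data, majority_train, majority_test))  # [1.0, 0.5, 1.0]
```

With twelve participants this yields the 12-fold scheme described above.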
EVALUATION RESULTS
Figures 6 through 10 illustrate the physical setup and accom-
panying touch gesture sets for each of the five application
domains we tested. Real-time accuracy results for all five
studies are summarized in Figure 11. "Walk-up" accuracies
with different-sized gesture sets are shown in Figure 12.
Study 1: Making Objects Touch and Grasp Sensitive
A doorknob was an obvious and interesting choice for our
touch and grasp sensing study setup (Figure 6). We used a
brass fixture that came with a high-gloss coating, providing
a beneficial layer of insulation. A single wire was soldered
to the interior metallic part of the knob, and connected to
our sensor. As doors are fixed infrastructure, we grounded
our sensor in this configuration. This is a minimally inva-
sive configuration that allows for existing doors to be easily
retrofitted with additional touch sensitivity.
A set of five gestures was evaluated as seen in Figure 6: no
touch, one finger, pinch, circle, and grasp. This setup per-
formed well in the real-time per-user classifier experiment,
at 96.7% accuracy (SD=5.6%). Dropping the circle gesture
increased accuracy to 98.6% (SD=2.5%).
Walk-up accuracy was significantly worse for five gestures
– 76.8% (SD=9.2%), where the circle gesture was responsi-
ble for 95.0% of the errors. Once the circle gesture was re-
moved, walk-up accuracy improves to 95.8% (SD=7.4%).
Study 2: Body Configuration Sensing
To evaluate performance of Touché in body posture recog-
nition scenarios, we constructed a sensing table. This con-
sisted of a conventional table with a thin copper plate on top
of it, covered with a 1.6 mm glass fiber and resin composite
board (CEM-1) (Figure 7). A single wire connected the copper
plate to the Touché sensor board. The static nature of a table
meant that we could ground the sensor to the environment
in this configuration.
A set of seven gestures was evaluated: not present, present,
one hand, two hands, one elbow, two elbows, arms (Figure
7). Average real-time classification performance with seven
gestures was 92.6% (SD=9.4%). Eliminating the two elbows
gesture boosted accuracy to 96.0% (SD=6.1%).
Walk-up accuracy at seven gestures stands at 81.2%. As
seen in Figure 12, accuracy surpasses 90% with five ges-
tures (not present, present, one hand, two hands, two elbows;
91.6%, SD=7.8%). With only three gestures (present, two
hands, two elbows), accuracy is 100% for every participant.
Figure 11. Real-time, per-user classification accuracy
for five example applications.
Figure 12. "Walk-up" classification accuracy for
five example applications.
Study 3: Enhancing Touchscreen Interaction
The application possibilities of Touché to touchscreen inter-
action are significant and diverse. For both experimental
and prototyping purposes, we chose a mobile device form fac-
tor (Figure 8). Mobility implies the inability to ground the
sensor, making this setup particularly difficult.
As a proof of concept, we created a pinch-centric gesture set
which could be used for, e.g., a “right click”, zoom in/out,
copy/paste, or similar function [23]. Our mobile device
mockup has two electrodes: the front touch surface, simulat-
ing a touch panel, and the backside of the device. A Touché
sensor is configured to measure the impedance between
these two surfaces through the participant’s hand connecting
them (Figure 5d).
Figure 8 depicts a set of five gestures that were evaluated:
no touch, thumb, one finger pinch, two finger pinch and all
finger pinch. Per-user classifier accuracy with all gestures is
93.3% (SD=6.2%). Removing the two finger pinch brings
accuracy up to 97.7% (SD=2.6%). Walk-up accuracy at five
gestures is 76.1% (SD=13.8%), too low for practical use.
However, by reducing the gesture set to no touch, thumb
and one finger pinch, accuracy is 100% for all participants,
showing the immediate feasibility for mobile applications.
Study 4: On-Body Gesture Sensing
Unlike the previous three studies, human-gesture sensing
has a predefined device – the human body. This leaves us
with two design variables: sensor placement and gestures.
For this study, we chose to place an electrode on each wrist,
worn like a watch. The Touché sensor measured impedance
between wrist electrodes through the body of participants.
Due to the highly variable and uncontrolled nature of the
human body, this experimental condition was the most chal-
lenging of our five studies.
Our gesture set consisted of five gestures: no touch, one
finger, five fingers, grasp, and cover ears (Figure 9). Real-
time, per-user classification accuracy was 84.0% (SD =
11.4%) with five gestures. Removing a single gesture – one
finger – improved accuracy to a useable 94.0% (SD=7.4%).
In contrast, walk-up accuracy with a general classifier does
significantly worse, with all five gestures yielding 52.9%
accuracy (SD=13.8%). Reducing the gesture set to three (no
touch, five fingers, grasp) only brings accuracy up to 87.1%
(SD=12.5%) – stronger, but still too low for robust use.
This divergence in accuracy performance between per-user
and general classifiers is important. The results suggest that
for on-body gestures where the user is both the “device” and
input, per-user training is most appropriate. This should not
be particularly surprising – unlike doorknobs, the individual
differences between participants are very significant, not
only in gesture performance, but also in their bodies’ com-
position. A per-user classifier captures and accounts for
these per-user differences, making it robust.
Study 5: Touching Liquids
We attached a single electrode under a 250 mm-wide and
500 mm-long fish tank, and filled it with water to a depth of
35 mm. The electrode was separated from the liquid by a
pane of 3 mm-thick glass and attached to the Touché sensor
board via a single wire (Figure 5c).
Our test liquid gesture set consisted of no touch, one finger
tip, three finger tips, one finger bottom, and hand sub-
merged (Figure 10). This experimental condition performed
the best of the five. Real-time, per-user classification accu-
racy with the full gesture set was 99.8% (SD=0.8%).
"Walk-up" classification performance was equally strong
with all five gestures: 99.3% (SD=1.4%). Removing three
finger tips improves accuracy to 99.9% (Figure 12).
Anatomical Factors
Touché is sensitive to variations in users’ anatomy. To test
if anatomical variations have a systematic effect on classifi-
cation accuracy, we ran several post-hoc tests. We found no
correlation between accuracy and height (1.6–1.9 m),
weight (52–111 kg), BMI (19.6–32.3), or gender. This
suggests the sensing is robust across a range of users.
DISCUSSION AND CONCLUSIONS
Touché has demonstrated that multi-frequency capacitive
sensing is valuable and opens new and exciting opportuni-
ties in HCI. Several open questions remain. Would it be possible to achieve the same results
with fewer sampling points? What are the optimal sweeping
ranges and resolutions needed to achieve maximum perfor-
mance and utility of such a sensing technique?
Optimizing and fine-tuning SFCS for specific configura-
tions and uses is a subject of future work and, therefore,
beyond the scope of the current paper. In general, however,
designing any SFCS solution can be considered a sampling
problem, i.e., how many samples, and in which frequency
bands, would allow us to accurately identify the state of the
system? In the current implementation of Touché we used
200 samples between 1 kHz and 3.5 MHz. Empirically, we
found this to be a good trade-off between speed and accuracy
of the recognition. More importantly, it allowed us to capture
details of the shapes of the capacitive profiles, which was
important in some Touché applications, e.g., in hand-to-
hand gestures. Decreasing the sweep resolution would there-
fore improve sensing speed, but would also reduce the robust-
ness of gesture recognition in some applications.
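The sampling grid itself is easy to picture. A minimal sketch of the 200-point sweep follows; linear spacing between the endpoints is an assumption for illustration, as the actual hardware's spacing may differ.

```python
def sweep_frequencies(f_lo=1e3, f_hi=3.5e6, n=200):
    """Frequencies at which the capacitive profile is sampled:
    n points from f_lo to f_hi inclusive (linear spacing assumed)."""
    step = (f_hi - f_lo) / (n - 1)
    return [f_lo + i * step for i in range(n)]

freqs = sweep_frequencies()
print(len(freqs), freqs[0], freqs[-1])  # 200 points spanning 1 kHz to 3.5 MHz
```

Halving `n` in such a scheme halves the sweep time at the cost of profile detail, which is exactly the speed-robustness trade-off discussed above.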
We found that it was difficult, if not impossible, to deter-
mine a-priori which frequency bands are most characteristic
for specific interactions, applications, users, materials and
contexts. Indeed, the region around 1 MHz looks useful in
Figure 9, but not at all in Figure 10. Therefore, we designed the most
general sensing solution by sampling over a broad range of
frequencies. Consequently, Touché – without any modifica-
tion – enables a rich swath of interactions from humans, to
doorknobs, to water. This would be impossible if we limited
the range of frequencies. However, in practical applications
the sensing can be limited to a range of frequencies that are
most appropriate for a particular product, reducing cost and
improving robustness.
Our work on Touché was broadly motivated by the vision of
disappearing computers postulated by Mark Weiser [43]. He
argued that computers must disappear into everyday objects
and that "… the most profound technologies are those that dis-
appear”. As powerful and inspiring as this vision is, it im-
poses a significant problem: how will we interact with com-
puters that are invisible? From the end-user perspective, the
interface will appear as a computer as long as there are but-
tons to press and mice to move, and thus will never truly
disappear. Completely new interaction technologies are re-
quired, and we hope that this work contributes to the emer-
gence of future ubiquitous computing environments.
ACKNOWLEDGEMENTS
We are thankful to Zhiquan Yeo, Josh Griffin and Scott
Hudson for their significant contributions in developing
early prototypes of Touché and helping to understand the
principles of its operation [29]. We thank Jonas Loh for his
efforts on exploring initial applications of Touché. We are
thankful to Disney Research and The Walt Disney Corpora-
tion for continued support of this research effort. Finally, we
thank anonymous reviewers for their insightful comments as
well as subjects who participated in the experiments.
REFERENCES
1. Barrett, G. and Omote, R. Projected-Capacitive Touch Technology. Information Display, 26(3), 2010, 16-21.
2. Bau, O., Poupyrev, I., Israr, A., Harrison, C. TeslaTouch: electrovibration for touch surfaces. In UIST '10, 283-292.
3. Buxton, W. and Myers, B. A Study in Two-Handed Input. In CHI '86, 321-326.
4. Cassinelli, Á., Perrin, S., Ishikawa, M. Smart laser-scanner for 3D human-machine interface. In CHI EA '05, 1138-1139.
5. Cheney, M., Isaacson, D., Newell, J.C. Electrical impedance tomography. SIAM Review, 41(1), 1999, 85-101.
6. Dietz, P. and Leigh, D. DiamondTouch: A Multi-User Touch Technology. In UIST '01, 219-226.
7. Dietz, P.H., Han, J.Y., Westhues, J., Barnwell, J., Yerazunis, W. Submerging technologies. In SIGGRAPH '06 ETech, 30.
8. Follmer, S., Johnson, M., Adelson, E., Ishii, H. deForm: an interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch. In UIST '11, 527-536.
9. Forlizzi, J., DiSalvo, C., Zimmerman, J., Hurst, A. The SenseChair: The lounge chair as an intelligent assistive device for elders. In DUX '05, 1-13.
10. Foster, K.R. and Lukaski, H.C. Whole-body impedance - what does it measure? The American Journal of Clinical Nutrition, 64(3), 1996, 388S-396S.
11. Gemperle, F., Kasabach, C., Stivoric, J., Bauer, M., Martin, R. Design for wearability. In IEEE ISWC '98, 116-122.
12. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H. The WEKA data mining software: an update. SIGKDD Explorations, 11(1), 2009, 10-18.
13. Harker, F.R. and Maindonald, J.H. Ripening of Nectarine Fruit. Plant Physiology, 106, 1994, 165-171.
14. Harrison, B.L., Fishkin, K.P., Gujar, A., Mochon, C., Want, R. Squeeze me, hold me, tilt me! An exploration of manipulative user interfaces. In CHI '98, 17-24.
15. Harrison, C., Benko, H., Wilson, A. OmniTouch: wearable multitouch interaction everywhere. In UIST '11, 441-450.
16. Harrison, C., Schwarz, J., Hudson, S. TapSense: enhancing finger interaction on touch surfaces. In UIST '11, 627-636.
17. Harrison, C., Tan, D., Morris, D. Skinput: Appropriating the Body as an Input Surface. In CHI '10, 453-462.
18. Hilliges, O., Izadi, S., Wilson, A.D., Hodges, S., Garcia-Mendoza, A., Butz, A. Interactions in the air: adding further depth to interactive tabletops. In UIST '09, 139-148.
19. Hinckley, K. and Sinclair, M. Touch-sensing input devices. In CHI '99, 223-230.
20. Kry, P.G. and Pai, D.K. Grasp Recognition and Manipulation with the Tango. In ISER '06, 551-559.
21. Lee, S.K., Buxton, W., Smith, K.C. A multi-touch three dimensional touch-sensitive tablet. In CHI '85, 21-25.
22. Matsushita, N. and Rekimoto, J. HoloWall: designing a finger, hand, body and object sensitive wall. In UIST '97, 209-210.
23. Miyaki, T. and Rekimoto, J. GraspZoom: zooming and scrolling control model for single-handed mobile interaction. In MobileHCI '09, 81-84.
24. Paradiso, J. and Hsiao, K. Swept-frequency, magnetically-coupled resonant tags for realtime, continuous, multiparameter control. In CHI EA '99, 212-213.
25. Philipp, H. Charge transfer sensing. Sensor Review, 19, 96-105.
26. Pier, M.D. and Goldberg, I.R. Using water as interface media in VR applications. In CLIHC '05, 162-169.
27. Poupyrev, I. and Maruyama, S. Tactile interfaces for small touch screens. In UIST '03, 217-220.
28. Poupyrev, I., Oba, H., Ikeda, T., Iwabuchi, E. Designing embodied interfaces for casual sound recording devices. In CHI EA '08, 2129-2134.
29. Poupyrev, I., Yeo, Z., Griffin, J.D., Hudson, S. Sensing human activities with resonant tuning. In CHI EA '10, 4135-4140.
30. Rekimoto, J. SmartSkin: An Infrastructure for Freehand Manipulation on Interactive Surfaces. In CHI '02, 113-120.
31. Rosenberg, I. and Perlin, K. The UnMousePad: an interpolating multi-touch force-sensing input pad. In SIGGRAPH '09, Article 65.
32. Russo, A., Ahn, B.Y., Adams, J.J., Duoss, E.B., Bernhard, J.T., Lewis, J.A. Pen-on-Paper Flexible Electronics. Advanced Materials, 23(30), 2011, 3426-3430.
33. Saponas, T.S., Tan, D.S., Morris, D., Balakrishnan, R., Turner, J., Landay, J.A. Enabling always-available input with muscle-computer interfaces. In UIST '09, 167-176.
34. Sato, M. Particle display system: a real world display with physically distributable pixels. In CHI EA '08, 3771-3776.
35. Skulpone, S. and Dittman, K. Adjustable proximity sensor. US Patent 3,743,853, 1973.
36. Smith, J.R. Field mice: Extracting hand geometry from electric field measurements. IBM Systems Journal, 35, 587-608.
37. Song, H., Benko, H., Izadi, S., Cao, X., Hinckley, K. Grips and Gestures on a Multi-Touch Pen. In CHI '11, 1323-1332.
38. Taylor, B. and Bove, V. Graspables: Grasp-recognition as a user interface. In CHI '09, 917-926.
39. Wang, F. and Ren, X. Empirical evaluation for finger input properties in multi-touch interaction. In CHI '09, 1063-1072.
40. Wang, R.Y. and Popović, J. Real-time hand-tracking with a color glove. In SIGGRAPH '09, Article 63.
41. Watanabe, J. VortexBath: Study of Tangible Interaction with Water in Bathroom for Accessing and Playing Media Files. In HCI '07, 1240-1248.
42. Webster, J.G., Ed. Medical Instrumentation: Application and Design. Wiley, 2010.
43. Weiser, M. The computer for the 21st century. Scientific American (9), 94-104.
44. Wimmer, R. and Baudisch, P. Modular and Deformable Touch-Sensitive Surfaces Based on Time Domain Reflectometry. In UIST '11, 517-526.
45. Yonezawa, T. and Mase, K. Tangible Sound: Musical instrument using fluid media. In ICMC 2000.
46. Zimmerman, T.G., Smith, J.R., Paradiso, J.A., Allport, D., Gershenfeld, N. Applying electric field sensing to human-computer interfaces. In CHI '95, 280-287.