ADVANCED INTERACTION
TECHNIQUES
SYED QASIM RAZA 13054119-022
AQIB RAUF 13054119-033
WAQAR ALI 13054119-061
Sub-Topics
• Touch
• Gestures
• Eye tracking
• Head and pose tracking
• Speech
• Brain-body interfaces
Content
 ABSTRACT
 INTRODUCTION
 PARTS AND WORKING
 USES
 ADVANCEMENTS
 APPLICATIONS
 FUTURE SCOPE
MULTITOUCH INTERACTION
(A POWERPOINT PRESENTATION)
OVERVIEW
Introduction
Hardware and Software
Natural User Interfaces
Markets and Applications
Types
Gestures
Implementation
Applications
Market trends
Present and Future
MULTITOUCH?
“In computing, multi-touch refers
to a touch sensing surface's (track
pad or touchscreen) ability to
recognize the presence of two or
more points of contact with the
surface.”
Introduction
• Multi-touch technology presents a wide range of new
opportunities for interaction with graphical user interfaces,
allowing expressive gestural control and fluid multi-user
collaboration through relatively simple and inexpensive hardware
and software configurations.
Types
a) Capacitive Touch Technologies
b) Resistive Touch Technologies
c) Optical Touch Technologies
d) SAW Touch Technologies
e) Infrared Technologies
Capacitive Touch
Screen
Technology
 A capacitive touch screen consists of a glass panel with a capacitive (charge-storing) material coating its surface. Circuits located at the corners of the screen measure the capacitance of a person touching the overlay.
 But it only responds to finger
contact and will not work with a
gloved hand or pen stylus.
Resistive
Touch Screen
Technology
Resistive touch screen technology consists of a glass or acrylic panel that is coated with electrically conductive and resistive layers. The thin layers are separated by invisible separator dots. When operating, an electrical current moves through the screen. When pressure is applied to the screen, the layers are pressed together, causing a change in the electrical current, and a touch is registered.
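On a common 4-wire resistive panel, the voltage change described above maps almost linearly to position: drive a reference voltage across one layer and the other layer picks up a voltage proportional to where it was pressed. A minimal sketch, with assumed voltage and resolution values:

```python
def resistive_touch(v_x_reading, v_y_reading, v_drive=3.3,
                    width=800, height=480):
    """Map ADC voltage readings from a 4-wire resistive panel to pixel
    coordinates. Assumes a 3.3 V drive voltage and an 800x480 screen;
    real drivers also calibrate offsets and debounce the readings."""
    x = v_x_reading / v_drive * width   # fraction of drive voltage -> x
    y = v_y_reading / v_drive * height  # same idea on the other axis
    return round(x), round(y)
```

A reading of half the drive voltage on both axes lands in the middle of the screen.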
Surface Acoustic Wave
(SAW)
SAW technology uses ultrasonic waves that pass over the touch screen panel. When the panel is touched, a portion of the wave is absorbed. This change in the ultrasonic waves registers the position of the touch event and sends this information to the controller for processing. Because it uses pure glass construction, SAW provides superior image clarity, resolution and higher light transmission compared to resistive and capacitive technologies.
Infrared
Infrared technology relies on the interruption of an infrared light grid in front of the display screen. The touch frame contains a row of infrared LEDs and photo transistors, each mounted on two opposite sides to create a grid of invisible infrared light. The frame assembly comprises printed wiring boards, on which the electronics are mounted, and is concealed behind an infrared-transparent bezel.
Optical Touch
Optical imaging is one of the more modern touch technologies.
Because NextWindow's technology uses optical sensors to detect the touch point, the touch registers just before physical contact with the screen. This means that users can apply zero or light touch to initiate a response, and any input device, such as a paintbrush, finger, pen, or stylus, will work. Optical imaging provides a solution without calibration drift.
Comparison
Multi-touch
Gestures
Tap
Pan
Long press
Scroll
Two finger scroll
Flick
Two finger tap
Pinch open
Pinch close
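The pinch gestures in the list above reduce to one measurement: how the distance between the two contact points changes. A minimal sketch of a two-finger classifier (the function name and 20-pixel threshold are assumptions for illustration):

```python
import math

def classify_two_finger(p1_start, p2_start, p1_end, p2_end, threshold=20):
    """Classify a two-finger gesture as 'pinch open', 'pinch close', or
    'none', from the change in distance (in pixels) between the two
    contact points over the gesture."""
    d0 = math.dist(p1_start, p2_start)  # finger separation at touch-down
    d1 = math.dist(p1_end, p2_end)      # finger separation at lift-off
    if d1 - d0 > threshold:
        return "pinch open"   # fingers moved apart (typically zoom in)
    if d0 - d1 > threshold:
        return "pinch close"  # fingers moved together (typically zoom out)
    return "none"
```

Real gesture recognizers run this continuously per frame rather than once per gesture, but the distance test is the same.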
Applications
• Force-sensitive multi-touch sensing provides unprecedented resolution and scalability, allowing sophisticated multi-point widgets for applications large enough to accommodate both hands and multiple users.
The current use of multi-touch technology
 The current use of multi-touch technology enables users to interact with various devices simply through a touch screen and to navigate interactive content with ease, ensuring great flexibility and speed.
Nowadays, users can choose among multi-touch display panels, multi-touch display windows, multi-touch tables and notebooks.
Some important players on the market of multi-touch solutions are Touch Data LLC and GestureTek, who focus on the development of comfortable and effective solutions such as the multi-touch wall or the multi-touch workstation.
Their products support both simultaneous multi-user operation and individual use. Such solutions are widely used in professional presentations and for broadcast use due to their flexibility, speed, effective interactivity and great design.
The scope of multi-touch interaction
Multi-touch solutions will continue to evolve in complexity and ease of use, and users will be the main beneficiaries of these achievements.
More investment will be made in multi-touch technology, as well as in research conducted by professional engineers in this area of expertise.
Enhancement in Technology
More recent work focuses on new multi-touch paradigms and interactions that combine traditional 2D interaction and novel 3D interaction on a touch surface to form a new class of multi-touch systems, referred to as interscopic multi-touch surfaces (iMUTS). iMUTS-based user interfaces support interaction with 2D content displayed in monoscopic mode and 3D content usually displayed stereoscopically.
Market Trends
GESTURE RECOGNITION
TECHNOLOGY
Introduction
• Gesture recognition is a topic in computer science and language technology
with the goal of interpreting human gestures via mathematical algorithms.
• Gestures can originate from any bodily motion or state but commonly
originate from the face or hand.
• Many approaches have been made using cameras and computer vision
algorithms to interpret sign language
• In gesture recognition technology a camera reads the movements of the
human body and communicates the data to a computer that uses the
gestures as input to control devices or applications.
• Gesture recognition can be conducted with techniques from computer
vision and image processing.
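The camera-based pipeline described above usually starts with the simplest computer-vision step: separating moving pixels (a hand) from a static background by differencing consecutive frames. A minimal sketch on plain nested lists standing in for grayscale frames (real systems would use OpenCV or similar):

```python
def motion_mask(prev_frame, frame, threshold=25):
    """Return a binary mask marking pixels whose intensity changed by
    more than `threshold` between two grayscale frames (lists of rows
    of 0-255 values). Frame differencing like this is the simplest way
    a camera-based gesture system isolates a moving hand from a static
    background before any gesture interpretation happens."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_prev, row_cur)]
            for row_prev, row_cur in zip(prev_frame, frame)]
```

The resulting mask feeds later stages (contour finding, hand-pose estimation), which are where the actual gesture interpretation happens.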
Gesture types
• In computer interfaces, two types of gestures are distinguished:
• Offline gestures: processed after the interaction is finished (e.g. drawing a symbol to activate a menu)
• Online gestures: direct-manipulation gestures processed while they happen (e.g. rotating or scaling an object)
Image processing
• Image processing is any form of signal processing for which
the input is an image.
• Image processing usually refers to digital image processing,
but optical and analog image processing also are possible.
Input devices
• The ability to track a person's movements and determine what
gestures they may be performing can be achieved through various
tools.
• Wired gloves
• Depth aware cameras
• Stereo cameras
• Controller based gestures
• Single camera
Technology Behind it..
Wired gloves
• These can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices.
• One design uses fiber-optic cables running down the back of the hand: light pulses are created, and when the fingers are bent, the light leakage is registered, giving an approximation of the hand pose.
Technology Behind it..
Depth aware cameras
• Using specialized cameras, such as structured-light cameras, one can generate a depth map of what is being seen through the camera.
• These can be effective for detection of hand gestures due to their short-range capabilities.
Depth aware camera
Technology Behind it..
Stereo cameras
• A stereo camera has two lenses about the same distance apart as your eyes and takes two pictures at the same time.
• A 3D representation can be approximated from the output of the cameras.
Technology Behind it..
Controller based gestures
• These controllers act as an extension of the body, so that when gestures are performed, some of their motion can be conveniently captured by software.
• Mouse gestures are one example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand.
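Browser-style mouse gestures work by reducing the pointer's path to a short string of dominant directions, which is then matched against a gesture table. A minimal sketch (the function name and the four-direction alphabet are illustrative assumptions):

```python
def stroke_to_gesture(points):
    """Reduce a mouse stroke (list of (x, y) points, y growing downward)
    to a string of dominant directions: 'L', 'R', 'U', 'D'. Consecutive
    identical directions are collapsed, so an L-shaped stroke becomes
    'RD' -- the symbol the gesture table would look up."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        # keep whichever axis moved more for this segment
        d = ("R" if dx > 0 else "L") if abs(dx) >= abs(dy) else \
            ("D" if dy > 0 else "U")
        if not dirs or dirs[-1] != d:
            dirs.append(d)
    return "".join(dirs)
```

A real recognizer would add a minimum segment length to ignore jitter, but the direction-quantization idea is the same.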
Technology Behind it..
Single camera
• A normal camera can be used for gesture recognition where the resources would not be sufficient for other forms of image-based recognition.
• Earlier it was thought that a single camera might not be as effective as stereo or depth-aware cameras.
Challenges
• Accuracy
• Background noise
• Quality
• Robust computer vision methods
Uses
• Sign language recognition
• For socially assistive robotics
• Directional indication through pointing
• Immersive game technology
• Affective computing
• Remote control
ABSTRACT
•The Eyegaze System is a communication
system for people with complex physical
disabilities.
•It is operated with the eyes, by looking
at control keys displayed on a screen.
•With this system a person can synthesize
speech, control his environment , operate
a telephone, run computer software,
operate a computer mouse, and access
the Internet and e-mail.
INTRODUCTION
The Eyegaze System is a direct-select
vision-controlled communication and
control system.
It was developed in Fairfax, Virginia,
by LC Technologies.
Who is using the Eyegaze System?
This system is mainly developed for those who lack the use of their hands or voice.
Its users are adults and children with cerebral palsy, spinal cord injuries, brain injuries, ALS, multiple sclerosis, and brainstem strokes.
SKILLS NEEDED BY THE USERS
 Good control of one eye
 Adequate vision
Ability to maintain a position in front of the Eyegaze
monitor.
Mental abilities that improve the probability for
successful Eyegaze use:
Cognition
Ability to read
Memory
PARTS AND WORKING OF THE EYEGAZE
SYSTEM
As a user sits in front of the
Eyegaze monitor, a specialized
video camera mounted below
the monitor observes one of
the user's eyes.
How to run the Eyegaze System
• A user operates the Eyegaze System by
looking at rectangular keys that are
displayed on the control screen.
• To "press" an Eyegaze key, the user
looks at the key for a specified period of
time.
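The dwell-based "press" described above can be sketched as a small state machine over timestamped gaze samples: a key fires once gaze has stayed inside its rectangle continuously for the dwell period. The function name, data shapes, and 1000 ms default are hypothetical, not the Eyegaze System's actual API:

```python
def dwell_select(samples, keys, dwell_ms=1000):
    """Return the first key 'pressed' by gaze dwell, or None.

    `samples` is a list of (t_ms, x, y) gaze points in time order;
    `keys` maps a key name to its rectangle (x0, y0, x1, y1).
    A key fires once gaze has remained inside it for `dwell_ms`."""
    current, since = None, None
    for t, x, y in samples:
        # which key (if any) the gaze point falls inside
        hit = next((k for k, (x0, y0, x1, y1) in keys.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit != current:          # gaze moved to a new key (or off all keys)
            current, since = hit, t  # restart the dwell timer
        if current is not None and t - since >= dwell_ms:
            return current
    return None
```

Leaving the key and coming back restarts the timer, which is what makes dwell selection robust against stray glances.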
USES OF EYEGAZE
The Basic Eyegaze Can
Calibrate
Typewrite
Read Text
Entertain with games
Teach Screens
With Options, the Eyegaze Can
•BE AT TWO SITES!!
•BE A KEYBOARD
•SPEAK through a speech synthesizer
•CONTROL appliances
•DIAL and answer a phone
MENUS OF EYEGAZE SYSTEM
The Phrase Program
Typewriter Program
The telephone program
Run Second PC
 Paddle games & Score Four
Read Text Program
Television
(Screenshots: the Phrase Program, the Typewriter Program, the telephone program, the Lights & Appliances Program, and Run Second PC as a keyboard and as a mouse controller.)
Environment required for an Eyegaze system
•The Eyegaze System must be operated in an environment with limited infrared light.
•The System works best away from windows, in a room lit with fluorescent or mercury-vapor lights, which are low in infrared.
ADVANCEMENTS
(Images: portable Eyegaze System mounted on a wheelchair; screen of the Eyegaze System.)
•It is a sophisticated system with a high tracking rate and excellent working range.
•It can accommodate rapid or involuntary head movements.
IntelliGaze IG-30
•IntelliGaze uses the latest camera
technology and sophisticated image
processing and calibration methods.
For People with Limited Eye Control
• Scanning Keyboard is the new
row/column keyboard with an on-screen
eye "switch" for people with limited eye
movement.
• The user may "speak" what he has
typed.
Some of the common eye movement problems that interfere with Eyegaze
use are:
Nystagmus: constant, involuntary movement of the eyeball
Alternating strabismus: the eyes cannot be directed at the same object; one or the other deviates
The common vision problems are:
Inadequate visual acuity
Diplopia (double vision)
Blurred vision
Cataracts (clouding of the lens of the eye)
APPLICATIONS
•Neurosciences /Neuropsychology
•Vision Research
•Experimental Psychology
•Cognitive Psychology
•Psycholinguistics
•Psychiatry /Mental Health
•Transportation: Flight simulators /driving simulators
•Robotics - remote vision control
•Video and arcade games
FUTURE WORK
•Totally Free Head Motion
•Automatic Eye Acquisition
•Binocular Eye tracking
•High Gaze point Tracking Accuracy
•Easy User Calibration
The Eyegaze System’s Eyefollower 2.0
CONCLUSION
Today, the human eye-gaze can be recorded by relatively unobtrusive techniques, and it is possible to use the eye-gaze of a computer user in the interface to aid the control of the application.
Care must be taken, though, that eye-gaze tracking data is used in a sensible way, since human eye movements are a combination of several voluntary and involuntary cognitive processes.
Speech Recognition
Introduction
• What is Speech Recognition?
Also known as automatic speech recognition or computer speech recognition: understanding voice by the computer and performing any required task.
• Where can it be used?
- Dictation
- System control/navigation
- Commercial/Industrial applications
- Voice dialing
Recognition pipeline: voice input → analog-to-digital conversion → acoustic model + language model → speech engine → display (feedback).
•Acoustic Model
An acoustic model is created by taking audio recordings of speech, and their text transcriptions,
and using software to create statistical representations of the sounds that make up each word. It is
used by a speech recognition engine to recognize speech.
•Language Model
 Language modeling is used in many natural language processing applications, such as speech recognition; it tries to capture the properties of a language and to predict the next word in a speech sequence.
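The simplest statistical language model of the kind described above is a bigram model: count how often each word follows each other word, then predict the most frequent successor. A minimal sketch (function names and the toy corpus are illustrative):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-pair frequencies from a corpus given as a list of
    sentences, each a list of words. counts[w1][w2] is how often w2
    followed w1 -- the core statistic of a bigram language model."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        for w1, w2 in zip(sentence, sentence[1:]):
            counts[w1][w2] += 1
    return counts

def predict_next(counts, word):
    """Predict the most likely next word after `word` (None if unseen)."""
    return counts[word].most_common(1)[0][0] if counts[word] else None
```

A speech engine uses exactly this kind of prediction to break ties between acoustically similar hypotheses ("to" vs "two"), though production systems use far larger n-gram or neural models.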
TYPES OF VOICE RECOGNITION
• There are two types of speech recognition: speaker-dependent and speaker-independent. Speaker-dependent software is commonly used for dictation software, while speaker-independent software is more commonly found in telephone applications.
• Speaker-dependent software works by learning the unique characteristics of a single person’s voice, in a way similar to voice recognition. New users must first “train” the software by speaking to it, so the computer can analyze how the person talks. This often means users have to read a few pages of text to the computer before they can use the speech recognition software.
TYPES OF VOICE RECOGNITION
• Speaker-independent software is designed to recognize anyone’s voice, so no training is involved. This means it is the only real option for applications such as interactive voice response systems, where businesses can’t ask callers to read pages of text before using the system. The downside is that speaker-independent software is generally less accurate than speaker-dependent software.
• Speech recognition engines that are speaker-independent generally deal with this by limiting the grammars they use. By using a smaller list of recognized words, the speech engine is more likely to correctly recognize what a speaker said.
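The grammar-limiting trick above can be illustrated with fuzzy matching: snap a (possibly misheard) hypothesis onto a small fixed vocabulary and reject anything too far from every entry. The grammar list, function name, and 0.6 cutoff are assumptions for illustration; real engines constrain the acoustic search itself rather than post-filtering text:

```python
import difflib

GRAMMAR = ["yes", "no", "operator", "account balance", "repeat"]

def recognize(hypothesis, grammar=GRAMMAR, cutoff=0.6):
    """Snap a raw recognition hypothesis onto a small fixed grammar,
    the way speaker-independent IVR systems keep accuracy up: with
    only a handful of allowed phrases, even a noisy guess usually
    lands on the right one. Returns None if nothing is close enough."""
    matches = difflib.get_close_matches(hypothesis, grammar,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

The smaller the grammar, the more garbled an input can be and still resolve correctly, which is exactly the trade-off the slide describes.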
How do humans do it?
• Articulation produces sound waves, which the ear conveys to the brain for processing.
How might computers do it?
(Diagram: acoustic waveform → acoustic signal.)
Speech recognition
• Digitization
• Acoustic analysis of the speech signal
• Linguistic interpretation
DIFFERENT PROCESSES INVOLVED
• Digitization
– Converting analogue signal into digital representation
• Signal processing
– Separating speech from background noise
• Phonetics & phonology
- Phonetics is the production and perception of speech sounds in any language; it deals with the "phone". Phonology is the interpretation of speech sounds in a particular language; it deals with the phoneme, the smallest unit of sound.
• Lexicology and syntax
- Lexicology is the part of linguistics which studies words: their nature and meaning, their elements, relations between words, word groups, and the whole lexicon.
DIFFERENT PROCESSES
INVOLVED(CONTD.)
• Semantics and pragmatics
• Semantics concerns the meaning of words and sentences
• Pragmatics is concerned with bridging the explanatory gap between sentence meaning and speaker's meaning
Digitization
• Analogue to digital conversion
• Sampling and quantizing
 Sampling is converting a continuous signal into a discrete signal
 Quantizing is the process of approximating a continuous range of values
• Use filters to measure energy levels for various points on the frequency
spectrum
• Knowing the relative importance of different frequency bands (for speech)
makes this process more efficient
• E.g. high frequency sounds are less informative, so can be sampled using a
broader bandwidth (log scale)
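The two steps above, sampling and quantizing, can be shown on a synthetic sine tone. This sketch (function name and default duration are illustrative) samples the tone at a fixed rate and snaps each sample to one of 2^n discrete levels:

```python
import math

def sample_and_quantize(freq_hz, sample_rate, n_bits, duration_s=0.001):
    """Sample a sine tone at `sample_rate` Hz and quantize each sample
    to `n_bits` -- the two steps of analogue-to-digital conversion.
    Returns integer codes in the range [0, 2**n_bits - 1]."""
    levels = 2 ** n_bits
    n = int(sample_rate * duration_s)  # sampling: discrete time steps
    samples = []
    for i in range(n):
        x = math.sin(2 * math.pi * freq_hz * i / sample_rate)  # value in [-1, 1]
        q = round((x + 1) / 2 * (levels - 1))  # quantizing: nearest level
        samples.append(q)
    return samples
```

With 8 bits, a full-scale peak maps to code 255 and a trough to code 0; more bits mean finer amplitude resolution, while the sample rate bounds the highest representable frequency.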
Separating speech from background noise
• Noise cancelling microphones
• Two mics, one facing speaker, the other facing away
• Ambient noise is roughly same for both mics
• Knowing which bits of the signal relate to speech
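The two-microphone arrangement above amounts to a per-sample subtraction: ambient noise reaches both microphones roughly equally, so subtracting the rear (noise-facing) signal from the front (speaker-facing) signal keeps mostly speech. A deliberately idealized sketch:

```python
def cancel_noise(front_mic, rear_mic):
    """Subtract the rear (noise-facing) mic signal from the front
    (speaker-facing) mic, sample by sample. Idealized model: it
    assumes the ambient noise arrives at both mics with identical
    amplitude and no delay, which real systems must compensate for
    with adaptive filtering."""
    return [f - r for f, r in zip(front_mic, rear_mic)]
```

If the front mic carries speech plus noise and the rear mic carries just the noise, the difference recovers the speech.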
EVOLUTION OF VOICE RECOGNITION
Pattern Matching
Interactive Voice Recognition (IVR)
Dictation
Speech Integration into Applications
Hands Free – Eyes Free
Process of speech recognition
(Diagram: speech from multiple speakers S1, S2, …, SK, …, SN flows through speaker recognition, speech recognition, and a parsing-and-arbitration stage.)
“Authentication”: speaker recognition answers Who is speaking? (e.g. Annie, David, Cathy).
“Understanding”: speech recognition answers What is he saying? (e.g. “On”, “Off”, “TV”, “Fridge”, “Door”).
“Inferring and execution”: parsing and arbitration answer What is he talking about? (e.g. Channel → TV, Dim → Lamp, On → TV/Lamp), so the utterance “Switch”, “to”, “channel”, “nine” switches on Channel 9.
Framework of Voice Recognition
(Diagram: the same framework extends to face recognition and gesture recognition feeding the parsing-and-arbitration stage, again covering “Authentication”, “Understanding”, and “Inferring and execution”.)
Speaker Recognition
•Definition
• It is the method of recognizing a person based on his voice
• It is one of the forms of biometric identification
•It depends on speaker-specific characteristics.
ADVANTAGES
• People with disabilities
• Organizations: increases productivity, reduces costs and errors
• Lower operational costs
• Advances in technology will allow consumers and businesses to implement speech recognition systems at a relatively low cost.
• Cell-phone users can dial pre-programmed numbers by voice command.
• Users can trade stocks through a voice-activated trading system.
• Speech recognition technology can also replace touch-tone dialing, resulting in the ability to serve customers who speak different languages.
DISADVANTAGES
• It is difficult to build a perfect system.
• Conversations involve more than just words (non-verbal communication, stutters, etc.).
• Every human being has differences, such as their voice, mouth, and speaking style.
• Filtering background noise is a task that can be difficult even for humans to accomplish.
Future of Speech Recognition
• Accuracy will become better and better.
• Dictation speech recognition will gradually become accepted.
• Small hand-held writing tablets for computer speech recognition
dictation and data entry will be developed, as faster processors and
more memory become available.
• Greater use will be made of "intelligent systems" which will attempt
to guess what the speaker intended to say, rather than what was
actually said, as people often misspeak and make unintentional
mistakes.
• Microphone and sound systems will be designed to adapt more
quickly to changing background noise levels, different environments,
with better recognition of extraneous material to be discarded.
Head pose tracking
• Proactive computing technology allows the development of more intelligent and more friendly human-computer interaction (HCI) methods. In the future, computers will interact with people in a more natural way. To achieve this kind of advanced interaction, the system should understand human behaviour and respond appropriately, for example by observing one’s facial expressions and gestures. Machine vision provides an excellent way of realizing such an intelligent human-computer interface, enabling a user to control a computer without any physical contact with devices such as keyboards, mice and displays.
• In the example system, the video camera is located above the computer display and acquires images of the user’s face continuously. The face and facial features can be extracted and tracked from the sequence of images. The 3-D position and orientation (pose) of the head is determined from these feature points. The pose obtained allows recognition of simple gestures such as nodding and head shaking.
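Once the tracker yields a per-frame head pose, nodding and shaking can be told apart by which rotation axis dominates: a nod swings pitch (up/down), a shake swings yaw (left/right). A minimal sketch over pose sequences in degrees (the function name and 10-degree threshold are illustrative assumptions):

```python
def classify_head_gesture(yaw, pitch, min_swing=10.0):
    """Classify a head-pose sequence as 'nod', 'shake', or 'none'.

    `yaw` and `pitch` are per-frame rotation angles in degrees from a
    head tracker. A nod swings pitch more than yaw; a shake swings yaw
    more than pitch; small swings on both axes count as no gesture."""
    yaw_swing = max(yaw) - min(yaw)       # total left/right excursion
    pitch_swing = max(pitch) - min(pitch) # total up/down excursion
    if max(yaw_swing, pitch_swing) < min_swing:
        return "none"
    return "nod" if pitch_swing > yaw_swing else "shake"
```

Real systems also require the oscillation to repeat within a time window, but the axis-dominance test is the core of the classification.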
Spatial and semantic head gestures
Spatial head gestures (where the head moves in one of the general directions: left, right, up, or down) allow for spatial references and indication of directions.
Semantic head gestures, like nodding and shaking the head, are used to express agreement or disagreement.
Applications
INTELLIGENT ENVIRONMENTS
 ALTERNATIVE INPUT INTERFACES
HUMAN-ROBOT INTERACTION
Intelligent environments
Alternative input interfaces
Human Robot interaction
Brain body Interfaces
Using a relatively new brain sensing tool called functional near-infrared spectroscopy (fNIRS), along with a more established brain sensing tool called electroencephalography (EEG), we can detect signals within the brain that indicate various cognitive states. These devices provide data on brain activity while remaining portable and non-invasive.
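A deliberately crude illustration of "detecting a cognitive state" from such a signal: compare the mean level of a current signal window against a resting baseline and flag high workload when it rises past a threshold. Everything here (function name, threshold, and the mean-level heuristic itself) is an assumption for illustration; real fNIRS/EEG pipelines use band power, filtering, and trained classifiers:

```python
def workload_index(signal, baseline, threshold=1.2):
    """Flag 'high' cognitive workload when the mean level of a
    brain-signal window rises `threshold` times above the mean of a
    resting baseline window; otherwise 'normal'. Toy heuristic only."""
    base = sum(baseline) / len(baseline)  # resting-state mean level
    cur = sum(signal) / len(signal)       # current window mean level
    return "high" if cur > threshold * base else "normal"
```

Even this toy version shows the shape of a brain-body interface loop: calibrate on a baseline, then map ongoing measurements to discrete states the application can react to.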
Some applications for brain and body
oAugmented anatomical overlay (magic mirror)
oControl robots with body movement
oVirtual clothes fitting
oTurn any surface touchscreen-enabled
oVirtual reality interaction
oRetrieve data via gestures
oTranslate sign language
Thanks

Globus
 
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Anthony Dahanne
 
Globus Compute wth IRI Workflows - GlobusWorld 2024
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus Compute wth IRI Workflows - GlobusWorld 2024
Globus Compute wth IRI Workflows - GlobusWorld 2024
Globus
 
top nidhi software solution freedownload
top nidhi software solution freedownloadtop nidhi software solution freedownload
top nidhi software solution freedownload
vrstrong314
 
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
Shahin Sheidaei
 
Accelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with PlatformlessAccelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with Platformless
WSO2
 
Prosigns: Transforming Business with Tailored Technology Solutions
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns: Transforming Business with Tailored Technology Solutions
Prosigns: Transforming Business with Tailored Technology Solutions
Prosigns
 
First Steps with Globus Compute Multi-User Endpoints
First Steps with Globus Compute Multi-User EndpointsFirst Steps with Globus Compute Multi-User Endpoints
First Steps with Globus Compute Multi-User Endpoints
Globus
 
GlobusWorld 2024 Opening Keynote session
GlobusWorld 2024 Opening Keynote sessionGlobusWorld 2024 Opening Keynote session
GlobusWorld 2024 Opening Keynote session
Globus
 
How to Position Your Globus Data Portal for Success Ten Good Practices
How to Position Your Globus Data Portal for Success Ten Good PracticesHow to Position Your Globus Data Portal for Success Ten Good Practices
How to Position Your Globus Data Portal for Success Ten Good Practices
Globus
 
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...
Mind IT Systems
 
Developing Distributed High-performance Computing Capabilities of an Open Sci...
Developing Distributed High-performance Computing Capabilities of an Open Sci...Developing Distributed High-performance Computing Capabilities of an Open Sci...
Developing Distributed High-performance Computing Capabilities of an Open Sci...
Globus
 
May Marketo Masterclass, London MUG May 22 2024.pdf
May Marketo Masterclass, London MUG May 22 2024.pdfMay Marketo Masterclass, London MUG May 22 2024.pdf
May Marketo Masterclass, London MUG May 22 2024.pdf
Adele Miller
 
Large Language Models and the End of Programming
Large Language Models and the End of ProgrammingLarge Language Models and the End of Programming
Large Language Models and the End of Programming
Matt Welsh
 
How Recreation Management Software Can Streamline Your Operations.pptx
How Recreation Management Software Can Streamline Your Operations.pptxHow Recreation Management Software Can Streamline Your Operations.pptx
How Recreation Management Software Can Streamline Your Operations.pptx
wottaspaceseo
 

Recently uploaded (20)

Beyond Event Sourcing - Embracing CRUD for Wix Platform - Java.IL
Beyond Event Sourcing - Embracing CRUD for Wix Platform - Java.ILBeyond Event Sourcing - Embracing CRUD for Wix Platform - Java.IL
Beyond Event Sourcing - Embracing CRUD for Wix Platform - Java.IL
 
Enterprise Resource Planning System in Telangana
Enterprise Resource Planning System in TelanganaEnterprise Resource Planning System in Telangana
Enterprise Resource Planning System in Telangana
 
Using IESVE for Room Loads Analysis - Australia & New Zealand
Using IESVE for Room Loads Analysis - Australia & New ZealandUsing IESVE for Room Loads Analysis - Australia & New Zealand
Using IESVE for Room Loads Analysis - Australia & New Zealand
 
Globus Connect Server Deep Dive - GlobusWorld 2024
Globus Connect Server Deep Dive - GlobusWorld 2024Globus Connect Server Deep Dive - GlobusWorld 2024
Globus Connect Server Deep Dive - GlobusWorld 2024
 
A Sighting of filterA in Typelevel Rite of Passage
A Sighting of filterA in Typelevel Rite of PassageA Sighting of filterA in Typelevel Rite of Passage
A Sighting of filterA in Typelevel Rite of Passage
 
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...
 
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
 
Globus Compute wth IRI Workflows - GlobusWorld 2024
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus Compute wth IRI Workflows - GlobusWorld 2024
Globus Compute wth IRI Workflows - GlobusWorld 2024
 
top nidhi software solution freedownload
top nidhi software solution freedownloadtop nidhi software solution freedownload
top nidhi software solution freedownload
 
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
 
Accelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with PlatformlessAccelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with Platformless
 
Prosigns: Transforming Business with Tailored Technology Solutions
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns: Transforming Business with Tailored Technology Solutions
Prosigns: Transforming Business with Tailored Technology Solutions
 
First Steps with Globus Compute Multi-User Endpoints
First Steps with Globus Compute Multi-User EndpointsFirst Steps with Globus Compute Multi-User Endpoints
First Steps with Globus Compute Multi-User Endpoints
 
GlobusWorld 2024 Opening Keynote session
GlobusWorld 2024 Opening Keynote sessionGlobusWorld 2024 Opening Keynote session
GlobusWorld 2024 Opening Keynote session
 
How to Position Your Globus Data Portal for Success Ten Good Practices
How to Position Your Globus Data Portal for Success Ten Good PracticesHow to Position Your Globus Data Portal for Success Ten Good Practices
How to Position Your Globus Data Portal for Success Ten Good Practices
 
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...
 
Developing Distributed High-performance Computing Capabilities of an Open Sci...
Developing Distributed High-performance Computing Capabilities of an Open Sci...Developing Distributed High-performance Computing Capabilities of an Open Sci...
Developing Distributed High-performance Computing Capabilities of an Open Sci...
 
May Marketo Masterclass, London MUG May 22 2024.pdf
May Marketo Masterclass, London MUG May 22 2024.pdfMay Marketo Masterclass, London MUG May 22 2024.pdf
May Marketo Masterclass, London MUG May 22 2024.pdf
 
Large Language Models and the End of Programming
Large Language Models and the End of ProgrammingLarge Language Models and the End of Programming
Large Language Models and the End of Programming
 
How Recreation Management Software Can Streamline Your Operations.pptx
How Recreation Management Software Can Streamline Your Operations.pptxHow Recreation Management Software Can Streamline Your Operations.pptx
How Recreation Management Software Can Streamline Your Operations.pptx
 

Advance Interaction Techniques

  • 1. ADVANCE INTERACTION TECHNIQUES SYED QASIM RAZA 13054119-022 AQIB RAUF 13054119-033 WAQAR ALI 13054119-061
  • 2. Sub-Topics • Touch • Gestures • Eye tracking • Head and pose tracking • Speech • Brain-body interfaces.
  • 3. Content  ABSTRACT  INTRODUCTION  PARTS AND WORKING  USES  ADVANCEMENTS  APPLICATIONS  FUTURE SCOPE
  • 5. OVERVIEW Introduction Hardware and Software Natural User Interfaces Markets and Applications Types Gestures Implementation Applications Market trends Present and Future
  • 6. MULTITOUCH? “In computing, multi-touch refers to a touch sensing surface's (track pad or touchscreen) ability to recognize the presence of two or more points of contact with the surface.”
  • 7. Introduction • Multi-touch technology presents a wide range of new opportunities for interaction with graphical user interfaces, allowing expressive gestural control and fluid multi-user collaboration through relatively simple and inexpensive hardware and software configurations.
  • 8. Types a) Capacitive Touch Technologies b) Resistive Touch Technologies c) Optical Touch Technologies d) SAW Touch Technologies e) Infrared Technologies
  • 9. Capacitive Touch Screen Technology  A capacitive touch screen consists of a glass panel with a capacitive (charge storing) material coating its surface. Circuits located at corners of the screen measure the capacitance of a person touching the overlay.  But it only responds to finger contact and will not work with a gloved hand or pen stylus.
  • 10. Resistive Touch Screen Technology Resistive touch screen technology consists of a glass or acrylic panel that is coated with electrically conductive and resistive layers. The thin layers are separated by invisible separator dots. When operating, an electrical current moves through the screen. When pressure is applied to the screen, the layers are pressed together, causing a change in the electrical current, and a touch is registered.
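The sensing described above amounts to a voltage divider: the contact point splits each resistive layer, so the controller can recover each coordinate as the ratio of the measured voltage to the drive voltage. A minimal Python sketch of that calculation; the function name, 3.3 V drive voltage, and screen dimensions are illustrative assumptions, not from the slides, and a real controller would also debounce and calibrate.

```python
def touch_position(v_x, v_y, v_ref=3.3, width=800, height=480):
    """Convert voltages read from the two resistive layers into screen
    coordinates: each coordinate is proportional to measured/drive voltage."""
    x = round(v_x / v_ref * width)
    y = round(v_y / v_ref * height)
    return x, y

# Half the drive voltage on X and a quarter on Y:
# mid-screen horizontally, a quarter of the way down vertically.
print(touch_position(1.65, 0.825))
```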
  • 11. Surface Acoustic Wave (SAW) SAW technology uses ultrasonic waves that pass over the touch screen panel. When the panel is touched, a portion of the wave is absorbed. This change in the ultrasonic waves registers the position of the touch event and sends this information to the controller for processing. Because it uses a pure glass construction, SAW provides superior image clarity, resolution and higher light transmission compared to resistive and capacitive technologies.
  • 12. Infrared Infrared technology relies on the interruption of an infrared light grid in front of the display screen. The touch frame contains a row of infrared LEDs and phototransistors, each mounted on two opposite sides to create a grid of invisible infrared light. The frame assembly comprises printed wiring boards, on which the electronics are mounted, and is concealed behind an infrared-transparent bezel.
  • 13. Optical Touch Optical imaging is one of the more modern touch technologies. Since NextWindow's technology uses optical sensors to detect the touch point, the touch registers just before the physical touch on the screen. This means that users can apply zero or light touch to the screen to initiate a response, and any input device, such as a paintbrush, finger, pen, or stylus, will work. Optical imaging provides a solution without calibration drift.
  • 15. Multi-touch Gestures Tap Pan Long press Scroll Two finger scroll Flick Two finger tap Pinch open Pinch close
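Two-finger gestures such as pinch open and pinch close from the list above are typically recognized from the change in distance between the two contact points. A minimal sketch, with hypothetical touch coordinates and a threshold chosen purely for illustration:

```python
import math

def pinch_direction(start_points, end_points, threshold=10.0):
    """Classify a two-finger gesture as pinch open, pinch close, or neither,
    from the change in distance between the two contact points."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    delta = dist(*end_points) - dist(*start_points)
    if delta > threshold:
        return "pinch open"    # fingers moved apart (e.g. zoom in)
    if delta < -threshold:
        return "pinch close"   # fingers moved together (e.g. zoom out)
    return "none"

# Fingers start 50 px apart and end 120 px apart: a pinch open.
print(pinch_direction([(100, 100), (150, 100)], [(60, 100), (180, 100)]))
```

Real toolkits track many intermediate samples and report a continuous scale factor rather than a single start/end classification, but the distance-delta idea is the same.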
  • 16. Applications • Our technique is force-sensitive, and provides unprecedented resolution and scalability, allowing us to create sophisticated multi-point widgets for applications large enough to accommodate both hands and multiple users.
  • 17. The current use of multi-touch technology  The current use of multi-touch technology enables users to easily interact with various devices by simply using a touch screen, and to navigate through interactive content with ease, ensuring great flexibility and speed. Nowadays, users have the opportunity to use multi-touch display panels and multi-touch display windows, but also multi-touch tables and notebooks. Some important players on the market of multi-touch solutions are Touch Data LLC and GestureTek, who are focused on the development of comfortable and effective solutions such as the multi-touch wall or multi-touch workstation. Their products and solutions support both simultaneous multi-user and individual use. Such solutions are widely used in professional presentations and for broadcast use due to their flexibility, speed, effective interactivity and great design.
  • 18. The scope of multi-touch interaction The multi-touch solutions will continue to evolve in complexity and ease of use, and we will be main beneficiaries of such multi-touch achievements. More investments will be made in the field of multi-touch technology as well as in the research papers conducted by professional engineers in this area of expertise.
  • 19. Enhancement in Technology In more recent work we focus on new multi-touch paradigms and interactions that combine both traditional 2D interaction and novel 3D interaction on a touch surface to form a new class of multi-touch systems, which we refer to as interscopic multi-touch surfaces (iMUTS). We discuss iMUTS-based user interfaces that support interaction with 2D content displayed in monoscopic mode and 3D content usually displayed stereoscopically.
  • 22. Introduction • Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. • Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. • Many approaches have been made using cameras and computer vision algorithms to interpret sign language.
  • 24. • In gesture recognition technology a camera reads the movements of the human body and communicates the data to a computer that uses the gestures as input to control devices or applications. • Gesture recognition can be conducted with techniques from computer vision and image processing.
  • 25. Gesture types • In computer interfaces, two types of gestures are distinguished. • Offline gestures • Online gestures
  • 26. Image processing • Image processing is any form of signal processing for which the input is an image. • Image processing usually refers to digital image processing, but optical and analog image processing also are possible.
  • 27. Input devices • The ability to track a person's movements and determine what gestures they may be performing can be achieved through various tools. • Wired gloves • Depth aware cameras • Stereo cameras • Controller based gestures • Single camera
  • 28. Technology Behind it.. Wired gloves • These can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices. • This uses fiber-optic cables running down the back of the hand. Light pulses are created, and when the fingers are bent, the change in light is registered, giving an approximation of the hand pose.
  • 29. Technology Behind it.. Depth aware cameras • Using specialized cameras, such as structured-light cameras, one can generate a depth map of what is being seen through the camera. • These can be effective for detection of hand gestures due to their short-range capabilities.
  • 31. Technology Behind it.. Stereo cameras • It is a camera that has two lenses about the same distance apart as your eyes, and it takes two pictures at the same time. • A 3D representation can be approximated from the output of the cameras.
  • 32. Technology Behind it.. Controller based gestures • These controllers act as an extension of the body, so that when gestures are performed, some of their motion can be conveniently captured by software. • Mouse gestures are one example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand.
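A common way to implement mouse gestures is to reduce the pointer trail to a string of cardinal strokes and match that string against a gesture table. A hedged sketch; the coordinates, the 5-pixel jitter threshold, and the "RD" example gesture are illustrative assumptions:

```python
def mouse_gesture(points, min_move=5):
    """Reduce a mouse trail (list of (x, y) points) to a string of
    cardinal strokes: L, R, U, D."""
    strokes = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = x2 - x1, y2 - y1
        if max(abs(dx), abs(dy)) < min_move:
            continue  # ignore jitter below the movement threshold
        if abs(dx) >= abs(dy):
            stroke = "R" if dx > 0 else "L"
        else:
            stroke = "D" if dy > 0 else "U"
        if not strokes or strokes[-1] != stroke:
            strokes.append(stroke)  # collapse repeated strokes
    return "".join(strokes)

# Drag right, then down: reduces to the stroke string "RD".
print(mouse_gesture([(0, 0), (40, 2), (80, 3), (82, 50), (84, 100)]))
```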
  • 34. Technology Behind it.. Single camera • A normal camera can be used for gesture recognition where the resources would not be convenient for other forms of image-based recognition. • Earlier it was thought that a single camera may not be as effective as stereo or depth-aware cameras.
  • 37. Challenges • Accuracy • Background noise • Quality • Robust computer vision methods
  • 38. Uses • Sign language recognition • For socially assistive robotics • Directional indication through pointing • Immersive game technology • Effective computing • Remote control
  • 41. ABSTRACT •The Eyegaze System is a communication system for people with complex physical disabilities. •It is operated with the eyes, by looking at control keys displayed on a screen. •With this system a person can synthesize speech, control his environment, operate a telephone, run computer software, operate a computer mouse, and access the Internet and e-mail.
  • 42. INTRODUCTION The Eyegaze System is a direct-select, vision-controlled communication and control system. It was developed in Fairfax, Virginia by LC Technologies.
  • 43. Who is using the Eyegaze System? THIS SYSTEM IS MAINLY DEVELOPED FOR THOSE WHO LACK THE USE OF THEIR HANDS OR VOICE. ITS USERS ARE ADULTS AND CHILDREN WITH CEREBRAL PALSY, SPINAL CORD INJURIES, BRAIN INJURIES, ALS, MULTIPLE SCLEROSIS, BRAINSTEM STROKES
  • 44. SKILLS NEEDED BY THE USERS  Good control of one eye  Adequate vision Ability to maintain a position in front of the Eye gaze monitor. Mental abilities that improve the probability for successful Eyegaze use: Cognition Ability to read Memory
  • 45. PARTS AND WORKING OF THE EYEGAZE SYSTEM As a user sits in front of the Eyegaze monitor, a specialized video camera mounted below the monitor observes one of the user's eyes.
  • 46. How to run the Eyegaze System • A user operates the Eyegaze System by looking at rectangular keys that are displayed on the control screen. • To "press" an Eyegaze key, the user looks at the key for a specified period of time.
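The "look at the key for a specified period of time" rule above is a dwell-time check. A minimal sketch over a stream of gaze samples; the (timestamp, key) data format, the 0.8-second dwell time, and the sample values are hypothetical, since the slides do not specify them:

```python
def dwell_select(gaze_samples, dwell_time=0.8):
    """Return the first key the gaze rests on for at least dwell_time
    seconds. gaze_samples is a list of (timestamp, key) pairs from the
    eye tracker; key is None when the gaze is between keys."""
    current_key, start = None, None
    for t, key in gaze_samples:
        if key != current_key:
            current_key, start = key, t       # gaze moved to a new key
        elif key is not None and t - start >= dwell_time:
            return key                        # held long enough: "press" it
    return None

# The gaze glances at "A", then rests on "B" for 0.9 s, selecting it.
samples = [(0.0, "A"), (0.3, "A"), (0.5, "B"), (1.0, "B"), (1.4, "B")]
print(dwell_select(samples))
```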
  • 47. USES OF EYEGAZE The Basic Eyegaze Can Calibrate Typewrite Read Text Entertain with games Teach Screens
  • 48. With Options The Eye gaze Can •BE AT TWO SITES!! •BE A KEYBOARD •SPEAK through a speech synthesizer •CONTROL appliances •DIAL and answer a phone
  • 49. MENUS OF EYEGAZE SYSTEM The Phrase Program Typewriter Program The telephone program Run Second PC  Paddle games & Score Four Read Text Program Television
  • 50. The Phrase Program Typewriter Program The telephone program The Lights & appliances Program
  • 51. Run Second PC As a keyboard As a mouse controller
  • 52. Environment required for an Eyegaze system •The Eyegaze System must be operated in an environment where there is limited infrared light. •The System works best away from windows, and in a room lit with fluorescent or mercury-vapor lights, which are low in infrared.
  • 53. ADVANCEMENTS Portable Eyegaze System Mounted on Wheelchair Screen of Eyegaze System
  • 54. IntelliGaze IG-30 •It is a sophisticated system with a high tracking rate and an excellent working range. •It can accommodate rapid or involuntary head movements. •IntelliGaze uses the latest camera technology and very sophisticated image processing and calibration methods.
  • 55. For People with Limited Eye Control • Scanning Keyboard is the new row/column keyboard with an on-screen eye "switch" for people with limited eye movement. • The user may "speak" what he has typed.
  • 56. Some of the common eye movement problems that interfere with Eyegaze use are Nystagmus - constant, involuntary movement of the eyeball Alternating strabismus - the eyes cannot be directed to the same object; either one deviates The common vision problems are: Inadequate visual acuity Diplopia (double vision) Blurred vision Cataracts (clouding of the lens of the eye)
  • 57. APPLICATIONS •Neurosciences /Neuropsychology •Vision Research •Experimental Psychology •Cognitive Psychology •Psycholinguistics •Psychiatry /Mental Health •Transportation: Flight simulators /driving simulators •Robotics - remote vision control •Video and arcade games
  • 58. FUTURE WORK •Totally Free Head Motion •Automatic Eye Acquisition •Binocular Eye Tracking •High Gazepoint Tracking Accuracy •Easy User Calibration (the Eyegaze System's Eyefollower 2.0)
  • 59. CONCLUSION Today, the human eye-gaze can be recorded by relatively unremarkable techniques. This thesis argues that it is possible to use the eye-gaze of a computer user in the interface to aid the control of the application. Care must be taken, though, that eye-gaze tracking data is used in a sensible way, since the nature of human eye-movements is a combination of several voluntary and involuntary cognitive processes.
  • 61. Introduction • What is Speech Recognition? Also known as automatic speech recognition or computer speech recognition, it means the computer understanding spoken input and performing the required task.
  • 62. • Where can it be used? - Dictation - System control/navigation - Commercial/industrial applications - Voice dialing
  • 63. Recognition pipeline: Voice Input → Analog to Digital → Speech Engine (Acoustic Model + Language Model) → Display, with feedback to the speaker.
  • 64. •Acoustic Model An acoustic model is created by taking audio recordings of speech and their text transcriptions, and using software to create statistical representations of the sounds that make up each word. It is used by a speech recognition engine to recognize speech. •Language Model  Language modeling is used in many natural language processing applications such as speech recognition; it tries to capture the properties of a language and to predict the next word in a speech sequence.
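As a toy illustration of the language model's job of predicting the next word, here is a bigram counter. This is far simpler than the n-gram or neural models real engines use, and the corpus and function names are invented for the example:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count word-pair frequencies from a list of sentences."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent word seen after `word`, or None."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = ["switch to channel nine", "switch to channel five", "go to sleep"]
model = train_bigrams(corpus)
# "channel" follows "to" twice in the corpus, "sleep" only once.
print(predict_next(model, "to"))
```

The acoustic model scores how well sounds match candidate words; a model like this then biases recognition toward word sequences the language actually produces.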
  • 65. TYPES OF VOICE RECOGNITION • There are two types of speech recognition. One is called speaker-dependent and the other is speaker-independent. Speaker-dependent software is commonly used for dictation software, while speaker-independent software is more commonly found in telephone applications. • Speaker-dependent software works by learning the unique characteristics of a single person's voice, in a way similar to voice recognition. New users must first "train" the software by speaking to it, so the computer can analyze how the person talks. This often means users have to read a few pages of text to the computer before they can use the speech recognition software.
  • 66. TYPES OF VOICE RECOGNITION • Speaker-independent software is designed to recognize anyone's voice, so no training is involved. This means it is the only real option for applications such as interactive voice response systems, where businesses can't ask callers to read pages of text before using the system. The downside is that speaker-independent software is generally less accurate than speaker-dependent software. • Speech recognition engines that are speaker-independent generally deal with this fact by limiting the grammars they use. By using a smaller list of recognized words, the speech engine is more likely to correctly recognize what a speaker said.
  • 67. How do humans do it? • Articulation produces sound waves, which the ear conveys to the brain for processing.
  • 68. How might computers do it? Acoustic waveform Acoustic signal Speech recognition • Digitization • Acoustic analysis of the speech signal • Linguistic interpretation
  • 70. DIFFERENT PROCESSES INVOLVED • Digitization – converting the analogue signal into a digital representation • Signal processing – separating speech from background noise • Phonetics & phonology – phonetics is the production and perception of speech sounds in any language and deals with the "phone"; phonology is the interpretation of speech sounds in a particular language and deals with the phoneme, the smallest unit of sound. • Lexicology and syntax – lexicology is the part of linguistics that studies words: their nature and meaning, their elements, the relations between words and word groups, and the whole lexicon.
  • 71. DIFFERENT PROCESSES INVOLVED (CONTD.) • Syntax and pragmatics • Semantics deals with sentence meaning • Pragmatics is concerned with bridging the explanatory gap between sentence meaning and speaker's meaning
  • 72. Digitization • Analogue to digital conversion • Sampling and quantizing  Sampling is converting a continuous signal into a discrete signal  Quantizing is the process of approximating a continuous range of values • Use filters to measure energy levels for various points on the frequency spectrum • Knowing the relative importance of different frequency bands (for speech) makes this process more efficient • E.g. high frequency sounds are less informative, so can be sampled using a broader bandwidth (log scale)
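The sampling-and-quantizing step above can be sketched in a few lines: each continuous sample is snapped to the nearest of a fixed number of discrete levels, as an A/D converter does. The 3-bit resolution and the ±1.0 V range are illustrative assumptions:

```python
def quantize(samples, bits=3, v_max=1.0):
    """Map each continuous sample in [-v_max, v_max] to one of
    2**bits evenly spaced discrete levels."""
    levels = 2 ** bits
    step = 2 * v_max / (levels - 1)          # spacing between levels
    return [round(s / step) * step for s in samples]

# A smooth ramp of values collapses onto the nearest of 8 levels.
print(quantize([0.0, 0.1, 0.33, 0.5, 0.9]))
```

More bits mean finer steps and less quantization error, which is one reason frequency bands that carry more speech information deserve finer sampling than the less informative high-frequency bands mentioned above.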
  • 73. Separating speech from background noise • Noise cancelling microphones • Two mics, one facing speaker, the other facing away • Ambient noise is roughly same for both mics • Knowing which bits of the signal relate to speech
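The two-microphone idea can be sketched as a per-sample subtraction. This is a deliberate simplification with made-up sample values: real noise cancelling must also cope with differing gains and delays between the two microphones:

```python
def cancel_noise(speech_mic, ambient_mic):
    """Subtract the ambient-facing mic's signal from the speaker-facing
    one; noise common to both microphones roughly cancels out."""
    return [s - a for s, a in zip(speech_mic, ambient_mic)]

noise  = [0.2, -0.1, 0.3, 0.0]                    # heard by both mics
speech = [0.5, 0.4, -0.2, 0.1]                    # heard by the front mic only
front  = [s + n for s, n in zip(speech, noise)]   # front mic hears both
print(cancel_noise(front, noise))                 # recovers the speech samples
```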
  • 74. EVOLUTION OF VOICE RECOGNITION Pattern Matching → Interactive Voice Recognition (IVR) → Dictation → Speech Integration into Applications → Hands Free, Eyes Free
  • 75. Process of speech recognition: Speaker Recognition → Speech Recognition → parsing and arbitration, dispatching commands to target services S1…SN
  • 79. Example: the recognizer hears "Switch", "to", "channel", "nine"; parsing and arbitration works out what is being talked about (Channel → TV, Dim → Lamp, On → TV/Lamp) before inferring and execution
  • 80. Framework of voice recognition: Face Recognition and Gesture Recognition feed parsing and arbitration, covering "Authentication", "Understanding", and "Inferring and execution"
  • 81. Speaker Recognition •Definition • It is the method of recognizing a person based on his voice • It is one of the forms of biometric identification • It depends on speaker-specific characteristics.
  • 82. ADVANTAGES • People with disabilities • Organizations - increases productivity, reduces costs and errors • Lower operational costs • Advances in technology will allow consumers and businesses to implement speech recognition systems at a relatively low cost. • Cell-phone users can dial pre-programmed numbers by voice command. • Users can trade stocks through a voice-activated trading system. • Speech recognition technology can also replace touch-tone dialing, resulting in the ability to target customers that speak different languages.
  • 83. DISADVANTAGES • Difficult to build a perfect system. • Conversations involve more than just words (non-verbal communication, stutters, etc.). • Every human being has differences, such as their voice, mouth, and speaking style. • Filtering background noise is a task that can even be difficult for humans to accomplish.
  • 84. Future of Speech Recognition • Accuracy will become better and better. • Dictation speech recognition will gradually become accepted. • Small hand-held writing tablets for computer speech recognition dictation and data entry will be developed, as faster processors and more memory become available. • Greater use will be made of "intelligent systems" which will attempt to guess what the speaker intended to say, rather than what was actually said, as people often misspeak and make unintentional mistakes. • Microphone and sound systems will be designed to adapt more quickly to changing background noise levels, different environments, with better recognition of extraneous material to be discarded.
  • 85. Head pose tracking • Proactive computing technology allows the development of more intelligent and more friendly human-computer interaction (HCI) methods. In the future computers will interact with people in a more natural way. To achieve this kind of advanced interaction, the system should understand human behaviour and respond appropriately, for example by observing one's facial expressions and gestures. Machine vision provides an excellent way of realizing such an intelligent human-computer interface that enables a user to control a computer without any physical contact with devices such as keyboards, mice and displays.
  • 86. • In the example system, the video camera is located above the computer display and it acquires images of the user's face continuously. Face and facial features can be extracted and tracked from a sequence of images. The 3-D position and orientation (pose) of the head is determined from these feature points. The pose obtained allows recognition of simple gestures such as nodding and head shaking.
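Given the tracked head pose, nodding and head shaking can be recognized by counting direction reversals in the pitch (up/down) and yaw (left/right) angles. A sketch under assumed conventions; the (pitch, yaw) representation in degrees, the thresholds, and the sample sequences are illustrative, not from the slides:

```python
def classify_head_gesture(poses, min_swings=2, threshold=5.0):
    """Classify a sequence of (pitch, yaw) head angles in degrees as a
    nod (repeated pitch reversals), a shake (repeated yaw reversals),
    or neither."""
    def swings(values):
        # Keep only movements large enough to matter, then count
        # sign reversals between consecutive movements.
        deltas = [b - a for a, b in zip(values, values[1:])
                  if abs(b - a) >= threshold]
        return sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)

    pitches = [p for p, _ in poses]
    yaws = [y for _, y in poses]
    if swings(pitches) >= min_swings:
        return "nod"
    if swings(yaws) >= min_swings:
        return "shake"
    return "none"

# Pitch oscillates up and down while yaw stays still: a nod.
print(classify_head_gesture([(0, 0), (10, 0), (-8, 0), (9, 0), (-7, 0)]))
```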
  • 87. Spatial and semantic head gestures SPATIAL HEAD GESTURES (WHERE THE HEAD MOVES IN ONE OF THE GENERAL DIRECTIONS: LEFT, RIGHT, UP, OR DOWN) ALLOW FOR SPATIAL REFERENCES AND INDICATION OF DIRECTIONS SEMANTIC HEAD GESTURES LIKE NODDING AND SHAKING THE HEAD ARE USED TO EXPRESS AGREEMENT OR DISAGREEMENT
  • 88. Applications INTELLIGENT ENVIRONMENTS  ALTERNATIVE INPUT INTERFACES HUMAN-ROBOT INTERACTION
  • 93. Brain-body Interfaces Using a relatively new brain sensing tool called functional near-infrared spectroscopy (fNIRS), along with a more established brain sensing tool called electroencephalography (EEG), we can detect signals within the brain that indicate various cognitive states. These devices provide data on brain activity while remaining portable and non-invasive.
  • 94. Some applications for brain and body • Augmented Anatomical Overlay (magic mirror) • Control Robots with Body Movement • Virtual Clothes-Fitting • Turn Any Surface Touchscreen-Enabled • Virtual Reality Interaction • Retrieve Data via Gestures • Translate Sign Language