2. Introduction
Sixth Sense is a wearable gestural interface that augments the
physical world around us with digital information and lets us use
natural hand gestures to interact with that information.
Steve Mann is considered the father of Sixth Sense Technology; he built a
wearable computer in 1990, implemented as a neck-worn projector with a
camera system (which Mann originally referred to as "Synthetic Synesthesia
of the Sixth Sense"). His work was carried forward by Pranav Mistry, a
Ph.D. student at MIT.
3. Related Technologies
Augmented reality
• AR is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or
supplemented) by computer-generated sensory input from the world of data (present on the web).
• It connects the real world directly with the world of data or digital information.
Gesture recognition
• Gesture recognition is a topic in computer science and language technology with the goal of interpreting
human gestures via mathematical algorithms.
• Current focuses in the field include emotion recognition from facial expressions and hand gesture recognition.
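As a toy illustration of the idea (a hypothetical sketch, not how any particular system does it), a simple mathematical rule can classify a tracked fingertip path as a swipe gesture:

```python
# Toy sketch: classify a tracked fingertip path as a swipe gesture.
# Assumes fingertip positions are (x, y) pixel coordinates, one per frame.

def classify_swipe(path, min_travel=100):
    """Return 'swipe-left', 'swipe-right', or None for a fingertip path."""
    if len(path) < 2:
        return None
    dx = path[-1][0] - path[0][0]
    dy = path[-1][1] - path[0][1]
    # Require mostly-horizontal motion of sufficient length.
    if abs(dx) >= min_travel and abs(dx) > 2 * abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return None

print(classify_swipe([(10, 200), (80, 205), (180, 210)]))  # swipe-right
```

Real recognizers use far richer models, but the principle is the same: raw positions in, a named gesture out.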
Computer vision
• Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images
and, in general, high-dimensional data from the real world in order to produce numerical or symbolic
information, e.g., in the forms of decisions.
• It is the ability of a machine to act like a human eye, electronically perceiving and understanding an image.
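To make "numerical or symbolic information, e.g., in the forms of decisions" concrete, here is a minimal, hypothetical sketch: thresholding raw pixels to reach a yes/no decision about whether a red marker appears in an image (the colour rule and counts are illustrative assumptions):

```python
# Minimal sketch: derive a symbolic decision from raw pixel data.
# The image is a list of rows of (r, g, b) tuples; we count pixels
# that are predominantly red and decide whether a red marker is present.

def is_red(pixel, margin=50):
    r, g, b = pixel
    return r > g + margin and r > b + margin

def marker_present(image, min_pixels=3):
    count = sum(is_red(p) for row in image for p in row)
    return count >= min_pixels

image = [
    [(200, 10, 10), (200, 20, 5), (30, 30, 30)],
    [(210, 0, 0), (25, 200, 25), (0, 0, 0)],
]
print(marker_present(image))  # True (3 red pixels)
```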
5. Working
Camera: It captures whatever it views and tracks the user's hand gestures. The data is sent to the
computing device in the pocket, which is connected to the internet. The camera acts as a digital
eye connecting the user to the world of digital information.
Projector: It projects visual information onto physical objects and surfaces, enabling them to be
used as an interface. The data it displays comes from the computing device.
Mirror: The projector hangs from the neck pointing downwards; the mirror reflects the projected
image onto the desired surface, placing data into the physical world.
Mobile Component: This is a computing device connected to a database over the web.
Colored Markers: Colored markers placed at the fingertips help the camera recognize hand
gestures. The various movements and structural arrangements of these markers are interpreted
as gestures, which in turn act as instructions for the projected application interfaces.
All of these components are coupled in a pendant-like wearable device.
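The marker-tracking step above can be sketched as follows. This is a simplified, hypothetical version (real systems work on live camera frames, typically in HSV colour space): each frame is reduced to the centroid of the pixels matching a marker's colour, which gives the fingertip position that gesture interpretation then works from. The `blue` predicate and the tiny frame are illustrative assumptions.

```python
# Sketch: locate a coloured fingertip marker in a frame by computing
# the centroid of all pixels matching the marker colour.
# The frame is a list of rows of (r, g, b) tuples.

def find_marker(frame, matches):
    """Return the (x, y) centroid of pixels where matches(pixel) is True,
    or None if the marker is not visible in this frame."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if matches(pixel):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

blue = lambda p: p[2] > 150 and p[0] < 100 and p[1] < 100
frame = [
    [(0, 0, 0), (10, 10, 200), (10, 10, 210)],
    [(0, 0, 0), (20, 5, 220), (0, 0, 0)],
]
print(tuple(round(v, 2) for v in find_marker(frame, blue)))  # (1.33, 0.33)
```

Tracking the centroid from frame to frame yields the fingertip path that the gesture classifier consumes.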
8. Applications
Get product information.
Take pictures.
Create multimedia reading experiences.
Get flight updates.
Get information about people.
Zooming feature.
Viewing map.
And more….
9. Pros:
• Portable.
• Cost effective.
• Open-source software.
• Data access directly from the web, anywhere.
• Supports multi-touch and multi-user interaction.
Cons :
• Hardware limitations of the devices.
• Post-processing can occur.
• Some phones may not allow the external camera feed to be manipulated in real time.
10. Conclusions
The system recognizes the objects around us, displays information
automatically, and lets us access it in any way we need.
The prototype implements several applications that demonstrate the
usefulness, viability, and flexibility of the system.
It has the potential to become the ultimate "transparent" user interface
for accessing information about everything around us, allowing us to
interact with that information via natural hand gestures.
11. Future Requirements
To get rid of the colored markers.
To have 3D gesture tracking.
To incorporate both the camera and the projector into the mobile computing
device, making it more compact.
To make it work as a fifth sense for a disabled person.