Sixth Sense technology allows users to access digital information about objects and surfaces in the physical world using hand gestures. It consists of a camera, projector, and mirror connected to a mobile device. The camera recognizes hand gestures and objects, and the projector displays additional digital information onto physical surfaces based on the camera's input. Some examples of uses include getting information about books by gesturing near them, checking flight statuses by gesturing over boarding passes, and making calls or accessing maps with hand gestures in the air. The technology aims to more seamlessly integrate digital information into everyday life using natural hand motions.
Sixth Sense technology allows users to interact with digital information in the physical world using natural hand gestures. It consists of a camera, projector, mirror, and mobile device connected via Bluetooth. The camera tracks hand gestures, marked by colored finger caps, and objects in view. The mobile device processes this data, and the projector displays related digital information onto physical surfaces. This bridges the gap between the physical and digital worlds by letting users access online data about physical objects or people in real time through hand gestures alone.
The document describes the components and working of Sixth Sense technology, which is a wearable gestural interface. It consists of a camera, projector, mirror, smartphone, and color markers on the fingertips. The camera captures images and tracks hand gestures via the color markers. The smartphone processes the data and searches the internet. It projects information onto surfaces using the projector and mirror. The technology bridges the physical and digital world by recognizing objects and displaying related information using hand gestures.
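The marker-tracking step described above can be sketched as a simple centroid computation: find every pixel close to a marker's color and average their positions. This is a minimal illustrative sketch, not the prototype's actual software; the frame format, marker color, and tolerance are assumptions.

```python
# Hypothetical sketch of color-marker tracking: locate the centroid of all
# pixels that fall within a tolerance of a fingertip marker's RGB color.

def track_marker(frame, marker_rgb, tol=30):
    """Return the (row, col) centroid of pixels near marker_rgb, or None.

    frame: list of rows, each row a list of (r, g, b) tuples.
    """
    row_sum, col_sum, count = 0, 0, 0
    mr, mg, mb = marker_rgb
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # A pixel belongs to the marker if each channel is within tol.
            if abs(r - mr) <= tol and abs(g - mg) <= tol and abs(b - mb) <= tol:
                row_sum += y
                col_sum += x
                count += 1
    if count == 0:
        return None  # marker not visible in this frame
    return (row_sum / count, col_sum / count)
```

A real implementation would typically threshold in HSV space for robustness to lighting, but the centroid idea is the same.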
The document describes a technical seminar presentation on Sixth Sense technology. It provides an abstract, introduction, and overview of the components and technologies involved in Sixth Sense, including a camera, colored markers, mobile phone, projector, and mirror. Some key advantages are discussed such as portability, support for multi-touch and multi-user interaction, low cost, connecting the physical and digital worlds, and enabling real-time data access from machines.
This document summarizes a technology called Sixth Sense, which allows users to perform gestures to interact with digital information rather than using keyboards or mice. It discusses using commands recognized by a speech integrated circuit instead of gestures to overcome limitations of gesture recognition. The speech IC is trained to recognize commands, which then trigger actions performed by a mobile device and projected for the user.
Sixth Sense is a wearable device developed by Pranav Mistry that uses a camera, projector, and mirror coupled with a mobile phone to augment the physical world with digital information. It allows users to access information about their environment or objects by pointing their hands, and interact with this information using natural hand gestures. The camera recognizes hand gestures and objects, sends this data to the connected mobile phone for processing, and the projector projects the output back onto physical surfaces and objects, blending digital information with the physical world. This effectively gives users a "sixth sense" by bringing intangible online information into the tangible world in a seamless way.
The document discusses the Sixth Sense technology, which aims to connect the digital world to the physical world. It describes the key components of the Sixth Sense device prototype, including a camera, projector, mirror, and colored markers on the fingers. The device processes gestures to project digital information onto physical surfaces. Some applications mentioned include using maps, taking photos, drawing, making calls, getting product/flight information, and interacting with objects. Future projects building on this technology, like mouseless computing and allowing multiple views on a single display, are also discussed.
The document describes a project to create a sixth sense robot using an Atmel ATMega8 microcontroller development board. The robot uses computer vision techniques to track colored markers on a user's fingers to interpret gestures and send commands to control motors on the robot. Specifically, it captures images with a webcam, processes them to detect different colored markers, and sends signals to an H-bridge motor controller to move the robot forward, backward, or turn based on the detected gesture. The code uses color thresholding algorithms to identify pixels matching the colors of each marker and determine the gesture.
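The gesture-to-motor mapping in the robot project can be illustrated with a small dispatch function: the detected marker position (normalized to the frame) selects one of four drive commands, expressed as H-bridge input states. The pin layout, thresholds, and dead zone below are illustrative assumptions, not the actual firmware's constants.

```python
# Illustrative sketch of mapping a tracked marker position to H-bridge
# inputs. Left motor is driven by (IN1, IN2), right motor by (IN3, IN4);
# per motor, (1, 0) = forward, (0, 1) = reverse, (0, 0) = stop.

def gesture_to_hbridge(x, y, dead_zone=0.2):
    """Map a marker position normalized to [-1, 1] to (IN1, IN2, IN3, IN4)."""
    if abs(x) <= dead_zone and abs(y) <= dead_zone:
        return (0, 0, 0, 0)                              # stop
    if abs(y) > abs(x):
        # Vertical displacement dominates: drive both motors the same way.
        return (1, 0, 1, 0) if y > 0 else (0, 1, 0, 1)   # forward / reverse
    # Horizontal displacement dominates: spin motors in opposite directions.
    return (0, 1, 1, 0) if x < 0 else (1, 0, 0, 1)       # turn left / right
```

On the ATMega8 side, these four booleans would simply be written to the output port driving the H-bridge.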
Sixth Sense Technology (2014), by Richard Des Nieves, Bengaluru, Karnataka, India.
For more information, visit the references below:
http://www.pranavmistry.com/projects/sixthsense/
https://code.google.com/p/sixthsense/wiki/Software
Microsoft 2020 technology future vision
http://www.youtube.com/watch?v=2wto6SFl13A
Sixth Sense is a wearable gestural interface that augments physical reality by projecting digital information onto surfaces. It consists of a camera, projector, mirror, smart phone, and color markers. The camera captures gestures tagged with color markers and sends the data to the smart phone. The projector projects outputs from the smart phone onto surfaces via a mirror. Applications include making calls, accessing maps/emails, taking pictures, and obtaining product information by recognizing gestures. Sixth Sense bridges the physical and digital world through computer vision, gesture recognition and augmented reality. Future enhancements could eliminate markers and enable 3D gesture tracking and assistance for disabled individuals.
This document describes the Sixth Sense technology, which allows users to interact with digital information through natural hand gestures. The Sixth Sense consists of a camera, projector, mirror, and mobile device coupled together. The camera tracks hand gestures while the projector displays interfaces on surfaces. This enables applications like accessing information about objects, taking photos, and making calls. The system has advantages such as being portable and supporting multi-touch interaction. Future enhancements could eliminate color markers and incorporate the camera and projector directly into a mobile device.
Sixth Sense is a wearable gestural interface that augments the physical world with digital information. It uses a camera, projector, and mirror mounted on a pendant to project information onto surfaces based on hand gestures. Applications include making calls, getting maps and directions, viewing videos, taking photos, and accessing information about products, books, and people. The technology supports multi-touch and user interaction and has potential for use in gaming, education, and assisting disabled individuals through recognition of objects and gestures.
1. The document discusses Sixth Sense technology, which allows a user to integrate digital information into the physical world by projecting it onto surfaces using a camera-projector worn around the neck.
2. It consists of color markers, a camera, mobile component, projector, and mirror. The camera tracks hand gestures to interact with projected interfaces for applications like making calls, using maps, checking time, taking photos, games and accessing information.
3. Advantages include making any surface an interactive display and easy access to data in real time using gestures. Disadvantages are potential health issues from projection, security concerns if data is projected for others to see, and limited battery life.
Sixth Sense is a wearable gestural interface device developed by Pranav Mistry, a PhD student in the Fluid Interfaces Group at the MIT Media Lab. It is similar to Telepointer, a neck-worn projector/camera system developed by Media Lab student Steve Mann (which Mann originally referred to as "Synthetic Synesthesia of the Sixth Sense").
Pranav Mistry, a PhD student at MIT Media Lab, developed the sixth sense technology. It is a wearable gestural interface that augments the physical world with digital information using hand gestures detected by a camera. The system includes a camera, projector, mirror, and smart phone to allow interactions like making calls, viewing maps, taking photos, and accessing information by pointing at physical objects or surfaces. While promising improved access to information, challenges include privacy and visibility of projections depending on lighting conditions.
Sixth Sense technology is a user interface that adds a touch of the intangible, digital world to the tangible, physical world. It was developed to its latest form by Pranav Mistry, a PhD student in the Fluid Interfaces Group at the MIT Media Lab.
The document describes a presentation by two students, K.Poojitha and K.P.Reshma, on Sixth Sense Technology. Sixth Sense Technology allows a user to access digital information from the internet and project it onto physical surfaces using a small, wearable device comprised of a webcam, projector, mirror, and mobile phone. The device recognizes hand gestures and objects in view to summon and interact with virtual information. The students' presentation covers the components, working, applications, and advantages of Sixth Sense Technology.
This document summarizes a seminar presentation on Sixth Sense technology. It describes sixth sense as a wearable gestural interface that augments the physical world with digital information and allows for interaction through natural hand gestures. The presentation outlines the history and components of sixth sense, including a camera, projector, mirror, and colored markers. It then explains how sixth sense works by capturing gestures via camera, processing the information, and projecting it onto surfaces. Finally, the document lists several applications of sixth sense and its advantages in seamlessly connecting the digital and physical worlds.
Sixth Sense is a wearable technology developed by Pranav Mistry that allows users to access digital information using hand gestures. It consists of a camera, projector, and mobile device. The camera tracks colored markers on the user's fingers to recognize gestures and access information from the internet. This information can then be projected onto physical surfaces using the projector. Some applications include accessing maps, taking photos, drawing, making calls, getting product information, and checking the weather. Whirlpool has also integrated Sixth Sense technology into washing machines to optimize water and energy usage during cycles based on load size.
This document summarizes a seminar presentation on Sixth Sense technology. Sixth Sense technology allows a user to integrate digital information into the physical world using hand gestures. It works by using a camera to capture gestures which are processed by a mobile device and used to interact with information projected onto physical surfaces by a projector. Some potential applications include making calls, viewing maps, taking photos, and getting information about physical objects. The technology aims to allow computers to adapt to human needs through natural hand gestures.
The Sixth Sense technology developed by Pranav Mistry allows users to interact with digital information in the physical world using natural hand gestures. The prototype consists of a camera, projector, and colored markers that the camera uses to interpret gestures. It projects information onto surfaces based on gestures made near objects. Applications include using hand gestures to make calls, access maps, and check the time without touching a device. The technology aims to seamlessly integrate digital and physical worlds.
Vimal Kumar's presentation on Sixth Sense technology and its working.
This device was invented by Pranav Mistry, an Indian researcher, in 2008. Since many of us are unfamiliar with it, this presentation covers Sixth Sense technology, its working principle, and its applications.
PPT of Sixth Sense technology by Jagdeep Singh Sidhu.
The document describes the Sixth Sense technology, a wearable gestural interface developed by Pranav Mistry in 2009. It allows users to access digital information about the physical world by projecting it onto surfaces and interacting through natural hand gestures. The system uses a camera, projector, and mirror connected to a smartphone to recognize objects, gestures, and surfaces and display related data seamlessly overlaid on the physical world. Some applications mentioned include using gestures to draw, access maps and photos, and interact with projected interfaces on surfaces like palms or walls. Educational and other potential uses are also discussed.
The document describes Sixth Sense technology, a wearable gestural interface developed by Pranav Mistry that allows users to access digital information by interacting with the physical world through natural hand gestures. It consists of a camera, projector, and mirror coupled in a pendant-like device. The camera captures objects and gestures, the projector displays information on the mirror, and gestures are interpreted to interact with projected interfaces for applications like making calls, getting maps/product info, and taking photos. It aims to seamlessly merge the digital and physical worlds.
The Sixth Sense device is a wearable gestural interface created by Pranav Mistry that allows users to interact with digital information using natural hand gestures. It consists of a pocket projector, mirror, and camera connected to a mobile device. The camera tracks colored markers on the user's fingers to interpret gestures and project information from the internet onto surrounding surfaces. Some features include accessing product information by pointing at items in a store, checking flight statuses from paper tickets, and projecting additional information onto surfaces like books and newspapers. The device costs $350 and has potential educational uses such as virtual field trips, portable research, and facilitating collaboration and connection between students.
Sixth Sense is a wearable gestural interface that uses hand gestures and motions to allow users to interact with digital information projected onto any surface. It consists of a pocket projector, camera, and mobile phone that work together to project information and allow interaction through hand gestures without the need for physical devices like keyboards or mice. Some potential uses include sign language recognition, augmented reality applications, controlling devices through gestures, and creating alternative computer interfaces that make the entire world a computer interface. The goal is to make the technology widely available and adapt computers to human needs through more natural gestural interaction.
Sixth Sense technology was developed by Pranav Mistry. It is a wearable gesture-based device that integrates two worlds: the physical world and the digital world.
Sixth Sense technology is a mini-projector coupled with a camera and a cellphone, which acts as the computer and connects to the cloud, where information is stored on the web. Sixth Sense also obeys hand gestures. The camera instantly recognizes objects around a person, and the micro-projector overlays information on any surface, including the object itself or the user's hand, which the user can then access or manipulate with their fingers. To make a call, the user extends a hand in front of the projector and a number pad appears for dialing. To check the time, the user draws a circle on the wrist and a watch appears. To take a photo, the user makes a square with their fingers, framing the desired shot, and the system captures the photo, which can later be organized with the others using hand movements in the air. The device has a huge number of applications, and it is portable and easy to carry since it is worn around the neck.
The drawing application lets the user draw on any surface by tracking the movement of the index finger. Maps can also be displayed anywhere, with support for zooming in and out. The camera also lets the user take pictures of the scene being viewed and later arrange them on any surface. Among the more practical uses is reading a newspaper: videos can be played in place of the photos on the page, or live sports updates shown alongside the articles.
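The "frame a photo with your fingers" interaction above reduces to a bounding-box computation: the rectangle spanned by the tracked fingertip markers becomes the crop region for the captured photo. This is a hedged sketch with illustrative coordinates, not the prototype's actual code.

```python
# Minimal sketch of the photo-framing gesture: the bounding box of the
# fingertip marker positions defines the photo's crop rectangle.

def framing_rect(fingertips):
    """Return (left, top, width, height) bounding the fingertip points.

    fingertips: list of (x, y) marker positions in image coordinates.
    """
    xs = [x for x, _ in fingertips]
    ys = [y for _, y in fingertips]
    left, top = min(xs), min(ys)
    return (left, top, max(xs) - left, max(ys) - top)
```

The drawing and map-zoom interactions work the same way in spirit: successive marker positions are interpreted geometrically (a stroke path, or a pinch distance for zoom level).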
The document describes Sixth Sense technology, a wearable gesture-based device that augments physical reality with digital information. It consists of a camera, projector, mirror, and mobile device. The camera tracks hand gestures and objects in view, sending data to the mobile device. The mobile device processes the data and searches the internet for relevant information. The projector then projects this digital information onto physical surfaces and objects, allowing users to interact seamlessly between the physical and digital worlds using natural hand gestures.
Sixth Sense is a wearable gestural interface that augments physical reality by projecting digital information onto surfaces. It consists of a camera, projector, mirror, smart phone, and color markers. The camera captures gestures tagged with color markers and sends the data to the smart phone. The projector projects outputs from the smart phone onto surfaces via a mirror. Applications include making calls, accessing maps/emails, taking pictures, and obtaining product information by recognizing gestures. Sixth Sense bridges the physical and digital world through computer vision, gesture recognition and augmented reality. Future enhancements could eliminate markers and enable 3D gesture tracking and assistance for disabled individuals.
This document describes the Sixth Sense technology, which allows users to interact with digital information through natural hand gestures. The Sixth Sense consists of a camera, projector, mirror, and mobile device coupled together. The camera tracks hand gestures while the projector displays interfaces on surfaces. This enables applications like accessing information about objects, taking photos, and making calls. The system has advantages such as being portable and supporting multi-touch interaction. Future enhancements could eliminate color markers and incorporate the camera and projector directly into a mobile device.
Sixth Sense is a wearable gestural interface that augments the physical world with digital information. It uses a camera, projector, and mirror mounted on a pendant to project information onto surfaces based on hand gestures. Applications include making calls, getting maps and directions, viewing videos, taking photos, and accessing information about products, books, and people. The technology supports multi-touch and user interaction and has potential for use in gaming, education, and assisting disabled individuals through recognition of objects and gestures.
1. The document discusses Sixth Sense technology, which allows a user to integrate digital information into the physical world by projecting it onto surfaces using a camera-projector worn around the neck.
2. It consists of color markers, a camera, mobile component, projector, and mirror. The camera tracks hand gestures to interact with projected interfaces for applications like making calls, using maps, checking time, taking photos, games and accessing information.
3. Advantages include making any surface an interactive display and easy access to data in real time using gestures. Disadvantages are potential health issues from projection, security concerns if data is projected for others to see, and limited battery life.
Sixth Sense is a wearable gestural interface device developed by Pranav Mistry, a PhD student in the Fluid Interfaces Group at the MIT Media Lab. It is similar to Telepointer, a neck worn projector/camera system developed by Media Lab student Steve Mann (which Mann originally referred to as "Synthetic Synesthesia of the Sixth Sense").
Pranav Mistry, a PhD student at MIT Media Lab, developed the sixth sense technology. It is a wearable gestural interface that augments the physical world with digital information using hand gestures detected by a camera. The system includes a camera, projector, mirror, and smart phone to allow interactions like making calls, viewing maps, taking photos, and accessing information by pointing at physical objects or surfaces. While promising improved access to information, challenges include privacy and visibility of projections depending on lighting conditions.
Sixth sense technology is an user-interface that helps us add the touch of intangible,digital world to the tangible,physical world...It was developed to its latest form by Pranav Mistry,a PhD student in the Fluid Interfaces Group at the MIT Media Lab.
The document describes a presentation by two students, K.Poojitha and K.P.Reshma, on Sixth Sense Technology. Sixth Sense Technology allows a user to access digital information from the internet and project it onto physical surfaces using a small, wearable device comprised of a webcam, projector, mirror, and mobile phone. The device recognizes hand gestures and objects in view to summon and interact with virtual information. The students' presentation covers the components, working, applications, and advantages of Sixth Sense Technology.
This document summarizes a seminar presentation on Sixth Sense technology. It describes sixth sense as a wearable gestural interface that augments the physical world with digital information and allows for interaction through natural hand gestures. The presentation outlines the history and components of sixth sense, including a camera, projector, mirror, and colored markers. It then explains how sixth sense works by capturing gestures via camera, processing the information, and projecting it onto surfaces. Finally, the document lists several applications of sixth sense and its advantages in seamlessly connecting the digital and physical worlds.
Sixth Sense is a wearable technology developed by Pranav Mistry that allows users to access digital information using hand gestures. It consists of a camera, projector, and mobile device. The camera tracks colored markers on the user's fingers to recognize gestures and access information from the internet. This information can then be projected onto physical surfaces using the projector. Some applications include accessing maps, taking photos, drawing, making calls, getting product information, and checking the weather. Whirlpool has also integrated Sixth Sense technology into washing machines to optimize water and energy usage during cycles based on load size.
This document summarizes a seminar presentation on Sixth Sense technology. Sixth Sense technology allows a user to integrate digital information into the physical world using hand gestures. It works by using a camera to capture gestures which are processed by a mobile device and used to interact with information projected onto physical surfaces by a projector. Some potential applications include making calls, viewing maps, taking photos, and getting information about physical objects. The technology aims to allow computers to adapt to human needs through natural hand gestures.
The Sixth Sense technology developed by Pranav Mistry allows users to interact with digital information in the physical world using natural hand gestures. The prototype consists of a camera, projector, and colored markers that the camera uses to interpret gestures. It projects information onto surfaces based on gestures made near objects. Applications include using hand gestures to make calls, access maps, and check the time without touching a device. The technology aims to seamlessly integrate digital and physical worlds.
vimal kumar's presentation on Sixth sense technology & its workingvimalstar
This is a device which has been invented by an Indian PRANAV MISTRY in the year 2008,
Since many of us didn't know about this device
My presentation is about 6th sense technology & its working principle & its applications
PPT of 6th sense tech. Jagdeep Singh Sidhujagdeepsidhu
The document describes the Sixth Sense technology, a wearable gestural interface developed by Pranav Mistry in 2009. It allows users to access digital information about the physical world by projecting it onto surfaces and interacting through natural hand gestures. The system uses a camera, projector, and mirror connected to a smartphone to recognize objects, gestures, and surfaces and display related data seamlessly overlaid on the physical world. Some applications mentioned include using gestures to draw, access maps and photos, and interact with projected interfaces on surfaces like palms or walls. Educational and other potential uses are also discussed.
The document describes Sixth Sense technology, a wearable gestural interface developed by Pranav Mistry that allows users to access digital information by interacting with the physical world through natural hand gestures. It consists of a camera, projector, and mirror coupled in a pendant-like device. The camera captures objects and gestures, the projector displays information on the mirror, and gestures are interpreted to interact with projected interfaces for applications like making calls, getting maps/product info, and taking photos. It aims to seamlessly merge the digital and physical worlds.
The Sixth Sense device is a wearable gestural interface created by Pranav Mistry that allows users to interact with digital information using natural hand gestures. It consists of a pocket projector, mirror, and camera connected to a mobile device. The camera tracks colored markers on the user's fingers to interpret gestures and project information from the internet onto surrounding surfaces. Some features include accessing product information by pointing at items in a store, checking flight statuses from paper tickets, and projecting additional information onto surfaces like books and newspapers. The device costs $350 and has potential educational uses such as virtual field trips, portable research, and facilitating collaboration and connection between students.
Sixth Sense is a wearable gestural interface that uses hand gestures and motions to allow users to interact with digital information projected onto any surface. It consists of a pocket projector, camera, and mobile phone that work together to project information and allow interaction through hand gestures without the need for physical devices like keyboards or mice. Some potential uses include sign language recognition, augmented reality applications, controlling devices through gestures, and creating alternative computer interfaces that make the entire world a computer interface. The goal is to make the technology widely available and adapt computers to human needs through more natural gestural interaction.
Sixth Sense technology discovered by Pranav Mistry. It is a wearable gestural based device which integrates the two worlds, i.e Physical world and Digital world.
Sixth Sense Technology is a mini-projector coupled with a camera and a cellphone—which acts as the computer and connected to the Cloud, all the information stored on the web. Sixth Sense can also obey hand gestures. The camera recognizes objects around a person instantly, with the micro-projector overlaying the information on any surface, including the object itself or hand. Also can access or manipulate the information using fingers. make a call by Extend hand on front of the projector and numbers will appear for to click. know the time by Draw a circle on wrist and a watch will appear. take a photo by Just make a square with fingers, highlighting what want to frame, and the system will make the photo—which can later organize with the others using own hands over the air.and The device has a huge number of applications , it is portable and easily to carry as can wear it in neck.
The drawing application lets the user draw on any surface by tracking the movement of the index finger. Maps can also be displayed anywhere, with zooming in and out. The camera also lets the user take pictures of the scene being viewed and later arrange them on any surface. Some of the more practical uses involve reading a newspaper: viewing videos instead of the still photos printed in the paper, or getting live sports updates while reading.
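The fingertip tracking behind the drawing application is typically done by color segmentation: each camera frame is thresholded for the marker's color, and the centroid of the matching pixels gives the fingertip position. A minimal sketch of that idea (an illustrative example, not the actual SixthSense code; frames are assumed to be RGB NumPy arrays and the tolerance value is arbitrary):

```python
import numpy as np

def find_marker(frame, target_rgb, tol=30):
    """Return the (row, col) centroid of pixels close to target_rgb, or None.

    frame: H x W x 3 uint8 RGB image.
    tol:   maximum summed per-channel color difference to count as a match.
    """
    diff = np.abs(frame.astype(int) - np.array(target_rgb)).sum(axis=2)
    mask = diff < tol
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic 100x100 frame with a red "marker" patch at rows 40-49, cols 60-69
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:50, 60:70] = (255, 0, 0)
print(find_marker(frame, (255, 0, 0)))  # centroid of the red patch
```

Feeding successive centroids into a stroke buffer is enough to reproduce the "draw with your index finger" behavior on a projected canvas.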
The document describes Sixth Sense technology, a wearable gesture-based device that augments physical reality with digital information. It consists of a camera, projector, mirror, and mobile device. The camera tracks hand gestures and objects in view, sending data to the mobile device. The mobile device processes the data and searches the internet for relevant information. The projector then projects this digital information onto physical surfaces and objects, allowing users to interact seamlessly between the physical and digital worlds using natural hand gestures.
Sixth Sense is a wearable device that augments reality by projecting digital information onto physical surfaces using an attached pico-projector and mirror. It allows users to interact with this projected information using natural hand gestures recognized by an onboard camera. Some key applications include accessing information about physical objects by pointing at them, drawing on surfaces, getting maps and directions, checking the time with a gesture, and taking photos. The system has potential for hands-free interaction with information and enhancing understanding of the physical world around us.
Sixth Sense is a wearable technology that augments the physical world with digital information. It consists of a camera, projector, and mirror connected to a mobile phone. The camera tracks hand gestures and objects, sending this data to the phone. The phone processes the data and the projector projects the resulting digital information onto surfaces through the mirror. This bridges the gap between the digital and physical worlds, allowing users to interact with digital information via natural hand gestures.
Steve Mann invented the concept of a wearable computer in 1990. Pranav Mistry later developed the Sixth Sense technology, which uses a camera, projector, and markers to allow users to interact with digital information through natural hand gestures. The Sixth Sense system bridges the physical and digital worlds by projecting interfaces onto surfaces and recognizing gestures to manipulate information. Potential applications include making calls, accessing maps, checking the time, and getting product information through gesture controls. The system provides a portable, cost-effective way to access and share information through augmented reality interactions.
'Sixth Sense' is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. All of us are aware of the five basic senses: seeing, feeling, smelling, tasting, and hearing. But there is also another sense, called the sixth sense. Sixth Sense Technology is the science of tomorrow, with the aim of connecting the digital world to the physical world seamlessly, eliminating dedicated hardware devices. This combination of devices and software together creates a reality in which the digital world is merged with the physical world.
The Sixth Sense technology was developed by Pranav Mistry as a wearable gestural interface that projects digital information onto physical surfaces using a camera, projector, and mirror. It allows users to interact with and manipulate this projected information using natural hand gestures. The technology has applications like making calls, accessing maps, getting flight updates, and drawing - transforming any surface into an interface. It aims to bridge the gap between the digital and physical world.
Sixth Sense is a wearable gestural interface developed by Pranav Mistry that allows users to interact with digital information overlaid on the physical world using natural hand gestures. It consists of a camera, projector, and mirror connected to a mobile device. The camera tracks hand gestures while the projector displays additional digital information on surrounding surfaces based on what the camera sees. This bridges the gap between physical and digital worlds by letting users seamlessly access and interact with digital data in the real world through intuitive hand motions.
Virtual Classroom using Sixth Sense Technology (IOSR Journals)
The document describes a virtual classroom system that uses Sixth Sense technology. Sixth Sense technology allows for a virtual classroom that does not require computers, laptops, or an internet connection. It uses a wearable device with a camera and projector that recognizes gestures and projects digital information onto surfaces. The system would allow students and professors to access information and interact with a virtual classroom anywhere through hand gestures.
The document describes Sixth Sense technology, a wearable gestural interface developed by Pranav Mistry that augments the physical world with digital information. It consists of a camera, projector and mirror worn around the neck connected to a smartphone. The camera tracks hand gestures marked with colored markers to interact with projected interfaces on surfaces. Applications include making calls, maps, photos. Benefits are an intuitive interface and turning any surface into a display. Future enhancements could remove markers and incorporate the technology into mobile devices and applications like education and gaming.
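Gestures such as the circle drawn on the wrist to summon a watch are recognized from the tracked fingertip trajectory. One simple heuristic, offered here as an illustrative sketch rather than the actual SixthSense recognizer, is to check whether the trajectory points stay at a roughly constant distance from their centroid:

```python
import math

def is_circle(points, tol=0.2):
    """Heuristic circle detector for a fingertip trajectory.

    The trajectory counts as circle-like if each point's distance from
    the centroid varies by less than tol (as a fraction of mean radius).
    """
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0:
        return False
    return (max(radii) - min(radii)) / mean_r < tol

# A circular trajectory is accepted; a straight swipe is not.
circle = [(math.cos(t * 2 * math.pi / 10), math.sin(t * 2 * math.pi / 10))
          for t in range(10)]
swipe = [(float(t), 0.0) for t in range(10)]
print(is_circle(circle), is_circle(swipe))  # → True False
```

A real recognizer would add smoothing and time windows, but the same trajectory-shape idea underlies gestures like the framing square for photos.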
The document describes Sixth Sense, a wearable gestural interface developed by Pranav Mistry. It consists of a camera, projector, and mirror coupled in a pendant-like device. The camera recognizes hand gestures tagged with colored markers to project interfaces onto surrounding surfaces. Applications include making calls, using maps, drawing, getting flight updates, and more. The device costs around $350 to build, with the goal of being open source and adapting computers to human needs by making the world the interface.
The document discusses the Sixth Sense technology, a wearable gestural interface developed by Pranav Mistry at the MIT Media Lab. It consists of a camera, projector, and mirror worn on a pendant around the neck. The camera tracks hand gestures, which are used to interact with information projected onto physical surfaces by the projector. Sixth Sense allows users to access digital information in a natural way through gestures, overlaying virtual content onto the real world. It has applications in areas like augmented reality, mobile devices, and assisting disabled individuals. The technology provides an easy-to-use, portable interface, but security and accuracy need further improvement before widespread adoption.
The document describes the Sixth Sense technology, which allows users to interact with digital information using hand gestures. The Sixth Sense was developed by Pranav Mistry building off earlier work by Steve Mann. It consists of a camera, projector, and mirror worn around the neck that allows users to view and manipulate projected interfaces with natural hand motions. Some example applications described are viewing maps, taking pictures, and accessing information about objects in view. The potential for Sixth Sense is as a transparent interface for accessing any information in the environment.
Pranav Mistry, a 28-year-old of Indian origin, is the mastermind behind the Sixth Sense technology. The device sees what we see, but it also surfaces the information we want to know while viewing an object. It can project information on any surface, be it a wall, a table, or any other object, and uses hand and arm movements to help us interact with the projected information. The device brings us closer to reality and assists us in making the right decisions by providing relevant information, thereby making the entire world a computer.
The device can also display the arrival, departure, or delay time of a flight directly on the ticket. For book lovers it is nothing less than a blessing: open any book and the device shows its Amazon rating; pick any page and it provides additional information on the text, comments, and many more add-on features.
Sixth Sense Technology allows users to seamlessly access and interact with digital information in the physical world using hand gestures. It works by projecting digital interfaces onto physical surfaces using a mini projector, camera, and cell phone. The current prototype costs $350 and implements applications that demonstrate its usefulness for tasks like making calls, accessing maps, checking the time, drawing, zooming, getting product/book information, taking pictures, and more - essentially giving users a "sixth sense" of accessing information beyond their five physical senses.
The document discusses Sixth Sense, a wearable gestural interface developed by Pranav Mistry. It consists of a camera, projector, and mirror coupled in a pendant-like device. The camera tracks hand gestures to access digital information, which is processed on a smartphone and projected back using the mirror. Some applications include making calls, taking pictures, checking the time, and accessing flight updates using natural hand gestures. In conclusion, Sixth Sense allows accessing digital information about the environment automatically and interacting with it using gestures in a transparent interface.
SixthSense is a name for extra information supplied by a wearable computer, such as the devices EyeTap (Mann), Telepointer (Mann), and "WuW" (Wear yoUr World) by Pranav Mistry.
Sixth Sense technology seminar by Ayush Jain (ppt)
Sixth Sense is a wearable gestural interface that augments the physical world by projecting digital information onto surfaces using a camera, projector, and mirror attached to a pendant. It allows users to interact with this information using natural hand gestures by recognizing colored markers on the fingers. The system captures gestures and images, processes the data on a connected smartphone, and projects the output onto surfaces via the mirror. Some applications include making calls, accessing maps and information about objects, and taking photos using hand gestures.
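Across these summaries the working loop is the same: capture a frame, recognize a gesture, and trigger the matching application (a call, a map, a photo). That dispatch step can be sketched as a plain lookup table; the gesture and action names below are illustrative assumptions, not taken from the original system:

```python
# Illustrative gesture-to-application dispatch for a SixthSense-style device.
def make_call():
    return "dialpad projected on palm"

def show_map():
    return "map projected on nearby surface"

def take_photo():
    return "photo captured of framed scene"

GESTURE_ACTIONS = {
    "palm_extended": make_call,
    "pinch_zoom": show_map,
    "framing_square": take_photo,
}

def dispatch(gesture):
    """Run the application bound to a recognized gesture, if any."""
    action = GESTURE_ACTIONS.get(gesture)
    return action() if action else None

print(dispatch("framing_square"))  # → photo captured of framed scene
```

Keeping recognition and application logic separate like this is what lets the same camera-projector hardware serve calls, maps, drawing, and photos.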
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Communications Mining Series - Zero to Hero - Session 1
14 568
A seminar report on
SIXTH SENSE TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
SRI VASAVI ENGINEERING COLLEGE
Pedatadepalli, Tadepalligudem-534101,
W.G. Dist., Andhra Pradesh
Submitted By
CH.MOUNIKA
14A81A05A68
ABSTRACT
Sixth Sense Technology couples a mini-projector with a camera and a
cellphone, which acts as the computer and connects to the Cloud, where all the
information is stored on the web. Sixth Sense can also obey hand gestures.
The camera recognizes objects around a person instantly, with the micro-projector
overlaying the information on any surface, including the object itself or the hand.
The user can also access or manipulate the information using the fingers. To make a call,
extend a hand in front of the projector and a numeric keypad will appear to click on.
To check the time, draw a circle on the wrist and a watch will appear. To take a
photo, just make a square with the fingers, highlighting what to frame, and the
system will take the photo, which can later be organized with the others using hand
gestures in the air. The device has a huge number of applications, is portable,
and is easy to carry since it can be worn around the neck.
The drawing application lets the user draw on any surface by observing the movement
of the index finger. Mapping can also be done anywhere, with the features of zooming in
and zooming out.
The camera also helps the user take pictures of the scene being viewed and
later arrange them on any surface. Some of the more practical uses arise in reading a
newspaper: viewing videos instead of the photos in the paper, or getting live sports
updates while reading.
The device can also tell the arrival, departure or delay time of a flight from the ticket.
For book lovers it is nothing less than a blessing: open any book and find the
Amazon rating of the book. To add to it, pick any page and the device gives
additional information on the text, comments and many more add-on features.
CONTENTS

ABSTRACT
1. INTRODUCTION
2. SIXTH SENSE TECHNOLOGY
   2.1 WHAT IS SIXTH SENSE
   2.2 EARLIER SIXTH SENSE PROTOTYPE
   2.3 RECENT PROTOTYPE
3. WORKING OF SIXTH SENSE
   3.1 COMPONENTS
       3.1.1 CAMERA
       3.1.2 PROJECTOR
       3.1.3 MIRROR
       3.1.4 MOBILE COMPONENT
       3.1.5 COLOR MARKERS
   3.2 WORKING
4. RELATED TECHNOLOGY
   4.1 AUGMENTED REALITY
   4.2 GESTURE RECOGNITION
   4.3 COMPUTER VISION BASED ALGORITHM
   4.4 TECHNOLOGIES THAT USE SIXTH SENSE AS PLATFORM
5. APPLICATIONS
   5.1 MAKE A CALL
   5.2 CALL UP A MAP
   5.3 CHECK THE TIME
   5.4 CREATE MULTIMEDIA READING EXPERIENCE
6. ADVANTAGES AND ENHANCEMENTS
   6.1 ADVANTAGES
   6.2 DISADVANTAGES
   6.3 FUTURE ENHANCEMENTS
CONCLUSION
LIST OF FIGURES

2.1 SIX SENSES
2.2 EARLIER DEVICE
2.3 PRESENT DEVICE
3.1 CAMERA
3.2 PROJECTOR
3.3 MIRROR
3.4 MOBILE COMPONENT
3.5 COLOR MARKERS
3.6 WORKING
5.1 MAKE A CALL
5.2 CHECK THE TIME
5.3 CALL UP MAP
5.4 MULTIMEDIA EXPERIENCE
5.5 DRAWING APPLICATIONS
5.6 PRODUCT INFORMATION
5.7 ZOOMING FEATURES
5.8 BOOK INFORMATION
5.9 TAKE PICTURES
5.10 GET FLIGHT UPDATES
5.11 FEED INFORMATION ON PEOPLE
CHAPTER-1
INTRODUCTION
We have evolved over millions of years to sense the world around us.
When we encounter something, someone or some place, we use our five natural
senses (sight, hearing, smell, taste and touch) to perceive information
about it; that information helps us make decisions and choose the right actions to take.
But arguably the most useful information that can help us make the right
decision is not naturally perceivable with our five senses, namely the data, information
and knowledge that mankind has accumulated about everything, which is
increasingly all available online.
Although the miniaturization of computing devices allows us to carry
computers in our pockets, keeping us continually connected to the digital world, there
is no link between our digital devices and our interactions with the physical world.
Information is traditionally confined to paper or, digitally, to a screen.
Sixth Sense bridges this gap, bringing intangible, digital information out
into the tangible world, and allowing us to interact with this information via natural
hand gestures. 'Sixth Sense' frees information from its confines by seamlessly
integrating it with reality, thus making the entire world your computer.
"Sixth Sense Technology" is the newest jargon to proclaim its
presence in the technical arena, a technology whose name plays on the idea of a sixth sense.
Our ordinary computers will soon be able to sense conditions in their
surroundings, all a gift of the newly introduced Sixth Sense Technology.
Sixth Sense is a wearable "gesture based" device that augments the
physical world with digital information and lets people use natural hand gestures to
interact with that information.
It was developed by Pranav Mistry, a PhD student in the Fluid Interfaces
Group at the MIT Media Lab, who caused a storm with his creation of Sixth Sense.
He says that the movies "Robocop" and "Minority Report" gave him the
inspiration to create his view of a world not dominated by computers, digital
information and human robots, but one where computers and other digital devices
enhance people's enjoyment of the physical world.
Right now, we use our "devices" (computers, mobile phones, tablets, etc.) to
go onto the internet and get the information we want. With Sixth Sense we will use a
device no bigger than current cell phones, and probably eventually as small as a button
on our shirts, to bring the internet to us in order to interact with our world!
Sixth Sense will allow us to interact with our world like never before. We
can get information on anything we want from anywhere within a few moments! We
will not only be able to interact with things on a whole new level but also with people!
One great part of the device is its ability to scan objects or even people and project
information regarding what you are looking at.
CHAPTER-2
SIXTH SENSE TECHNOLOGY
2.1 What is SixthSense?
Figure 2.1: Six Senses
Sixth Sense in scientific (or non-scientific) terms is defined as Extra
Sensory Perception, or ESP for short. It involves the reception of information not
gained through any of the five senses, nor taken from any past experience or prior
knowledge. Sixth Sense aims to more seamlessly integrate online information and
technology into everyday life. By making available information needed for
decision-making beyond what we have access to with our five senses, it effectively
gives users a sixth sense.
2.2 Earlier Sixth Sense Prototype
Figure 2.2: Earlier Device
Maes' MIT group, which includes seven graduate students, was thinking about how
a person could be more integrated into the world around them and access information
without having to do something like take out a phone.
They initially produced a wristband that would read a Radio Frequency
Identification tag to know, for example, which book a user is holding in a store. They
also had a ring that used infrared to communicate by beacon with supermarket smart
shelves to give you information about products.
As we grab a package of macaroni, the ring would glow red or green to tell us if
the product was organic or free of peanut traces, or whatever criteria we program into
the system. They wanted to make information more useful to people in real time with
minimal effort, in a way that doesn't require any behaviour changes.
The wristband was getting close, but we still had to take out our cell phone
to look at the information. That's when they hit on the idea of accessing
information from the internet and projecting it. So someone wearing the wristband
could pick up a paperback in the bookstore and immediately call up reviews about the
book, projecting them onto a surface in the store, or do a keyword search through
the book by accessing digitized pages on Amazon or Google Books.
They started with a larger projector that was mounted on a helmet. But that
proved cumbersome: if someone was projecting data onto a wall and then turned to speak
to a friend, the data would project onto the friend.
2.3 Recent Prototype
Figure 2.3: Present Device
Now they have switched to a smaller projector and created the pendant
prototype to be worn around the neck.
The SixthSense prototype is composed of a pocket projector, a mirror and a
camera. The hardware components are coupled in a pendant-like mobile wearable
device. Both the projector and the camera are connected to the mobile computing
device in the user's pocket.
We can very well consider the Sixth Sense Technology as a blend of the
computer and the cell phone. The device is hung around the neck of a person, and
projection starts by means of the micro-projector attached to the device.
In effect, you become a moving computer yourself, and
the fingers act like a mouse and a keyboard.
The prototype was built from an ordinary webcam and a battery-powered 3M
projector, with an attached mirror — all connected to an internet-enabled mobile
phone.
The setup, which costs less than $350, allows the user to project information
from the phone onto any surface — walls, the body of another person or even your
hand. Mistry wore the device on a lanyard around his neck, and colored magic marker
caps on four fingers (red, blue, green and yellow) helped the camera distinguish the
four fingers and recognize his hand gestures with software that Mistry created.
CHAPTER-3
WORKING OF SIXTH SENSE TECHNOLOGY
3.1 Components
The hardware components are coupled in a pendant-like mobile wearable device:
• Camera
• Projector
• Mirror
• Mobile Component
• Color Markers
3.1.1 Camera
Figure 3.1: Camera
A webcam captures and recognises an object in view and tracks the user’s hand
gestures using computer-vision based techniques.
It sends the data to the smart phone. The camera, in a sense, acts as a digital eye,
seeing what the user sees.
It also tracks the movements of the thumbs and index fingers of both of
the user's hands. The camera recognizes objects around you instantly, with the
microprojector overlaying the information on any surface, including the object itself
or your hand.
3.1.2 Projector
Figure 3.2: Projector
Also, a projector opens up interaction and sharing. The projector itself contains a
battery with 3 hours of battery life. The projector projects visual information,
enabling surfaces, walls and physical objects around us to be used as interfaces.
We want this thing to merge with the physical world in a real physical sense.
You are touching that object and projecting info onto that object. The information will
look like it is part of the object. A tiny LED projector
displays data sent from the smartphone on any surface in view: object, wall, or
person.
3.1.3 Mirror
Figure 3.3: Mirror
The usage of the mirror is significant, as the projector dangles pointing
downwards from the neck; the mirror reflects the projected image outward onto the desired surface.
3.1.4 Mobile Component
Figure 3.4: Smartphone
Mobile devices like the smartphones in our pockets transmit and receive voice and
data anywhere and to anyone via the mobile internet. An accompanying smartphone
runs the SixthSense software and handles the connection to the internet. A web-
enabled smartphone in the user's pocket processes the video data. Other software
searches the Web and interprets the hand gestures.
3.1.5 Color Markers
Figure 3.5: Color Markers
The markers sit at the tips of the user's fingers. Marking the user's fingers with red, yellow,
green, and blue tape helps the webcam recognize gestures. The movements and
arrangements of these markers are interpreted into gestures that act as interaction
instructions for the projected application interfaces.
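The marker-tracking idea above can be sketched as a simple nearest-color classifier over fingertip pixels. The reference RGB values and the distance threshold below are illustrative assumptions, not the calibration used in the actual SixthSense prototype.

```python
# Sketch: classify a pixel as one of the four fingertip marker colors.
# The reference RGB values and threshold are illustrative assumptions,
# not the actual calibration of the SixthSense prototype.

MARKER_COLORS = {
    "red":    (200, 40, 40),
    "yellow": (220, 210, 50),
    "green":  (40, 180, 60),
    "blue":   (40, 60, 200),
}

def classify_marker(pixel, max_dist=120):
    """Return the nearest marker name, or None if no color is close enough."""
    best_name, best_dist = None, float("inf")
    for name, (r, g, b) in MARKER_COLORS.items():
        # Euclidean distance in RGB space to the reference color.
        d = ((pixel[0] - r) ** 2 + (pixel[1] - g) ** 2 + (pixel[2] - b) ** 2) ** 0.5
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= max_dist else None
```

In a real system this per-pixel test would run over camera frames, and the centroid of each color cluster would give a fingertip position.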
3.2 Working
Figure 3.6: Working
• The hardware that makes Sixth Sense work is a pendant-like mobile
wearable interface.
• It has a camera, a mirror and a projector, and is connected wirelessly, via
Bluetooth, 3G or Wi-Fi, to a smartphone that can slip comfortably into
one's pocket.
• The camera recognizes individuals, images, pictures and gestures one
makes with their hands.
• Information is sent to the smartphone for processing.
• The downward-facing projector projects the output image onto the
mirror.
• The mirror reflects the image onto the desired surface.
• Thus, digital information is freed from its confines and placed in the
physical world.
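The steps above can be sketched as a pure-function event loop, from recognized gesture to projector command. The gesture names and the gesture-to-action table are illustrative assumptions, not the actual SixthSense software.

```python
# Sketch of the capture -> recognize -> process -> project loop described
# above. Gesture names and the action table are illustrative assumptions.

ACTIONS = {
    "namaste": "start_projection",    # open-palm greeting starts projection
    "frame": "take_photo",            # framing gesture captures a photo
    "circle_on_wrist": "show_watch",  # drawing a circle shows a watch
}

def process_frame(recognized_gesture):
    """Smartphone-side step: map a recognized gesture to a projector command."""
    return ACTIONS.get(recognized_gesture, "idle")

def run_pipeline(gesture_stream):
    """One projector command per camera frame's recognized gesture."""
    return [process_frame(g) for g in gesture_stream]
```

The real device would replace the input list with live camera frames and the output strings with projector output, but the data flow is the same.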
The entire hardware apparatus is encompassed in a pendant-shaped mobile
wearable device. Basically, the camera recognises individuals, images, pictures and
gestures one makes with their hands, and the projector assists in projecting any
information on whatever type of surface is present in front of the person. The usage of
the mirror is significant, as the projector dangles pointing downwards from the neck.
In the demo video that was broadcast to showcase the prototype to the world,
Mistry uses coloured caps on his fingers so that it becomes simpler
for the software to differentiate between the fingers driving various applications.
The software program analyses the video data caught by the camera and also tracks
down the locations of the coloured markers by utilising simple computer vision
techniques. One can have any number of hand gestures and movements as long as
they are all reasonably identified and differentiated for the system to interpret,
preferably through unique and varied fiducials.
This is possible only because the 'Sixth Sense' device supports multi-touch
and multi-user interaction. MIT basically plans to augment reality with a pendant
picoprojector: hold up an object at the store and the device blasts relevant information
onto it (like environmental stats, for instance), which can be browsed and manipulated
with hand gestures.
The "sixth sense" in question is the internet, which naturally supplies the
data, and that can be just about anything.
MIT has shown off the device projecting information about a person you meet at a
party on that actual person (pictured), projecting flight status on a boarding pass,
along with an entire non-contextual interface for reading email or making calls.
It's pretty interesting technology that, like many MIT Media Lab
projects, makes the wearer look like a complete dork: if the projector doesn't give it
away, the colored finger bands the device uses to detect finger motion certainly might.
The idea is that SixthSense tries to determine not only what someone is
interacting with, but also how he or she is interacting with it. The software searches
the internet for information that is potentially relevant to that situation, and then the
projector takes over.
"All the work is in the software," says Dr Maes. "The system is constantly trying to
figure out what's around you, and what you're trying to do. It has to recognize the
images you see, track your gestures, and then relate it all to relevant information at the
same time."
The software recognizes three kinds of gestures:
• Multitouch gestures, like the ones you see on Microsoft Surface or the iPhone,
where you touch the screen and make the map move by pinching and dragging.
• Freehand gestures, like when you take a picture. Or, as you
might have noticed in the demo, because of my culture, I do a namaste gesture to start
the projection on the wall.
• Iconic gestures: drawing an icon in the air. Like, whenever I draw a star, show me
the weather. When I draw a magnifying glass, show me the map. You might want to
use other gestures that you use in everyday life. This system is very customizable.
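An iconic gesture like "draw a circle" could be recognized from the tracked fingertip trajectory with a simple radial-spread test: points of a circular stroke all lie at roughly the same distance from their centroid. The tolerance value is an illustrative assumption, not the system's actual recognizer.

```python
import math

def is_circle_stroke(points, tol=0.2):
    """Heuristic iconic-gesture test: a stroke is 'circular' if the distance
    of each point from the centroid varies little. The tolerance is an
    illustrative assumption, not the prototype's actual recognizer."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0:
        return False
    # Relative spread of the radii: near zero for a circle.
    return (max(radii) - min(radii)) / mean_r <= tol
```

A square stroke fails this test because points at the corners sit noticeably farther from the centroid than points at the edge midpoints.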
The technology is mainly based on hand gesture recognition, image
capturing, processing, and manipulation, etc. The map application lets the user
navigate a map displayed on a nearby surface using hand gestures, similar to gestures
supported by multi-touch based systems, letting the user zoom in, zoom out or pan
using intuitive hand movements. The drawing application lets the user draw on any
surface by tracking the fingertip movements of the user’s index finger.
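The pinch-style zoom described for the map application reduces to comparing fingertip separations across frames. The function below is a sketch with names of our own choosing, not the prototype's API.

```python
import math

def zoom_factor(thumb_prev, index_prev, thumb_now, index_now):
    """Pinch gesture: ratio of current to previous fingertip separation.
    A value > 1 means zoom in, < 1 means zoom out. Illustrative sketch only."""
    d_prev = math.hypot(index_prev[0] - thumb_prev[0], index_prev[1] - thumb_prev[1])
    d_now = math.hypot(index_now[0] - thumb_now[0], index_now[1] - thumb_now[1])
    return d_now / d_prev if d_prev else 1.0
```

The map view would then scale by this factor each frame, while the midpoint of the two fingertips could drive panning.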
CHAPTER-4
RELATED TECHNOLOGIES
SixthSense technology takes a different approach to computing
and tries to make the digital aspect of our lives more intuitive, interactive and, above
all, more natural. We shouldn't have to think about it separately. It's a lot of complex
technology squeezed into a simple portable device. When we bring in connectivity, we
can get instant, relevant visual information projected on any object we pick up or
interact with. The technology is mainly based on augmented reality, gesture
recognition, computer-vision-based algorithms, etc.
4.1 Augmented reality
Augmented reality (AR) is a term for a live direct or indirect view of a physical
real-world environment whose elements are augmented by virtual computer-generated
imagery. It is related to a more general concept called mediated reality, in which a
view of reality is modified (possibly even diminished rather than augmented) by a
computer. The augmentation is conventionally in real time and in semantic context
with environmental elements.
Sixth Sense technology uses the Augmented Reality concept to
superimpose digital information on the physical world. With the help of advanced AR
technology (e.g. adding computer vision and object recognition), the information about
the surrounding real world of the user becomes interactive and digitally usable.
Artificial information about the environment and the objects in it can be stored and
retrieved as an information layer on top of the real world view.
The main hardware components for augmented reality are: display, tracking,
input devices, and computer. A combination of a powerful CPU, camera, accelerometers,
GPS and solid-state compass is often present in modern smartphones, which makes
them prospective platforms.
There are three major display techniques for Augmented Reality:
• Head Mounted Displays
• Handheld Displays
• Spatial Displays
Head Mounted Displays
A Head Mounted Display (HMD) places images of both the physical world
and registered virtual graphical objects over the user's view of the world. HMDs
are either optical see-through or video see-through in nature.
Handheld Displays
Handheld Augmented Reality employs a small computing device with a display
that fits in a user's hand. All handheld AR solutions to date have employed video see-
through techniques to overlay the graphical information on the physical world. Initially,
handheld AR employed sensors such as digital compasses and GPS units for its six-
degree-of-freedom tracking.
Spatial Displays
Instead of the user wearing or carrying the display, as with head mounted
displays or handheld devices, Spatial Augmented Reality (SAR) makes use of digital
projectors to display graphical information onto physical objects.
Modern mobile augmented reality systems use one or more of the following
tracking technologies: digital cameras and/or other optical sensors, RFID, wireless sensors, etc.
Each of these technologies has different levels of accuracy and precision. Most
important is the tracking of the pose and position of the user's head for the
augmentation of the user's view.
AR also has real potential to help people with a variety of disabilities.
Only some of the current and future AR
applications make use of a smartphone as a mobile computing platform.
4.2 Gesture Recognition
Gesture recognition is a topic in computer science and language technology
with the goal of interpreting human gestures via mathematical algorithms. Gestures
can originate from any bodily motion or state but commonly originate from the face or
hand.
Current focuses in the field include emotion recognition from the face and
hand gesture recognition. Many approaches have been made using cameras and
computer vision algorithms to interpret sign language.
Gestures can exist in isolation or involve external objects. Free of any object,
we wave, beckon, fend off, and to a greater or lesser degree (depending on training)
make use of more formal sign languages. With respect to objects, we have a broad
range of gestures that are almost universal, including pointing at objects, touching or
moving objects, changing object shape, activating objects such as controls, or handing
objects to others.
Gesture recognition can be seen as a way for computers to begin to
understand human body language, thus building a richer bridge between machines and
humans than primitive text user interfaces or even GUIs (graphical user interfaces),
which still limit the majority of input to keyboard and mouse. Gesture recognition
enables humans to interface with the machine (HMI) and interact naturally without
any mechanical devices.
Gestures can be used to communicate with a computer, so we will be mostly
concerned with empty-handed semiotic gestures. These can further be categorized
according to their functionality.
• Symbolic gestures
These are gestures that, within each culture, have come to have a single meaning. An
emblem such as the "OK" gesture is one such example; American Sign
Language gestures also fall into this category.
• Deictic gestures
These are the types of gestures most generally seen in HCI and are the gestures
of pointing, or otherwise directing the listener's attention to specific events or objects in
the environment.
• Iconic gestures
As the name suggests, these gestures are used to convey information about the
size, shape or orientation of the object of discourse. They are the gestures made when
someone says "The plane flew like this", while moving their hand through the air like
the flight path of the aircraft.
• Pantomimic gestures
These are the gestures typically used in showing the use or movement of some
invisible tool or object in the speaker's hand. When a speaker says "I turned the
steering wheel hard to the left", while mimicking the action of turning a wheel with
both hands, they are making a pantomimic gesture.
Using the concept of gesture recognition, it is possible to point a finger at the
computer screen so that the cursor will move accordingly. This could potentially make
conventional input devices such as mouse, keyboards and even touch-screens
redundant. Gesture recognition can be conducted with techniques from computer
vision and image processing.
The literature includes ongoing work in the computer vision field on
capturing gestures or more general human pose and movements by cameras connected
to a computer.
4.3 Computer vision based algorithms
Computer vision is the science and technology of machines that see. As a
scientific discipline, computer vision is concerned with the theory behind artificial
systems that extract information from images. The image data can take many forms,
such as video sequences, views from multiple cameras, or multi-dimensional data
from a medical scanner.
Computer vision studies and describes the processes
implemented in software and hardware behind artificial vision systems. The software
tracks the user's gestures using computer-vision-based algorithms.
Computer vision can be seen as the inverse of computer graphics. While computer
graphics produces image data from 3D models, computer vision often produces 3D
models from image data. There is also a trend towards a combination of the two
disciplines, e.g., as explored in augmented reality.
The fields most closely related to computer vision are image processing, image
analysis and machine vision. Image processing and image analysis tend to focus on
2D images and how to transform one image into another. This characterisation implies that
image processing/analysis neither requires assumptions nor produces interpretations
about the image content.
Computer vision tends to focus on the 3D scene projected onto one or several
images, e.g., how to reconstruct structure or other information about the 3D scene
from one or several images. Machine vision tends to focus on applications, mainly in
manufacturing, e.g., vision based autonomous robots and systems for vision based
inspection or measurement.
Recognition algorithms
The computer vision system for tracking and recognizing the hand postures
that control the menus is based on a combination of multi-scale color feature
detection, view-based hierarchical hand models and particle filtering.
The hand postures or states are represented in terms of hierarchies of multi-scale
color image features at different scales, with qualitative inter-relations in terms of
scale, position and orientation. In each image, detection of multi scale colour features
is performed.
The hand postures are then simultaneously detected and tracked using particle
filtering, with an extension of layered sampling referred to as hierarchical layered
sampling. To improve the performance of the system, a prior on skin color is included
in the particle filtering.
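A common rule-of-thumb for such a skin-color prior is a set of RGB threshold tests. The exact prior used in the system described above is not specified here, so the rule below is a stand-in, not the actual implementation.

```python
def is_skin_rgb(r, g, b):
    """A common rule-of-thumb RGB skin test (uniform-daylight variant).
    This stands in for the skin-color prior mentioned above; the actual
    system's prior is not specified in the text, so treat this as a sketch."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15   # skin is not grayish
            and abs(r - g) > 15                    # red clearly dominates green
            and r > g and r > b)                   # red is the largest channel
```

In the particle-filtering setting, a pixel passing this test would raise the likelihood that a hypothesized hand region is correct.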
4.4 Technologies that use Sixth Sense as a Platform
SixthSense technology takes a different approach to computing and tries to
make the digital aspect of our lives more intuitive, interactive and, above all, more
natural. When you bring in connectivity, you can get instant, relevant visual
information projected on any object you pick up or interact with. So, pick up a box of
cereal and your device will project whether it suits your preferences. Some of the
technologies that use this are Radio Frequency Identification, gesture gaming, and the
washing machine.
SixthSense is a platform for Radio Frequency Identification based enterprise
intelligence that combines Radio Frequency Identification events with information
from other enterprise systems and sensors to automatically make inferences about
people, objects, workspaces, and their interactions.
Radio Frequency Identification is basically an electronic tagging technology
that allows the detection and tracking of tags and consequently the objects that they
are affixed to. This ability to do remote detection and tracking coupled with the low
cost of passive tags has led to the widespread adoption of RFID in supply chains
worldwide.
Pranav Mistry, a researcher at the Media Lab of the Massachusetts Institute of
Technology, has developed a 'sixth sense' device: a gadget worn on the wrist that can
function as a 'touch' device for many modern applications. The gadget is capable of
selecting a product either by image recognition or radio frequency identification
(RFID) tags and projecting information, like an Amazon rating.
The idea of SixthSense is to use Radio Frequency Identification technology in
conjunction with a bunch of other enterprise systems, such as the calendar system or
online presence, that can track user activity. Here, we consider an enterprise setting of
the future where people (or rather their employee badges) and their personal objects
such as books, laptops, and mobile phones are tagged with cheap, passive RFID tags,
and there is good coverage of RFID readers in the workplace.
SixthSense incorporates algorithms that start with a mass of
undifferentiated tags and automatically infer a range of information based on an
accumulation of observations. The technology is able to automatically differentiate
between people tags and object tags, learn the identities of people, infer the ownership
of objects by people, learn the nature of different zones in a workspace
(e.g., private office versus conference room), and perform other such inferences.
By combining information from these diverse sources, SixthSense
records all tag-level events in a raw database. The inference algorithms consume these
raw events to infer events at the level of people, objects, and workspace zones, which
are then recorded in a separate processed database.
Applications can either poll these databases by running SQL queries or set up
triggers to be notified of specific events of interest. SixthSense infers when a user has
interacted with an object, for example, when you pick up your mobile phone. It is a
platform in that its programming model makes the inferences made automatically
available to applications via a rich set of APIs.
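Polling such a database with SQL, as mentioned above, might look like the following sqlite3 sketch. The schema, table names, and co-location query are illustrative assumptions, not the platform's actual API.

```python
import sqlite3

# Sketch: a raw tag-event table like the one described above. The schema
# and sample data are illustrative assumptions, not the platform's real API.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tag_events (
    tag_id TEXT, tag_kind TEXT, zone TEXT, ts INTEGER)""")
conn.executemany(
    "INSERT INTO tag_events VALUES (?, ?, ?, ?)",
    [("badge-42", "person", "office-3", 100),
     ("phone-7",  "object", "office-3", 101),
     ("book-9",   "object", "lab-1",    102)])

def objects_in_zone(conn, zone, since_ts):
    """Poll for objects seen in a zone after a timestamp, e.g. to infer
    which objects a person in that zone may have interacted with."""
    rows = conn.execute(
        "SELECT tag_id FROM tag_events "
        "WHERE tag_kind = 'object' AND zone = ? AND ts >= ? ORDER BY ts",
        (zone, since_ts)).fetchall()
    return [r[0] for r in rows]
```

An application such as a misplaced-object alert could run a query like this on a schedule, or register a trigger to be notified instead of polling.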
To demonstrate the capabilities of the platform, the researchers have
prototyped a few applications using these APIs, including a misplaced object alert
service , an enhanced calendar service, and rich annotation of video with physical
events.
4.4.2 Sixth Sense Washing Machine
The Whirlpool AWOE 8758 White Washing Machine is a remarkable front loader
that incorporates the unparalleled Sixth Sense technology. Whirlpool's 2009 range of
washing machines comes integrated with enhanced 6th Sense technology that gives
more optimisation of resources and also increased savings in terms of energy, water
and time.
It is an ideal washing machine for thorough washing, using its sixth sense to detect
stubborn stains and adjust the wash impact. It is a feature-packed washing ally with Sixth
Sense Technology and several customized programs to enhance the washing
performance and dexterously assist you with heavy washing loads.
The New Generation 6th Sense appliances from Whirlpool help to
protect the environment and to reduce your energy bills. Whirlpool 6th Sense
appliances are designed to be intelligent, energy-efficient appliances that adapt
their performance to better suit your needs. All Whirlpool appliances with intelligent
6th Sense technology work on three key principles: Sense, Adapt and Control, to
ensure that they achieve optimal performance every time they are used.
Whirlpool 6th Sense washing machines can use up to 50% less water,
energy and time per cycle. These intelligent machines sense the size of the load
and adjust and control the cycle accordingly, in order to optimise the
use of water, energy and time.
Some models also contain a detergent-overdosing monitor to make sure that
you do not use too much washing detergent. Tumble dryers use 6th Sense technology
to minimise energy and time wastage by monitoring the humidity inside the laundry
and adjusting the drying time accordingly.
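The Sense-Adapt-Control loop above can be illustrated with a toy cycle planner. The numbers here are purely illustrative assumptions, not Whirlpool's actual control logic: the machine senses the load weight and scales water and time down for smaller loads, never below half the full-load resources so a small load still gets a thorough wash.

```python
def plan_cycle(load_kg, max_load_kg=8.0,
               base_water_l=60.0, base_minutes=120):
    """Scale water and time to the sensed load (illustrative numbers only)."""
    fraction = min(load_kg / max_load_kg, 1.0)
    # Floor at 50%: small loads still need enough water and time to wash well,
    # which also matches the "up to 50% less" savings claim.
    scale = max(fraction, 0.5)
    return {"water_l": base_water_l * scale,
            "minutes": round(base_minutes * scale)}

print(plan_cycle(2.0))  # a quarter load still gets half the resources
```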
CHAPTER-5
APPLICATIONS
The Sixth Sense prototype implements several applications that demonstrate the
usefulness, viability and flexibility of the system.
The SixthSense device has a huge number of applications. The following are a few of
the applications of Sixth Sense Technology.
• Make a call
• Call up a map
• Check the time
• Create multimedia reading experience
• Drawing application
• Zooming features
• Get product information
• Get book information
• Get flight updates
• Feed information on people
• Take pictures
• Check the email
5.1 Make a call
Fig.no:5.1 Make a Call
You can use Sixth Sense to project a keypad onto your hand and then use
that virtual keypad to make a call. Calling a number is no longer a great task with
the introduction of Sixth Sense Technology.
No mobile device is required: the palm acts as the virtual keypad, with the keys
projected onto the fingers. The fingers of the other hand
are then used to key in the number and place the call.
5.2 Call up a map
Fig.no: 5.2 Map
Sixth Sense also implements a map application that lets the user display a map on any
physical surface, find a destination, and navigate the map with the thumbs and index
fingers, for example to zoom in and out and perform other controls.
5.3 Check the time
Fig.no:5.3 Wrist watch
With Sixth Sense, all we have to do is draw a circle on our wrist with our index finger
to get a virtual watch that shows the correct time. The computer tracks the red
marker cap or piece of tape, recognizes the gesture, and instructs the projector to flash
the image of a watch onto the wrist.
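Recognizing the circle gesture from the tracked marker positions can be sketched as follows. This is a simplified assumption of how such a recognizer might work, not the prototype's actual algorithm: a stroke counts as a circle if every tracked point lies close to the mean distance from the stroke's centroid.

```python
import math

def looks_like_circle(points, tolerance=0.2):
    """Return True if the tracked marker path is roughly circular:
    every point lies within `tolerance` of the mean radius about
    the path's centroid."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    return all(abs(r - mean_r) <= tolerance * mean_r for r in radii)

# A sampled circular stroke is accepted; a straight swipe is not.
circle = [(math.cos(a), math.sin(a))
          for a in [i * 2 * math.pi / 12 for i in range(12)]]
line = [(i, 0.1 * i) for i in range(1, 13)]
print(looks_like_circle(circle), looks_like_circle(line))  # -> True False
```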
5.4 Create multimedia reading experiences
Fig.no: 5.4 Video in newspaper
The SixthSense system also augments the physical objects the user is interacting
with by projecting additional information about them onto their surfaces. For
example, a newspaper can show live video news, or dynamic information can be
provided on a regular piece of paper. Thus a piece of paper turns into a video
display.
5.5 Drawing application
Fig.no: 5.5 Drawing
The drawing application lets the user draw on any surface by tracking the
movement of the user's index fingertip.
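Tracking the coloured fingertip marker in a camera frame amounts to finding the centroid of marker-coloured pixels. The sketch below is a minimal pure-Python illustration of that idea under assumed thresholds; a real implementation would work on live video with proper colour-space segmentation.

```python
def track_marker(frame, is_marker):
    """Return the (row, col) centroid of marker-coloured pixels in a
    frame, or None if the marker is not visible. `frame` is a 2-D list
    of RGB tuples; `is_marker` is a colour test, e.g. "mostly red"."""
    hits = [(r, c)
            for r, row in enumerate(frame)
            for c, px in enumerate(row)
            if is_marker(px)]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

# Toy 3x3 frame with a red fingertip cap in the top-right corner.
R, K = (255, 0, 0), (0, 0, 0)
frame = [[K, K, R],
         [K, K, R],
         [K, K, K]]
is_red = lambda px: px[0] > 200 and px[1] < 80 and px[2] < 80
print(track_marker(frame, is_red))  # -> (0.5, 2.0)
```

Feeding successive centroids into a stroke buffer gives the drawn path that the projector then renders back onto the surface.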
5.6 Zooming features
Fig.no: 5.6 Zoom in and zoom out
The user can zoom in or zoom out using intuitive hand movements.
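A common way to interpret such a zoom gesture, and a plausible assumption for how the prototype handles it, is to compare the separation of the two tracked fingertips before and after the movement:

```python
import math

def zoom_factor(before_pair, after_pair):
    """Ratio of fingertip separation after vs. before the gesture:
    > 1 means the fingers moved apart (zoom in), < 1 zoom out."""
    def gap(pair):
        (x1, y1), (x2, y2) = pair
        return math.hypot(x2 - x1, y2 - y1)
    return gap(after_pair) / gap(before_pair)

before = [(0, 0), (2, 0)]   # thumb and index 2 units apart
after = [(0, 0), (4, 0)]    # fingers spread to 4 units apart
print(zoom_factor(before, after))  # -> 2.0 (zoom in)
```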
5.7 Get product information
Fig.no: 5.7 Product information
Maes says Sixth Sense uses image recognition or marker technology to
recognize products you pick up, then feeds you information on those products. For
example, if you're trying to shop "green" and are looking for paper towels with the
least amount of bleach in them, the system will scan the product you pick up off the
shelf and give you guidance on whether this product is a good choice for you.
5.8 Get book information
Fig.no: 5.8 Book information
Maes says Sixth Sense uses image recognition or marker
technology to recognize the book you pick up and then feeds you information
about it. The system can project Amazon ratings onto that book, as well as reviews
and other relevant information.
5.9 Take pictures
Fig.no: 5.9 Take Pictures
If we fashion our index fingers and thumbs into a square (the typical "framing"
gesture), the system will snap a photo. After taking the desired number of photos, we
can project them onto a surface, and use gestures to sort through the photos, and
organize and resize them.
5.10 Get flight updates
Fig.no: 5.10 Flight updates
The system will recognize your boarding pass and let you know whether your flight is
on time and if the gate has changed.
5.11 Feed information on people
Fig.no: 5.11 Information of people
Sixth Sense is also capable of a more controversial use. When you go out and
meet someone, it can project relevant information about that person, such as what they
do and where they work, displaying tags that appear to float on their shirt. It
could be handy if it displayed their Facebook relationship status, so that you knew not
to waste your time.
CHAPTER-6
ADVANTAGES AND ENHANCEMENTS
6.1 Advantages
• SixthSense is a user-friendly interface that integrates digital information
into the physical world and its objects, making the entire world your computer.
• SixthSense does not change human habits but causes computers and other
machines to adapt to human needs.
• It uses hand gestures to interact with digital information.
• Supports multi-touch and multi-user interaction.
• Data can be accessed directly from the machine in real time.
• It is open source and cost effective, and we can mind-map the idea anywhere.
• It is a gesture-controlled wearable computing device that feeds us
relevant information and turns any surface into an interactive display.
• It is portable and easy to carry, as we can wear it around our neck.
• The device could be used by anyone without even a basic knowledge of a
keyboard or mouse.
• There is no need to carry a camera anymore. If we are going on a holiday, then
from now onwards it will be easy to capture photos using mere fingers.
6.2 Disadvantages
• Hardware limitations of the devices that we currently carry around.
• For example, many phones will not allow the external camera feed to be
manipulated in real time.
• Post-processing can occur, however.
6.3 Future Enhancements
• To get rid of colour markers.
• To incorporate the camera and projector inside the mobile computing device.
• Whenever we place the pendant-style wearable device on a table, it should allow us
to use the table as a multi-touch user interface.
• Applying this technology in various areas of interest such as gaming, education
systems etc.
• To have 3D gesture tracking.
• To make Sixth Sense work as a fifth sense for disabled persons.
CONCLUSION
The key here is that Sixth Sense recognizes the objects around you,
displaying information automatically and letting you access it in any way you want, in
the simplest way possible.
Clearly, this has the potential of becoming the ultimate "transparent" user
interface for accessing information about everything around us. If they can get rid of
the colored finger caps and it ever goes beyond the initial development phase, that is.
But as it is now, it may change the way we interact with the real world and truly give
everyone complete awareness of the environment around us.