Project Tango is a prototype smartphone developed by Google that uses advanced sensors and cameras to create a 3D map of the environment around it in real-time. The phone tracks its motion and position using an array of cameras including a rear-facing RGB/IR camera, 180-degree fisheye camera, and 120-degree front camera. It also has a depth sensor and infrared projector that allow it to make over 250,000 3D measurements per second to build a 3D model. The goal of Project Tango is to provide mobile devices with a human-scale understanding of 3D space to enable new applications around augmented reality, indoor navigation, and 3D modeling.
Google unveiled Project Tango, an experimental project from Google's Advanced Technology and Projects (ATAP) group to develop smartphones and tablets that can track motion in 3D and map environments. Project Tango devices use advanced sensors and computer vision to give mobile devices a human-like understanding of space and motion. The Project Tango prototype is an Android device that can create a 3D model of its surroundings without GPS or other external signals by tracking its own 3D motion and the infrared light it projects.
The document summarizes Project Tango, an experimental project from Google's Advanced Technology and Projects (ATAP) group. It discusses how Project Tango uses sensors and computer vision to allow mobile devices to understand their physical environment and motion in 3D without relying on external signals. The key capabilities of Project Tango devices include simultaneous localization and mapping, depth perception through infrared projection and cameras, and area learning to recognize previously mapped locations. Potential applications mentioned include indoor navigation, augmented reality games, and assisting emergency responders.
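The motion-tracking half of simultaneous localization and mapping boils down to integrating small pose increments over time. A minimal 2-D dead-reckoning sketch (Tango itself fuses camera and IMU data in full 3-D at a far higher rate; this is only an illustration of the idea):

```python
import math

def integrate_pose(deltas):
    """Dead-reckon a 2-D pose from (forward_metres, turn_radians) increments."""
    x, y, heading = 0.0, 0.0, 0.0
    for forward, turn in deltas:
        heading += turn                    # apply the rotation increment first
        x += forward * math.cos(heading)   # then step along the new heading
        y += forward * math.sin(heading)
    return x, y, heading

# Walk 1 m, turn 90 degrees left, walk 1 m: the tracker ends up near (1, 1)
pose = integrate_pose([(1.0, 0.0), (1.0, math.pi / 2)])
```

Pure dead reckoning drifts, which is exactly why area learning matters: recognizing a previously mapped location lets the device snap its accumulated pose error back to the stored map.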
Project Tango is a technology developed by Google that enables mobile devices like smartphones and tablets to detect their position and orientation in 3D space without relying on GPS or other external signals. It uses computer vision technologies like cameras, sensors, and processors to capture depth information and create a 3D model of the device's environment. This document describes the hardware and software capabilities of Project Tango prototype devices. It discusses how Tango uses cameras, sensors, and its own processor to track motion, map areas, and understand depth. The document also outlines some potential applications of Tango technology and companies involved in its development.
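The depth data such a device captures is typically a per-pixel range image, which can be back-projected into a 3-D point cloud with the standard pinhole camera model. A minimal sketch; the intrinsics (`fx`, `fy`, `cx`, `cy`) here are toy values, not real Tango calibration:

```python
def depth_to_points(depth, width, height, fx, fy, cx, cy):
    """Back-project a row-major depth image (metres) into 3-D camera-space points."""
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0:  # no depth return at this pixel (e.g. IR absorbed)
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Tiny 2x2 depth image with one missing pixel, toy intrinsics
pts = depth_to_points([1.0, 0.0, 2.0, 1.0], 2, 2, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Run over a full frame at sensor rate, this back-projection is what turns hundreds of thousands of range measurements per second into the 3-D model of the environment.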
Project Glass is an augmented reality head-mounted display developed by Google. The glasses allow hands-free access to information and allow users to interact with the internet via voice commands. Key features include a small video display, front-facing camera, speaker, and a single button. The glasses operate using Google's Android platform and can access information from Google services and the internet through a 4G or WiFi connection.
Project Tango is a project by Google that aims to give mobile devices a 3D understanding of space using advanced sensors and computer vision. The Tango prototype is an Android device that tracks its own 3D motion and creates a 3D model of the surrounding environment in real-time. It uses motion tracking, depth perception, and area learning technologies. Potential applications include improved indoor navigation, more efficient shopping, emergency response, augmented reality gaming, and 3D modeling of objects.
Project Soli is a gesture-based technology developed by Google's ATAP team. It works on the principle of radar, treating the human hand as a natural mechanism for interacting with machines.
These slides should give you a good idea of Project Soli.
By,
BHAVIN.B
Bhavinbhadran7u@gmail.com
Google Glass is a research project by Google to develop augmented reality smart glasses. The glasses will have a small video display and camera that will allow the user to access information from the internet hands-free via voice commands. Some key features will include navigation assistance, social media integration, and object recognition capabilities. However, there are also privacy and safety concerns about the technology that will need to be addressed. Overall, Google Glass aims to develop the first mainstream smart glasses and represents an ambitious effort to create an augmented reality device.
The document discusses the Sixth Sense technology, a wearable gestural interface that augments the physical world with digital information. It can project information onto surfaces using a camera, projector and mirror. The technology recognizes hand gestures to allow interactions like getting maps, photos and product information without devices. It offers advantages like connectivity and accessibility but faces issues like privacy, health effects and lack of durability. The technology may transform fields like education, e-commerce and assistance for disabled people.
Google Glass, a new innovation leading to new technology, by Ekta Agrawal
This presentation explains the workings of Google Glass, an innovation that is changing the world and bringing new technology to you.
Google Glass is an augmented reality project led by Google to develop smart glasses. The glasses are designed to display information to the user through a small video screen and can be controlled through voice commands or touch gestures. Some key technologies used include Android, 4G connectivity, cameras, and augmented reality capabilities to overlay information on the real world. The goal is to create a hands-free device that allows users access information and communicate remotely.
Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are "augmented" by computer-generated or extracted real-world sensory input such as sound, video, graphics, haptics or GPS data.[1] It is related to a more general concept called computer-mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. Augmented reality enhances one’s current perception of reality, whereas in contrast, virtual reality replaces the real world with a simulated one.
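The core operation behind such computer-generated overlays is perspective projection: a virtual object anchored at a 3-D position in camera space is mapped to a 2-D pixel so graphics can be drawn over the live view. A minimal sketch with a hypothetical focal length and image center (not taken from any real device):

```python
def project_point(p, f, cx, cy):
    """Perspective-project a camera-space 3-D point onto the image plane."""
    x, y, z = p
    if z <= 0:
        return None  # point is behind the camera, so nothing to draw
    return (f * x / z + cx, f * y / z + cy)

# A virtual label anchored 2 m in front of the camera, 0.5 m to the right
u, v = project_point((0.5, 0.0, 2.0), f=800.0, cx=320.0, cy=240.0)
```

Repeating this projection every frame with the device's current pose is what makes the overlay appear fixed to the real world as the user moves.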
This document discusses Google Glass, a wearable computer with an optical head-mounted display created by Google. It describes Google Glass's features like hands-free display and control using voice commands. The document also outlines the technologies used like Android and 4G that allow access to information and communication. Both advantages like accessibility and disadvantages potential privacy issues are presented.
Project Soli is a radar-based gesture recognition technology developed by Google's Advanced Technology and Projects group. It uses miniature radar sensors to detect touchless hand gestures without the need for physical controls. This allows for more natural interactions with wearable devices. When integrated into wearables, Project Soli could enable interactions like turning an invisible dial to control volume or tapping invisible buttons to select options. The technology is still in development by Google but aims to release the Soli sensor and an API to developers to build new interactive applications.
Google Glass is a wearable computer with an optical head-mounted display that is being developed by Google. It functions as a smartphone that can be controlled via voice commands to access information like maps, emails, photos and more. The device has a camera, microphone, touchpad, Android operating system and can connect to the internet via WiFi and Bluetooth. While promising hands-free access to information, concerns exist around privacy and safety issues that come with an internet-connected device worn on a user's face.
Here is the new Google Glass seminar presentation (Office 2013 format). A new report suggests Google Glass will get a complete redesign for version two. Google Glass captured our imagination with the idea of Internet-connected smart glasses, but delivering on that promise feels further away than ever.
Google Glass is a wearable computer with an optical head-mounted display that is being developed by Google. It displays information in a smartphone-like format and responds to voice commands. Key features include a 1.5GHz processor, Bluetooth, WiFi, 5MP camera, and battery allowing 6 hours of use. Technologies powering Glass include wearable computing, augmented reality, Android OS, and 4G connectivity. Potential applications include phone calls, photos, translation, maps, and social sharing. While promising greater accessibility, concerns include privacy and potential eye strain.
This document summarizes a technical seminar presentation on Google Glass. It includes an introduction to Google Glass specifications and capabilities. The presentation describes the Google Glass architecture, the Mirror API, and how to develop apps using timeline cards, contacts, and location information. It covers design principles for Google Glass apps and discusses benefits and limitations of the technology.
Sixth Sense technology, by avnrworld (www.avnrpptworld.blogspot.com)
Sixth Sense is a wearable gestural interface developed by Pranav Mistry that consists of a camera, projector, and mirror coupled in a pendant. The camera tracks hand gestures and sends the data to a smartphone for processing. The projector then projects the digital information onto any surface via the mirror. This allows users to interact with digital information in the physical world using natural hand gestures. Some applications include making calls, getting maps, checking the time, and accessing information about objects by pointing at them. The system has advantages like automatically accessing information and interacting with it intuitively through gestures.
This document presents a summary of Google Glass. It was presented by Nidhin P Koshy for the ECE department at TKMIT. Google Glass is a wearable computer with an augmented reality display developed by Google. It features a camera, display, touchpad, battery and microphone built into a spectacle frame. The display uses a prism to project 640x360 resolution graphics equivalent to a 25 inch screen from 8 feet away. Voice commands through the microphone allow users to take pictures, get directions, send messages and more just by speaking. While innovative, some disadvantages are potential privacy issues from photos taken without permission and distraction from the visual display blocking the user's line of sight.
The document provides an overview of Google Glass, including its design features such as the video display and camera, and the technologies that enable it like wearable computing, ambient intelligence, and 4G networks. It also discusses how Google Glass works hands-free using voice commands and displays information to the user through the video display mounted on the glasses. The document serves as a technical report submitted by a student to fulfill the requirements for a Bachelor of Technology degree.
Project Glass is a Google research project to develop smart glasses featuring a head-mounted display and allowing hands-free access to information via natural language voice commands. The glasses are being developed by Google X Lab and will communicate with mobile phones via WiFi to display notifications and respond to voice commands. Some key features of Google Glass include a small video display, camera, speaker, microphone and touchpad.
Seminar report on night vision technology, by Amit Satyam
This document summarizes the history and technology behind night vision devices. It describes how early generations used multiple image intensifier tubes to amplify light, while later generations employed microchannel plates and gallium arsenide photocathodes to improve light sensitivity and gain. The document outlines the key technological advances between each generation, from Generation 0 devices that used infrared illumination to Generation 4's filmless and gated technology offering improved resolution and reduced noise in varying light conditions.
Project Soli is a Google technology that uses radar sensors and machine learning to enable touchless gesture control. A small Soli chip contains radar that can detect subtle hand motions and movements. This allows devices to be controlled through gestures without touching screens. Google is developing a Soli developer kit to allow creators to explore uses for areas like health, art, smartwatches, and other interfaces. The technology provides an alternative to camera-based gesture systems by offering higher motion tracking speeds and the ability to sense movements through certain materials.
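A radar senses hand motion largely through the Doppler effect: a moving target shifts the reflected carrier frequency in proportion to its radial velocity, and those shifts are what the gesture classifier consumes. A back-of-the-envelope sketch (60 GHz matches the band Soli is reported to operate in, but the velocity figure is purely illustrative):

```python
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(radial_velocity_ms, carrier_hz):
    """Doppler frequency shift for a target moving toward the radar."""
    wavelength = C / carrier_hz
    return 2.0 * radial_velocity_ms / wavelength

# A fingertip moving toward a 60 GHz radar at 0.5 m/s
shift = doppler_shift_hz(0.5, 60e9)
```

At a 5 mm wavelength even slow finger motion produces shifts of hundreds of hertz, which is part of why millimetre-wave radar resolves such subtle gestures.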
Google announced a new product called Google Lens, which amounts to an entirely new way of searching the internet through your camera. Once you take a photo, Google Lens collects information about what is in it. If you photograph a restaurant, Lens can do more than name the restaurant, which you already know: it can automatically find its hours, reservations, and menu.
The document discusses night vision technology. It begins with an overview of night vision and its history, including early methods using flares and spotlights. It then describes the two main technologies used - thermal imaging and image intensification. Generations of night vision devices are categorized based on technological improvements. Applications include military, hunting, security and aviation. While night vision provides benefits, there are also limitations such as lack of color and potential user fatigue.
This document provides an overview of night vision technology, including how night vision devices work and their applications. It discusses the two main approaches to night vision - enhancing spectral range and intensity range. It describes the components and working of image intensification and thermal imaging devices. The document outlines the four generations of night vision devices and how each generation improved light amplification and detection range. In conclusion, it notes that while night vision was originally for military use, it has expanded to civilian applications like security and hunting.
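Because intensifier stages multiply incoming light rather than add to it, the overall luminous gain of a tube is the product of its stage gains, which is why cascading several modest tubes (the early-generation approach) already yields very large amplification. A toy calculation with illustrative gain figures, not actual specifications of any generation:

```python
def total_gain(stage_gains):
    """Overall luminous gain of cascaded intensifier stages (gains multiply)."""
    g = 1.0
    for stage in stage_gains:
        g *= stage
    return g

# Three cascaded tubes at an assumed ~40x each, versus one stage alone
cascade = total_gain([40, 40, 40])
single = total_gain([40])
```

The later-generation microchannel plate achieves comparable gain in a single compact stage, trading the bulky cascade for better resolution and less image distortion.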
This document provides an overview of Google Glass, including its intended purpose, key technologies, development phases, features, specifications, advantages, and disadvantages. Google Glass is an augmented reality project that allows hands-free access to information through a small video display. It utilizes technologies like augmented reality, bone conduction, an Android operating system, and a front-facing camera to respond to voice commands and project images and notifications in the user's field of vision. The document outlines the device's development process and teardown, and discusses its potential benefits like accessibility and information access, as well as challenges regarding privacy and device care.
Google provides platforms, services, and tools to help developers build and monetize games. Platforms include Cardboard, Chrome, Android TV, Project Tango, and Google Cloud. Services include Play Games Services for user acquisition and engagement, Google Analytics for analytics, and AdMob for monetization. Google also offers open source libraries, samples, and Unity plugins to assist with development for areas like VR, fluid simulations, and integrating Google services. Documentation includes best practices, guidelines, and checklists.
Project Tango is a Google project that uses 3D motion tracking and depth perception to allow devices like smartphones and tablets to know their position relative to their surroundings. At Google I/O in May 2016, Google demonstrated Project Tango's real-time depth perception capabilities. Lenovo then announced the Phab 2 Pro, the first smartphone featuring Project Tango, available for purchase in September 2016 for $499.
Google Project Tango - giving mobile devices a human scale understanding of s..., by Harsha Madusankha
Project Tango is a Google initiative to develop smartphones that can understand 3D space. It uses sensors and computer vision techniques to create 3D maps of environments in real-time. The Project Tango smartphone has motion tracking cameras, an infrared projector, and processors to make over a quarter million 3D measurements per second. This allows the phone to create 3D models of spaces and understand its position within physical environments. Potential applications include indoor mapping, navigation for the visually impaired, augmented reality gaming, and autonomous robotics. Google is working with other companies and universities to develop this technology further.
Project Tango is a mobile device that uses advanced sensors and computer vision to understand its position and motion in 3D space. It can map indoor environments in 3D and provide indoor navigation assistance. The technology uses infrared and depth sensors to build a detailed depth map of the surrounding space. It has potential applications for indoor mapping, augmented reality gaming, and emergency response. Currently Google is providing early access to developers to explore using the technology.
This document presents the design of an online job portal created by students Akshay Ghanekar, Deepak Yadav, Pradeep Kumar, and Ajay Maurya under the guidance of Prof. Sachin Narkhede. The project aims to minimize problems faced by job applicants in finding suitable jobs. Key modules include ones for job seekers to post resumes and employers to post vacancies. An administration module manages user profiles and the site. The proposed system offers advanced filtering, cost-effectiveness, SMS notifications, and support to benefit both job seekers and employers.
This document provides an agenda for a workshop on exploring the Raspberry Pi. The agenda includes introductions, an overview of the Raspberry Pi hardware, installing the operating system, using remote access like SSH and VNC, GPIO and sensor interfacing, Python and C programming, and demos of blinking LEDs, using buttons as inputs, and PWM. The document also discusses connecting the Raspberry Pi to devices like Arduino, cameras, and sound. It concludes with a 2 hour hackathon for participants to build projects with the Raspberry Pi.
Internet (Intelligence) of Things (IOT) with DrupalPrateek Jain
Talks about some of application in IOT space already and potential growth and impact IOT will have in next few years taking Nube as a case study.
Also talks about how to build your own end-to-end IOT solution using open hardware like Raspberry PI, Cloud Platform and Drupal.
The document describes a proposed online job portal system developed by students. It includes sections that describe the major modules like applicant registration, company registration, job search, and vacancy registration. It also includes an admin registration module. The document discusses the spiral model used for development and includes entity relationship diagrams, data flow diagrams, and database tables to support the system. It highlights benefits like reduced manual work, data accuracy, and faster information retrieval.
Seminar report on Raspberry Pi, submitted in SEMINAR subject of GTU Gujarat Technological University by Nipun Parikh from Bhagwan Mahavir College of Engineering & Technology
Its an Online Job Portal..
it was our BE Project..
u can view it on http://jobportal.akshay.uco.im/
if is case you want our project or the contents just mail me on ajay.maurya24@yahoo.in
The document summarizes a job portal web application project. The project aims to provide information about new jobs and allow users to search for jobs by location and skills. It will allow job seekers to upload resumes for employers to view. Employers can post new job openings to the site. The project uses technologies like Java, JSP, HTML, and a MS Access database. It has modules for jobseekers to login, employers to login and post jobs, and an admin section. The scope is to benefit both job seekers and recruiters.
The "Job Portal" where you can find different UML diagrams of this system and that includes:
1) Use case diagram
2) Fully dressed use case
3) Sequence Diagram
4) Activity Diagram
5) Class Diagram
6) Component Diagram
Project Tango is a prototype smartphone developed by Google that uses motion tracking and depth sensing to allow the phone to create a 3D map of its surroundings. It uses a combination of cameras, sensors, and processors to take over 250,000 3D measurements per second to track its position and orientation in 3D space in real-time. This allows it to build a 3D model of the environment. The goal of Project Tango is to give mobile devices a human-scale understanding of 3D space and motion. Two prototype devices were developed - a 7-inch tablet and a 5-inch smartphone prototype. The hardware includes multiple cameras, an infrared projector, motion tracking cameras, and a vision processing chip to analyze the
Project Tango is a prototype smartphone developed by Google that uses computer vision to allow mobile devices to understand their position and orientation in 3D space. It contains specialized cameras and sensors that enable features like motion tracking, area mapping, and depth perception. The main challenges were implementing simultaneous localization and mapping (SLAM) algorithms typically requiring high-powered computers onto a mobile device. It works by using a combination of cameras, sensors, and custom computer vision chips to generate real-time 3D models of environments.
Google Cardboard is a virtual reality platform developed by Google that allows users to place their smartphone in a cardboard viewer to experience VR. It was created in 2014 as an inexpensive way to encourage interest and development in VR. Users can build their own viewer using specifications published by Google or purchase one from third parties. Compatible apps use the smartphone's display and lenses in the viewer to provide stereoscopic 3D images. While low-cost, it also has limitations like lack of sensors in some phones and risk of motion sickness. Over 5 million Cardboard viewers have shipped and many educational and entertainment apps are available.
I/O developer’s conference emerged out with some really interesting facades this year presented by Google. The introduction of updated version of Virtual Reality viewer Cardboard and Android M from its android application development section, Google has made sure to provide some electrifying products from its campaign for its users in the future ahead.
This document is a seminar report on Google Glass submitted by Ghanshyam Devra to Rajasthan Technical University. It includes an introduction to virtual and augmented reality and Google Glass. It discusses the technology used in Google Glass like wearable computing, ambient intelligence, smart clothing, eye tap technology, smart grid technology, 4G technology, and the Android operating system. It describes the design components of Google Glass like the video display, camera, speaker, button, and microphone. It explains how Google Glass works and its features, advantages, disadvantages, and future scope. The report aims to provide information on Google Glass and discuss how it can be used.
Seminar report on Google Glass, Blu-ray & Green ITAnjali Agrawal
Google Glass is a research project by Google to develop augmented reality glasses. The glasses will have a small video display to show information and will be controlled by voice commands. Key features include a camera, speaker, button, and microphone. The glasses will connect to smartphones and tablets using WiFi and Android software. They will recognize objects and overlay information like maps, photos and translations. This could improve accessibility but also raises privacy concerns. The future potential is promising if technical and social issues are addressed.
Project Tango is a smartphone project by Google that uses motion tracking and depth perception to create a 3D model of the environment. It has an infrared projector, cameras, and sensors that allow it to track its position and map its surroundings in 3D. The phone emits infrared light pulses and records reflections to build detailed depth maps. Developers are exploring uses like augmented reality applications and helping robots perform tasks autonomously. The technology could also be integrated with devices like Google Glass in the future.
introduction and abstract on Google Glass Major reportJawhar Ali
This document discusses Google Glass and its potential role in network surveillance. It provides background on Google Glass and augmented reality. The document will investigate whether Glass could contribute to network surveillance by analyzing its capabilities and comparing its potential outcomes to George Orwell's dystopian novel Nineteen Eighty-Four. Theories will be applied to analyze Glass's possibilities for surveillance and interpret its impacts on privacy.
A smartphone from Google ATAP which creates a live 3D image of your nearby space, such that you can access those data anywhere and anytime.
For any queries contact me at : akhilanair94@gmail.com
The document provides an overview of a seminar report submitted by Prakhar Gupta on Google Glass. The report includes an introduction to concepts like virtual reality and augmented reality. It discusses the key technologies powering Google Glass like wearable computing, ambient intelligence and 4G. The report also covers the design and working of Google Glass and analyzes its advantages and disadvantages. It concludes with the future scope of augmented reality devices like Google Glass.
M S Reza Jony is presently pursuing his MBA degree at Postgraduate Institute of Management, University of Sri Jayewardenepura, Sri Lanka. He wrote this report on Google Glass during his participation in the Information Management (IM) course........
Presentation on Google Tango By Atharva Jawalkar Atharva Jawalkar
Tango (formerly named Project Tango, while in testing) was an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP), a skunkworks division of Google. It uses computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals
The Meta SpaceGlasses present virtual objects in the real world and allow users to interact with them, like sculpting a virtual vase. Users can then send the digital model to a 3D printer to create a physical version. The glasses use a Kinect sensor and processor to detect real objects and track hand movements, while projectors display virtual images on the lenses. Meta hopes to release the SpaceGlasses for consumer use by 2014.
This document summarizes augmented reality (AR) technology. It discusses how AR enhances the real-world environment by incorporating digital information like graphics. Examples of AR applications discussed include Intel's x-ray glasses that allow seeing inside objects and Google's Project Tango, which uses sensors and cameras to integrate 3D environments into mobile devices. The document traces the history of AR concepts back to Rene Descartes in the 1600s and discusses ongoing research areas like improving depth sensing and object recognition to advance AR capabilities.
This document provides an overview of Google Glass, including what it is, its key features and specifications. Google Glass is an optical head-mounted display developed by Google that resembles a pair of eyeglasses. It uses voice commands and visual cues to provide information directly to the user's field of vision through an augmented reality experience. The document outlines Google Glass' development history and testing program, as well as its potential applications and the technologies that enable its functionality, such as Android and augmented reality. Programming approaches for Glass include developing native Android apps or creating "Glassware" apps using the Mirror API.
It's a presentation on the 21st century device ''Google Glass''..which talks about the technology used in making of it along with the feasibility of having a superb gadget which can perform multiple tasks at a particular moment of time...!!
Keep using keep learning.. :-)
This document provides an overview of Google Glass, an augmented reality head-mounted display being developed by Google. It discusses the technologies behind virtual and augmented reality, as well as an introduction to Project Glass. The document then covers the key technologies powering Google Glass like wearable computing, ambient intelligence and 4G networks. It also describes the design components of Google Glass and how it works. Later chapters discuss advantages and disadvantages, future applications and conclusions.
Google Cardboard is a low-cost virtual reality platform developed by Google that uses cardboard viewers and smartphones. Users can build their own viewer using specifications published by Google or purchase one from a third party. Google provides software development kits for creating Cardboard apps for Android and Unity. The cardboard viewers are assembled from basic components like cardboard, lenses, magnets, and fasteners. Compatible apps split the smartphone display into stereo images and apply distortion to create a stereoscopic 3D effect when viewed through the lenses.
This document provides an overview of Google Glass. It discusses how Google Glass is a wearable computer with an optical head-mounted display that is being developed by Google. The glasses will run on Android and allow hands-free access to information by communicating with the internet via voice commands. Key features will include a camera, GPS, motion sensors, and the ability to pull in augmented reality information from Google services to be displayed on the lenses. While the glasses are not meant to be worn constantly, they will function as a see-through computer monitor for accessing information as needed, similar to how smartphones are used.
This document discusses mobile augmented reality technologies. It begins by defining augmented reality and how mobile AR overlays digital information onto the real world viewed through a camera. It then discusses the hardware capabilities of modern smartphones that enable AR applications like cameras, sensors, and high-resolution displays. It also reviews several open-source and proprietary AR software development kits (SDKs) and tools that facilitate creating AR applications. Examples are given of many existing AR applications across different domains.
Similar to Google project tango seminar report (20)
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Google’s Project Tango
Department of CSE
1. INTRODUCTION
3D models represent a 3D object using a collection of points in a given 3D space, connected by various entities such as curved surfaces, triangles, and lines. Being a collection of data that includes points and other information, 3D models can be created by hand, scanned, or generated algorithmically (procedural modeling). The "Project Tango" prototype is an Android smartphone-like device that tracks the 3D motion of the device and creates a 3D model of the environment around it.
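As described above, a 3D model is just points plus connectivity. A minimal sketch in Python (the cube data and the edge-counting helper are purely illustrative):

```python
# Vertices of a unit cube (points in 3-D space); the data is illustrative only.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom face corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top face corners
]
# Connectivity: two triangles (triples of vertex indices) covering the bottom face.
triangles = [(0, 1, 2), (0, 2, 3)]

def edges(tris):
    """Collect the unique undirected edges implied by a list of triangles."""
    found = set()
    for a, b, c in tris:
        for edge in ((a, b), (b, c), (c, a)):
            found.add(tuple(sorted(edge)))
    return found

# The two triangles share the diagonal edge (0, 2), so only 5 unique edges remain.
print(len(vertices), len(triangles), len(edges(triangles)))
```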
Project Tango was introduced by Google in early 2013; the company described it as a Simultaneous Localization and Mapping (SLAM) system capable of operating in real time on a phone. Working from this description, Google’s ATAP group teamed up with a number of organizations to create Project Tango.
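The SLAM idea, estimating where you are while building a map, can be illustrated with a deliberately tiny 1-D sketch. This is an invented toy, not Tango's actual algorithm: noisy odometry drifts, and re-observing a landmark at a known map position pulls the position estimate back.

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

LANDMARK = 10.0  # known map position of a landmark along a corridor (metres)
GAIN = 0.5       # how strongly a landmark observation corrects the estimate

true_pos = 0.0   # where the device actually is
est_pos = 0.0    # where the device believes it is

for _ in range(20):
    true_pos += 1.0                          # device moves 1 m per step
    est_pos += 1.0 + random.gauss(0.0, 0.1)  # odometry: motion estimate drifts
    if abs(true_pos - LANDMARK) < 3.0:       # landmark close enough to observe
        measured_range = (LANDMARK - true_pos) + random.gauss(0.0, 0.02)
        observed_pos = LANDMARK - measured_range
        est_pos += GAIN * (observed_pos - est_pos)  # fuse observation into estimate

print(f"true={true_pos:.2f} estimated={est_pos:.2f}")
```

Without the correction step the estimate's error only grows; with it, the estimate stays close to the truth, which is the essence of localization against a map.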
The team at Google’s Advanced Technology and Projects group (ATAP) has been working with various universities and research labs to harvest ten years of research in robotics and computer vision, and to concentrate that technology into a single, very unusual mobile phone. We are physical beings that live in a 3D world, yet today’s mobile devices assume that the physical world ends at the boundaries of the screen. Project Tango’s goal is to give mobile devices a human-scale understanding of space and motion. The project will help people interact with the environment in a fundamentally different way: with this technology, something that would previously have taken months or even years can be prototyped in a couple of hours, simply because the technology is now readily available. Imagine having all of this in a smartphone and how things would change.
The first product to emerge from Google's ATAP skunkworks group,[1] Project Tango was developed by a team led by computer scientist Johnny Lee, a core contributor to Microsoft's Kinect. In an interview in June 2015, Lee said, "We're developing the hardware and software technologies to help everything and everyone understand precisely where they are, anywhere."[2]
The device runs Android and includes development APIs that provide orientation, position, and depth data to regular Android applications written in C/C++ or Java, as well as to the Unity game engine. These early algorithms, prototypes, and APIs are still in active development, so these are experimental devices intended only for the exploratory and adventurous; they are not a final shipping product.
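As a rough illustration of the kind of pose data these APIs deliver, a translation plus an orientation quaternion, the sketch below shows how an application might map a point seen in the device's frame into world coordinates. The `Pose` class, its field names, and the numbers are hypothetical, not the actual Tango API:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """Hypothetical pose record: world-frame translation + orientation quaternion."""
    tx: float; ty: float; tz: float
    qw: float; qx: float; qy: float; qz: float

def to_world(pose, point):
    """Rotate a device-frame point by the pose quaternion, then translate."""
    w, x, y, z = pose.qw, pose.qx, pose.qy, pose.qz
    px, py, pz = point
    # Expanded form of the quaternion rotation q * p * q^-1 (unit quaternion).
    rx = (1 - 2*(y*y + z*z))*px + 2*(x*y - w*z)*py + 2*(x*z + w*y)*pz
    ry = 2*(x*y + w*z)*px + (1 - 2*(x*x + z*z))*py + 2*(y*z - w*x)*pz
    rz = 2*(x*z - w*y)*px + 2*(y*z + w*x)*py + (1 - 2*(x*x + y*y))*pz
    return (rx + pose.tx, ry + pose.ty, rz + pose.tz)

# Device standing at (1, 0, 0), rotated 90 degrees about the vertical z-axis.
pose = Pose(1.0, 0.0, 0.0, math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))
world_point = to_world(pose, (1.0, 0.0, 0.0))  # a point 1 m along the device's x-axis
print(world_point)
```

A point one metre ahead of the rotated device lands at roughly (1, 1, 0) in the world, which is exactly the transform an AR or mapping app needs to anchor content in space.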
Project Tango technology gives a mobile device the ability to navigate the physical world
similar to how we do as humans.
Project Tango brings a new kind of spatial perception to the Android device platform by
adding advanced computer vision, image processing, and special vision sensors.
Project Tango is a prototype phone containing highly customized hardware and software designed to allow the phone to track its motion in full 3D in real time. The sensors make over a quarter of a million 3D measurements every second, updating the position and rotation of the phone and blending this data into a single 3D model of the environment. The phone tracks one's position while moving around the world and also builds a map of it; it can scan a small section of a room and then generate a little game world inside it. It is an open source technology. ATAP has around 200 development kits, which have already been distributed among developers.
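The idea of blending many per-second measurements into a single model can be sketched with a toy voxel map. This is far simpler than Tango's real pipeline, and the frame data below is invented for illustration:

```python
# Toy sketch: fuse per-frame 3-D measurements (already in world coordinates)
# into one coarse model by marking which voxels of space contain a surface.

VOXEL = 0.25  # voxel edge length in metres

def voxel_of(point):
    """Map a 3-D point to the integer index of the voxel containing it."""
    return tuple(int(c // VOXEL) for c in point)

occupied = set()
frames = [
    [(0.10, 0.10, 1.00), (0.30, 0.10, 1.00)],  # measurements from frame 1
    [(0.12, 0.11, 1.02), (0.60, 0.10, 1.00)],  # frame 2 re-observes one spot
]
for frame in frames:
    for point in frame:
        occupied.add(voxel_of(point))  # repeated observations merge into one voxel

print(len(occupied))
```

Because the two nearby measurements fall in the same voxel, four raw points collapse into three occupied cells; accumulating millions of such measurements is what turns a stream of sensor data into a persistent 3D model.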
Google has produced two devices to demonstrate the Project Tango technology: the Peanut phone (no longer available) and the Yellowstone 7-inch tablet. More than 3,000 of these devices had been sold as of June 2015,[3] chiefly to researchers and software developers interested in building applications for the platform. In the summer of 2015, Qualcomm and Intel both announced that they were developing Project Tango reference devices as models for device manufacturers who use their mobile chipsets.[4][5]
At CES in January 2016, Google announced a partnership with Lenovo to release a consumer smartphone featuring Project Tango technology during the summer of 2016, noting a price point below $500 and a small form factor under 6.5 inches. At the same time, the two companies also announced an application incubator to get applications developed for the device by launch.
Fig (1) Google’s Project Tango Logo
Which companies are behind Project Tango?
A number of companies came together to develop Project Tango. All of them are listed in the credits of Google's introduction video, "Say hello to Project Tango!", although each company's level of involvement has differed. The participating companies listed in that video are:
· Bosch
· BSquare
· CompalComm
· ETH Zürich
· Flyby Media
· George Washington University
· HiDOF
· MMSolutions
· Movidius
· University of Minnesota
· NASA JPL
· Ologic
· OmniVision
· Open Source Robotics Foundation
· ParaCosm
· Sunny Optical tech
· Speck Design
2. OVERVIEW
WHAT IS PROJECT TANGO?
Tango allows a device to build up an accurate 3D model of its immediate surroundings, which
Google says will be useful for everything from AR gaming to navigating large shopping centres.
Fig (2) A view of Google’s Project Tango 3D model mapping
Google isn't content with making software for phones that can merely capture 2D photos and videos. Nor does it just want to take stereoscopic 3D snaps. Instead, Project Tango is a bid to equip every mobile device with a powerful suite of software and sensors that can capture a complete 3D picture of the world around it, in real time. Why? So you can map your house, furniture and all, simply by walking around it. Bingo - no more measuring up before going shopping for a new wardrobe. Or so you can avoid getting lost next time you go to the hospital - you'll have instant access to a 3D plan of its labyrinthine corridors. Or so you can easily find the 'unhealthy snacks' section in your local megamart. Or so you can play amazing augmented reality games. Or so that the visually impaired can receive extra help in getting around. In fact, as with most Google projects, the ways in which Tango could prove useful are limited only by our imagination.
WHAT DOES THE PHONE LOOK LIKE?
There are two Tango prototypes so far: a 7-inch tablet and a 5-inch phone.
Fig (3) Prototype 1
The tablet is a fairly standard 7-inch slate with a slight wedge at the back to accommodate the extra sensors. As far as we can tell, it has three cameras including the webcam. Inside, it has one of Nvidia's so-far-untested Tegra K1 mobile processors with a beefy 4GB of RAM and a 128GB SSD. Google is at pains to point out that it's not a consumer device, but one is supposedly on the way. The depth-sensing array consists of an infrared projector, a 4MP rear camera, and a front-facing fisheye lens with a 180-degree field of vision. Physically, the phone prototype is a standard phone shape but rather chunky compared to the class of 2014 - more like something from about 2010.
Fig (4) Prototype 2
Prototype 2 is a 5-inch Android smartphone with the same Tango hardware as the tablet.
Fig (5) A simple Overview of Components of Tango Phone
Google's Project Tango is a smartphone equipped with a variety of cameras and vision sensors that provide a whole new perspective on the world around it. The Tango smartphone can capture a wealth of data never before available to application developers, including depth, object tracking, and instantaneous 3D mapping. And it is almost as powerful and as big as a typical smartphone.
Project Tango is different from other emerging 3D-sensing computer vision products, such as Microsoft HoloLens, in that it's designed to run on a standalone mobile phone or tablet and is chiefly concerned with determining the device's position and orientation within the environment.
The high-end Android tablet has a 7-inch HD display, 4GB of RAM, 128GB of internal SSD storage, and an NVIDIA Tegra K1 graphics chip (the first in the US and second in the world) featuring a desktop GPU architecture. It also has a distinctive design, with an array of cameras and sensors near the top and a couple of subtle grips on the sides. Movidius, the company that developed some of the technology used in Tango, has been working on computer vision technology for the past seven years; it developed the processing chips used in Project Tango, which Google paired with sensors and cameras to give the smartphone the level of computer vision and tracking that formerly required much larger equipment. The phone is equipped with a standard 4-megapixel camera paired with a special combination of RGB and IR sensors and a lower-resolution image-tracking camera. These combinations of image sensors give the smartphone a human-like perspective on the world, complete with 3D awareness and an awareness of depth. They supply information to Movidius's custom Myriad 1 low-power computer-vision processor, which processes the data and feeds it to apps through a set of APIs. The phone also contains a motion-tracking camera, which is used to keep track of all the motions made by the user.
3. SMARTPHONE SPECIFICATION
Tango wants to deconstruct reality, taking a quarter million 3D measurements each second to
create a real-time 3D model that describes the physical depth of its surroundings.
The smartphone's specs include a Snapdragon 800 quad-core CPU running at up to 2.3 GHz per core, 2GB or 4GB of memory, 64GB or 128GB of expandable internal storage, and a nine-axis accelerometer/gyroscope/compass. There are also Mini-USB, Micro-USB, and USB 3.0 ports.
In addition, Tango's specs include a rear-facing four-megapixel RGB/infrared camera, a 180-degree field-of-view fisheye rear-facing camera, a 120-degree field-of-view front-facing camera, and a 320 x 180 depth sensor - plus a vision processor with one teraflop of computing power. Project Tango uses a 3000 mAh battery.
4. HARDWARE
Project Tango is basically a camera and sensor array that happens to run on an Android phone.
The smartphone is equipped with a variety of cameras and vision sensors that provide a whole new perspective on the world around it. The Tango smartphone can capture a wealth of data never before available to application developers, including depth, object tracking, and instantaneous 3D mapping. And it is almost as powerful and as big as a typical smartphone. The front view and back view of a Tango phone are shown below.
It looks much like other phones, but it has the variety of cameras and sensors that make 3D modelling of the environment possible.
Fig (6) Tango Phone Front View
The device tracks its own 3D motion and creates a 3D model of its surroundings using this array of cameras and sensors. The phone emits pulses of infrared light from the IR projector and records how they are reflected back, allowing it to build a detailed depth map of the surrounding space.
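As a rough illustration of how such a depth map becomes 3D geometry, the sketch below back-projects depth pixels through a pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) are invented for illustration; a real Tango device exposes calibrated values through its APIs.

```python
def backproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z (metres) into a 3D
    camera-frame point using a pinhole model. The intrinsics here
    are hypothetical placeholders, not real calibration data."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# The Tango depth sensor is 320 x 180; back-project its centre pixel
# seen at 2 m, and a corner pixel at the same depth:
centre = backproject(160, 90, 2.0, fx=260.0, fy=260.0, cx=160.0, cy=90.0)
corner = backproject(0, 0, 2.0, fx=260.0, fy=260.0, cx=160.0, cy=90.0)
```

Running this over all 320 x 180 pixels of one depth frame yields the per-frame point cloud from which the 3D model is assembled.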
The phone's cameras capture a 120-degree wide-angle field of view at the front and an even wider 180-degree span at the back. A 3D camera captures the 3D structure of a scene. Most cameras are 2D, meaning they project the scene onto the camera's imaging plane, so any depth information is lost; a 3D camera, in contrast, also captures the depth dimension in addition to the standard 2D data. A rear-facing four-megapixel RGB/infrared camera, a 180-degree field-of-view fisheye rear-facing camera, a 120-degree field-of-view front-facing camera, and a 320 x 180 depth sensor are the rear-end components that work together to recover the 3D structure of the scene.
Fig (7) Tango Phone Back View
For Project Tango, Google paired processing chips with sensors and cameras to give the smartphone a level of computer vision and tracking that formerly required much larger equipment. The phone is equipped with a standard 4-megapixel camera paired with a special combination RGB/IR sensor and a lower-resolution image-tracking camera. Together these image sensors give the smartphone a 3D-aware perspective on the world, complete with a sense of depth. They supply information to Movidius's custom Myriad 1 low-power computer-vision processor, which processes the data and feeds it to apps through a set of APIs. The phone also contains a motion-tracking camera that keeps track of all the motions made by the user. The motherboard that carries these components is shown below.
Fig (8) Tango phone Motherboard
Elpida FA164A1PB 2 GB LPDDR3 RAM, layered above a Qualcomm 8974 (Snapdragon 800) processor. (RED)
Two Movidius Myriad 1 computer-vision co-processors. (ORANGE)
Two AMIC A25L016 16 Mbit low-voltage serial flash memory ICs. (YELLOW)
InvenSense MPU-9150 9-axis gyroscope/accelerometer/compass MEMS motion-tracking device. (GREEN)
Skyworks 77629 multimode multiband power amplifier module for quad-band GSM/EDGE. (BLUE)
PrimeSense PSX1200 Capri PS1200 3D sensor SoC. (VIOLET)
Internally, the newer Myriad 2 consists of 12 128-bit vector processors called Streaming Hybrid Architecture Vector Engines, or SHAVEs, which run at 600 MHz. The Myriad 2 chip gives five times the SHAVE performance of the Myriad 1, and its SIPP engines are 15x to 25x more powerful than those of the first-generation chip.
The phone's standard 4-megapixel camera is paired with a special combination RGB/IR sensor and a lower-resolution image-tracking camera. As its main camera, the Tango phone uses OmniVision's OV4682, the eye of the Project Tango device. The OV4682 is a 4 MP RGB-IR image sensor that captures high-resolution images and video as well as IR information, enabling depth analysis.
Fig (9) Front and Rear Camera Fig (10) Fisheye Camera
Fig (11) IR Projector
Integrated Depth Sensor
5. TECHNOLOGY BEHIND TANGO
5.1 TANGO’S SENSOR
Tango's sensing is built around the Myriad 1 vision processor platform developed by Movidius. The sensors allow the device to make "over a quarter million 3D measurements every second, updating its position and orientation in real time, combining that data into a single 3D model of the space around you." Movidius, which developed some of the technology used in Tango, has been working on computer vision for the past seven years. It developed the processing chips used in Project Tango, which Google paired with sensors and cameras to give the smartphone a level of computer vision and tracking that formerly required much larger equipment.
5.2 IMAGE SENSORS
Image sensors give the smartphone a 3D-aware perspective on the world, complete with a sense of depth. Their output is supplied to Movidius's custom Myriad 1 low-power computer-vision processor, which processes the data and feeds it to apps through a set of APIs. The motion-tracking camera keeps track of all the motions made by the user. The front camera captures a 120-degree wide-angle field of view, while the rear fisheye camera covers an even wider 180-degree span. The phone is equipped with a standard 4-megapixel camera paired with a special combination RGB/IR sensor and a lower-resolution image-tracking camera. Its depth-sensing array consists of an infrared projector, the 4 MP rear camera, and the front-facing fisheye lens with its 180-degree field of vision. The phone emits pulses of infrared light from the IR projector and records how they are reflected back, allowing it to build a detailed depth map of the surrounding space. The data collected from the sensors and cameras is processed by the Myriad vision processor to deliver the 3D structure of the view to apps.
6. WORKING CONCEPT
Project Tango devices combine the camera, gyroscope, and accelerometer to estimate six-degree-of-freedom motion tracking, giving developers the ability to track the 3D motion of a device while simultaneously creating a map of the environment.
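The six-degrees-of-freedom pose this produces can be modelled as a translation plus an orientation quaternion. The sketch below composes incremental device-frame motions into a running pose; it is a minimal illustration of the bookkeeping, not Tango's actual API types.

```python
import math

# A pose is (t, q): translation (x, y, z) plus an orientation
# quaternion (w, x, y, z). All values here are toy examples.

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * conj(q)."""
    w, x, y, z = q
    return quat_mul(quat_mul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))[1:]

def compose(pose, delta):
    """Apply an incremental motion (dt, dq) expressed in the device frame."""
    (t, q), (dt, dq) = pose, delta
    moved = rotate(q, dt)
    return (tuple(ti + mi for ti, mi in zip(t, moved)), quat_mul(q, dq))

# Start at the origin, step 1 m forward, turn 90 degrees about the
# vertical axis, then step 1 m forward again:
identity = (1.0, 0.0, 0.0, 0.0)
turn_90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
pose = ((0.0, 0.0, 0.0), identity)
pose = compose(pose, ((1.0, 0.0, 0.0), identity))
pose = compose(pose, ((0.0, 0.0, 0.0), turn_90))
pose = compose(pose, ((1.0, 0.0, 0.0), identity))
```

After the turn, the second forward step moves the device sideways in the world frame, which is exactly why orientation must be carried along with position.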
An IR projector provides infrared light that the other (non-RGB) cameras can use to get a sense of an area in 3D space. The phone emits pulses of infrared light from the IR projector and records how they are reflected back, allowing it to build a detailed depth map of the surrounding space. The front camera captures a 120-degree wide-angle field of view, and the rear fisheye camera covers an even wider 180-degree span. A 4 MP color camera sensor can also be used for snapping regular pictures. A 3D camera captures the 3D structure of a scene: most cameras are 2D, projecting the scene onto the camera's imaging plane so that depth information is lost, whereas a 3D camera also captures the depth dimension in addition to the standard 2D data.
As its main camera, the Tango phone uses OmniVision's OV4682, the eye of the Project Tango device. The OV4682 is a 4 MP RGB-IR image sensor that captures high-resolution images and video as well as IR information, enabling depth analysis. Its 2-micron OmniBSI-2 pixel delivers excellent signal-to-noise ratio and IR sensitivity and offers best-in-class low-light sensitivity, while the sensor's unique architecture and pixel optimization bring not only strong IR performance but also best-in-class image quality. The OV4682 records full-resolution 4-megapixel video in a native 16:9 format at 90 frames per second (fps), with a quarter of the pixels dedicated to capturing IR. The 1/3-inch sensor can also record 1080p high-definition (HD) video at 120 fps with electronic image stabilization (EIS), or 720p HD at 180 fps. Its dual RGB and IR capabilities bring a host of additional features to mobile and machine-vision applications, including gesture sensing, depth analysis, iris detection, and eye tracking. A second sensor, the OV7251 camera chip, is capable of capturing VGA-resolution video at 100 fps using a global shutter.
Another camera uses a fisheye lens that enables a 180-degree field of view, while its sensor balances resolution against frame rate to record black-and-white images for motion tracking. If the user moves the device left or right, the device draws the path it has travelled and displays it in real time, giving the device motion-capture capability. The device also has a depth sensor.
Fig (12) The image represents the feed from the fish-eye lens. Fig (13) Computer Vision
Fig (13) illustrates depth sensing by displaying a distance heat map on top of what the camera sees, with blue colors on distant objects and red colors on nearby objects. The device also pairs data from the image sensors with its standard motion sensors and gyroscopes to map out paths of movement to within 1 percent accuracy, and then plots them onto an interactive 3D map. It uses sensor fusion, which combines sensory data (or data derived from sensory data) from disparate sources so that the resulting information is in some sense better than would be possible if those sources were used separately: more precise, more comprehensive, or more reliable, or the result of an emergent view such as stereoscopic vision.
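A minimal sketch of the sensor-fusion idea is a complementary filter: trust the fast-but-drifting gyroscope in the short term while a slower, drift-free visual estimate cancels accumulated error. The rates, bias, and weighting below are invented for illustration, not Tango's actual fusion pipeline.

```python
def complementary_filter(gyro_rates, visual_headings, dt, alpha=0.98):
    """Fuse gyro angular rates (deg/s) with absolute visual heading
    estimates (deg). alpha close to 1 favours the gyro short-term;
    the (1 - alpha) visual term slowly pulls out gyro drift."""
    heading = visual_headings[0]
    for rate, visual in zip(gyro_rates, visual_headings):
        heading = alpha * (heading + rate * dt) + (1 - alpha) * visual
    return heading

# A gyro with a constant +0.5 deg/s bias while the device is actually
# still (true heading 10 degrees, held by the visual estimate). Over
# 600 samples at 10 Hz, pure gyro integration would drift to 40 deg;
# the fused estimate stays close to the truth.
fused = complementary_filter([0.5] * 600, [10.0] * 600, dt=0.1)
```

The residual offset (about 2.5 degrees here) is the steady-state price of the gyro bias; a Kalman-style fusion, as used in production systems, would estimate and subtract the bias itself.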
Mantis Vision, a developer of some of the world's most advanced 3D-enabling technologies, supplies the MV4D technology platform that serves as the core 3D engine behind Google's Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and its core MV4D technology, which includes structured-light-based depth-sensing algorithms that generate realistic, dense maps of the world. It focuses on providing reliable estimates of the pose of a phone, i.e. its position and alignment relative to its environment.
7. PROJECT TANGO CONCEPTS
Project Tango is different from other emerging 3D-sensing computer vision products, such as
Microsoft Hololens, in that it's designed to run on a standalone mobile phone or tablet and is chiefly
concerned with determining the device's position and orientation within the environment.
The software works by integrating three types of functionality:
7.1 Motion Tracking:
Motion tracking allows a device to understand its position and orientation using Project Tango's custom sensors, giving you real-time information about the 3D motion of the device. It uses visual features of the environment, in combination with accelerometer and gyroscope data, to closely track the device's movements in space. Measuring movement through space, and understanding the area moved through, is Project Tango's core functionality. Google's APIs provide the position and orientation of the user's device in full six degrees of freedom, referred to as its pose.
Fig (14) Motion Tracking
7.2 Area Learning:
Using area learning, a Project Tango device can remember the visual features of the area it is moving through and recognize when it sees those features again. These features can be saved in an Area Description File (ADF) for later use.
Project Tango devices can use visual cues to help recognize the world around them. They can self-correct errors in motion tracking and relocalize in areas they have seen before. With an ADF loaded, Project Tango devices gain drift correction, also described as improved motion tracking.
Area learning is a way of storing environment data in a map that can be re-used later, shared with other Project Tango devices, and enhanced with metadata such as notes, instructions, or points of interest.
Fig (15) Area Learning
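The relocalisation idea behind ADFs can be caricatured as descriptor matching: compare the features currently seen against each stored area description and pick the best match. Real systems use high-dimensional visual descriptors and far more robust matching; the scalar descriptors, thresholds, and area names below are toy values.

```python
def match_features(observed, stored, tol=0.1):
    """Count observed descriptors lying within tol of some stored
    descriptor. Descriptors are plain floats here purely for brevity."""
    return sum(any(abs(o - s) <= tol for s in stored) for o in observed)

def relocalise(observed, area_descriptions, min_matches=3):
    """Return the name of the stored area that best explains the current
    view, or None if nothing matches well enough (device is 'lost')."""
    best, best_count = None, 0
    for name, stored in area_descriptions.items():
        count = match_features(observed, stored)
        if count > best_count:
            best, best_count = name, count
    return best if best_count >= min_matches else None

# Two remembered areas and a fresh observation taken in one of them:
adf = {"kitchen": [0.1, 0.5, 0.9, 1.3], "hallway": [2.0, 2.4, 2.8, 3.2]}
place = relocalise([0.12, 0.52, 0.88, 1.31], adf)
```

Once an area is recognised, the stored map anchors the live pose estimate, which is what lets the device cancel accumulated motion-tracking drift.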
7.3 Depth Perception:
Project Tango devices are equipped with integrated 3D sensors that measure the distance from the device to objects in the real world. This configuration gives good depth at a distance while balancing the power requirements of infrared illumination and depth processing.
The depth data allows an application to understand the distance of visible objects from the device. By combining depth perception with motion tracking, you can also measure distances between points in an area that aren't in the same frame.
Current devices are designed to work best indoors at moderate distances (0.5 to 4 meters) and may not be ideal for close-range object scanning. Because the technology relies on viewing infrared light with the device's camera, there are situations where accurate depth perception is difficult: areas lit by sources high in IR, such as sunlight or incandescent bulbs, or objects that do not reflect IR light, cannot be scanned well.
Fig (16) Depth Perception
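Measuring between points that never appear in the same frame works by moving each depth point into a common world frame using the pose recorded when it was observed. The sketch below assumes an identity orientation to stay short; a full version would also rotate each point by the pose's quaternion. All coordinates are invented.

```python
import math

def to_world(point, pose_translation):
    """Move a camera-frame point into the world frame. Orientation is
    assumed identity here purely to keep the sketch short."""
    return tuple(p + t for p, t in zip(point, pose_translation))

def distance_across_frames(p1, pose1, p2, pose2):
    """Distance between two depth points observed from different poses."""
    return math.dist(to_world(p1, pose1), to_world(p2, pose2))

# A point seen 2 m ahead while the device was at the origin, and a
# point seen 1 m ahead after the device had moved 4 m forward:
d = distance_across_frames((0.0, 0.0, 2.0), (0.0, 0.0, 0.0),
                           (0.0, 0.0, 1.0), (0.0, 0.0, 4.0))
```

Neither point alone reveals the 3 m separation; only the motion-tracking poses tie the two depth observations into one coordinate system.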
Together, these generate data about the device in "six degrees of freedom"
(3 axes of orientation plus 3 axes of motion) and detailed three-dimensional information about the
environment.
Applications on mobile devices use Project Tango's C and Java APIs to access this data in
real time. In addition, an API is also provided for integrating Project Tango with the Unity game
engine; this enables the rapid conversion or creation of games that allow the user to interact and
navigate in the game space by moving and rotating a Project Tango device in real space. These
APIs are documented on the Google developer website.
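Schematically, an application consumes this data by registering a listener and receiving pose updates as they arrive. Every name below is invented for illustration; the real Tango C and Java APIs use different types and callbacks.

```python
class PoseFeed:
    """Toy stand-in for a pose-producing service, illustrating the
    listener pattern an app might use. Not the real Tango API."""

    def __init__(self):
        self.listeners = []

    def on_pose(self, fn):
        """Register a callback invoked for every new pose."""
        self.listeners.append(fn)

    def publish(self, translation, orientation):
        """Deliver one pose update to every registered listener."""
        for fn in self.listeners:
            fn(translation, orientation)

# An app that simply records the device's path:
path = []
feed = PoseFeed()
feed.on_pose(lambda t, q: path.append(t))

# Simulate three pose updates arriving in real time:
for step in range(3):
    feed.publish((float(step), 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))
```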
8. DEVICES DEVELOPED SO FAR
As a platform for software developers and a model for device manufacturers, Google has
created two Project Tango devices to date.
The Yellowstone tablet
Google's Project Tango tablet, 2014
"Yellowstone" is a 7-inch tablet with full Project
Tango functionality, released in June 2014, and sold as the
Project Tango Tablet Development Kit.[7] It features a
2.3 GHz quad-core Nvidia Tegra K1 processor, 128GB flash
memory, 1920x1200-pixel touchscreen, 4MP color
camera, fisheye-lens (motion-tracking) camera, integrated
depth sensing, and 4G LTE connectivity. The device is sold
through the official Project Tango website [8] and the Google Play Store.
The Peanut phone
"Peanut" was the first production Project Tango device, released in the first quarter of 2014.
It was a small Android phone with a Qualcomm MSM8974 quad-core processor and additional
special hardware including a fisheye-lens camera (for motion tracking), "RGB-IR" camera (for
color images and infrared depth detection), and Movidius image-processing chips. A high-
performance accelerometer and gyroscope were added after testing several competing models in
the MARS lab at the University of Minnesota.
Several hundred Peanut devices were distributed to early-access partners, including university researchers in computer vision and robotics as well as application developers and technology startups. Google stopped supporting the Peanut device in September 2015, as by then the Project Tango software stack had evolved beyond the versions of Android that run on the device.
Testing by NASA
In May 2014, two Peanut phones were delivered to the International Space Station to be
part of a NASA project to develop autonomous robots that navigate in a variety of environments,
including outer space. The soccer-ball-sized, 18-sided polyhedral SPHERES robots were
developed at the NASA Ames Research Center, adjacent to the Google campus in Mountain View,
California. Andres Martinez, SPHERES manager at NASA, said, "We are researching how effective Project Tango's vision-based navigation abilities are for performing localization and navigation of a mobile free flyer on ISS."
9. FUTURE SCOPE
Project Tango seeks to take the next step in this mapping evolution. Instead of depending
on the infrastructure, expertise, and tools of others to provide maps of the world, Tango empowers
users to build their own understanding, all with a phone. Imagine knowing your exact position to
within inches. Imagine building 3D maps of the world in parallel with other users around you.
Imagine being able to track not just the top-down location of a device, but also its full 3D position and alignment. The technology is ambitious, and the potential applications are
powerful. The Tango device really enables augmented reality which opens a whole frontier for
playing games in the scenery around you. You can capture the room, you can then render the scene
that includes the room but also adds characters and adds objects so that you can create games that
operate in your natural environment. The applications go beyond gaming: imagine seeing what a room would look like decorated with different types of furniture and walls, rendered as a very realistic scene. This technology could guide the visually impaired by giving them auditory cues about where they are going. It could even be used by soldiers to replicate a war zone and prepare for combat, or to live out one's own creative fantasies. The possibilities for this technology are endless, and its future looks very bright.
Things Project Tango can do
DIRECTIONS: When you need directions inside a building or structure, current mapping solutions just don't provide them. For shoppers who like to get in and out as quickly as possible, having an indoor map of the store in hand could make trips more efficient by leading you directly to the shelf you want.
EMERGENCY RESPONSE: To help emergency response workers such as firefighters find their
way through buildings by projecting the blueprints onto the screen.
It has the potential to provide valuable information in situations where knowing the exact layout
of a room can be a matter of life or death
AUGMENTED REALITY GAMING: Tango could combine room mapping with augmented reality. Imagine competing against a friend for control over territories in your own home with your own miniature army. Mapping in-game textures onto your real walls through the smartphone would arguably produce the best game of Cops and Robbers in history.
MODELLING OBJECTS: Fig (17) shows a simple example of object modelling using Project Tango.
10. CONCLUSION
Project Tango enables apps to track a device's position and orientation within a detailed 3D
environment, and to recognize known environments. This makes possible applications such as in-
store navigation, visual measurement and mapping utilities, presentation and design tools, and a
variety of immersive games.
At this moment, Tango is just a project, but it is developing rapidly, with early prototypes and development kits already distributed to many developers. It is now up to developers to create clever and innovative apps that take advantage of this technology. This is just the beginning, and there is a lot of work to do to fine-tune this remarkable technology. If Project Tango works, and we have no reason to suspect it won't, it could prove every bit as revolutionary as Maps, Earth, or Android. It just might take a while for its true genius to become clear.
11. REFERENCES
[1] Announcement on ATAP Google+ site, 30 January 2015.
[2] "Future Phones Will Understand, See the World", 3 June 2015, retrieved 4 November 2015.
[3] "Slamdance: inside the weird virtual reality of Google's Project Tango", 29 May 2015.
[4] "Qualcomm Powers Next Generation Project Tango Development Platform", 29 May 2015.
[5] "IDF 2015: Intel teams with Google to bring RealSense to Project Tango", 18 August 2015.
[6] Google developer website, https://developers.google.com/project-tango/
[7] Product announcement on ATAP Google+ page, 5 June 2014, retrieved 4 November 2015.
[8] Project Tango website, https://www.google.com/atap/project-tango/