Introduction: My name is Pete Wassell and I have a problem with reality – mostly that it is not good enough. I believe that one should be able to metaphorically “right click” on any object to get more information about it. I started a company to do just that, to merge the digital world and physical world, and we do this through AR Glasses. Today I will discuss their potential uses in the modern military to create an augmented soldier.
So let's start with the basics. This is the Milgram continuum: VR is on the far right and is a totally immersive computer-generated world, while the actual physical world is on the far left. Between those extremes are two ranges: Augmented Virtuality, where real objects are placed into a virtual world, and Augmented Reality, where virtual objects are placed in the real world. It is an early and simple diagram, developed in 1994 by Paul Milgram and Fumio Kishino – just four years after Thomas Caudell coined the term Augmented Reality. Caudell worked for Boeing in 1990 and needed a system to show wiring schematics to technicians running cables on new 747 jets. AR superimposes digital images and contextual objects as an overlay on the real world in real time. It provides situational awareness: computer-generated sensory elements that assist humans in interpreting the world around them. The value proposition of AR is that it puts information where you need it, directly on top of physical-world objects. Using digital eyewear, there can be up to a 30% efficiency gain in time on task when information is displayed directly within a user's view. This unobtrusive, mobile, and hands-free system of instruction also reduces errors and reduces the cost of training. There is a better model…
The model has evolved. VR is a subset of AR, and this model includes Diminished Reality – for example, when corporate logos, license plates, or faces are blurred, that is subtracting reality. Instead of diminishing reality, military command centers may want to make augmented reality information available based on rank, mission, or location. Sergeants and below may see one level of data, those above Sergeant an expanded set, and officers most or all data based on security clearance. Just a thought.
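To make the rank-based idea concrete, here is a minimal sketch of how a command center might filter AR annotations by clearance level. The level names and numbers are purely illustrative, not an actual military classification scheme.

```python
# Hypothetical clearance levels for illustration only.
CLEARANCE = {"enlisted": 1, "nco": 2, "officer": 3}

def visible_layers(rank_level, annotations):
    """Return only the AR annotations whose minimum clearance level
    is at or below the wearer's rank level."""
    return [label for label, level in annotations if level <= rank_level]

# Each annotation carries a label and a minimum clearance level.
annotations = [("patrol route", 1), ("supply cache", 2), ("strike target", 3)]
print(visible_layers(CLEARANCE["nco"], annotations))
# → ['patrol route', 'supply cache']
```

An NCO sees the first two layers; an officer would see all three.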
So how did we get to AR Glasses? [Not referring to the progression of HMD technology from the Air Force.] Anthropology is the study of people and their customs and habits – it is about how they spend their time. On my way over here today, I saw hundreds of people staring at their cell phones; this is a relatively new phenomenon, and they make decisions based on the information on their screens. Likewise, you may have a laptop and spend a lot of time on it. If you were to print out everything on your laptop, it would weigh over 1,000 kilos, and if you lost your cell phone or your computer crashed, you would feel lost. This is because you are using those devices as a second brain. Cyborg definition: an organism to which components have been added to adapt to new environments. This is from a 1960 NASA paper on space travel. Cyborg Anthropology (Amber Case) states that throughout the history of mankind, tools have been an extension of the body (self) – a bullet or bomb is a better fist. However, in the last 30 years we have created tools that are an extension of the mind. If you can believe that, you are on the path to being a cyborg. EyeTap (Steve Mann) takes this one step further: in lieu of a computer wired directly to the brain (which is what you may have thought a cyborg was), the next greatest bandwidth for getting information into the brain is to tap the visual field.
There is ambient information woven into the fabric of the battlefield – you just need to be able to see it. What can we extract through vision? Though most of us don't hunt, our eyes are still the great monopolists of our senses. To taste your food or touch your enemy, you have to be unnervingly close to it. But vision can occur at a safe distance. It may even be that abstract thinking evolved from our eyes' elaborate struggle to make sense of what they saw. Seventy percent of the body's sense receptors cluster in the eyes, and it is mainly through seeing the world that we appraise and understand it. An image is worth a thousand words and a video is worth a million – this is critical to military intelligence and information warfare. This DARPA video shows a project that extracts information from confiscated pictures and video – but what if every soldier had a video camera? This shows the who, what, where, how, and when of the types of information gleaned from an image.
An important component of AR Glasses is the optical waveguide. An optical waveguide is a physical structure that guides electromagnetic waves in the optical spectrum. This is how AR Glasses work.
Diffractive – Uses: deep slanted diffraction gratings to couple collimated light using Total Internal Reflection. Issues: 1) costly, 2) can produce color non-uniformity in the image, 3) small field of view.
Holographic – Uses: a similar technique to diffraction, but with holograms. Issues: 1) light loses intensity with angular variation, 2) color crosstalk ("rainbow" effect).
Polarized – Uses: multi-layer coatings and embedded polarized reflectors. Issues: 1) high cost, 2) uses glass, not plastic, 3) needs 25–30 layers glued together, 4) cannot be mass-produced.
Reflective – Uses: standard straight reflective optical components. Issues: 1) wedge structures require high-precision molding.
You will be approached by many vendors and contractors over the next year, so it is important to know the different types of this technology.
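As a quick illustration of the Total Internal Reflection condition these waveguides rely on: light stays trapped inside the guide whenever it strikes the boundary at an angle steeper than the critical angle given by Snell's law. The sketch below computes that angle for a typical glass/air interface; the refractive indices are representative values, not a specific vendor's specification.

```python
import math

def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    """Angle of incidence (in degrees from the surface normal) above which
    light is totally internally reflected at the core/cladding boundary,
    from Snell's law: theta_c = arcsin(n_cladding / n_core)."""
    if n_cladding >= n_core:
        raise ValueError("TIR requires n_core > n_cladding")
    return math.degrees(math.asin(n_cladding / n_core))

# Typical waveguide glass (n ≈ 1.5) against air (n = 1.0):
print(round(critical_angle_deg(1.5, 1.0), 1))  # → 41.8
```

Rays hitting the glass/air boundary at more than about 42° from the normal bounce along inside the guide until a grating or reflector couples them out toward the eye.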
Requirements:
- Daylight-readable: electronic tinting
- See-through: STAR (See-Thru Augmented Reality); >30° FOV is good, 40° FOV is better, 50–60° FOV is great
- Color video display & video camera: resolution of 720 pixels minimum (monochrome is easier due to color bleeding in some waveguide techniques)
- Compact form factor: the same as what is socially acceptable – eyeglasses / sunglasses
- Comfortable / lightweight
- Supported by Command Center / IT infrastructure: AR Glasses are no good without data to see
Optional: night vision, binocular, and 3D range finder
Benefits: hands-free means that you can also operate a weapon in battle.
Picture shown is of the Vuzix TacEye.
So why should jet pilots have all the fun? These are the five feature areas I would focus on:
1. Camera & Video Display Uses
2. Software Recognition Types and Uses
3. Electromagnetic Spectrum Uses
4. GPS Map Features & Uses
5. Sensor-Based Squad Health Uses
While there are other features and uses, I have prioritized these as the top five and will cover examples of each. I am both an engineer and a former soldier; the convergence of these disciplines should provide a better view of what is possible.
I am combining the camera and the eyewear display into this feature since they go hand in hand. What is the biggest benefit of a shared remote live video camera feed? You can be in two places at once. The second biggest benefit? Battlefield POV (related to the first).
Remote:
- See from the perspective of drones and IED robots and take appropriate action.
- Be a viewer for remote weapon operation (weapons notoriously draw attention to themselves and often place themselves in the line of fire). This can also be used to shoot around corners with a handheld weapon when the scope feed is connected to the digital eyewear.
- Visual communication is stealthy, covert, and silent, yet it allows data, instructions, and messages to flow to and from team members and the command center. Send POV battlefield pictures and video instantly to the command center.
- Intelligence collecting: use the camera to gather visual intel with every patrol, house search, or person search.
Local:
- A non-visual-light scope places a mark on a target: an unseen laser scope capable of varying light waves, seen only through digital eyewear.
- Simulate battlefield conditions (buildings, landscapes, and enemy forces) before a mission.
- 360-degree panoramic viewing: you can now have eyes in the back of your head and never have a blind spot.
Computer Vision aims to duplicate the abilities of human vision by electronically perceiving and understanding an image:
- Facial recognition: useful for finding a specific enemy
- Weapon recognition: get instructions or maintenance steps for any weapon
- Motion-detection highlighter: know when the enemy makes a move
- Vehicle recognition: useful for intel gathering
- Friend-or-foe recognition / buddy beacon: reduces friendly-fire mistakes
- Lingual translation of signs and writings (OCR): reduces the need for a language dictionary or translator
Sound Recognition:
- Lingual translation of speech: reduces the need for an interpreter
- Gunfire locator: shows direction, volume, and type of sound
- Sound amplification (augmented audio): hear whispers at a distance
These are some of the things that can be done when software is added to AR Glasses.
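To give a feel for the simplest of these building blocks, here is a minimal sketch of the motion-detection highlighter: classic frame differencing, which flags any pixel that changed by more than a threshold between two grayscale frames. Real systems use camera-motion compensation and noise filtering on top of this; frames here are plain lists of intensity rows for illustration.

```python
def motion_mask(prev_frame, curr_frame, threshold=25):
    """Frame-differencing motion detector: returns a binary mask that is
    1 wherever a pixel changed by more than `threshold` (0-255 scale).
    Frames are lists of rows of grayscale intensities."""
    return [
        [1 if abs(c - p) > threshold else 0
         for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 200, 10], [10, 10, 90]]
print(motion_mask(prev, curr))  # → [[0, 1, 0], [0, 0, 1]]
```

The eyewear would render the mask as a highlight overlay on the wearer's view, drawing the eye to whatever just moved.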
Notice the small band? That is visible light. There are many more wavelengths to tap into.
There is no need to strap a bean can (night vision goggles) to your head: 11.2-micron infrared sensors are small and come in the form of a clip-on module for AR Glasses. They see through smoke and haze, since the particle size is small enough for heat waves to go around the particles. Examples: see bodies and recently fired weapons, and track vehicles and objects in low-light conditions. X-ray vision (actually the terahertz band of the electromagnetic spectrum, one of the wavelength ranges that falls between microwave and infrared) sees through walls and clothing. You may want to know how many people are in a house, or whether the person in front of you is carrying a weapon.
Pixel combiners use software algorithms to get the best of both feeds: for each pixel of the fused image, they determine whether to display the infrared or the visible-field value to the user.
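A naive version of that per-pixel decision can be sketched in a few lines: keep whichever source is brighter at each pixel. Fielded sensor-fusion algorithms weight sources by local contrast rather than raw brightness, but the selection idea is the same; the frames here are toy lists of intensity rows.

```python
def fuse(ir, visible):
    """Naive per-pixel combiner: for each pixel, keep whichever source
    (infrared or visible) is brighter. Frames are lists of rows of
    grayscale intensities on a 0-255 scale."""
    return [[max(i, v) for i, v in zip(ir_row, vis_row)]
            for ir_row, vis_row in zip(ir, visible)]

ir      = [[200,  10], [30, 180]]
visible = [[ 50, 120], [90,  40]]
print(fuse(ir, visible))  # → [[200, 120], [90, 180]]
```

In the fused frame, a warm body glowing in IR survives even where the visible feed is dark, and well-lit visible detail survives where IR is flat.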
Uses:
- Navigation: alternate routes, distance to and from the next coordinate
- Tagging, highlighting, or annotating the real world with digital information
- Drawing a virtual fence around a hazardous zone such as a minefield
- Marking danger spots like ambush zones
- Marking or reading strategic POIs (points of interest): enemy locations, pathways, supply caches, or strike targets
There are many uses for this area.
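The "virtual fence" idea above reduces to a classic point-in-polygon test: given the soldier's GPS fix and the fence vertices, is the soldier inside the hazard zone? Below is a minimal ray-casting sketch; a real system would also buffer the boundary by the GPS error margin, which this toy version ignores.

```python
def inside_geofence(lat, lon, fence):
    """Ray-casting point-in-polygon test: True if (lat, lon) falls inside
    the polygon `fence`, given as an ordered list of (lat, lon) vertices.
    Works for small areas where coordinates can be treated as planar."""
    inside = False
    n = len(fence)
    for i in range(n):
        la1, lo1 = fence[i]
        la2, lo2 = fence[(i + 1) % n]
        # Does the edge straddle the point's longitude, and does a ray
        # cast in the +lat direction cross it?
        if (lo1 > lon) != (lo2 > lon):
            if lat < (la2 - la1) * (lon - lo1) / (lo2 - lo1) + la1:
                inside = not inside
    return inside

# Hypothetical square minefield, one degree on a side:
fence = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
print(inside_geofence(0.5, 0.5, fence))  # → True
print(inside_geofence(2.0, 0.5, fence))  # → False
```

The eyewear would run this check against the wearer's position each GPS update and flash a warning overlay as they approach or cross the fence.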
This system requires health sensors on the eyewear or worn on the wrist by soldiers:
- Heart rate (ECG) / heart rate variability monitor
- Pulse rate monitor
- Blood pressure monitor
- Skin temperature monitor
- Oximeter (O2) sensor
- Respiration rate monitor
For example, you can use accelerometers to gauge blast effects on soldiers from IEDs, warning the medic of potential traumatic brain injury (TBI). How far has a soldier traveled? Are they dehydrated? Are they in danger of heat stroke? Combat strength is not just the number of soldiers, but the number of soldiers times their capacity (in this case, health). How close to 100% health capacity are your soldiers? Do you know? Do you find out too late?
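The accelerometer-based TBI warning above is, at its core, a threshold alert on peak acceleration. Here is a minimal sketch; the 50 g cutoff is a hypothetical value chosen for illustration, not a clinical standard.

```python
def tbi_alerts(samples_g, threshold_g=50.0):
    """Flag (timestamp, peak-g) accelerometer samples that exceed a
    blast-exposure threshold, so a medic can be warned of potential
    traumatic brain injury. The default threshold is illustrative only."""
    return [(t, g) for t, g in samples_g if g > threshold_g]

# Timestamps in seconds, peak acceleration in g:
readings = [(0.0, 1.1), (0.1, 72.5), (0.2, 3.0), (0.3, 55.0)]
print(tbi_alerts(readings))  # → [(0.1, 72.5), (0.3, 55.0)]
```

A production system would accumulate exposure history per soldier and escalate to the command center, rather than alerting on single samples.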
This is not a prioritized list – the AR features depend on the needs of the specific mission.
What might this technology look like to the ground troops? One only has to look at video games where the UI (user interface) has been well thought out. First-person shooter games often have a panel showing a compass, map, or instructions – even children know that additional information helps complete a task. Managing and framing AR information is critical to a successful implementation. Should the data go in the center of focus or in the periphery? How translucent should the data be? AR Glasses technology has arrived, and the cost is low enough to make it widely available and deployable. A Future Force will be made up of augmented soldiers.
I hope this presentation was informative. I will be available for questions throughout the day.
Meadow Hawk Dragonfly Competitive Advantage: 360 degree field of vision The dragonfly, possibly the most formidable aerial hunter among insects, also has some of the most amazing eyes in the animal world. They are so big that they cover almost the entire head, giving it a helmeted appearance, and a full 360 degree field of vision. These eyes are made up of 30,000 visual units called ommatidia, each one containing a lens and a series of light sensitive cells. Their eyesight is superb; they can detect colors and polarized light, and are particularly sensitive to movement, allowing them to quickly discover any potential prey or enemy. Some dragonfly species that hunt at dusk can see perfectly in low light conditions, when we humans can barely see anything. Not only that; dragonflies also have three smaller eyes named ocelli which can detect movement faster than the huge compound eyes can; these ocelli quickly send visual information to the dragonflies’ motor centers, allowing it to react in a fraction of a second and perhaps explaining the insect’s formidable acrobatic skills. Although dragonflies are not the only insects with ocelli (some wasps and flies have them too), they do have the most developed ones.
Stalk-Eyed Fly Competitive Advantage: Depth perception – an adaptation by which a sensory system is better matched to the special problems encountered in a densely structured habitat, in that the field of view is extended and the ability to estimate distance and size, and to identify objects at a large distance, is improved.
Ogre-Faced Spider Competitive Advantage: Night Vision Ogre-faced spiders have superb night vision not only because of their huge eyes, but because of an extremely light-sensitive layer of cells covering them. This membrane is so sensitive, in fact, that it is destroyed at dawn and a new one is produced every night. Ogre-faced spiders are unusual because they can see perfectly at night even though they lack a tapetum lucidum, a reflective membrane that helps other spiders (and other predators such as cats) to see in low-light conditions. As a matter of fact, scientists believe that ogre-faced spiders have better night vision than cats, sharks, or even owls (which can see up to 100 times better than humans at night!).
Mantis Shrimp Competitive Advantage: they see ultraviolet, infrared, and polarized light, and their color vision is four times better than a human's. The eyes are located at the end of stalks and can be moved independently of each other, rotating up to 70 degrees. Interestingly, the visual information is processed by the eyes themselves, not the brain. Even more bizarre: each of the mantis shrimp's eyes is divided into three sections, allowing the creature to see objects with three different parts of the same eye. In other words, each eye has "trinocular vision" and complete depth perception, meaning that if a mantis shrimp lost an eye, its remaining eye would still be able to judge depth and distance as well as a human with two eyes.
Future Force Presentation To Ministry Of Defence
Future Force: Potential Augmented Reality applications in the military. Pete Wassell, CEO, Augmate Corporation
Visual Information Technology http://www.darpa.mil/Opportunities/Solicitations/I2O_Solicitations_VMR_Concept_Video.aspx