MicroVision's MEMS Laser Beam Scanning based 3D Depth Sensing Technology presentation by Jari Honkanen at MEMS & Sensors Industry Group Conference Asia 2016, Shanghai, China, September 13-14, 2016
MicroVision Scanning Engines Overview | January 2017 (MicroVision)
MicroVision develops scanning technology that enables projected displays, interactive experiences, and 3D sensing capabilities. Their PicoP scanning engines can be used to create small form factor displays, interactive displays integrated with touch and gesture sensing, and mid-range LiDAR sensors. The document provides details on MicroVision's current and upcoming product offerings that utilize their scanning technology platform.
"Laser Beam Scanning LiDAR: MEMS-Driven 3D Sensing Automotive Applications from Interior to the Exterior" presentation by Jari Honkanen at FutureCar 2017: New Era of Automotive Electronics Workshop, Nov 8-10, 2017, Georgia Institute of Technology, Atlanta, GA
MEMS Laser Scanning, the platform for next generation of 3D Depth Sensors (MicroVision)
MicroVision's PicoP® scanning technology is a MEMS-based Laser Beam Scanning (LBS) solution for pico projection, heads-up-display, and augmented reality eyewear applications. The same flexible technology can also be applied to exciting new sensing applications, such as 3D depth sensing. Demand for small and low cost 3D depth sensing solutions is growing rapidly, driven by increasing demand for new Natural User Interface, Machine Vision, Robotic Navigation, Metrology, and Advanced Driver Assistance System (ADAS) solutions.
This presentation, prepared by MicroVision's Jari Honkanen and presented at the MEMS & Sensors Industry Group Conference Asia 2016, compares the existing 3D depth sensor solutions based on stereo cameras, structured light and 3D CMOS Cameras.
MicroVision then presents a new MEMS LBS depth sensor platform solution that can enable a new generation of tiny 3D depth sensors with capabilities such as dynamic variable resolution and variable acquisition speed. These dynamic LBS depth sensors are an enabling technology for a completely new set of innovative products and applications.
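The pulsed time-of-flight principle underlying such laser-beam-scanning depth sensors, together with the idea of a dynamically variable scan density, can be sketched roughly as follows. This is an illustrative sketch only; the `scan_angles` helper and all constants are invented here and are not MicroVision's implementation:

```python
# Illustrative sketch, not MicroVision's implementation: a scanned-laser
# depth sensor derives range from a pulse's round-trip time, and a dynamic
# scan pattern can concentrate samples inside a region of interest (ROI).

C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_s):
    """Range in metres from a laser pulse's round-trip time in seconds."""
    return C * round_trip_s / 2.0

def scan_angles(fov_deg, n_coarse, roi, n_fine):
    """Mirror angles (degrees) for one scan axis: a coarse sweep of the
    full field of view plus a finer set inside a (start, end) ROI."""
    half = fov_deg / 2.0
    coarse = [-half + i * fov_deg / (n_coarse - 1) for i in range(n_coarse)]
    a, b = roi
    fine = [a + i * (b - a) / (n_fine - 1) for i in range(n_fine)]
    return sorted(coarse + fine)

# A 10 ns round trip corresponds to about 1.5 m of range.
print(round(range_from_tof(10e-9), 3))  # 1.499
```

Because the mirror is steered rather than fixed, the fine-sampling region can move from frame to frame, which is the "dynamic variable resolution" capability the summary describes.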
Applications Generated from a MEMS-based Laser Beam Scanning Technology Platform (MicroVision)
At this year's MEMS Engineer Forum, MicroVision's Director of Product Engineering, Jari Honkanen, spoke on Applications Generated from a MEMS-based Laser Beam Scanning Technology Platform.
MEMS and Sensors in Automotive Applications on the Road to Autonomous Vehicle... (Jari Honkanen)
MicroVision's MEMS Laser Beam Scanning Technology applied to HUD and ADAS applications presentation by Jari Honkanen at the MEMS & Sensors Executive Congress 2016, Scottsdale, AZ, November 10-11, 2016
Laser Beam Scanning Short Throw Displays & an Exploration of Laser-Based Virtual Touchscreens (MicroVision)
Selvan Viswanathan, a MicroVision principal engineer, presented on Laser Beam Scanning Short Throw Displays & Laser-Based Virtual Touchscreens at LDC 2017.
UAV-Borne LiDAR with MEMS Mirror Based Scanning Capability (Ping Hsu)
We demonstrated a wirelessly controlled MEMS scan module with imaging and laser-tracking capability that can be mounted and flown on a small UAV quadcopter. The MEMS scan module was reduced to a small package, under 90 mm × 70 mm and weighing under 50 g, when powered by the UAV's battery. The MEMS-mirror-based LiDAR system allows on-demand ranging of points or areas within the field of regard (FoR) without altering the UAV's position. Increasing the LRF ranging frequency and stabilizing the pointing of the laser beam by using the onboard inertial sensors and the camera are additional goals of the next design. Keywords: MEMS mirrors, laser tracking, laser imaging, laser range finder, UAV, drone, LiDAR.
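At its core, the module's on-demand ranging combines the MEMS mirror's pointing angles with the laser range finder's measured distance to locate the ranged point in 3D. A minimal sketch follows; the axis convention (x right, y up, z forward) is an assumption of this sketch, not taken from the paper:

```python
import math

# Hedged sketch of on-demand ranging: the MEMS mirror's pointing angles
# plus the laser range finder's distance give the 3D position of the
# ranged point relative to the sensor. The spherical-to-Cartesian axis
# convention below is assumed for illustration.

def ranged_point(azimuth_deg, elevation_deg, range_m):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.sin(az)  # right
    y = range_m * math.sin(el)                 # up
    z = range_m * math.cos(el) * math.cos(az)  # forward
    return (x, y, z)

# A point ranged straight ahead at 10 m:
print(ranged_point(0.0, 0.0, 10.0))  # (0.0, 0.0, 10.0)
```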
"MEMS-based Laser Beam Scanning Technology Platform; Basis for Applications from Displays to 3D Sensors and 3D Printers" presentation by Jari Honkanen at MEMS Engineer Forum, April 26-27, 2017, Ryogoku, Tokyo, Japan
MEMS and Sensors in Automotive Applications on the Road to Autonomous Vehicle... (MicroVision)
MicroVision's Director of Technical Marketing and Applications Development, Jari Honkanen, was invited to speak at MSIG's 12th annual MEMS & Sensors Executive Congress 2016 on MEMS and sensors as key enabling technologies in the automotive market. Honkanen also discussed the benefits of applying MicroVision's MEMS-scanned virtual-image HUD and LiDAR sensor concept to ADAS applications.
Next Generation Intelligent Camera technologies - Composec 2014 Keynote (Jacob Jose)
This document summarizes the current and future intelligent camera technologies from Texas Instruments (TI). It outlines TI's 30+ year history of video and imaging innovation and its current leadership in intelligent video technologies such as the DMVAx processors. It provides details on TI's DMVA3 and DM8127 solutions, which enable customizable video analytics at the edge for mainstream cameras. Finally, it previews TI's next-generation DaVinci video processors, which will enable megapixel smart analytics.
Laser Beam Scanning LiDAR: MEMS-Driven 3D Sensing Automotive Applications from Interior to the Exterior (MicroVision)
MicroVision’s Director of Product Engineering, Jari Honkanen, gave a presentation at FUTURECAR 2017 detailing how MicroVision's Laser Beam Scanning technology for MEMS-based LiDAR solutions provides a unique approach that enables new 3D sensor capabilities in areas such as dynamic and variable resolution, acquisition speed, and field of view.
The document discusses the history and future of wearable display technology, specifically MicroVision eyewear. It outlines three time horizons: the past with limited connectivity, the present with location-based services, and a future extending everyday computing to eyewear. MicroVision has been developing wearable displays since the 1990s, and its eyewear project from 2007-2009 proved the optics and form factor for a see-through, daylight-readable platform. The document discusses potential use cases for mobile-device eyewear for situational awareness, productivity, and as an interface to access mobile applications and content.
Mitchell Reifel (pmdtechnologies ag): pmd Time-of-Flight – the Swiss Army Kni... (AugmentedWorldExpo)
The document discusses 3D imaging technologies and their applications. It provides an overview of different 3D sensing techniques such as stereo vision, structured light, and time-of-flight. Time-of-flight allows for compact solutions and flexible modes. The document outlines the growth of 3D markets from mobile to augmented reality. It argues that time-of-flight will be important for applications like mobile AR as it enables gesture control through depth sensing.
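Continuous-wave time-of-flight sensors such as pmd's measure depth from the phase shift of modulated light rather than from individual pulse timings. A minimal sketch of that relationship follows; the 30 MHz modulation frequency is illustrative, not a pmd specification:

```python
import math

# Sketch of the phase-measurement principle behind continuous-wave
# time-of-flight sensors: the phase shift of modulated light between
# emitter and pixel encodes distance.

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_rad, f_mod_hz):
    """Depth in metres from the measured phase shift at modulation f_mod."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Distances beyond c / (2 f_mod) wrap around to a smaller phase."""
    return C / (2.0 * f_mod_hz)

# At 30 MHz modulation the unambiguous range is roughly 5 m.
print(round(unambiguous_range(30e6), 3))  # 4.997
```

The trade-off the formula makes visible: a higher modulation frequency gives finer depth resolution but a shorter unambiguous range, which is why such sensors often combine several frequencies.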
Jonathan Waldern (DigiLens): DigiLens Switchable Bragg Grating Waveguide Optics for Augmented Reality Applications (AugmentedWorldExpo)
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California May 30- June 1, 2018.
Jonathan Waldern (DigiLens): DigiLens Switchable Bragg Grating Waveguide Optics for Augmented Reality Applications
This session looks at the key features of DigiLens waveguide technology and discusses our optical design methodology for designing and simulating the performance of DigiLens waveguides, with examples from motorcycle and auto HUD products.
http://AugmentedWorldExpo.com
Neil Sarkar (AdHawk Microsystems): Ultra-Fast Eye Tracking Without Cameras for Mobile AR Headsets (AugmentedWorldExpo)
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California May 30- June 1, 2018.
Neil Sarkar (AdHawk Microsystems): Ultra-Fast Eye Tracking Without Cameras for Mobile AR Headsets
This session showcases the first camera-free eye-tracking microsystem. A MEMS (microelectromechanical system) device on a tiny chip scans a beam of light across the eye 4,500 times every second. The latest specifications to be revealed at AWE are enabling foveated rendering in mobile platforms, endpoint prediction during saccades, and unprecedented insights into the state of the user.
http://AugmentedWorldExpo.com
Khaled Sarayeddine (Optinvent): Optical Technologies & Challenges for Next Generation AR (AugmentedWorldExpo)
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California May 30- June 1, 2018.
Khaled Sarayeddine (Optinvent): Optical Technologies & Challenges for Next Generation AR
The talk describes the current status of key optical technologies and ongoing development toward small-footprint, large-FOV, high-resolution displays, as well as support for light-field features.
http://AugmentedWorldExpo.com
This document presents a marker detection algorithm for augmented reality applications in science textbooks. It discusses what augmented reality and markers are, and describes a 7-step marker detection algorithm to efficiently detect markers that may be partially occluded. The algorithm is implemented using Vuforia and Unity for augmented reality and 3DS Max for 3D models. Examples of using the algorithm to detect water and carbon dioxide molecule markers in an 8th standard science textbook are provided. Benefits of augmented reality in education are also outlined.
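The decoding stage of a binary marker detector can be sketched as below. The marker IDs and 8-bit codes are invented for illustration (the summarized work uses Vuforia); matching by minimum Hamming distance tolerates a few misread cells, which is the occlusion-robustness idea the summary mentions:

```python
# Toy sketch of a marker detector's decoding stage. The marker names and
# 8-bit codes are hypothetical, invented for illustration only.

KNOWN_MARKERS = {
    "water": 0b10110010,  # hypothetical code for the water-molecule marker
    "co2":   0b01101101,  # hypothetical code for the CO2-molecule marker
}

def bits_to_code(bits):
    """Pack a list of sampled 0/1 grid cells into an integer code."""
    code = 0
    for b in bits:
        code = (code << 1) | b
    return code

def hamming(a, b):
    """Number of differing bits between two codes."""
    return bin(a ^ b).count("1")

def identify(bits, max_dist=2):
    """Best-matching marker name, or None if the pattern is too corrupted."""
    code = bits_to_code(bits)
    name, dist = min(((n, hamming(code, c)) for n, c in KNOWN_MARKERS.items()),
                     key=lambda t: t[1])
    return name if dist <= max_dist else None

# One misread bit still resolves to the "water" marker:
print(identify([1, 0, 1, 1, 0, 0, 1, 1]))  # water
```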
Drone Market Forecasts: Promises and Reality (Colin Snow)
Drone Market Forecasts: Promises and Reality presented at the Small Unmanned Business Expo in San Francisco on May 4, 2017. This presentation reviews and busts the hype of most market reports and offers our views on industry growth.
A talk from the XR Enablement Track at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Chris Pickett (DigiLens): XR is Hard: Here’s Why
Not only does it take great hardware and apps to bridge our digital and physical worlds, but also endurance to survive as the market matures.
https://awexr.com
The Future Of Augmented Reality - Lynne d Johnson WebVisions Portland 2014 #W... (Lynne d Johnson)
Augmented reality (AR) enhances the real-world environment with virtual objects that align with the real world. AR is expanding into many industries like architecture, education, manufacturing, and more. The global AR market is forecast to grow over 130% annually through 2018 as mobile AR applications drive growth. Future AR technologies may eliminate the need for devices by projecting images directly into our eyes. However, AR still faces challenges in technology, societal acceptance, and proving business value before it becomes mainstream.
Application of augmented reality in libraries (safiullah93)
This document discusses augmented reality (AR) and its potential applications in libraries. It defines AR as technology that adds digital information and layers to the physical world, unlike virtual reality which replaces the real world. The document outlines different types of AR including marker-based, markerless, projection-based, and superimposition-based AR. It also lists the key components needed for AR like hardware, applications, internet of things, and 5G networks. Several potential applications of AR are described for gaming, retail, medicine, military, arts, tourism, broadcasting, and education. Specific applications of AR suggested for libraries include virtual tours, interactive models, interior/exterior designing, books in augmented environments, and staff training.
Augmented reality (AR) overlays computer-generated images on real-world environments in real time. AR uses cameras on devices like smartphones or head-mounted displays to blend virtual objects with the real world. Common techniques for AR include using fiducial markers, computer vision, simultaneous localization and mapping (SLAM), and pose estimation to track objects and overlay virtual content. AR has applications in gaming, advertising, education, and more.
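The pose-estimation step that keeps virtual content aligned with the real world comes down to projecting 3D points through the estimated camera pose. A minimal pinhole-projection sketch, with illustrative intrinsics and pose values:

```python
# Minimal sketch of the pose-estimation idea behind marker-based AR: once
# a target's pose (rotation R, translation t) relative to the camera is
# known, virtual 3D points are projected through a pinhole model so the
# overlay stays registered to the real world. Intrinsics are illustrative.

def project(point, rotation, translation, fx, fy, cx, cy):
    """Project a 3D point (target coordinates) to pixel coordinates."""
    # Camera coordinates: X_cam = R @ X + t
    xc = [sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
          for i in range(3)]
    # Perspective divide, then apply focal lengths and principal point
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return (u, v)

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# A point 2 m straight ahead projects to the principal point:
print(project([0.0, 0.0, 2.0], IDENTITY, [0.0, 0.0, 0.0],
              800.0, 800.0, 320.0, 240.0))  # (320.0, 240.0)
```

In a real AR pipeline the rotation and translation come from marker detection or SLAM each frame; the projection itself is exactly this arithmetic.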
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/qualcomm/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-mangen
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Michael Mangen, Product Manager for Camera and Computer Vision at Qualcomm, presents the "High-resolution 3D Reconstruction on a Mobile Processor" tutorial at the May 2016 Embedded Vision Summit.
Computer vision has come a long way. Use cases that were previously not possible in mass-market devices are now more accessible thanks to advances in depth sensors and mobile processors. In this presentation, Mangen provides an overview of how we are able to implement high-resolution 3D reconstruction – a capability typically requiring cloud/server processing – on a mobile processor. This is an exciting example of how new sensor technology and advanced mobile processors are bringing computer vision capabilities to broader markets.
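One building block of any 3D reconstruction pipeline, mobile or otherwise, is back-projecting a depth map into a point cloud using the camera intrinsics. A small sketch of that step (not Qualcomm's implementation, which the talk does not detail at code level):

```python
# Illustrative building block of 3D reconstruction: back-projecting a
# depth map into a 3D point cloud using pinhole intrinsics fx, fy, cx, cy.

def backproject(depth, fx, fy, cx, cy):
    """depth: row-major 2D list of metres; returns a list of (x, y, z)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid (zero/negative) depth pixels
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

depth = [[0.0, 1.0],
         [1.0, 2.0]]
pts = backproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(len(pts))  # 3
```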
These slides use concepts from my (Jeff Funk) course, Analyzing Hi-Tech Opportunities, to analyze how light field technology is becoming economically feasible for a growing number of applications. Light field cameras record all of the light fields in a picture instead of just one. This capability lets users change the focus of pictures after they have been taken and more easily record 3D data. These features are becoming economically feasible because of rapid improvements in camera chips and micro-lens arrays (an example of micro-electro-mechanical systems, MEMS). They offer alternative ways to do 3D sensing for automated vehicles and augmented reality, and can enable faster data collection with telescopes.
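The after-the-fact refocusing that light field cameras enable can be illustrated with a 1-D shift-and-sum toy: each sub-aperture view is shifted in proportion to its aperture offset, then the views are averaged, and the chosen shift factor selects the focal plane. The data and shift model here are invented for illustration:

```python
# 1-D toy of shift-and-sum light field refocusing: shift each sub-aperture
# view in proportion to its aperture offset, then average; the shift
# factor selects the focal plane after capture.

def refocus(views, offsets, shift_per_offset):
    """views: equal-length rows (one per sub-aperture); offsets: each
    view's aperture position. Returns the synthetically refocused row."""
    n = len(views[0])
    out = []
    for i in range(n):
        acc = 0.0
        for view, off in zip(views, offsets):
            j = round(i + shift_per_offset * off) % n  # wrap for simplicity
            acc += view[j]
        out.append(acc / len(views))
    return out

# A feature whose image moves one pixel per unit of aperture offset snaps
# into focus when shift_per_offset = -1 aligns the three views:
views = [[0, 0, 9, 0, 0],   # aperture offset -1
         [0, 9, 0, 0, 0],   # aperture offset  0
         [9, 0, 0, 0, 0]]   # aperture offset +1
print(refocus(views, [-1, 0, 1], -1.0))  # [0.0, 9.0, 0.0, 0.0, 0.0]
```

Sweeping the shift factor produces a focal stack; features at other depths blur instead of aligning, which is the refocusing effect the slides describe.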
This document provides an overview of the state of augmented reality (AR) and virtual reality (VR) technologies as of February 2017. It discusses the history and development of AR/VR technologies over time, including major companies and investments in the space. It also analyzes the current state and future outlook of both AR and VR, noting that VR is more developed for consumers currently while AR faces challenges but has potential for enterprise use. Revenue forecasts and startup ecosystem data are also presented.
This document provides an overview and market forecast of the 3D sensing industry from 2022 to 2028. It discusses key applications and trends in mobile, consumer, automotive, and other sectors. Emerging technologies like SWIR, metasurfaces, and event-based imaging are also reviewed, along with the supply chain dynamics and a forecast of strong growth in the 3D sensing market from $8.2 billion in 2022 to $17.2 billion by 2028.
iMinds insights - 3D Visualization Technologies (iMindsinsights)
Transforming the way we deal with information - from consumption to interaction.
iMinds insights is a quarterly publication providing you with relevant tech updates based on interviews with academic and industry experts. iMinds is a digital research center and incubator based in Belgium.
Direct Dimensions SME 3D Imaging 2009 Conference Keynote Presentation V2c Re... (Direct Dimensions, Inc.)
This is Michael Raphael's Keynote Presentation given May 14, 2009, called "A Perspective on the 3D Imaging Industry," at the SME 3D Imaging Conference in Schaumburg, IL. This presentation describes the 'state-of-the-industry' in terms of the technology, manufacturers, applications, and adoption of 3D imaging. Raphael is considered one of the leading experts in the field.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/qualcomm/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-talluri
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Raj Talluri, Senior Vice President of Product Management at Qualcomm Technologies, presents the "Is Vision the New Wireless?" tutorial at the May 2016 Embedded Vision Summit.
Over the past 20 years, digital wireless communications has become an essential technology for many industries, and a primary driver for the electronics industry. Today, computer vision is showing signs of following a similar trajectory. Once used only in low-volume applications such as manufacturing inspection, vision is now becoming an essential technology for a wide range of mass-market devices, from cars to drones to mobile phones. In this presentation, Talluri examines the motivations for incorporating vision into diverse products, presents case studies that illuminate the current state of vision technology in high-volume products, and explores critical challenges to ubiquitous deployment of visual intelligence.
3D perception is crucial for understanding the real world. It offers many benefits and new capabilities over 2D across diverse applications, from XR and autonomous driving to IoT, camera, and mobile. 3D perception with machine learning is setting the new state of the art (SOTA) in areas such as depth estimation, object detection, and neural scene representation. Making these SOTA neural networks feasible for real-world deployment on mobile devices constrained by power, thermal, and performance budgets has been a challenge. Qualcomm AI Research has developed not only novel AI techniques for 3D perception but also full-stack AI optimizations to enable real-world deployments and energy-efficient solutions. This presentation explores the latest research enabling efficient 3D perception while maintaining neural network model accuracy. You'll learn about:
- The advantages of 3D perception over 2D and the need for 3D perception across applications
- Advancements in 3D perception research by Qualcomm AI Research
- Our future 3D perception research directions
The document discusses the 3D imaging industry, including an introduction to 3D scanning technologies by Michael Raphael, president and chief engineer of a 3D imaging company. It outlines the types of 3D scanning technologies, key manufacturers, parameters for 3D scanning projects, market trends driving growth, and major application areas. The 3D imaging industry is growing due to expanding uses of 3D scanning in areas like digital heritage, museums, medicine, forensics, entertainment, and mass customization.
Explore the transformative world of point cloud technology, a cutting-edge 3D data visualization tool that's reshaping industries from architecture to gaming. Dive into the blog to uncover what point clouds are, how they work, and their diverse applications. Discover how these digital data points can empower architects, surveyors, environmental scientists, and even gamers, offering precision, realism, and immersive experiences. Whether you're a professional in a relevant field or just curious about the digital frontier, point cloud technology promises an exciting journey into the future of 3D data visualization.
Explore the transformative world of point cloud technology, a cutting-edge 3D data visualization tool that's reshaping industries from architecture to gaming. Dive into the blog to uncover what point clouds are, how they work, and their diverse applications. Discover how these digital data points can empower architects, surveyors, environmental scientists, and even gamers, offering precision, realism, and immersive experiences. Whether you're a professional in a relevant field or just curious about the digital frontier, point cloud technology promises an exciting journey into the future of 3D data visualization.
More information on that report at http://www.i-micronews.com/reports.html
7 MEMS VALUE PROPOSITION IN MOBILE DEVICES
And 3D imaging is supposed to be the next big thing…
• High SNR
• Noise cancellation
• Voice recognition/activation
• Waterproofing
• Haptic feedback
• Gesture recognition
• Add dimensions to the interface
• 3D changing interface (microfluidic)
• High resolution imaging
• Liveness detection
• All environment detection (dry, wet, dirt)
• Anti-spoofing
• Mobile payment
• Multiple bandwidth handling (Worldphone)
• Low power consumption
• Low loss
• Accurate timing
• Accurate indoor positioning
• Accurate motion tracking
• Healthier life (sport, walking orientation)
• Danger and damage preventing
• Weather forecast/probe
pressure
smart building, automotive
With fingerprint
With sensor fusion
Activity monitoring 2020
With gyroscopes
With 3D
camera Enhanced communication
Gaming + 3D Avatar
With microphone
Mobile payment
Virtual Personal Assistance
Always-on Virtual Personal Assistance
With gas sensor Gas detec
Two industries controlled by giant companies with ~$200B in revenue OIS, microphone and dead reckoning sensors could drive the demand
apple, facebook, Google, samsung, autonomous vehicle
Revolutionizing Surveying: The Power of 3D Laser Scanning Services.pptxfalconsurveyme
we discuss, examine the advantages, uses, and future of 3D laser scanning survey services.
Visit us: https://www.falconsurveyme.com/our-services/laser-scanning/
The document discusses different types of sensors used for 3D digitization, including passive and active vision techniques. It describes synchronization circuit-based dual photocells that improve measurement stability and repeatability. Position sensitive detectors are discussed that can measure the position of a light spot in one or two dimensions on a sensor surface to acquire high-resolution 3D images. A proposed sensor architecture combines color and range sensing for applications like hand-held 3D cameras.
FUTURE-PROOFING VEHICLES BY LEVERAGING PERCEPTION RADARiQHub
This document discusses future proofing vehicles for autonomous driving capabilities through 2030. It outlines a progression of autonomous features from 2024-2030, with "eyes free" autonomous driving becoming available around 2030. Future proofing requires massive investment in hardware and software capabilities. Perception radar is highlighted as offering significant potential for achieving a competitive advantage due to its ability to provide dense radar images and process millions of scenarios, even unknown unknowns. A specialized perception radar processor is proposed to provide high processing capabilities at a retail cost.
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...PetteriTeikariPhD
Shallow literature analysis on recent trends in computational ophthalmic imaging with focus on neurodegenerative disease imaging / oculomics.
Open-ended literature review on what you could be building next.
#1/2: Hardware
#2/2: Computational imaging
Alternative download link:
https://www.dropbox.com/scl/fi/d34pgi3xopfjbrcqj2lvi/retina_imaging_2024_computational.pdf?rlkey=xnt1dbe8rafyowocl9cbgjh3p&dl=0
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...IRJET Journal
The document proposes a system to detect suspicious human activity in crowdsourced video captured by surveillance cameras. The system uses Advanced Motion Detection (AMD) to detect moving objects and generate a reliable background model for analysis. A camera connected to a monitoring room would produce alert messages for any detected suspicious activity based on height, time, and body movement constraints. The system aims to automate real-time video processing for security applications like detecting unauthorized access. It extracts human objects from frames and identifies suspicious behavior using the AMD algorithm before sending alerts.
Facet technology 4DFlash(tm) vision sensor for vehiclesJohn Dolejsi
This document discusses Facet Technology, a provider of machine vision processing hardware, software, and intellectual property. It describes Facet's 4DFlash technology, a vision sensor that integrates LiDAR, camera, and radar data from a single sensor. Key advantages of 4DFlash over other vision sensors include higher resolution (5.2 MP), longer range (200m), superior performance in adverse weather, and lower overall system cost. The document promotes potential partnerships for Facet to license its sensor technology and intellectual property to automotive companies.
The document discusses the future of imaging and cameras. It notes that cameras are now ubiquitous due to camera phones. It outlines a wish list for future camera capabilities including super human vision, seeing inside the body, and automatically finding relevant photos. Computational photography is presented as a way to achieve these goals using techniques like computational illumination. The document discusses using cameras for applications in healthcare, entertainment, interfaces and industry. It outlines the work of the MIT Media Lab and Ramesh Raskar in developing new camera and imaging technologies.
1. The document discusses camera culture and computational photography led by Ramesh Raskar at the MIT Media Lab.
2. It outlines Raskar's vision of using emerging technologies to better capture and share visual information through new imaging platforms.
3. Some goals include giving consumers superhuman vision, seeing inside the body for health, and putting the photographer back in the photo.
This document discusses augmented reality (AR), which superimposes computer-generated input such as sound, video and graphics over views of the real world. The goal of AR is to enhance a user's perception without them being able to distinguish between real and virtual elements. Key AR hardware includes processors, displays and sensors in devices like smartphones. Special software generates 3D virtual images stored and retrieved from remote servers. Common AR applications include medical, gaming, fashion and education. Challenges to AR development include multi-user experiences, GPS limitations and software integration.
this presentation help to understand about the basic of digital photogrammetry,, its also help for understand about the concept of digital photography software available now a days , and uses of various software in the field of RS and GIS.
Similar to MEMS Laser Scanning, the platform for next generation of 3D Depth Sensors (20)
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Building RAG with self-deployed Milvus vector database and Snowpark Container...
MEMS Laser Scanning, the platform for next generation of 3D Depth Sensors
1. Don’t just think outside the box.
See outside the box.
MEMS Laser Scanning, the platform for next generation of 3D Depth Sensors
Jari Honkanen
Rev. 0.4; Sep 12, 2016
2. MICROVISION, INC. COPYRIGHT 2016. ALL RIGHTS RESERVED. 9/12/2016
Abstract
MicroVision's MEMS Laser Beam Scanning technology is a leading display technology for pico projection, heads-up display, and augmented reality eyewear applications. But the same flexible technology can also be applied to exciting new sensing applications, such as 3D depth sensing. Demand for small, low-cost 3D depth sensing solutions is growing rapidly, driven by increasing demand for new Natural User Interface, Machine Vision, Robotic Navigation, Metrology, and Advanced Driver Assistance System solutions.

This session will compare the existing 3D depth sensor solutions based on stereo cameras, structured light, and 3D CMOS cameras. It will then present a new MEMS Laser Scanning based depth sensor platform solution that will enable a new generation of tiny 3D depth sensors with new capabilities such as dynamic variable resolution and variable acquisition speed. These dynamic scanning MEMS based depth sensors will become an enabling technology for a completely new set of innovative products and applications for years to come.
3. Agenda
• What is a 3D Depth Sensor?
• 3D Depth Sensing Applications
• 3D Depth Sensing Market Opportunity
• 3D Depth Sensor Technologies
• 3D Depth Sensor Competitive Analysis
• Case Study:
  • MicroVision MEMS Technology and Applications
  • MicroVision MEMS used for Depth Sensing
  • Unique benefits of MEMS Laser Scanning for Depth Sensing
• Conclusions & Call to Action
4. 3D Depth Sensor – what is it?
3D sensors allow devices to observe the environment in three dimensions: 3D imagers measure distance for every pixel within the detection field.

[Figure: a depth sensor/imager measuring, for each point in its detection field of view, horizontal position X, vertical position Y, and distance Z; outputs shown as a depth map and a point cloud]

3D imagers produce a 2D addressable array, a depth map, which can further be converted into a 3-dimensional collection of points, a point cloud.
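The depth-map-to-point-cloud conversion described above can be sketched with a standard pinhole camera model. This is an illustrative example, not MicroVision's implementation; the intrinsics `fx`, `fy`, `cx`, `cy` are hypothetical values.

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth map (metres per pixel) into an N x 3 point cloud.

    Assumes a simple pinhole camera model; fx, fy, cx, cy are
    hypothetical intrinsics, not values from any specific sensor.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # back-project horizontally
    y = (v - cy) * z / fy   # back-project vertically
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no return

# Example: a flat surface 2 m away seen by a tiny 4 x 4 depth imager
cloud = depth_map_to_point_cloud(np.full((4, 4), 2.0), fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(cloud.shape)  # (16, 3); every point has Z = 2.0
```

Each depth pixel becomes one (X, Y, Z) point; pixels with no measured return (depth 0) are dropped.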
5. 3D Depth Sensing Applications (just a small sample)
Application | Target Markets
Natural User Interfaces / Gesture Recognition | Internet of Things; Gaming; Interactive Displays
3D Scanning | 3D Modeling; Gaming, Virtual Worlds; 3D Printing
Metrology, Location and Mapping | Indoor Measurements; 3D Room Mapping; Robot Navigation
Range Finding | Advanced Driver Assistance Systems (ADAS); Drone Collision Avoidance
Machine Vision / Object Recognition | Security; Industrial Automation
6. 3D Imaging & Sensor Market Opportunity
The 3D imaging & sensor market is still at an early stage; growth is driven by new application areas and by cost reduction of sensor technologies.

Leading CE companies like Apple (PrimeSense acquisition), Microsoft (Canesta and 3DV Systems acquisitions), Sony (Softkinetic acquisition), and Google (Project Tango) have been investing heavily in the space. Leading chip companies like Intel (RealSense) and Infineon (Real3) are offering silicon & systems.

The global 3D sensor market is expected to grow to more than $3B in 2020 at an estimated CAGR of 23.4%. [Markets and Markets]
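The >$3B-by-2020 figure can be sanity-checked against the 23.4% CAGR by inverting the CAGR definition. This assumes the growth window is 2014-2020 (as the source note on a later slide states); the implied starting market size is my back-of-the-envelope arithmetic, not a figure from the report.

```python
def implied_base(future_value, cagr, years):
    """Invert the CAGR relation: FV = PV * (1 + r)^n  =>  PV = FV / (1 + r)^n."""
    return future_value / (1 + cagr) ** years

# > $3B in 2020 at a 23.4% CAGR over 2014-2020 (6 years)
base = implied_base(3.0e9, 0.234, 6)
print(round(base / 1e9, 2))  # ~0.85, i.e. roughly an $0.85B market in 2014
```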
7. 3D Depth Sensing Technologies
Technology: Stereo Camera
Principle: Two cameras, displaced horizontally, obtain different views of the scene. Depth is calculated from the relative positions of objects in the two perspectives.

Technology: Triangulation
Principle: A laser dot or line is projected onto the scene from a laser source with a known displacement to the camera. The camera detects the spot, and depth is calculated from its location in the camera's field of view.
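Both stereo and laser triangulation reduce to the same geometry: depth is inversely proportional to the horizontal shift (disparity) of a feature between the two viewpoints, Z = f * B / d. A minimal sketch, with illustrative numbers (not from any product on these slides):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from its horizontal disparity between two views.

    Z = f * B / d -- the triangulation relation shared by stereo cameras
    and laser triangulation sensors. All values here are illustrative.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 10 cm baseline, 35 px disparity
z = depth_from_disparity(700, 0.10, 35)
print(z)  # 2.0 (metres)
```

The formula also shows why accuracy degrades with distance: a fixed one-pixel disparity error corresponds to a larger depth error as Z grows.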
8. 3D Depth Sensing Technologies
Technology: Structured Light (Fixed or Variable Pattern)
Principle: Known pattern(s) of pixels are projected onto the scene and captured with a camera sensor. Depth and surface information of objects in the scene are calculated from the deformation of the pattern.

Technology: Time of Flight (ToF) Imager
Principle: Measure the delay from light emission to the arrival of its reflection, and determine distance based on the speed of light.
9. 3D Depth Sensing Technologies
Technology: Time of Flight (ToF) – LIDAR
Principle: Emit a pulse of light, detect its reflection, measure the delay between the emitted and reflected light, and determine distance based on the speed of light.

[Figure: LIDAR block diagram — the emitter illuminates the target scene; the detector receives the reflected light and compares it against a reference signal from the emitter]
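The pulsed ToF principle above is a one-line calculation: the measured delay covers the path to the target and back, so the distance is half the round trip at the speed of light.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(delay_s):
    """Pulsed ToF/LIDAR ranging: the measured delay covers the emitted
    pulse's trip to the target AND back, so divide by two."""
    return C * delay_s / 2.0

# A reflection arriving 66.7 ns after emission corresponds to ~10 m
print(round(distance_from_round_trip(66.7e-9), 2))  # 10.0
```

The tiny delays involved (~6.7 ns per metre of range) are why ToF sensors need fast detectors and precise timing electronics.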
10. 3D Sensing Technology Comparison
Technology | Distance Range | Depth Accuracy | Acquisition Speed | Hardware Size | Software Complexity | Low Light Performance | Outdoor Performance
Stereo Camera | Mid range | mm ~ cm | Medium | Large | High | Weak | Good
Structured Light (Fixed Pattern) | Short range (cm) to mid range (~5 m) | mm ~ cm | Fast | Large | Medium | Good | Weak
Structured Light (Variable Pattern) | Short range (cm) to mid range (~5 m) | μm ~ cm | Medium | Large | High | Good | Weak
Triangulation | Short range (~1 m) to long range (~40 m) | μm ~ cm | Fast | Large | Low | Good | Good
Time of Flight | Short range (~1 m) to long range (~100 m) | mm ~ cm | Fast | Medium | Low | Good | Good
11. Comparison of Selected 3D Depth Sensing Solutions¹ (Consumer Electronics & Near Field)
Advertised Specification | Microsoft Kinect V1 | Microsoft Kinect V2 | Intel RealSense F200 & SR300 | Intel RealSense R200 | Softkinetic | PMD Tech
Technology | Structured Light | ToF Camera | Structured Light | ToF Camera (x2) | ToF Camera | ToF Camera
Sensor Manufacturer | PrimeSense | Microsoft | Intel | Intel | Texas Instruments | Infineon
Depth "Camera" Resolution (pixel x pixel) | 320x240 | 512x424 | 640x480 | 640x480 | 320x240 | 352x288
Depth "Camera" Frame Rate (fps) | 30 | 30 | 30 | 30 | 12 - 60 | 5 - 45
FOV [H x V] (degrees) | 57 x 43 | 70 x 60 | 72 x 60 | 70 x 59 | 74 x 59 | 62 x 45
Depth Range (m) | 0.4 - 4.0 | 0.5 - 4.5 | 0.2 - 1.2 | Up to 4 | Up to 4 | 0.1 - 4

¹Based on published advertised specifications
12. Case Study: MicroVision MEMS, One solution, multiple markets

MicroVision has developed MEMS Laser Beam Scanning Technology as a platform to address diverse applications in large and growing markets.

Application | Industry Drivers | Industry Growth
Mobile Projection | Anytime, Anywhere Content Sharing | 32.4%
AR / VR Display | Personal Mobility | 194%
Heads-Up Display | Driver Safety & Infotainment | 27%

Sources: Personal Projection: CAGR 2014 - 2019, TechNavio; AR / VR Display: CAGR 2014 - 2019, TechNavio; Heads-Up Display: CAGR 2014 - 2024, ABI Research
13. Case Study: How PicoP® Scanning Technology Works

PicoP® Scanning Technology – Projection Display

[Figure: red, green, and blue lasers combined onto a single 2D MEMS micro mirror]

A single MEMS scanning mirror in an extremely tiny, low power package.
14. Case Study: MicroVision MEMS Evolution

MicroVision MEMS development started in 1997 to enable the long-term cost and size goals.

• Gen-1 Vacuum Scanner (~4 cc)
• Gen-2 Vacuum Scanner (~2 cc)
• Early Gen-3 Atmospheric Scanner (~2 cc)
• Gen-3 Atmospheric Scanner (< 1 cc)
• Gen-3 G3T-P5 (0.65 cc)

Early generation MEMS scanners: magnetic plus capacitive drive. Gen-3 MEMS scanner: simplified magnetic drive for atmospheric operation.
15. Case Study: MicroVision MEMS, One solution, multiple markets

The same MicroVision MEMS can also be applied to 3D Imaging & Sensing applications.

Application | Industry Drivers | Industry Growth
Mobile Projection | Anytime, Anywhere Content Sharing | 32.4%
AR / VR Display | Personal Mobility | 194%
Heads-Up Display | Driver Safety & Infotainment | 27%
3D Imaging & Sensing | Information Capture & Interaction | 23.4%

Sources: Personal Projection: CAGR 2014 - 2019, TechNavio; AR / VR Display: CAGR 2014 - 2019, TechNavio; Heads-Up Display: CAGR 2014 - 2024, ABI Research; 3D Imaging & Sensing: CAGR 2014 - 2020, Markets&Markets
16. Case Study: How MEMS Technology for Depth Sensing Works

Combined RGB Projection Display & Depth Sensing

[Figure: red, green, and blue lasers plus an IR laser share a single 2D MEMS micro mirror; an IR photodiode receives the reflected IR light]

Measure the time of flight from IR laser light emission to photodiode reception. Calculate distance based on the speed of light.

Applications:
• Natural User Interfaces
• 3D Scanning
• Industrial & Medical
• Metrology
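Conceptually, a laser beam scanning depth sensor builds its depth map one mirror position at a time: fire the IR laser, time the photodiode return, convert to distance, move on. A minimal sketch of that loop, with the emit/receive hardware stubbed out by a hypothetical `measure_delay` callback (this is my illustration of the principle, not MicroVision's firmware):

```python
C = 299_792_458.0  # speed of light, m/s

def acquire_depth_map(measure_delay, width, height):
    """Sketch of laser-beam-scanning depth capture: one ToF measurement
    per mirror position, accumulated into a depth map.

    `measure_delay(x, y)` stands in for the real emit/receive hardware
    and returns the round-trip delay in seconds for that scan position.
    """
    return [[C * measure_delay(x, y) / 2.0 for x in range(width)]
            for y in range(height)]

# Simulated scene: every scan position sees a target 3 m away
depth = acquire_depth_map(lambda x, y: 2 * 3.0 / C, width=4, height=3)
print(depth[0][0])  # 3.0
```

Because each pixel is an independent measurement, the scan pattern itself (resolution, frame rate, region of interest) can be reprogrammed without changing the optics, which is the flexibility the following slides emphasize.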
17. Case Study: How MEMS Technology for Depth Sensing Works

Depth Sensing only

[Figure: IR laser(s) directed onto a single 2D MEMS micro mirror; an IR photodiode receives the reflected IR light]

Applications:
• Robotics
• Navigation
• Mapping
18. Case Study: Unique Capabilities of MicroVision MEMS Laser Scanning for Depth Sensing
Feature | PicoP® Scanning Technology | Target Benefit
Size | Smallest size, thinnest (6 mm) | Enables new class of devices
Platform Technology | Same platform for both Projection Display and Depth Sensing | Enables interactive displays from a single integrated platform
Flexibility | Programmable: variable resolution; variable frame rate; supports both Time of Flight and Structured Light | Wide variety of resolution and frame rate combinations; enables both slower high-resolution and faster lower-resolution captures from the same platform
Time of Flight | Integrated IR laser(s) and photo detector | Compact size, integrated device; no camera sensor needed
Structured Light | Integrated IR laser(s) and separate IR camera | Focus-free structured light with programmable & dynamic patterns
Depth Map Resolution | ~128 - 2,308 x ~180 - 720 | Variable resolution as needed by application
Frame Rate | 15 ~ 120 Hz | Variable frame rate and latency as needed by the application
Pixel Persistence | ~15 ns | Blur-free capture of moving objects
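The resolution/frame-rate flexibility in the table reflects a basic trade-off: for a roughly fixed measurement throughput, higher-resolution depth maps take longer to acquire. The constant-throughput budget below is my simplifying assumption for illustration (actual behaviour depends on mirror dynamics), and the 2M points/s figure is hypothetical, not a MicroVision specification.

```python
def max_frame_rate(points_per_second, h_res, v_res):
    """Frame rate achievable for a given depth-map resolution, assuming a
    fixed measurement throughput (an illustrative assumption; real
    scanners are constrained by mirror resonance and laser timing)."""
    return points_per_second / (h_res * v_res)

BUDGET = 2_000_000  # hypothetical 2M depth measurements per second

# Full resolution vs. low resolution from the table's stated ranges
print(round(max_frame_rate(BUDGET, 2308, 720), 1))  # ~1.2 fps
print(round(max_frame_rate(BUDGET, 128, 180), 1))   # ~86.8 fps
```

This is the "slower high resolution vs. faster lower resolution" capture choice the table describes, exposed as a single programmable knob rather than a hardware change.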
19. Conclusion & Call to Action
• 3D sensing is a new, fast-growing application area for tomorrow's Internet of Things, presenting new opportunities for the sensor industry and supply chain.
• 3D depth sensors can be implemented with a variety of technologies, but MEMS based sensors provide unique capabilities that enable new product innovation and consequently drive further MEMS and sensor industry growth.
• Designing platform solutions that can be applied to a variety of applications can reduce development costs and shorten time to market for new applications.
• In addition to producing great MEMS and sensor hardware, collaboration with 3rd-party software developers (middleware, algorithms, OS platforms) is needed to enable full-stack, MEMS based, off-the-shelf advanced sensor solutions that reduce the time from product idea to working prototype and ultimately shorten time to market for innovative new products.