First, we demonstrated a wirelessly controlled MEMS scan module with imaging and laser-tracking capability that can be mounted and flown on a small UAV quadcopter. The MEMS scan module was reduced to a footprint of under 90 mm × 70 mm and a weight under 50 g when powered by the UAV's battery. The MEMS-mirror-based LiDAR system allows on-demand ranging of points or areas within the field of regard (FoR) without altering the UAV's position. Goals of the next design include increasing the LRF ranging frequency and stabilizing the pointing of the laser beam using the onboard inertial sensors and camera.
Keywords: MEMS mirrors, laser tracking, laser imaging, laser range finder, UAV, drone, LiDAR.
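The on-demand ranging described above rests on pulsed time-of-flight measurement. A minimal sketch of the range calculation, assuming a single-return pulse (the timing value is illustrative, not from the paper):

```python
# Pulsed time-of-flight ranging: the laser pulse travels to the target
# and back, so range = c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Range to target (m) from the measured round-trip time (s)."""
    return C * round_trip_s / 2.0

# A 1 microsecond round trip corresponds to roughly 150 m.
print(tof_range_m(1e-6))
```

The factor of two accounts for the out-and-back path; higher LRF ranging frequency shortens the interval between such measurements.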
MEMS Laser Scanning, the platform for next generation of 3D Depth Sensors | MicroVision
MicroVision's PicoP® scanning technology is a MEMS-based Laser Beam Scanning (LBS) solution for pico projection, heads-up-display, and augmented reality eyewear applications. The same flexible technology can also be applied to exciting new sensing applications, such as 3D depth sensing. Demand for small and low cost 3D depth sensing solutions is growing rapidly, driven by increasing demand for new Natural User Interface, Machine Vision, Robotic Navigation, Metrology, and Advanced Driver Assistance System (ADAS) solutions.
This presentation, prepared by MicroVision's Jari Honkanen and presented at the MEMS & Sensors Industry Group Conference Asia 2016, compares the existing 3D depth sensor solutions based on stereo cameras, structured light and 3D CMOS Cameras.
MicroVision then presents a new MEMS LBS depth sensor platform solution that can enable a new generation of tiny 3D depth sensors with capabilities such as dynamic variable resolution and variable acquisition speed. These dynamic LBS depth sensors are an enabling technology for a completely new set of innovative products and applications.
MEMS Laser Scanning, the platform for next generation of 3D Depth Sensors | Jari Honkanen
MicroVision's MEMS Laser Beam Scanning based 3D Depth Sensing Technology presentation by Jari Honkanen at MEMS & Sensors Industry Group Conference Asia 2016, Shanghai, China, September 13-14, 2016
MicroVision Scanning Engines Overview | January 2017 | MicroVision
CES was an excellent venue for MicroVision to meet with a number of customers and partners to present the capabilities of the three engine products they have announced.
Applications Generated from a MEMS-based Laser Beam Scanning Technology Platform | MicroVision
At this year's MEMS Engineer Forum, MicroVision's Director of Product Engineering, Jari Honkanen, spoke on Applications Generated from a MEMS-based Laser Beam Scanning Technology Platform.
"Laser Beam Scanning LiDAR: MEMS-Driven 3D Sensing Automotive Applications from Interior to the Exterior" presentation by Jari Honkanen at FutureCar 2017: New Era of Automotive Electronics Workshop, Nov 8-10, 2017, Georgia Institute of Technology, Atlanta, GA
"MEMS-based Laser Beam Scanning Technology Platform; Basis for Applications from Displays to 3D Sensors and 3D Printers" presentation by Jari Honkanen at MEMS Engineer Forum, April 26-27, 2017, Ryogoku, Tokyo, Japan
Laser Beam Scanning Short Throw Displays & an Exploration of Laser-Based Virt... | MicroVision
Selvan Viswanathan, a MicroVision principal engineer, presented on Laser Beam Scanning Short Throw Displays & Laser-Based Virtual Touchscreens at LDC 2017.
MEMS and Sensors in Automotive Applications on the Road to Autonomous Vehicle... | MicroVision
MicroVision's Director of Technical Marketing and Applications Development, Jari Honkanen, was invited to speak at MSIG's 12th annual MEMS & Sensors Executive Congress 2016 on MEMS and sensors as key enabling technologies in the automotive market. Honkanen also discussed the benefits of applying MicroVision's MEMS scanned virtual image HUD and LiDAR sensor concept for ADAS applications.
MEMS and Sensors in Automotive Applications on the Road to Autonomous Vehicle... | Jari Honkanen
MicroVision's MEMS Laser Beam Scanning Technology applied to HUD and ADAS applications presentation by Jari Honkanen at the MEMS & Sensors Executive Congress 2016, Scottsdale, AZ, November 10-11, 2016
Laser Beam Scanning LiDAR: MEMS-Driven 3D Sensing Automotive Applications fro... | MicroVision
MicroVision’s Director of Product Engineering, Jari Honkanen, gave a presentation at FUTURECAR 2017 detailing how MicroVision's Laser Beam Scanning technology for MEMS-based LiDAR solutions provides a unique approach that enables new 3D sensor capabilities in areas such as dynamic and variable resolution, acquisition speed, and field of view.
Mitchell Reifel (pmdtechnologies ag): pmd Time-of-Flight – the Swiss Army Kni... | AugmentedWorldExpo
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California, May 30 - June 1, 2018.
Mitchell Reifel (pmdtechnologies ag): pmd Time-of-Flight – the Swiss Army Knife of 3D depth sensing
pmd's Time-of-Flight technology is integrated into two AR smartphones on the market, and pmd ToF is in four AR headsets. This talk shows what pmd has achieved, what can be done with its 3D ToF technology, and why depth sensing is a secret sauce for AR, VR, and MR.
http://AugmentedWorldExpo.com
Neil Sarkar (AdHawk Microsystems): Ultra-Fast Eye Tracking Without Cameras fo... | AugmentedWorldExpo
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California, May 30 - June 1, 2018.
Neil Sarkar (AdHawk Microsystems): Ultra-Fast Eye Tracking Without Cameras for Mobile AR Headsets
This session showcases the first camera-free eye-tracking microsystem. A MEMS (microelectromechanical system) device on a tiny chip scans a beam of light across the eye 4,500 times every second. The latest specifications to be revealed at AWE are enabling foveated rendering in mobile platforms, endpoint prediction during saccades, and unprecedented insights into the state of the user.
http://AugmentedWorldExpo.com
Jonathan Waldern (DigiLens): DigiLens Switchable Bragg Grating Waveguide Opti... | AugmentedWorldExpo
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California, May 30 - June 1, 2018.
Jonathan Waldern (DigiLens): DigiLens Switchable Bragg Grating Waveguide Optics for Augmented Reality Applications
This session looks at the key features of DigiLens waveguide technology and discusses the optical design methodology for designing and simulating the performance of DigiLens waveguides, giving examples from motorcycle and auto HUD products.
http://AugmentedWorldExpo.com
These slides use concepts from my (Jeff Funk) course, Analyzing Hi-Tech Opportunities, to analyze how light field technology is becoming economically feasible for an increasing number of applications. Light field cameras record the full light field of a scene rather than a single focal plane. This capability enables users to change the focus of pictures after they have been taken and to more easily record 3D data. These features are becoming economically feasible because of rapid improvements in camera chips and micro-lens arrays (an example of microelectromechanical systems, MEMS). They offer alternative ways to do 3D sensing for automated vehicles and augmented reality and can enable faster data collection with telescopes.
Khaled Sarayeddine (Optinvent): Optical Technologies & Challenges for Next Ge... | AugmentedWorldExpo
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California, May 30 - June 1, 2018.
Khaled Sarayeddine (Optinvent): Optical Technologies & Challenges for Next Generation AR
The talk describes the current status of key optical technologies and the ongoing development needed to achieve small-footprint, large-FOV, high-resolution displays, as well as to accommodate light field features.
http://AugmentedWorldExpo.com
In response to an increasing threat of terrorism, personal surveillance at security checkpoints, such as airports, railway stations, and metro stations, is becoming increasingly important. Conventional systems in place at high-security checkpoints include metal detectors for personnel and X-ray systems for hand-carried items. These systems have been very effective, but have a number of shortcomings that will need to be addressed in future systems. Metal detectors can only detect metal targets, such as ordinary handguns and knives. The effectiveness of these detectors can vary depending on the quantity, orientation, and type of metal. Furthermore, no discrimination is possible between simple innocuous items, such as glasses, belt buckles, and keys, and actual threats. This leads to a rather high number of nuisance alarms. Modern threats include plastic or ceramic handguns and knives, as well as extremely dangerous items such as "plastic" and liquid explosives. These items cannot be detected with metal detectors.
A fundamentally different security measure is required to handle these threats. One type of system that addresses these concerns is "concealed weapon detection using image processing", which penetrates clothing barriers to image items concealed by common clothing. This can be accomplished using X-ray imaging systems and recently developed millimetre-wave imaging systems; X-ray systems are potentially effective for this purpose.
However, the perceived adverse health effects of X-ray exposure may hamper the public acceptance of these systems. Millimetre waves are high-frequency electromagnetic waves usually defined to be within the 30–300-GHz frequency band. The techniques and systems discussed here could operate at lower microwave frequency bands as well.
Millimetre-wave systems are nonionizing and, therefore, pose no known health hazard at moderate power levels. Millimetre-wave imaging systems are capable of penetrating common clothing barriers to form an image of a person as well as any concealed items. Millimetre-wave systems can achieve very high resolution due to the relatively short wavelength (1–10 mm).
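The 1–10 mm wavelength figure follows directly from the relation λ = c/f over the 30–300 GHz band; a quick numerical check:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz: float) -> float:
    """Free-space wavelength in millimetres for a given frequency in Hz."""
    return C / freq_hz * 1e3

# 30 GHz -> ~10 mm and 300 GHz -> ~1 mm, matching the 1-10 mm band.
print(wavelength_mm(30e9), wavelength_mm(300e9))
```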
Presented at Softwarica College of IT, Kathmandu
This presentation includes:
1. About AR
a. Definition
b. Examples
c. Image Recognition and Tracking
d. SLAM (Simultaneous Localization and Mapping)
e. Difference between VR and AR
2. History of AR
3. Current Scenario of AR
a. Statistics
b. Mobile AR Examples
c. Magic Leap and HoloLens
4. Getting Started with Unity
a. SDK Cheatsheet
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/qualcomm/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-mangen
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Michael Mangen, Product Manager for Camera and Computer Vision at Qualcomm, presents the "High-resolution 3D Reconstruction on a Mobile Processor" tutorial at the May 2016 Embedded Vision Summit.
Computer vision has come a long way. Use cases that were previously not possible in mass-market devices are now more accessible thanks to advances in depth sensors and mobile processors. In this presentation, Mangen provides an overview of how we are able to implement high-resolution 3D reconstruction – a capability typically requiring cloud/server processing – on a mobile processor. This is an exciting example of how new sensor technology and advanced mobile processors are bringing computer vision capabilities to broader markets.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-osterwood-tue
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Chris Osterwood, Founder and CEO of Capable Robot Components, presents the "How to Choose a 3D Vision Sensor" tutorial at the May 2019 Embedded Vision Summit.
Designers of autonomous vehicles, robots and many other systems are faced with a critical challenge: Which 3D vision sensor technology to use? There are a wide variety of sensors on the market, employing modalities including passive stereo, active stereo, time of flight, 2D and 3D lasers and monocular approaches. This talk provides an overview of 3D vision sensor technologies and their capabilities and limitations, based on Osterwood's experience selecting the right 3D technology and sensor for a diverse range of autonomous robot designs.
There is no perfect sensor technology and no perfect sensor, but there is always a sensor which best aligns with the requirements of your application—you just need to find it. Osterwood describes a quantitative and qualitative evaluation process for 3D vision sensors, including testing processes using both controlled environments and field testing, and some surprising characteristics and limitations he's uncovered through that testing.
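One way to make the evaluation process Osterwood describes concrete is a weighted scoring matrix over application requirements. The criteria, weights, and scores below are purely illustrative assumptions, not values from the talk:

```python
# Hypothetical weighted scoring of 3D sensor modalities against
# application requirements (all names and numbers are illustrative).
weights = {"range": 0.30, "resolution": 0.25, "sunlight": 0.25, "cost": 0.20}

candidates = {
    "passive_stereo": {"range": 6, "resolution": 7, "sunlight": 8, "cost": 9},
    "active_stereo":  {"range": 7, "resolution": 8, "sunlight": 5, "cost": 7},
    "tof_camera":     {"range": 7, "resolution": 6, "sunlight": 4, "cost": 6},
    "3d_lidar":       {"range": 9, "resolution": 5, "sunlight": 9, "cost": 2},
}

def score(scores: dict) -> float:
    """Weighted sum of per-criterion scores (0-10 scale)."""
    return sum(weights[k] * v for k, v in scores.items())

# Rank modalities by how well they align with these example weights.
ranked = sorted(candidates, key=lambda n: score(candidates[n]), reverse=True)
print(ranked)
```

Changing the weights to match a different application reorders the ranking, which is the point of the talk: there is no universally best sensor, only the best fit for a given requirement set.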
Track 6. Technological innovations in biomedical training and practice
Authors: Santiago González Izard, Juan A. Juanes Méndez, Pablo Ruisoto Palomera and Francisco J. García-Peñalvo
VRaaS [Virtual Reality as a Service]: Integrated architecture for VR Applicat... | ijsrd.com
Cloud computing is about sharing resources and moving services, computation, and data off-premises for cost and other advantages. Many cloud providers host their services over the web and mobile apps. SaaS, PaaS, and IaaS have been the basic delivery models, and most applications use one of these areas of cloud computing. But not all applications, particularly virtual reality applications, can run accurately and cheaply using the existing strategies. A new architecture or strategy has to be built for such applications. In this paper, we propose a new strategy named VRaaS (Virtual Reality as a Service) and its architecture for running such virtual reality applications. We also illustrate an example application that can be run using this architecture.
Emerging MEMS Patent Investigation 2014 Report by Yole Developpement | Yole Developpement
Who owns key patents for high-growth emerging MEMS innovations?
The report provides essential patent data, technology analysis and market forecasts for 7 selected Emerging MEMS (eMEMS) technologies that we believe will soon reach the marketplace: autofocus, AOC MEMS, chemical sensors, micro-speakers, scanning micro-mirrors, Si microfluidic and ultrasonic MEMS. The market for emerging MEMS covered in this report is expected to grow from US$171M in 2013 to US$2.8B in 2019, corresponding to a 58.2% CAGR.
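The quoted CAGR can be sanity-checked from the forecast endpoints. With the rounded figures given ($171M in 2013, $2.8B in 2019), the standard formula gives roughly 59%, consistent with the 58.2% the report presumably computes from unrounded data:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1.0 / years) - 1.0

# Rounded endpoints from the report: $171M (2013) to $2.8B (2019).
growth = cagr(171e6, 2.8e9, 2019 - 2013)
print(f"{growth:.1%}")
```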
UNDERSTAND WHICH COMPANIES OWN THE KEY PATENTS FOR THE FUTURE OF EMERGING MEMS
Patent activity related to eMEMS began in the 1990s, and we observe a takeoff of patent publications for autofocus (AF), scanning micro-mirrors, and ultrasonic MEMS technologies since the mid-2000s.
MEMS AF patent activity is recent but has increased since its emergence in 2009. This period also corresponds to the growth of the CMOS image sensor market for consumer applications, which requires more compact camera modules with more functionality. Cambridge Mechatronics and DigitalOptics are currently the leading patent holders, but Korean giant LG has become a major force since 2010. Today MEMS AF is still a hot topic, especially for cell phones and tablets, but it had not yet been adopted as of 2013. Although MEMS AF has promising market potential, it still faces strong market and technical challenges, which might explain the increasing patent activity of recent years.
More information on that report at http://www.i-micronews.com/mems-sensors-report/product/emerging-mems-patent-investigation-2014.html
An Introduction to Laser Scanning - Part 1: How does it all work? | 3D Laser Mapping
How does laser scanning work? Explore the ins and outs of laser scanning, or LiDAR as it is commonly known.
If you would like to find out more - please drop us a line at info@3dlasermapping.com or have a look at our website www.3dlasermapping.com
MEMS Microphone: Market, Applications and Business Trends 2014 Report by... | Yole Developpement
New opportunities and emerging applications continue to boost the MEMS microphone market with strong new players
MEMS MICROPHONE MARKET GROWTH WILL CONTINUE TO SOAR OVER THE NEXT SEVERAL YEARS
The MEMS microphone market has been growing since its first appearance. The huge worldwide adoption of smartphones, each using more than one MEMS microphone, has driven wide integration and rocketing growth of the MEMS microphone market. Some smartphones now use three microphones: one for voice capture, one or two for noise cancellation, and one for improved voice recognition.
The Internet of Things (IoT) and wearable electronics applications are emerging markets for MEMS microphones: new opportunities will create new momentum in this fast growing business. Smart watch, smart glasses, smart home & building, and vehicle voice control will be promising applications in the next few years.
Yole Développement believes that the MEMS microphone will remain one of the fastest growing MEMS components due to existing and new opportunities. The market will grow from $785M in 2013 to $1.65B by 2019. Shipments are expected to grow from 2.4B in 2013 to 6.6B units in 2019 which points to a market filled with opportunities for existing and new players.
In this report, Yole Développement has completely re-examined and updated the MEMS microphone market data to provide you with a clear 2013 to 2019 market forecast.
Medical and automotive MEMS microphone markets will grow, but the current markets are still small. The Internet of Things will definitely result in new opportunities for the automotive market. MEMS microphones facilitate man-machine communication and enable voice detection and control to make devices more effective. Increased device effectiveness will lead to greater opportunity.
In this report, we will provide a complete overview of traditional applications – mobile phone, notebook, camera and tablet, other consumer electronics, medical applications, and automotive applications – as well as identify emerging applications.
KNOWLES IS STILL ALIVE AND KICKING, BUT COMPETITION IS INCREASING!
The MEMS microphone market is huge and full of opportunities. Big players compete with different approaches and newcomers keep entering the market.
The market is changing: Knowles’ domination is weakening; it now has about 61% of the market, down from the 83% it had in 2010. However, total market expansion has increased Knowles’ MEMS microphone revenue from $173.5M in 2010 to $477M in 2013.
Even though Knowles is still the market leader, other players are competing to take its market share. However, compared to 2012, Knowles managed to regain some market share in 2013: the iPhone 5S and 5C use two microphones from Knowles and one from AAC Technologies (AAC). Other players still have a long way to go if they want to take market share from Knowles...
These slides discuss the falling cost of sensors, MEMS, and the Internet of Things. The costs of MEMS, transceivers and other components are falling, making the IoT economically feasible. These slides discuss these cost reductions in detail, with many examples of how the IoT is emerging across many types of industrial products.
Photolithography Equipment and Materials for Advanced Packaging, MEMS and LED...Yole Developpement
Growing photolithography equipment markets in Advanced Packaging, MEMS and LEDs are attracting new players – but they have to navigate complex roadmaps.
Clear leaders and outsiders: at first glance, the projection systems industries serving the “More Moore” and the “More than Moore” markets look similar…
The semiconductor industry is very often identified by its “More Moore” players, driven by technology downscaling and cost reduction. There’s one clear leader supplying photolithography tools to the “More Moore” industry: ASML, based in The Netherlands. It’s followed by two Japanese outsiders, Nikon and Canon. Providing this market with photolithography equipment is highly complex and there are gigantic barriers to market entry. Enormous R&D investments are required as the key features to print shrink ever further. Also, the tolerances specified are very aggressive and thus equipment complexity keeps on increasing...
MEMS microphone Patent Infringement April 2015 report published by Yole Devel...Yole Developpement
MEMS Microphone
Knowles, STMicroelectronics/OMRON, AAC Technologies/Infineon Technologies, InvenSense/Analog Devices
Market growth, challenged leader : patent battle can start
NOW IT’S TIME FOR PATENT BATTLE IN MEMS MICROPHONE BUSINESS
The MEMS microphone will remain one of the fastest growing MEMS components due to existing and new opportunities. According to Yole Développement, the market will grow from $785M in 2013 to $1.65B by 2019. Shipments are expected to grow from 2.4B units in 2013 to 6.6B units in 2019, which points to a market filled with opportunities for existing and new players. Knowles is the dominant player, with 61% market share in 2013 and 58% in 2014. A further decrease is expected in the coming years as new challengers enter the market: STMicroelectronics, OMRON, AAC Technologies, Infineon Technologies and InvenSense (/Analog Devices).
Together, these six players accounted for more than 80% of the MEMS microphone market in 2014. They are all developing innovative technical and manufacturing solutions and, in parallel of course, the right patents to protect their inventions.
In a patent infringement action, the potential sales volume plays a major role in assessing the damage award. This study therefore focuses on the latest MEMS microphone components supplied by these market leaders and challengers: Knowles (S1157, iPhone 5S/6), ST/OMRON (MP45DT01), AAC/Infineon (SR595, iPhone 5S/6), and InvenSense/ADI (ICS-43432).
This raises interesting questions: What are the similarities and differences in terms of technical and manufacturing choices at the device level? What is the related patent situation? Knowmade (specialized in patent analysis) and System Plus Consulting (specialized in reverse engineering and reverse costing) have joined their unique expertise to combine technology and manufacturing analysis with an understanding of patent claims, highlighting the risks of patent infringement between Knowles, STMicroelectronics, OMRON, AAC Technologies, Infineon Technologies, Analog Devices and InvenSense in the field of MEMS microphones. As the MEMS microphone market is growing very fast, now is the right time to understand what could happen between these companies and how their patents and claims differ from those of other players.
This report provides an overview of technology data and manufacturing process of S1157, MP45DT01, SR595 and ICS-43432 MEMS Microphone components.
A comparative study of the technology and manufacturing process of these MEMS Microphone components has been performed in order to highlight the technical similarities and differences of the product features.
More information on that report at http://www.i-micronews.com/reports.html
An unmanned aerial vehicle (UAV), commonly known as a drone, is an aircraft without a human pilot on board. UAVs can be remote controlled aircraft (e.g. flown by a pilot at a ground control station) or can fly autonomously based on pre-programmed flight plans or more complex dynamic automation systems.
A UAV is defined as being capable of controlled, sustained level flight and powered by a jet or reciprocating engine. In addition, a cruise missile can be considered to be a UAV, but is treated separately on the basis that the vehicle is the weapon.
Unmanned Aerial Vehicles (UAVs) are aircraft that fly without any humans on board. They are either remotely piloted or piloted by an onboard computer. These aircraft can be used in various military missions such as surveillance, reconnaissance, battle damage assessment, communications relay, minesweeping, hazardous substance detection and radar jamming. However, they can also be used in non-military missions such as detecting hazardous objects on train rails and investigating infected areas. Aircraft capable of hovering and vertical flight can also be used for indoor missions such as counter-terrorist operations.
This presentation outlines some of the most exciting medical MEMS and sensors devices that were introduced to the marketplace in the past few years. Some of the devices are already in volume production, and some are still being commercialized.
Apple's Smart Sensor Technologies -- market research report (sample)MEMS Journal, Inc.
This comprehensive 204-page report covers the latest and emerging sensors, microsystems, and MEMS technologies which Apple is developing and using for its products including the iPhone and the Watch. To order the full version, please go here: https://fs8.formsite.com/medved44/form33/index.html
TEDx Manchester: AI & The Future of WorkVolker Hirsch
TEDx Manchester talk on artificial intelligence (AI) and how the ascent of AI and robotics impacts our future work environments.
The video of the talk is now also available here: https://youtu.be/dRw4d2Si8LA
Closed Loop Control of Gimbal-less MEMS Mirrors for Increased Bandwidth in Li...Ping Hsu
We presented a low SWaP wirelessly controlled MEMS mirror-based LiDAR prototype which utilized an OEM laser rangefinder for distance measurement [1]. The MEMS mirror was run in open loop based on its exceptionally fast design and high repeatability performance. However, to further extend the bandwidth and incorporate necessary eye-safety features, we recently focused on providing mirror position feedback and running the system in closed loop control.
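The position-feedback loop described above can be sketched as a simple proportional-integral controller. This is a minimal illustrative model, not Mirrorcle's actual controller: the mirror is approximated as a first-order lag, and all gains, time constants, and names are hypothetical.

```python
def run_closed_loop(target_deg, steps=2000, dt=1e-5):
    """Drive a MEMS mirror axis toward target_deg with PI feedback.

    The mirror is modeled as a first-order lag with time constant tau;
    `error` stands in for the reading of a position sensor.
    """
    kp, ki = 0.8, 600.0   # controller gains (illustrative values)
    tau = 5e-4            # mirror response time constant [s] (assumed)
    angle = 0.0           # current optical deflection [deg]
    integral = 0.0
    for _ in range(steps):
        error = target_deg - angle           # position sensor feedback
        integral += error * dt
        drive = kp * error + ki * integral   # actuator command
        # first-order mirror dynamics: angle relaxes toward the drive value
        angle += (drive - angle) * (dt / tau)
    return angle
```

With these gains the loop is overdamped and settles within the simulated 20ms window; the integral term removes the steady-state error a purely proportional drive would leave.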
A Fast Single-Pixel Laser Imager for VR/AR Headset TrackingPing Hsu
In this work we demonstrate a highly flexible laser imaging system for 3D sensing applications such as in tracking of VR/AR headsets, hands and gestures. The system uses a MEMS mirror scan module to transmit low power laser pulses over programmable areas within a field of view and uses a single photodiode to measure the reflected light...
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...PetteriTeikariPhD
Shallow literature analysis on recent trends in computational ophthalmic imaging with focus on neurodegenerative disease imaging / oculomics.
Open-ended literature review on what you could be building next.
#1/2: Hardware
#2/2: Computational imaging
Alternative download link:
https://www.dropbox.com/scl/fi/d34pgi3xopfjbrcqj2lvi/retina_imaging_2024_computational.pdf?rlkey=xnt1dbe8rafyowocl9cbgjh3p&dl=0
Visual and light detection and ranging-based simultaneous localization and m...IJECEIAES
In recent years, there has been strong demand for self-driving cars. For safe navigation, self-driving cars need both precise localization and robust mapping. While the global navigation satellite system (GNSS) can be used to locate vehicles, it has limitations, such as satellite signal absence (tunnels and caves), which restrict its use in urban scenarios. Simultaneous localization and mapping (SLAM) is an excellent solution for identifying a vehicle’s position while at the same time constructing a representation of the environment. Visual and light detection and ranging (LIDAR) based SLAM refers to using cameras and LIDAR as sources of external information. This paper presents an implementation of a SLAM algorithm for building a map of the environment and obtaining the car’s trajectory using LIDAR scans. A detailed overview of current visual and LIDAR SLAM approaches is also provided and discussed. Simulation results based on LIDAR scans indicate that SLAM is convenient and helpful in localization and mapping.
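The mapping half of such a LIDAR SLAM pipeline can be sketched as accumulating scan endpoints into an occupancy grid, assuming the vehicle pose is already estimated. This is an illustrative minimal version, not the paper's implementation; the data structures and cell size are assumptions.

```python
import math

def update_occupancy_grid(grid, pose, scan, cell=0.5):
    """Accumulate LIDAR returns into a sparse occupancy grid.

    grid: dict mapping (ix, iy) cell indices to hit counts.
    pose: (x, y, heading_rad) of the sensor in world frame.
    scan: list of (bearing_rad, range_m) returns.
    """
    x, y, th = pose
    for bearing, rng in scan:
        # project the return endpoint into the world frame
        hx = x + rng * math.cos(th + bearing)
        hy = y + rng * math.sin(th + bearing)
        key = (int(hx // cell), int(hy // cell))
        grid[key] = grid.get(key, 0) + 1  # accumulate occupancy evidence
    return grid
```

A full SLAM system would interleave this map update with a localization step (e.g. scan matching against the grid) rather than assume the pose is known.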
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...csandit
In today’s technological world, the importance of security measures is widely recognized, and researchers have made many attempts to address it; one of these is flying robot technology. One well-known use of flying robots is in security and surveillance, which makes these devices extremely practical, not only for their unmanned movement but also for their unique manoeuvrability over arbitrary areas. In this research, the automatic landing of a flying robot is discussed. The system is based on frequent interrupts sent from the main microcontroller to the camera module in order to capture images; these images are analysed by an image processing system based on edge detection, after which the system decides whether or not to land on the ground. This method shows improved precision in experiments.
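An edge-detection landing check of the kind described above can be sketched as follows. The gradient measure and thresholds here are illustrative assumptions, not the paper's actual implementation: a cluttered ground patch produces many strong gradients, so landing is only allowed when the edge density is low.

```python
def edge_fraction(img, thresh=30):
    """Fraction of pixels whose horizontal or vertical intensity
    gradient exceeds thresh. img: 2D list of grayscale values 0-255."""
    h, w = len(img), len(img[0])
    edges = 0
    for r in range(h - 1):
        for c in range(w - 1):
            gx = abs(img[r][c + 1] - img[r][c])  # horizontal gradient
            gy = abs(img[r + 1][c] - img[r][c])  # vertical gradient
            if max(gx, gy) > thresh:
                edges += 1
    return edges / ((h - 1) * (w - 1))

def clear_to_land(img, max_clutter=0.10):
    """Allow landing only if the ground patch shows little edge clutter."""
    return edge_fraction(img) < max_clutter
```

A real system would use a proper edge operator (e.g. Sobel or Canny) on the camera frames, but the decision logic is the same: threshold an edge-density statistic.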
Ultra High Focusing Speed: up to 12,000Hz
Long Term Reliability: more than 1 billion operating cycles
Ultra Small Power Consumption: less than 1mA
Shock Resistance: more than 5000G
Operating Temperature Range: -30 ~ 100°C
Single Camera Based 3D Camera
Volumetrically 25% Smaller than Current Technology Based Module
Real-Time Multi Focusing
Applicable to Volumetric 3D DISPLAY, Compact Auto Focus and 3D camera module
Air Programming on Sunspot with use of Wireless Networksijsrd.com
Wireless Sensor Networks (WSNs) provide an effective means to monitor physical environments. The computing nodes in a WSN are resource-constrained devices whose resources need to be used sparingly. The main requirement of a WSN is to operate unattended in remote locations for extended periods of time. Physical conditions, environmental conditions, upgrades, user preferences, and errors within the code can all contribute to the need to modify currently running applications; therefore, reprogramming of sensor nodes is required. One of the important concepts associated with WSNs is over-the-air (OTA) programming. This concept has so far been utilized for Imote2 sensors through the Deluge port (DP), which has been able to successfully reboot each application. However, this rebooting is still not reliable and secure, as certain security features are affected in the process. I will provide a more reliable, robust and secure system with the enriched functionality of OTA programming on Sun SPOT. Some important features of the Sun SPOT utilized in this practical approach are that it supports isolated application models such as sensor networks and allows running multiple applications at once. It does not create overhead on the system node, as it provides lower-level asynchronous messaging for proper message delivery. It also supports migration from one device to another. This paper focuses on developing a multi-hop routing protocol for communication among Sun SPOT sensor nodes and a user front-end (i.e. a visualizer) to display the values collected from all the nodes. To test the routing protocol before deploying it to sensor nodes, a simulation using J-Sim is created.
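The multi-hop dissemination underlying such routing protocols can be sketched as duplicate-suppressed flooding: each node rebroadcasts a message it has not seen before, so the message eventually reaches every connected node. This is an illustrative model only, not the Sun SPOT or J-Sim API.

```python
def flood(network, source):
    """Flood a message from source through a multi-hop network.

    network: dict mapping each node to a list of radio neighbours.
    Returns the set of nodes the message reached.
    """
    seen = {source}       # nodes that already rebroadcast the message
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in network.get(node, []):
                if nb not in seen:  # suppress duplicate rebroadcasts
                    seen.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return seen
```

Real OTA protocols such as Deluge add paging, version negotiation, and retransmission on top of this basic epidemic dissemination.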
Design and Structural Analysis for an Autonomous UAV System Consisting of Sla...IOSR Journals
Abstract: An Unmanned Aerial Vehicle (UAV) is an aircraft without a human pilot. It can either be controlled manually by a pilot on the ground using a transceiver or be programmed to operate autonomously. In the proposed control system, multiple slave Micro Aerial Vehicles (MAVs) are dispatched from a master UAV for surveillance. All the MAVs are synchronized with each other through the master UAV, which highlights their purpose and position. The master UAV acts as a mobile base for the surveillance: it stores the data collected by the MAVs and transmits it to a remote base. A design of the UAV-MAV system and its performance analysis is presented.
Keywords- Autonomous control, Characteristics, Linux, Master / Slave Aerial Vehicles, NX 8.0 Nastran, Surveillance.
Wireless Multimedia Sensor Network: A Survey on Multimedia Sensorsidescitation
Recently, due to progress in Complementary Metal Oxide Semiconductor (CMOS) technology, Wireless Multimedia Sensor Networks (WMSNs) have become a focus of research across a broad range of applications. In this survey paper, different WMSN applications and research and design challenges are outlined. In addition, the different commercially available multimedia sensors are discussed in detail and compared. Beyond commercial multimedia sensors, some experimental multimedia sensor prototypes are also discussed, as are several experimentally deployed WMSN test beds. A few Wireless Sensor Network (WSN) simulators and emulators are also reviewed. Depending on requirements, physical multimedia sensors can be integrated or embedded within the available simulators to obtain more accurate results or better visualization.
Similar to UAV-Borne LiDAR with MEMS Mirror Based Scanning Capability (20)
Two-Axis Scanning Mirror for Free-Space Optical Communication between UAVsPing Hsu
We have developed a SOI/SOI wafer bonding process to design and fabricate two-axis scanning mirrors with excellent performance. These mirrors are used to steer laser beams in free-space optical communication between UAVs.
Fast and High-Precision 3D Tracking and Position Measurement with MEMS Microm...Ping Hsu
We demonstrate real-time fast-motion tracking of an object in a 3D volume, while obtaining its precise XYZ coordinates. Two separate scanning MEMS micromirror sub-systems track the object in a 20 kHz closed loop. A demonstration system capable of tracking full-speed human hand motion provides position information at up to 5m distance with 16-bit precision, or <=20μm precision on the X and Y axes (up/down, left/right), and precision on the depth (Z-axis) from 10μm to 1.5mm, depending on distance.
A fully-functional 2x2-element array of tip-tilt-piston micromirrors with large deflection angles and large piston range is presented. A control system has also been developed which includes an extensive software package and a multi-channel high-voltage amplifier that allows the user to independently control all available degrees of freedom (i.e. tip, tilt, or piston) for each individual device in the array.
MEMS Mirror Based Dynamic Solid State Lighting ModulePing Hsu
Lighting in Homes will be programmable and designable
Commercial and home security systems will have steerable spotlights
UAVs and helicopter spotlights for search & rescue, security, etc.
MEMS BASED OPTICAL COHERENCE TOMOGRAPHY IMAGING SYSTEM AND OPTICAL BIOPSY PRO...Ping Hsu
A fully-functional, real-time optical coherence tomography (OCT) system based on a high-speed, gimbal-less micromachined scanning mirror is presented. The designed MEMS control architecture allows the MEMS-based imaging probes to be connected to a time-domain, a Fourier-domain or a spectral-domain OCT system. Furthermore, a variety of probes optimized for specific laboratory or clinical applications, including various minimally invasive endoscopic, handheld or lab-bench mounted probes, may be switched between effortlessly and important driving parameters adjusted in real time. In addition, artifact-free imaging speeds of 33μs per voxel have been achieved while imaging a 1.4mm×1.4mm×1.4mm region with 5μm×5μm×5μm sampling resolution (SD-OCT system).
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
UAV-Borne LiDAR with MEMS Mirror Based Scanning Capability
1. SPIE Defense and Commercial Sensing Conference 2016, April 20, 2016, Baltimore, MD
*abhishek@mirrorcletech.com Phone: +1-510-524-8820 www.mirrorcletech.com
UAV-Borne LiDAR with MEMS Mirror Based Scanning Capability
Abhishek Kasturi*, Veljko Milanovic, Bryan H. Atwood, James Yang
Mirrorcle Technologies, Inc., 4905 Central Ave, Richmond, CA 94804
ABSTRACT
Firstly, we demonstrated a wirelessly controlled MEMS scan module with imaging and laser tracking capability which
can be mounted and flown on a small UAV quadcopter. The MEMS scan module was reduced down to a small volume
of <90mm x 60mm x 40mm, weighing less than 40g and consuming less than 750mW of power using a ~5mW laser.
This MEMS scan module was controlled by a smartphone via Bluetooth while flying on a drone, and could project
vector content, text, and perform laser based tracking. Also, a “point-and-range” LiDAR module was developed for
UAV applications based on low SWaP (Size, Weight and Power) gimbal-less MEMS mirror beam-steering technology
and off-the-shelf OEM LRF modules. For demonstration purposes of an integrated laser range finder module, we used a
simple off-the-shelf OEM laser range finder (LRF) with a 100m range, +/-1.5mm accuracy, and 4Hz ranging capability.
The LRF's receiver optics were modified to accept 20° of angle, matching the transmitter's FoR. A relatively large (5.0mm) diameter MEMS mirror with +/-10° optical scanning angle was utilized in the demonstration to maintain the small beam divergence of the module. The complete LiDAR prototype can fit into a small volume of <70mm x 60mm x 60mm, and weighs <50g when powered by the UAV's battery. The MEMS mirror based LiDAR system allows for on-demand ranging of points or areas within the FoR without altering the UAV's position. Increasing the LRF ranging
frequency and stabilizing the pointing of the laser beam by utilizing the onboard inertial sensors and the camera are
additional goals of the next design.
Keywords: MEMS Mirrors, laser tracking, laser imaging, laser range finder, UAV, drone, LiDAR.
1. INTRODUCTION
1.1 Background
With the increased commercial availability of UAV technology, UAVs or drones have become popular as toys,
development projects for hobbyists, and even tools to aid in imaging, delivery, etc. Some drones offer open-source access to their hardware and software, allowing modifications to their control and flight. This enables the introduction of new “add-on” technologies that change how a drone operates, especially in the case of autonomous flight. Since cameras have already been introduced on drones for aerial filming and imaging, the simplest step would be to use the camera for 3D imaging to assist the drone as it flies. Camera-based imaging can be used to estimate 3D information within a given field of view; camera systems are prevalent in the electronic gaming industry, are used on drones to help users monitor their flight path, and appear in various other monitoring systems. However, a single camera has limitations: its resolution is fixed and relatively limited, and extracting 3D information is difficult even from high-quality imagery without any true distance information. In addition, a camera's ultimate limitation is always lighting: it cannot obtain reliable data from a relatively or completely dark scene or object.
Laser based measurement systems are commonly used in various industries in situations where an accurate measurement
in position, distance, or the azimuth and elevation of a target is required. There are various technologies that can be used
to perform these measurements. One method would be time-of-flight (TOF) measurement using a laser range finder
(LRF). There are many commercially available LRFs that are used in construction, sports, recreation, etc. A system such
as active LiDAR can be used to obtain a cloud of 3D data. A compact system that utilizes MEMS mirrors, demonstrated by Moss et al. [4], has excellent performance parameters but is still significantly 'overweight' and 'oversized' for UAV applications, with its 8.9cm x 15.2cm x 28cm dimensions and ~2.63kg mass [4],[5]. This type of system, like similar commercially available LiDAR systems with a wide field of regard, is presently too big and costly even for automotive applications and requires significant further development. Its power consumption of 30W [4] would also need to be significantly reduced. Due to the limitations of present camera-based and LiDAR-based systems for drone applications, a lightweight, low-cost solution utilizing laser-based measurement with off-the-shelf components should be considered.
A possible solution for a simple and functional UAV-borne LiDAR could be to have a compact MEMS-mirror based
beam steering system integrated with a pocket-sized off-the-shelf laser rangefinder (LRF) to image a “point cloud” in
front of the drone's flight path, allowing the system to adjust for any objects in the way. This point cloud will be less dense than the image produced by a scanning LiDAR, but it reduces the drone's payload to an LRF module, a beam steering device, and a controller. In fact, it is anticipated that multiple such scanning systems could be utilized on each drone, so each one should represent a nearly negligible payload in size, weight, and power consumption (SWaP), and ultimately in cost as well.
1.2 Goal of this Work
The first goal of this project is to demonstrate that it is possible to have a fully wirelessly-controllable and programmable
laser beam scanning module with SWaP so low that it can be flown on a toy quad-copter. Specifically, a module
consisting of a MEMS Scan Head (laser, MEMS mirror, and optics), an electronic Controller, and a power source (a Li-Po battery) which can be operated by the drone controller or by a second user wirelessly. Therefore, our first goal is low
SWaP for a laser scanning system.
The second goal is to demonstrate that with the simple addition of a sensitive photodetector, tuned for the system's own
laser wavelength and modulation, we can achieve a number of additional functions with hardly any penalty on the total
SWaP. We want to demonstrate imaging functions such as simple threshold-based collision avoidance, as well as
tracking functions such as keeping the laser beam pointed at a marked spot on a wall while the drone is in flight.
Finally, the third goal is to combine an off-the-shelf OEM laser rangefinder with the module and obtain point clouds,
arrays of distances, in the forward looking field of regard of 24° while in flight.
2. MEMS MIRROR BASED IMAGER AND LASER TRACKER
2.1 Gimbal-less MEMS mirrors overview
Gimbal-less MEMS mirror devices (Figure 1) are designed and optimized for point-to-point or “quasistatic” optical
beam steering. This means that any steady-state analog actuation voltage results in a specific steady-state analog angle
of rotation of the mirror and consequently in a specific optical beam direction. Near dc (0 Hz), there is a one-to-one
correspondence of actuation voltages and resulting angles: it is highly repeatable with no measurable degradation over
time. A sequence of actuation voltages results in a sequence of mirror angles for point-to-point beam-steering. These
devices can be operated over a very wide bandwidth from dc (maintaining position at constant voltage with nearly zero
power consumption at the device) to several thousand Hertz with mechanical tilt range of -6° to +6° on each axis or
larger depending on the design [1]. At higher frequencies closer to device resonance, the full device response must be
taken into account; however, it can be generally stated that this technology enables arbitrary control of laser beam
position and velocity up to a certain velocity limit. Such fast and broadband capability allows nearly arbitrary waveforms
such as vector graphics, constant velocity line scanning, point-to-point step scanning, and resonant-quasistatic rastering
(one axis resonant, the other quasi-static) discussed more in the next section. These capabilities are utilized in established
applications such as 3D scanning [2], biomedical imaging [3], free-space communication, LiDAR [4],[5], and laser
tracking [6],[7].
Figure 1. a) A4QQ8.3 devices with a bonded 6.4mm mirror, b) A typical MEMS scan head consisting of a laser module, a
folding mirror, a MEMS mirror, and a scanning lens, c) A miniaturized MEMS driver connected to a 3.6mm mirror
2.2 Various scanning modes
Each axis of the gimbal-less two-axis MEMS mirrors [9][10] can be independently commanded to arbitrary position and
velocity – in a given bandwidth of available steering movement. Hence the user has a wide range of possibilities to
design scanning patterns for different applications. In Figure 2 we show some different modalities of scanning which are
frequently used in applications. There are no clear distinctions between different scanning modes – in all cases there are
simply different waveforms provided to each axis which can continually vary from dc to any frequency and mix of
frequencies. This category is generalized as the quasi-static mode – where the mirror position responds directly to the
driver commands, proportionally and with minimal phase delay. The same MEMS mirrors can also operate in the dynamic, resonant mode. When driven near the resonant frequency, devices produce significantly larger angles at lower operating voltages, with sinusoidal motion and a phase delay relative to the drive signal. The MEMS actuators utilize single-crystal silicon springs to support the micromirror and to provide restoring force during actuation. The combination of the springs and the mirror's inertia results in a 2nd-order mass-spring system with a relatively high quality factor (Q) of 50-100. Therefore, in this mode, low actuation voltages at frequencies near resonance result in large bi-directional rotation angles. Resonant frequencies are in the range of several kHz.
It is possible to define three modes of operation, as described here and depicted in photos of green laser beam steering in
Figure 2:
a) First mode is point-to-point mode or quasi-static mode. In this case both axes are utilizing the wide bandwidth
of operation of the device from dc to some frequency, and not allowing for resonance and ringing. Therefore the mirror
can hold a dc position, move at a uniform velocity, perform vector graphics, etc.
b) The second mode is a mixed mode in which one axis is used in quasi-static mode, and the other axis is used in
resonant mode. A typical use case is to run one axis very fast (e.g. a few kHz) to create horizontal lines, and to run the
other axis with a sawtooth-like waveform to create a raster pattern that covers a rectangular display or imaging area.
Again, the axis operating at resonance should have its parameters carefully obtained, initially at low voltages and angles,
to avoid exceeding maximum mechanical angles.
c) Third mode is resonant mode. In this case both axes are utilizing the narrow, high gain resonance to obtain
large angles of deflection and relatively low voltages and high speeds. Motion is limited to very narrowband, sinusoidal
trajectories with a phase lag to the applied voltage. It is not necessary to drive the device at the exact resonant peak, as the resonant mode can be obtained within a few percent of the highest gain point. The resulting 2D motion describes circles,
ellipses, and various higher order Lissajous patterns and can be modulated at some rate. Devices designed for point-to-
point mode, when driven near and at resonance, easily exceed safe operating angles and break. It is important to
approach resonant mode of operation with very small sinusoidal driving voltages about the bias position and very
carefully search for a desired operating point and angle, so as not to exceed a given device‟s maximum mechanical angle
limit.
Figure 2. Photographs of examples of using Mirrorcle MEMS Mirror in (a) point-to-point scanning mode (quasi-static) on
both axes with the laser beam stopping at each angle, then stepping to the next angle, (b) resonant scanning mode on the x-
axis (sinusoidal beam motion) and quasi-static on the y-axis (triangle wave motion in this example), and (c) resonant
scanning mode on both axes, showing a 2D resonant Lissajous pattern.
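The three modes above can be sketched as drive-waveform generators. This is a minimal illustration only, not the vendor's API: the normalized angle units, the 50 kS/s update rate, and the function names are all assumptions.

```python
import numpy as np

SAMPLE_RATE = 50_000  # drive-waveform update rate in samples/s (assumed)


def quasi_static_vector(points, seg_time=0.01):
    """Mode (a): point-to-point/vector scanning. Linearly interpolate
    between normalized (x, y) targets in [-1, 1], one segment per seg_time."""
    n = int(round(seg_time * SAMPLE_RATE))
    xs, ys = [], []
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        t = np.linspace(0.0, 1.0, n, endpoint=False)
        xs.append(x0 + (x1 - x0) * t)
        ys.append(y0 + (y1 - y0) * t)
    return np.concatenate(xs), np.concatenate(ys)


def raster_mixed(f_resonant=2000.0, frame_time=0.1):
    """Mode (b): resonant sinusoid on X (fast horizontal lines) and a
    quasi-static ramp on Y, covering a rectangular raster each frame."""
    n = int(round(frame_time * SAMPLE_RATE))
    t = np.arange(n) / SAMPLE_RATE
    x = np.sin(2 * np.pi * f_resonant * t)
    y = 2.0 * t / frame_time - 1.0          # slow ramp from -1 to +1
    return x, y


def lissajous(fx=200.0, fy=400.0, ax=0.2, ay=0.2, duration=0.005):
    """Mode (c): small-amplitude sinusoids on both axes. A 1:2 frequency
    ratio draws the figure-eight used for tracking in Section 2.4."""
    n = int(round(duration * SAMPLE_RATE))
    t = np.arange(n) / SAMPLE_RATE
    return ax * np.sin(2 * np.pi * fx * t), ay * np.sin(2 * np.pi * fy * t)
```

Consistent with the warning above, the resonant amplitudes (`ax`, `ay`, and the amplitude applied to the resonant axis in mode (b)) would in practice be ramped up from very small values so as not to exceed a device's maximum mechanical angle.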
2.3 Laser Imaging with MEMS Mirrors
As mentioned above, a laser beam can be arbitrarily deflected to a desired location on a target, and can for example
raster over an area on a target object. In applications such as laser marking, 3D printing, target designation or dazzling,
that is sufficient. On the other hand, in some applications we wish to obtain information about the target object by
receiving some feedback from the laser-illuminated point or pixel. In this laser-based imaging mode, a photodetector is
arranged to receive back any retro-reflected radiation from the illuminated spot on the target. Laser imaging with this
type of MEMS mirror has been implemented in several research and commercial applications, such as a handheld
biomedical imaging system [3]. In the optical coherence tomography (OCT) system of [3], a laser is scanned in a raster
pattern with a small MEMS mirror and the intensity of the reflection is measured to generate a volumetric scan. In fact it
is very typical that a specific optical metrology method or modality such as OCT has initial applications as a single-point
technique, and then in later development adds two-axis beam steering to gain the full imaging or mapping capability that comes with it. Therefore, there are seemingly countless applications in which laser beams are used to obtain information about samples, tissue, eyes, etc., which may initially utilize only a single-point measurement or moving stages to achieve scanning, and can now utilize beam-steering MEMS mirrors.
Low volume and low weight scanning and tracking applications can be achieved with simple photodiode detection.
These systems work in the same manner as those used before in biomedical imaging applications. The MEMS mirror
steers the laser to a certain target and the current of the photodetector is measured to determine the magnitude of laser
light that is reflected back. If the current exceeds a set threshold, the object is considered detected. While not highly
accurate, this technique is sufficient for certain imaging applications. For example, drone collision avoidance might be achieved by determining whether the laser is being reflected off of a nearby wall. A security system could be devised that triggers an alarm if a previously unreflected laser is suddenly reflected by a person or object passing through the beam.
Figure 3. a) Single aperture laser imaging system using a MEMS mirror and single photo-detector; b) Dual aperture laser
imaging system. The MEMS mirror can be much smaller in this system since the mirror is not responsible for handling the
returning light; c) demonstration of a dual aperture imaging system with the MEMS mirror and laser in the aluminum
housing and an attached photo-detector and receiving lens
It should be mentioned that a single photo-detector based imaging setup can be constructed in many ways. In Figure 3a
and 3b we compare two architectures which are often used and have considerably different effects on the MEMS
requirements. A single aperture system using a beam splitter – such as in [7] – can be constructed so that a laser is passed
through a beam splitter to the MEMS mirror, which directs the beam towards a target (Figure 3a). Any diffuse or retro-
reflected light which returns in the direction of the mirror reflects off of the MEMS mirror back through the beam splitter
where it is redirected to a photo-detector. The setup has many advantages in terms of its high sensitivity for the specific
target area and low sensitivity to external unwanted radiation. However, its main disadvantage is that the receive aperture
is limited to the size of the MEMS mirror which is typically quite small. A dual aperture system – such as in [4],[5] – is
characterized by a laser beam that is steered with a MEMS mirror through one aperture towards a reflective target, while
any light returning from the target is received through a proximate and larger second aperture which contains the photo-
detector (Figure 3b).
Figure 4. a) “Single pixel camera” system made from a MEMS scanning mirror and photo-detector which is able to
reconstruct images from raster scanning a laser and reading the reflected light for each offset of the MEMS mirror, b) The
“single pixel camera” system showing an image reconstructed by raster scanning the laser over a user’s hand.
The key advantage is that the receiver can be effectively as large as desired independent of the size of the MEMS mirror.
The disadvantage is that the receiver is constantly “looking” at the complete field of view and may be more susceptible
to unwanted background radiation. While customers for medical applications utilize both depending on the specific
requirements of the system, in general the preferred architecture with MEMS mirrors is the dual aperture style. This way
the MEMS mirror size can be kept small, which helps with speed and shock tolerance. At the same time, the receiving aperture can be independently designed, presumably of considerably larger size.
We have demonstrated a dual aperture imaging system (Figure 3c) with an aluminum housing that contains the laser,
MEMS mirror and lens. The receiving lens and sensor are mounted on top of the aluminum housing. The laser and MEMS driver and processing system (not shown) are housed in a separate unit. A scanning system with a single photo-detector can in effect be used as a “single pixel” camera. By raster scanning the laser beam with the MEMS mirror, the value of the returned light can be measured for each pixel. An image is constructed by using the offsets of the laser system as the pixel positions. Figure 4a and Figure 4b show such a system when used to scan objects with different patterns. In particular for objects that have highly contrasting reflection patterns, a clearly defined image can be generated.
2.4 Laser Tracking with MEMS Mirrors
These laser imaging systems can be extended to provide real-time tracking of targets. One example is to place an object marked with retro-reflective tape in the laser scanner's field of view. When the laser is aimed to reflect off of the object, the photosensor reads a high current. As the object moves, a feedback system can steer the laser to continue tracking the “high current” area and therefore the object. The advantage, of course, is that the system does not need to continually raster over the entire field of view, a full scan which may take e.g. 100ms-200ms. Instead the system only scans over a very limited 'tracking field of view' taking e.g. 5ms to complete the localized search. To
achieve this tracking, the laser is steered in a small, repeating pattern such as a Lissajous pattern (sinusoidal waveforms
for each axis). This is depicted in Figure 5a. At time Tn-2, the center of the pattern is clearly not centered over
target circle. As the pattern is drawn by the laser beam, the photo-detector values are measured so that the system can
map the strength of reflected light for each sample (segment) of the pattern. In this example, as the system integrates the waveform signal for each axis weighted by the strength of the reflected light, it will find that positive values result on both X and Y, giving the system correction values to 'nudge' its own center offset position toward the right and up. This feedback system can then steer the centroid of the pattern closer to the centroid of the target circle. At time Tn-1, another pattern is drawn sample by sample, and again a measurement of retro-reflected light strength is taken at each sample. The integrals for X and Y will again result in positive values, recommending another adjustment to the right and up. Finally, at time Tn, the pattern is centered on the target object. In this case the integration will yield zeros for X and Y, as there are equal numbers of hits on all sides of the pattern. In this manner, an object can be continuously tracked and the
corresponding azimuth and elevation values are provided by the MEMS mirror. A typical setup uses e.g. a 200Hz sinusoid and a 400Hz sinusoid drawing a figure eight which is completed every 5ms. Thus, every 5ms a new integration is completed and new adjustments of the center of the figure can be made. Setting the gains and other parameters of this feedback loop, as well as the exact frequencies, leaves a lot of design space for the user, since this is a fully programmable and flexible scheme of 'localized imaging'.
Figure 5. a) A laser tracking system functions by drawing a Lissajous pattern and reading the photo-detector value for each
part of the reflected pattern. A feedback system can then move the centroid of the laser pattern to correspond to the retro-
reflective object. b) A laser tracking system based on a MEMS mirror and single photo-detector is used to track a UAV with
a retro-reflective object attached. c) MEMS mirror based laser tracking system is controlled by a Tablet via Bluetooth. As it
tracks a Ping-Pong ball in a table-tennis game, the ball’s trajectory is plotted in the Tablet view.
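The integrate-and-nudge feedback described above (and depicted in Figure 5a) can be sketched with the 200Hz/400Hz figure-eight from the text. The sample rate, amplitude, loop gain, and the simulated retro-reflective target are assumptions chosen for illustration, standing in for real hardware.

```python
import numpy as np

FX, FY = 200.0, 400.0   # Hz; the 1:2 ratio draws the figure-eight pattern
PERIOD = 0.005          # s, one complete pattern (5 ms)
N = 250                 # samples per pattern (assumed 50 kS/s update rate)


def tracking_step(cx, cy, intensity_at, amp=0.1, gain=0.1):
    """One 5 ms iteration: draw the Lissajous pattern about the current
    center (cx, cy), weight each axis waveform by the measured reflection
    strength, and nudge the center by the resulting integrals."""
    t = np.arange(N) * (PERIOD / N)
    sx = amp * np.sin(2 * np.pi * FX * t)
    sy = amp * np.sin(2 * np.pi * FY * t)
    intensity = np.array([intensity_at(cx + dx, cy + dy)
                          for dx, dy in zip(sx, sy)])
    # A positive integral means more hits on the positive side of that
    # axis, so the pattern center is nudged in that direction.
    corr_x = float(np.mean(sx * intensity)) / amp
    corr_y = float(np.mean(sy * intensity)) / amp
    return cx + gain * corr_x, cy + gain * corr_y


# Simulated retro-reflective target (hypothetical), centered at (0.1, 0.05):
def target(x, y):
    return 1.0 if (x - 0.1) ** 2 + (y - 0.05) ** 2 < 0.02 else 0.0


cx, cy = 0.0, 0.0
for _ in range(200):                 # 200 iterations = 1 s of tracking
    cx, cy = tracking_step(cx, cy, target)
# The pattern center converges toward the target centroid.
```

When the pattern is centered on the target, the sinusoid symmetry makes both integrals vanish, which is the zero-correction equilibrium the text describes at time Tn.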
We have built several test systems that utilize this real-time tracking technique. An industrial ruggedized version of the
tracker has been tested outdoors in various weather conditions for three months, continuously providing tracking data.
The goal of this outdoor tracker was to demonstrate the ability to track a target in direct sunlight and among any other spurious reflective surfaces seen in a daily environment. The target can be defined as the part of the field of view with the highest retro-reflection, or anything over a certain threshold of retro-reflection, typically an object marked with retro-reflective tape.
Another tracking development system was used for tracking the position of a UAV by placing a retro-reflective object on
the UAV (Figure 5b). As the UAV moves, the feedback system of the tracking system steers the laser pattern to be
drawn at the centroid of the reflective object. Similarly, the MEMS tracking system has been mounted directly on a
drone and the system can track a stationary reflective object on the wall while the drone moves (Figure 5c). In this
system, the offset values of the MEMS mirror are wirelessly transmitted back to a smart phone so that the values can be
read and recorded. A MEMS based laser tracking system has many potential applications where a beam needs to be
continually directed at a desired target, such as free space communication between moving objects.
3. DISTANCE MEASUREMENT METHODOLOGY
In the previous section we discussed how a MEMS mirror based laser scan system can obtain information about the
environment in its 40° field of regard by observing the intensity of light from each chosen direction. This kind of
intensity information can be very useful and can supplement e.g. camera based imagery since it is obtained with active
lighting. Namely, each pixel provides its own illumination with laser power chosen to meet eye safety requirements.
While the fusion of camera data and laser-imaging or laser-tracking data can provide significantly more information, it is
still short of providing critical distance information to the drone.
3.1 Signal Strength Based
A rudimentary method for a coarse measurement of distance to a target with a uniform surface is measuring the brightness of the light reflected back from the target surface. In a setup where the MEMS mirror illuminates a spot on a target (e.g. a wall), the received optical power from that diffuse point source scales inversely with the square of distance [8]. If the system is calibrated to a given target reflectance, e.g. in the case of indoor navigation, it may be possible to obtain accuracy on the order of 1cm in distance measurement. However, it is clear that any variations in texture, color, etc. would significantly change the accuracy of such a setup if it is required to provide 'multi-bit' accuracy data.
On the other hand, the system can be set up to work in a single-bit mode where it must simply decide whether the light reflected back is above a set threshold value. This value could be set or calibrated e.g. when the receiver is at a given “safe” distance from the target. Any light received by the photodetector during a 2D raster by the MEMS scan head that is above that level signifies an object closer than that 'safe' distance, at a given azimuth and elevation provided by the MEMS scan system. This methodology can be used as a simple collision detection warning system: if the MEMS imaging system mounted on a UAV approaches close to a wall or an object, it would trigger an alarm to the user or the drone firmware.
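Both the calibrated inverse-square estimate and the single-bit “safe distance” check reduce to a few lines. The one-point calibration interface and the arbitrary power units are illustrative assumptions.

```python
import math


def calibrate(received_power_ref, distance_ref):
    """One-point calibration against a target of known reflectance:
    received power ~ k / d^2 for a diffuse point source [8]."""
    return received_power_ref * distance_ref ** 2


def estimate_distance(received_power, k):
    """Coarse 'multi-bit' range from return strength; only valid while
    the target reflectance matches the calibration surface."""
    return math.sqrt(k / received_power)


def too_close(received_power, power_at_safe_distance):
    """Single-bit mode: a return stronger than the one recorded at the
    calibrated 'safe' distance means the object is nearer; warn."""
    return received_power > power_at_safe_distance


# Calibrate at 2.0 m, where the detector read 0.25 (arbitrary units):
k = calibrate(0.25, 2.0)        # k = 1.0
d = estimate_distance(1.0, k)   # a reading of 1.0 implies 1.0 m
```

Running `too_close` per raster sample, with azimuth and elevation taken from the commanded mirror angles, is the whole collision-warning scheme described above.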
3.2 Triangulation Based
Figure 6. a) Triangulation based setup using a MEMS mirror and an Android tablet with a camera, using minimal distance
between MEMS and camera for an overlapping field of view. b) The MEMS scan module displaying vector content while the
detect and display system tracks the location of the IR laser gun. c) Triangulation based range finding setup using a MEMS
mirror and a tablet, with known distance between the tablet camera and MEMS to image the cup, creating a 3D point cloud.
A second method of measuring distance is through triangulation. We have demonstrated triangulation based distance
measurement using two of the MEMS based laser tracking systems or “MEMSEyes” [6]. In that work distance could be
quite accurately measured to a specific target marked with a retro-reflective tape. Both MEMSEyes lock onto the target
independently and maintain lock as the target moves. But for a more general distance measurement to an unmarked
surface, e.g. to a wall, methods of triangulation utilizing one MEMS based scan system and one camera are more
applicable. To obtain a distance measurement using a single MEMS scanner and a camera, they are first placed a known
distance apart. The MEMS scan head (Figure 1b) directs the laser beam at an object by pointing the MEMS device at the
target and the light is reflected back to the sensor. The camera observes the illuminated target, and is able to image the
location of the target. Using simple geometry, the angle of the target from the center of the camera can be calculated and compared with the angle of the MEMS device, which gives very accurate results at distances that are small relative to the separation distance of the MEMS scan head and camera. At larger distances, the accuracy drops because the angle changes much more slowly than the distance increases. Alternately, the same system, with the distance between the camera and MEMS scanner reduced to almost zero, can be used to project a laser beam within the camera's field of view. A calibration may be needed to overlap the scanner's FoV onto the camera's FoV. A target designated in the camera's FoV can be transformed into the MEMS scanner's FoV, and a laser beam can be used to highlight the designated target. Figure 6b demonstrates such a system where an IR laser is used to designate a point in the camera's FoV, and the MEMS scanner highlights the point by projecting a green circle around it. Since triangulation methodologies also have limitations in accuracy, dropping off with distance, the MEMS scanner's laser source and receiver can be supplemented or replaced with a standard LRF module.
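The scanner-plus-camera geometry above can be written out explicitly. This sketch assumes both units are boresighted along a common axis with angles measured in the baseline plane; the function name and sign conventions are illustrative, not taken from the paper.

```python
import math


def triangulate_depth(baseline_m, mems_angle_deg, camera_angle_deg):
    """Depth of the laser spot seen by one MEMS scanner (at x = 0) and
    one camera (at x = baseline_m), both looking along the same z axis.
    Each angle is measured from that unit's boresight toward +x."""
    ta = math.tan(math.radians(mems_angle_deg))    # spot: x = z * ta
    tb = math.tan(math.radians(camera_angle_deg))  # spot: x - b = z * tb
    if abs(ta - tb) < 1e-9:
        raise ValueError("rays are parallel; no intersection")
    return baseline_m / (ta - tb)


# Example: 0.10 m baseline; MEMS steers +10 deg, camera sees -5 deg:
z = triangulate_depth(0.10, 10.0, -5.0)   # roughly 0.38 m
```

The denominator makes the accuracy drop-off concrete: as the target recedes, the two angles converge, their tangent difference shrinks toward zero, and a fixed angular measurement error translates into an ever larger depth error.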
3.3 Laser Range Finder Based
Many LRF modules for low-cost and hand-held applications have a similar construction with a separate transmitter and
receiver aperture as in the example shown in Figure 7a. The outgoing laser beam is chosen to have low divergence and
may be visible, or it may be invisible but accompanied by a visible wavelength to help aim and to improve laser safety.
When a measurement is requested, the laser beam is modulated with a specific pattern of very high frequencies. The
receiver in the LRF is a separate aperture of considerably larger area as seen in Figure 7a, designed to have a very small
FoV to maximize the sensitivity to light reflected back along the same axis it was transmitted. The returned light is demodulated with the outgoing beam's pattern, and the time difference between the outgoing beam and the receiver's measurements is converted to distance. There are numerous algorithms used to obtain distance information from the phase of the received modulated light, and the specific algorithm of the units we experimented with is not known. These LRFs are constructed for use in commercial settings and are therefore limited to eye-safe laser powers. The visible laser limits the distance and accuracy of the LRF module; alternately, the laser source and receiver optics can be moved to the infrared range, at greater expense, to maintain accuracy at larger distances. Another key limitation of the LRF is its single beam, which needs to be directed by a user or a mechanical mount for each measurement.
Figure 7. a) Off-the-shelf laser range finder module by Egismos, model: LDK-M2-RS; b) the parts that make up the laser rangefinder module. The receiver shroud and ranging laser were modified for the experiments.
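The phase-shift ranging principle described above reduces to a short formula. The 10 MHz modulation tone here is purely illustrative; as noted, the actual modulation scheme of the OEM units is not disclosed.

```python
import math

C = 299_792_458.0  # speed of light in m/s


def phase_to_distance(phase_rad, f_mod_hz):
    """Phase-shift ranging: a beam modulated at f_mod returns with phase
    delay phi = 2*pi*f_mod*(2d/c), so d = c*phi / (4*pi*f_mod).
    The result is unambiguous only within c / (2*f_mod); practical LRFs
    combine several modulation frequencies to extend the range."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)


def ambiguity_range(f_mod_hz):
    """Largest distance resolvable without phase wrap-around."""
    return C / (2.0 * f_mod_hz)


# A 10 MHz tone gives a ~15 m unambiguous range; a quarter-cycle
# (pi/2) phase shift then corresponds to ~3.75 m:
d = phase_to_distance(math.pi / 2, 10e6)
```

A higher modulation frequency improves resolution for a given phase-measurement accuracy but shortens the unambiguous range, which is one reason multi-frequency schemes are common in commercial units.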
4. SETUP AND REDUCTION OF SWAP FOR UAV APPLICATIONS
4.1 SWaP Driver and Optical Setup
The USB-SL MEMS Controller is a compact, USB-powered controller used to drive the MEMS devices, lasers, and any additional peripherals, and to receive analog inputs. The controller offers multiple communication options: USB protocol, serial COM port, or wireless via Bluetooth. The controller contains an embedded MEMS driver with programmable hardware low-pass filters. The USB-SL MEMS Controller is the development platform for most environments, since it is compact, portable, and fully supported by various operating systems such as Windows, Android, and most recently, Raspberry Pi OS. The development kit comes with SDKs in C++, Matlab, LabView, and Java for Android. The MEMS device, laser, any beam shaping or scanning optics, and photo receivers are mounted on optical breadboarding for easy prototyping. Typically, this controller connects to MEMS devices and peripherals on an optical bench for a table-top demonstration. A typical development kit weighs >500g due to the heavy aluminum optical breadboarding components. The controller itself is approximately 140g, consuming ~1.5W with a 5mW laser.
SPIE Defense and Commercial Sensing Conference 2016, April 20, 2016, Baltimore, MD
*abhishek@mirrorcletech.com Phone: +1-510-524-8820 www.mirrorcletech.com
For certain applications where the size, weight and power of the controller and the optical scan head are critical, the
electronics hardware and optical hardware are simplified to meet the needs of the application. The optical scan head can
be reduced from the breadboard-mounted parts to a simplified opto-mechanical cell, with a diode laser, beam-reducing
optics, the MEMS device in compact packaging, and finally a scan lens to increase the field of view. The lenses
for this optical cell can be scaled down from the typical ½" or 1" lenses used for breadboarding to 3mm-
6mm diameters. Additionally, with the availability of 3D printing tools, it is easy to prototype light-weight optical scan
heads (Figure 8a) as plastic parts for small quantities, and transfer to injection molded plastic housing for mass
production.
The components of the USB-SL MEMS controller that provide flexibility and control of multiple peripherals are
simplified to a processor, communication via Bluetooth, an efficient power management system designed to
run from a single-cell 3.7V Li-Polymer battery, and a simplified MEMS driver without the hardware low-pass filters.
The new reduced design is called the Playzer Controller (Figure 8b). The overall footprint of the controller has been
reduced from 90mm x 60mm x 22mm for the USB-SL Controller to 60mm x 26mm x 15mm for the Playzer, while
maintaining the same processor and memory, and in turn the same firmware, now driving simplified hardware. This
reduced controller still contains an analog input channel to connect a photosensor (Figure 8c) for imaging or tracking
applications, and a laser driver circuit capable of driving up to 100mA of current to the laser and modulating it at
rates above 50kHz. Finally, the Playzer can be assembled into a single package with a battery to make a compact,
portable laser scanner. The reduced Playzer MEMS controller with integrated optical scan head weighs ~50g without a
battery, consuming less than 1W of power, driving a 10mW laser diode, and powering a wireless Bluetooth transceiver
with ~30ft range.
Figure 8 a) A 3D printed optical cell consisting of a laser module, the MEMS mirror, and a scanning lens b) Playzer
controller to drive the MEMS and track c) A single photodiode for tracking
4.2 Drone Mount
In order to mount the Playzer onto a drone, it needs to meet the restrictions on how much weight the drone can carry.
For larger drones such as the DJI Phantom II, the weight limit is ~400g, well above the weight of the Playzer. However,
to demonstrate the Playzer mounted on smaller (less than 50cm in length/width) drones that can be flown indoors, the
weight limit becomes challenging: medium-sized indoor drones can typically carry <100g. To meet these requirements,
the Playzer's outer casing was dismantled, with just the controller and scan head mounted onto the drone. A further
weight reduction comes from removing the Playzer's battery and operating directly from the drone's 3.7V Li-Polymer
battery. This reduces the overall flight time of the drone, but makes the concept of a laser-based tracker on a drone
demonstrable (Figure 9).
Figure 9 a) A complete UAV tracking system using a Lissajous pattern to scan for the marker, b) the tracking system
finds the marker and locks onto it, c) and d) the UAV flies in various directions while maintaining a lock on the target.
Once the hardware and optics are mounted on the drone, the user flying the drone can connect to the Playzer from an
Android device via Bluetooth. This allows the drone to fly as needed and display vector content such as text and
symbols, or use the photosensor-based features: laser-based imaging to image what is in front of the drone, or
laser-based tracking to track retro-reflective objects within the drone's field of view.
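The search scan shown in Figure 9 uses a Lissajous pattern, which covers the field of regard with two out-of-phase sinusoids on the X and Y axes. A minimal sketch of generating such a pattern in normalized mirror coordinates follows; the frequency ratio, phase, and sample count here are illustrative choices, not the values used in the actual firmware:

```python
import math

def lissajous(n_samples, fx=5, fy=6, phase=math.pi / 2, amplitude=1.0):
    """Generate one period of a Lissajous scan in normalized mirror
    coordinates (-1..1 on each axis). fx and fy are the two axis
    frequencies; a co-prime pair gives a dense cover of the field."""
    pts = []
    for i in range(n_samples):
        t = 2 * math.pi * i / n_samples
        x = amplitude * math.sin(fx * t)
        y = amplitude * math.sin(fy * t + phase)
        pts.append((x, y))
    return pts

pattern = lissajous(1000)  # stream these samples to the mirror driver
```

Once the photodiode detects a retro-reflective return at some point in the pattern, the scan amplitude can be shrunk around that position to lock onto the marker.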
5. UAV-BORNE LIDAR
5.1 Integration of an OEM LRF with the MEMS mirror
As described above, the OEM LRF in Figure 7 was prepared for experimentation with a MEMS mirror
based scanning system by removing the receive lens, which strongly limited the receiver's FoV, and by separating the
laser such that it can be directed freely (Figure 7b). The ranging laser could then be pointed at a MEMS mirror so that
its beam could be re-directed at different targets. The receiver's FoV has to be increased to accommodate the field of
regard addressed by the MEMS scanner. This increase in FoV automatically reduces the sensitivity of the sensor and
ultimately significantly limits the distance the LRF can range, but the reduction in range from hundreds of meters to
tens of meters still leaves enough distance for the drone to detect an object in its path and adjust its flight accordingly.
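The tradeoff between FoV and range can be made concrete with a back-of-the-envelope radiometric argument: removing the receive lens shrinks the effective collection aperture, and since the returned signal falls off roughly as 1/R², the maximum range for a fixed detection threshold scales with the square root of the collected power. The aperture numbers below are purely illustrative, not measured parameters of the Egismos module:

```python
# Illustrative apertures (assumed, not measured): a ~20 mm receive lens
# vs. a bare ~2 mm photodiode after the lens is removed.
lens_diameter_mm = 20.0
detector_diameter_mm = 2.0

# Collected power scales with aperture area; with 1/R^2 signal falloff
# and a fixed SNR threshold, max range scales as sqrt(collected power),
# i.e. linearly with aperture diameter.
range_ratio = detector_diameter_mm / lens_diameter_mm

original_range_m = 100.0                       # "hundreds of meters" with the lens
reduced_range_m = original_range_m * range_ratio  # "tens of meters" without it
```

Under these assumed numbers, a 10x reduction in aperture diameter yields roughly a 10x reduction in range, consistent with the hundreds-to-tens-of-meters reduction described above.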
Figure 10 a) An optical breadboard based demonstration of concept of the LRF module integrated with a MEMS device, b)
The kit is reduced in size by replacing the optical breadboarding with a 3D-printed optical cell.
For this study, an off-the-shelf LRF module by Egismos was used to demonstrate ranging. As mentioned above, we
separated the laser from the original package and directed the outgoing laser beam onto a MEMS scanner as shown in
Figure 10, such that the MEMS mirror's scan field falls within the receiver's aperture. This was demonstrated by
pointing the MEMS device within a +/-2° FoV of the receiver for accurate ranging measurements up to ~5m; any larger
angles from the MEMS device were out of the receiver's FoV.
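The ±2° receiver acceptance translates into a fairly small addressable patch at the 5 m ranging limit; a quick check of the geometry (a straightforward small-angle calculation, not a figure from the paper):

```python
import math

half_fov_deg = 2.0   # receiver half-angle within which the MEMS beam still ranges
range_m = 5.0        # maximum accurate ranging distance in this configuration

# Lateral half-width of the addressable patch at the maximum range.
half_width_m = range_m * math.tan(math.radians(half_fov_deg))
```

This works out to roughly 0.17 m of half-width, i.e. a patch only ~0.35 m across at 5 m, which motivates the later removal of the receiver's lens to widen the FoV.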
The final demonstration of concept (Figure 10a) was to integrate the laser into a MEMS scan head to reduce any other
losses in the outgoing laser beam, and to remove the receiver's lens, which was limiting the FoV. The removal of the
receiver's lens dramatically reduces the distance measurement range of the receiver, but increases the FoV. This system
demonstrated accurate distance measurements to ~2m with a +/-8° FoV. The system included the modified Egismos
LRF module combined with the MEMS scan system as described above and a USB-SL MEMS Controller connected to a
host PC. A Matlab script moved the MEMS device through positions, and at each position requested
and received a ranging measurement from the LRF module. The placement/aim of the receiver was important in this
demonstration since the shroud around the receiver was still used to limit the light returning to the sensor, even though
the lens in the shroud was removed. The refresh rate of the LRF module was another limiting factor, since each
measurement takes approximately 0.5s to 1s.
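The stepping-and-ranging loop of the Matlab script can be sketched as follows. The controller and LRF command interfaces are not public, so `set_mirror_angle` and `request_range` are hypothetical stand-ins, with the rangefinder stubbed to a constant distance so the sketch runs without hardware:

```python
def set_mirror_angle(x_deg, y_deg):
    """Placeholder for the MEMS controller pointing command (hypothetical)."""
    pass

def request_range(x_deg, y_deg):
    """Placeholder for the LRF query (hypothetical); returns a stubbed
    constant distance in meters so the sketch runs without hardware."""
    return 2.0

def scan_and_range(half_fov_deg=8.0, step_deg=2.0):
    """Step the mirror over the +/- half_fov field of regard and collect
    one (x, y, distance) sample per position. Each LRF measurement takes
    ~0.5-1 s, so a coarse grid keeps total acquisition time practical."""
    cloud = []
    steps = int(2 * half_fov_deg / step_deg) + 1
    for i in range(steps):
        for j in range(steps):
            x = -half_fov_deg + i * step_deg
            y = -half_fov_deg + j * step_deg
            set_mirror_angle(x, y)
            cloud.append((x, y, request_range(x, y)))
    return cloud

points = scan_and_range()  # 9 x 9 grid over the +/-8 degree FoV
```

At ~1 s per measurement, even this coarse 81-point grid takes on the order of a minute, which is why the LRF refresh rate is called out above as a limiting factor.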
5.2 Miniaturization for UAV Application
The optical breadboard-based concept in the previous section demonstrated that the LRF can be integrated with the
MEMS device and used together to point and range over a field of regard. The next step is to reduce the size and weight of the
optics and electronics to a compact module, capable of flying on a drone. The optical breadboarding is replaced by a 3D
printed cell to mount the MEMS device and the ranging laser. The development MEMS controller is replaced by the
Playzer controller to drive the MEMS device, and interface with a host. The LRF module communicates with the Playzer
controller, such that it can range, and return the measured value back to the Playzer. The LRF ranged values are available
for the host to access from the Playzer controller through additional API commands for this application. The fully
miniaturized unit (Figure 10b) is reduced to ~23g for the MEMS scan head and LRF module, plus an additional
~20g for the Playzer electronics.
6. CONCLUSIONS
We have demonstrated a MEMS laser scan and track module which addresses a 40° field of regard with an eye-safe laser
beam and can be used to display and annotate, image, and track targets. The module is less than 40g in weight, fits in a
volume of <90mm x 60mm x 40mm, and consumes less than 750mW of power. This MEMS scan module has been
mounted onto a toy drone with very minimal or no payload capability, and successfully used to display text and vector
content on walls. Furthermore, it was successfully used to track retro-reflective targets while the drone was in flight, and
could therefore keep the laser beam pointed at a marked spot on the wall while the drone was moving erratically.
The MEMS based LRF module has been prepared, weighing ~45g, fitting in a volume of <70mm x 60mm x
60mm, and consuming less than 1W of power in normal operation. This final version of the LRF module has not yet
been flown on a UAV to demonstrate ranging; the results of the LRF module's flight will be presented at the conference talk.