Selvan Viswanathan, a MicroVision principal engineer, presented "Laser Beam Scanning Short Throw Displays & an Exploration of Laser-Based Virtual Touchscreens" at LDC 2017 in Yokohama, Japan.
Applications Generated from a MEMS-based Laser Beam Scanning Technology Platform (MicroVision)
At this year's MEMS Engineer Forum, MicroVision's Director of Product Engineering, Jari Honkanen, spoke on Applications Generated from a MEMS-based Laser Beam Scanning Technology Platform.
"Laser Beam Scanning LiDAR: MEMS-Driven 3D Sensing Automotive Applications from Interior to the Exterior" presentation by Jari Honkanen at FutureCar 2017: New Era of Automotive Electronics Workshop, Nov 8-10, 2017, Georgia Institute of Technology, Atlanta, GA
Laser Beam Scanning LiDAR: MEMS-Driven 3D Sensing Automotive Applications from Interior to the Exterior (MicroVision)
MicroVision’s Director of Product Engineering, Jari Honkanen, gave a presentation at FUTURECAR 2017 detailing how MicroVision's Laser Beam Scanning technology for MEMS-based LiDAR solutions provides a unique approach that enables new 3D sensor capabilities in areas such as dynamic and variable resolution, acquisition speed, and field of view.
MicroVision Scanning Engines Overview | January 2017 (MicroVision)
CES was an excellent venue for MicroVision to meet with a number of customers and partners and present the capabilities of the three engine products the company has announced.
"MEMS-based Laser Beam Scanning Technology Platform; Basis for Applications from Displays to 3D Sensors and 3D Printers" presentation by Jari Honkanen at MEMS Engineer Forum, April 26-27, 2017, Ryogoku, Tokyo, Japan
MEMS Laser Scanning, the platform for the next generation of 3D Depth Sensors (Jari Honkanen)
MicroVision's MEMS Laser Beam Scanning based 3D Depth Sensing Technology presentation by Jari Honkanen at MEMS & Sensors Industry Group Conference Asia 2016, Shanghai, China, September 13-14, 2016
MEMS and Sensors in Automotive Applications on the Road to Autonomous Vehicle... (Jari Honkanen)
MicroVision's MEMS Laser Beam Scanning Technology applied to HUD and ADAS applications presentation by Jari Honkanen at the MEMS & Sensors Executive Congress 2016, Scottsdale, AZ, November 10-11, 2016
MEMS and Sensors in Automotive Applications on the Road to Autonomous Vehicle... (MicroVision)
MicroVision's Director of Technical Marketing and Applications Development, Jari Honkanen, was invited to speak at MSIG's 12th annual MEMS & Sensors Executive Congress 2016 on MEMS and sensors as key enabling technologies in the automotive market. Honkanen also discussed the benefits of applying MicroVision's MEMS scanned virtual image HUD and LiDAR sensor concepts to ADAS applications.
MEMS Laser Scanning, the platform for the next generation of 3D Depth Sensors (MicroVision)
MicroVision's PicoP® scanning technology is a MEMS-based Laser Beam Scanning (LBS) solution for pico projection, heads-up-display, and augmented reality eyewear applications. The same flexible technology can also be applied to exciting new sensing applications, such as 3D depth sensing. Demand for small and low cost 3D depth sensing solutions is growing rapidly, driven by increasing demand for new Natural User Interface, Machine Vision, Robotic Navigation, Metrology, and Advanced Driver Assistance System (ADAS) solutions.
This presentation, prepared by MicroVision's Jari Honkanen and presented at the MEMS & Sensors Industry Group Conference Asia 2016, compares the existing 3D depth sensor solutions based on stereo cameras, structured light and 3D CMOS Cameras.
MicroVision then presents a new MEMS LBS depth sensor platform solution that can enable a new generation of tiny 3D depth sensors with capabilities such as dynamic variable resolution and variable acquisition speed. These dynamic LBS depth sensors are an enabling technology for a completely new set of innovative products and applications.
UAV-Borne LiDAR with MEMS Mirror Based Scanning Capability (Ping Hsu)
Firstly, we demonstrated a wirelessly controlled MEMS scan module with imaging and laser-tracking capability which can be mounted and flown on a small UAV quadcopter. The MEMS scan module was reduced to a small footprint of less than 90 mm × 70 mm and a weight under 50 g when powered by the UAV's battery. The MEMS mirror based LiDAR system allows for on-demand ranging of points or areas within the FoR (field of regard) without altering the UAV's position. Increasing the LRF ranging frequency and stabilizing the pointing of the laser beam by utilizing the onboard inertial sensors and the camera are additional goals of the next design. Keywords: MEMS mirrors, laser tracking, laser imaging, laser range finder, UAV, drone, LiDAR.
Mitchell Reifel (pmdtechnologies ag): pmd Time-of-Flight – the Swiss Army Knife of 3D depth sensing (AugmentedWorldExpo)
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California, May 30 - June 1, 2018.
Mitchell Reifel (pmdtechnologies ag): pmd Time-of-Flight – the Swiss Army Knife of 3D depth sensing
pmd's Time-of-Flight technology is integrated into two AR smartphones on the market! pmd ToF is in 4 AR headsets! This talk will show what pmd has achieved, what can be done with its 3D ToF technology, and why depth sensing is one secret sauce for AR, VR and MR.
http://AugmentedWorldExpo.com
Dave Goldman (Lumus): State of Smartglasses Today and How We Got Here (AugmentedWorldExpo)
A talk from the Main Stage at AWE Tel Aviv 2018 - the World's #1 XR Conference & Expo in Tel Aviv, Israel, November 5, 2018.
State of Smartglasses Today and How We Got Here
We will address the challenges for hardware, and specifically how the optics piece (incidentally, the only hardware specific to smartglasses) is evolving to help.
http://AugmentedWorldExpo.com
These slides use concepts from my (Jeff Funk) course, Analyzing Hi-Tech Opportunities, to analyze how light field technology is becoming economically feasible for an increasing number of applications. Light field cameras record the full light field of a scene instead of a single focal plane. This capability enables users to change the focus of pictures after they have been taken and to more easily record 3D data. These features are becoming economically feasible because of rapid improvements in camera chips and micro-lens arrays (an example of micro-electro-mechanical systems, MEMS). They offer alternative ways to do 3D sensing for automated vehicles and augmented reality, and can enable faster data collection with telescopes.
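As an aside, the post-capture refocusing described above is commonly implemented by shift-and-add over a light field camera's sub-aperture views. Here is a minimal illustrative Python sketch (my own, not from Funk's slides; it uses integer pixel shifts where a real pipeline would interpolate sub-pixel shifts):

```python
import numpy as np

def refocus(subapertures, offsets, alpha):
    """Synthetic refocus by shift-and-add over sub-aperture views.

    subapertures: array (N, H, W), one image per aperture position
    offsets:      array (N, 2), each view's (du, dv) aperture offset
    alpha:        refocus parameter; larger magnitude focuses farther
                  from the originally captured focal plane
    """
    acc = np.zeros(subapertures[0].shape, dtype=np.float64)
    for view, (du, dv) in zip(subapertures, offsets):
        # Shift each view in proportion to its aperture offset, then
        # average: scene points at the chosen depth align and sharpen.
        shift = (int(round(alpha * du)), int(round(alpha * dv)))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(subapertures)

# Toy example: 4 sub-aperture views of an 8x8 scene.
views = np.random.rand(4, 8, 8)
offsets = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
print(refocus(views, offsets, alpha=2.0).shape)  # -> (8, 8)
```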
Neil Sarkar (AdHawk Microsystems): Ultra-Fast Eye Tracking Without Cameras for Mobile AR Headsets (AugmentedWorldExpo)
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California, May 30 - June 1, 2018.
Neil Sarkar (AdHawk Microsystems): Ultra-Fast Eye Tracking Without Cameras for Mobile AR Headsets
This session showcases the first camera-free eye-tracking microsystem. A MEMS (microelectromechanical system) device on a tiny chip scans a beam of light across the eye 4,500 times every second. The latest specifications to be revealed at AWE are enabling foveated rendering in mobile platforms, endpoint prediction during saccades, and unprecedented insights into the state of the user.
http://AugmentedWorldExpo.com
Hiren Bhinde (Qualcomm Technologies): Unlocking the Mysteries of SLAM (simultaneous localization & mapping) (AugmentedWorldExpo)
A talk from the Develop Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California, May 30 - June 1, 2018.
Hiren Bhinde (Qualcomm Technologies): Unlocking the Mysteries of SLAM (simultaneous localization & mapping)
Qualcomm will be presenting a tutorial on XR technologies.
http://AugmentedWorldExpo.com
Presented at Softwarica College of IT, Kathmandu
This presentation includes:
1. About AR
a. Definition
b. Examples
c. Image Recognition and Tracking
d. SLAM (Simultaneous Localization and Mapping)
e. Difference between VR and AR
2. History of AR
3. Current Scenario of AR
a. Statistics
b. Mobile AR Examples
c. Magic Leap and HoloLens
4. Getting Started with Unity
a. SDK Cheatsheet
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/qualcomm/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-mangen
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Michael Mangen, Product Manager for Camera and Computer Vision at Qualcomm, presents the "High-resolution 3D Reconstruction on a Mobile Processor" tutorial at the May 2016 Embedded Vision Summit.
Computer vision has come a long way. Use cases that were previously not possible in mass-market devices are now more accessible thanks to advances in depth sensors and mobile processors. In this presentation, Mangen provides an overview of how we are able to implement high-resolution 3D reconstruction – a capability typically requiring cloud/server processing – on a mobile processor. This is an exciting example of how new sensor technology and advanced mobile processors are bringing computer vision capabilities to broader markets.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-osterwood-tue
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Chris Osterwood, Founder and CEO of Capable Robot Components, presents the "How to Choose a 3D Vision Sensor" tutorial at the May 2019 Embedded Vision Summit.
Designers of autonomous vehicles, robots and many other systems are faced with a critical challenge: Which 3D vision sensor technology to use? There are a wide variety of sensors on the market, employing modalities including passive stereo, active stereo, time of flight, 2D and 3D lasers and monocular approaches. This talk provides an overview of 3D vision sensor technologies and their capabilities and limitations, based on Osterwood's experience selecting the right 3D technology and sensor for a diverse range of autonomous robot designs.
There is no perfect sensor technology and no perfect sensor, but there is always a sensor which best aligns with the requirements of your application—you just need to find it. Osterwood describes a quantitative and qualitative evaluation process for 3D vision sensors, including testing processes using both controlled environments and field testing, and some surprising characteristics and limitations he's uncovered through that testing.
Recent Trends And Challenges In Augmented Reality (Saurabh Kapoor)
Augmented Reality is a developing area in the field of virtual reality research. Like Virtual Reality, Augmented Reality is becoming an emerging platform for numerous applications. The work presented here reviews the current state of the art in Augmented Reality, and analyzes current issues, trends and challenges.
Keywords: Signal processing, Applied optics, Computer graphics and vision, Electronics, Art, and Online photo collections
A computational camera attempts to digitally capture the essence of visual information by exploiting the synergistic combination of task-specific optics, illumination, sensors and processing. We will discuss and play with thermal cameras, multi-spectral cameras, high-speed cameras, 3D range-sensing cameras and camera arrays. We will learn about opportunities in scientific and medical imaging, mobile-phone-based photography, cameras for HCI and sensors mimicking animal eyes.
We will learn about the complete camera pipeline. In several hands-on projects we will build physical imaging prototypes and understand how each stage of the imaging process can be manipulated.
We will learn about modern methods for capturing and sharing visual information. If novel cameras can be designed to sample light in radically new ways, then rich and useful forms of visual information may be recorded -- beyond those present in traditional photographs. Furthermore, if computational processes can be made aware of these novel imaging models, then the scene can be analyzed in higher dimensions and novel aesthetic renderings of the visual information can be synthesized.
In this course we will study this emerging multi-disciplinary field -- one at the intersection of signal processing, applied optics, computer graphics and vision, electronics, art, and online sharing through social networks. We will examine whether such innovative camera-like sensors can overcome the tough problems in scene understanding and generate insightful awareness. In addition, we will develop new algorithms to exploit unusual optics, programmable wavelength control, and femtosecond-accurate photon counting to decompose the sensed values into perceptually critical elements.
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ... (Petteri Teikari, PhD)
Shallow literature analysis of recent trends in computational ophthalmic imaging, with a focus on neurodegenerative disease imaging / oculomics.
Open-ended literature review on what you could be building next.
#1/2: Hardware
#2/2: Computational imaging
Alternative download link:
https://www.dropbox.com/scl/fi/d34pgi3xopfjbrcqj2lvi/retina_imaging_2024_computational.pdf?rlkey=xnt1dbe8rafyowocl9cbgjh3p&dl=0
Closed Loop Control of Gimbal-less MEMS Mirrors for Increased Bandwidth in Li... (Ping Hsu)
We presented a low-SWaP, wirelessly controlled MEMS mirror-based LiDAR prototype which utilized an OEM laser rangefinder for distance measurement [1]. The MEMS mirror was run in open loop based on its exceptionally fast design and high repeatability performance. However, to further extend the bandwidth and incorporate necessary eye-safety features, we recently focused on providing mirror position feedback and running the system in closed loop control.
COMIT Community Day Winter 2018 - Assystem Machine Vision (Comit Projects Ltd)
Presentation on Machine Vision delivered by Assystem at the COMIT Community Day held at the Marylebone Hotel in London on 6th December 2018. Hosted by Dropbox.
Soon gi Park (LetinAR): PinMR: Novel Optical Solution for AR Glasses (AugmentedWorldExpo)
A talk from the XR Enablement Track at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Soon gi Park (LetinAR): PinMR: Novel Optical Solution for AR Glasses
LetinAR is a Seoul-based startup developing see-through optical systems for wearable augmented reality devices. Based on its unique pin mirror technology, which takes quite a different approach from any other existing combiner optics, LetinAR has demonstrated an ultrawide field of view of more than 80 degrees with 8K resolution. LetinAR has also presented a form-factor-oriented glasses prototype with the same appearance as normal glasses. In this presentation, we introduce the pin mirror technology and its benefits and contributions to wearable AR hardware, including high image quality without degradation, visual comfort supporting correct vergence and accommodation, a wide field of view providing an immersive experience, and a cost-effective manufacturing process. We believe the advantages of the pin mirror technology will not only satisfy the optical requirements of future AR devices, but also hasten the beginning of the AR/VR/MR era.
https://awexr.com
The product world is shifting toward being "content-centric", so we need to design our products around content as well. That requires the ability to prototype and design products quickly and flexibly.
We therefore created a robot in the spirit of "DIwO (Do It with Others)", the basic concept behind maker events such as Make:, in order to realize this.
It was created by combining various off-the-shelf components around a PandaBoard:
-Hardware
--Brain wave sensor(http://www.neurosky.com/)
--Two-legged robot
--see-through display(http://www.brother.com/en/news/2011/airscouter/index.htm)
--Xtion(http://www.asus.com/Multimedia/Motion_Sensor/Xtion_PRO/)
-Software
--Android
--openFrameworks(http://www.openframeworks.cc/)
This is an "AR (Augmented Reality) Treasure Hunting Game".
You get virtual treasures by controlling a real robot!
Rules:
-Look at the radar window (like a dragon radar).
--The treasure is shown on the radar as a red star.
--The center is the robot's current position.
---The blue arrow is the robot's heading.
-Look at the line graph. It plots your brain waves.
--You steer the robot toward the treasure with your brain waves.
--Each feeling controls the robot differently:
---Exciting -> Turn left
---Normal -> Go forward
---Relax -> Turn right
For these reasons, it is possible to move from trial production to commercial production quickly.
Ultra High Focusing Speed: up to 12,000 Hz
Long Term Reliability: more than 1 billion operating cycles
Ultra Small Power Consumption: less than 1 mA
Shock Resistance: more than 5,000 G
Operating Temperature Range: -30 to 100°C
Single Camera Based 3D Camera
Volumetrically 25% Smaller than Current-Technology-Based Modules
Real-Time Multi Focusing
Applicable to Volumetric 3D Displays, Compact Auto Focus and 3D Camera Modules
Applying Deep Learning Vision Technology to Low-Cost/Low-Power Embedded Systems (Jenny Midwinter)
Slides from the Ottawa Machine Learning Meetup on January 16, 2016.
Pierre Paulin, Director of R&D at Synopsys (Embedded Vision Subsystems), will be making a presentation on:
“Applying Deep Learning Vision Technology to Low-Cost, Low-Power Embedded Systems: An Industrial Perspective”
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/nxp/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Tom Wilson, ADAS Product Line Manager at NXP Semiconductors, presents the "Sensing Technologies for the Autonomous Vehicle" tutorial at the May 2016 Embedded Vision Summit.
Autonomous vehicles will necessarily utilize a range of sensing technologies to see and react to their surroundings. We are witnessing dramatic advances not just in embedded vision, but also in complementary technologies like radar and LiDAR. Each of these sensing technologies provides unique capabilities for giving a vehicle a complete view of its surroundings. This presentation compares vision-based sensing with complementary sensing technologies, explores key trends in sensors for autonomous vehicles, and analyzes challenges and opportunities in fusing the output of multiple sensor technologies to enable robust perception and mapping for autonomous vehicles.
If you are inspired by an idea 'X', how will you come up with the neXt idea? This presentation shows 6 different ways you can exercise your mind in an attempt to develop the next cool idea.
http://raskar.info
http://cameraculture.info
Towards Realization of 6M Visualization in Manufacturing Sites (Kurata Takeshi)
In this paper, we first survey technologies for indoor positioning and motion recognition, which are among the principal IoH technologies. Next, we illustrate an example system design that extracts information on the 6Ms (Man, Machine, Material, Method, Mother-nature, and Money) by performing intelligent compression of sensor data, facility data, and work records for various modeling, simulation, and mieruka (visualization). Finally, we discuss the advantages of introducing measurement technologies.
Presented in IEEE VR 2019 Workshop on Smart Work Technologies (WSWT)
http://seam.pj.aist.go.jp/symposium/WSWT2019/
Similar to Laser Beam Scanning Short Throw Displays & an Exploration of Laser-Based Virtual Touchscreens (20)
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a PASSION for technology and making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Laser Beam Scanning Short Throw Displays & an Exploration of Laser-Based Virtual Touchscreens
1. Don’t just think outside the box.
See outside the box.
Laser Beam Scanning Short Throw Displays &
an Exploration of Laser-Based Virtual Touchscreens
Selvan Viswanathan [selvan@ieee.org]
LDC2-2, LDC 2017, Yokohama, Japan
2. PicoP Scanning Technology Platform
4. PicoP® Scanning Technology - MEMS
[Diagram: biaxial MEMS scanning mirror (Φ1.2mm) with drive coil; slow scan axis (vertical) and fast scan axis (horizontal). MEMS = Micro Electro Mechanical System.]
5. PicoP® Scanning Technology – MEMS Actuation
[Diagram: generation of the raster pattern from the vertical trajectory (drive vs. time) and the horizontal trajectory (drive vs. time).]
A single drive input contains both the slow-scan ramp and the fast-scan sinusoidal drive signals.
6. PicoP® Scanning Technology - Projection Display
[Diagram: red, green, and blue lasers combined onto a 2D MEMS micromirror, which draws the image as a raster scan.]
8. PicoP® Scanning Technology – Always in Focus
[Diagram: projection at increasing throw distance (500mm shown); the laser spot size grows proportionally to the image size, so the image stays in focus.]
10. Distortions in Short Throw Configuration
[Side-view and top-view diagrams of the projection path from proximal end to distal end, with simulated raw and corrected displays, illustrating three distortions: spot-size bloom, raster separation, and keystone distortion.]
11. MicroVision's Distortion Correction Methods
[Diagram: a keystoned image showing raster separation is corrected in stages: apply keystone correction; correct raster separation (spot-size bloom might be objectionable); correct the image aspect ratio. The final image is rendered accurately, expanded in the short-throw application.]
Methods: modulate the horizontal sine amplitude (or modify the horizontal interpolation of laser pulsing); reduce the vertical scan amplitude; use a non-linear vertical ramp; apply optional optical correction.
13. Conventional Interactivity – IR Light Plane
An IR light sheet or light plane skims the surface
Intrusions into plane detected by IR camera
Offset camera and projection FOVs
14. Conventional Interactivity – IR Camera
Structured light with conventional CMOS sensor
(or) Flash IR with CMOS ToF imager
Offset projection and detection fields-of-view
15. Interactivity – The MicroVision Way
[Diagram: red, green, and blue lasers plus an IR laser share the same 2D MEMS micromirror; an IR photodiode detects the returned IR light.]
• X,Y location of the object is known from the MEMS pointing angle.
• Z is determined by time-of-flight.
16. Interactivity – The MicroVision Way (contd.)
Omni-directional IR receiver (photodiode)
Self-registering Projection and Detection Fields-of-View
Focus-free projection and detection
17. Demo Video Showing the MicroVision ToF
[Video stills: time-of-flight amplitude data visualized as a 2D image; unfiltered raw shape-classification processing.]
18. Objective Technology Comparison (Interactivity)

Criteria                                        | IR Light Plane | IR Camera (Structured Light or ToF Imager) | MicroVision Flying-Spot ToF
Framerate                                       | Fast           | Medium                                     | Fast
Software complexity                             | Low            | High                                       | Medium
System size                                     | Large          | Large                                      | Small
Works with arbitrary product configurations     | No             | No                                         | Yes
Works on irregular surfaces                     | No             | Yes                                        | Yes
Supports both virtual touch and in-air gestures | No             | Yes                                        | Yes
19. Conclusion
Laser beam scanning displays have inherent advantages for short-throw displays with interactivity:
Reduced optics complexity, by offloading distortion compensation to the MEMS drive scheme
Highly integrated projection and detection components
A self-registering detection field-of-view that allows freedom in various consumer product designs