AI INSIGHTS | NOVEMBER 2019
EVENT-BASED VISION SYSTEMS
TECHNOLOGY AND R&D TREND ANALYSIS
TABLE OF CONTENTS
1. Introduction............................................................5
2. Methodology of the Study.........................................8
3. Entities Active in the Event-Based Vision System........9
4. Patent Trend Analysis.............................................12
•	 Filing Trends............13
•	 Assignee Landscape............14
•	 Patenting Activities by Startups............16
•	 Patent Trend Focused on Key Challenges............17
•	 Patent Publications Mapped to Automotive Applications............21
»» Collision Avoidance............21
»» Monitoring of Parked Vehicles............21
»» Always On Operations............22
»» Analysis of a Road Surface............22
»» In-car Installment of DVS Camera............23
»» Object Detection and Classification...........23
»» Multi-object Tracking...........24
»» Inaccuracies Introduced by Non-event Pixel Points...........24
»» LiDAR and 3D Point Cloud...........25
»» 3D Pose Estimation...........25
»» Hardware Security...........26
»» Edge Processing...........26
•	 Other Highlights............27
•	 Key Takeaways............27
5. Competitive Landscape..........................................28
•	 Prophesee............29
•	 iniVation............34
•	 Insightness............39
•	 Qelzal............43
•	 MindTrace............47
•	 CelePixel............50
•	 Sunia............51
•	 Australian Institute of Technology............52
•	 Samsung............54
•	 Sony............55
•	 Benchmarking of the Commercialized/In-pipeline Event-based
Vision Products............56
•	 Key Takeaways............57
6. Projects.................................................................58
•	 Project 1 – Ultra-Low Power Event-Based Camera (ULPEC)............59
•	 Project 2 – The Internet of Silicon Retinas (IoSiRe): Machine to machine
communications for neuromorphic vision sensing data............60
•	 Project 3 – Event-Driven Compressive Vision for Multimodal
Interaction with Mobile Devices (ECOMODE)............61
•	 Project 4 – Convolution Address-Event-Representation (AER)
Vision Architecture for Real Time (CAVIAR)............62
•	 Project 5 – Embedded Neuromorphic Sensory Processor – NeuroPsense...........63
•	 Project 6 – Event–Driven Morphological Computation for
Embodied Systems (eMorph)............63
•	 Project 7 – EB-SLAM: Event-based simultaneous localization and mapping...........64
•	 Project 8 – SLAMCore............65
7. Research Laboratories............................................66
•	 Lab 1: Robotics and Perception Group............67
•	 Lab 2: Neuroscientific System Theory (NST)............68
•	 Lab 3: Perception and Robotics Labs............70
•	 Lab 4: Robot Vision Group............71
•	 Key Takeaways............72
8. Research Institutes Focusing on Event Cameras........73
9. Insights and Recommendations.....................................78
10. Concluding Remarks..........................................................81
11. Acronyms..............................................................................82
12. References............................................................................83
INTRODUCTION
Computer vision has emerged as a robust trend in the Artificial Intelligence (AI) landscape.
With several prominent use cases across industries including automotive, manufacturing,
consumer, IoT, and healthcare, the technology is unleashing the transformative benefits of
the humble and traditional camera.
State-of-the-art computer vision systems are based on frame-based acquisition and processing of images and videos, capturing the motion of an object as a number of still frames per second (FPS). These systems employ vision sensors that collect a large chunk
of data consisting of all the frames in the field of view of the camera.
However, a significant portion of the data collected during the frame-based approach
is unnecessary, and only adds to the overall size and intensity of data transmission and
computation. Therefore, although these vision systems work perfectly for various image
processing use cases, they don’t specifically meet the more stringent requirements of
mission-critical applications.
AI advancements in computer vision are concentrated on emulating the characteristics of
the human eye on a vision sensor system, otherwise known as a neuromorphic or event-
based vision system, neuromorphic or event-based camera, silicon retina or camera, or
dynamic vision sensor (DVS) camera. These systems are set to transform the computer
vision landscape by meeting some of the most essential requirements of reduced latency
and lower power consumption for upcoming applications in the connected environment.
Event-based vision cameras operate on the changes in the brightness level of individual
pixels, unlike frame-based architectures where all the pixels in the acquired frame are
recorded at the same time. Event-based vision sensors follow a stream of events that
encode time, location, and asynchronous changes in the brightness levels of individual pixels
to provide real time output. This approach involves certain trade-offs in terms of latency,
sensitivity, and power consumption, versus accuracy, bandwidth, and processing capacity.
They offer significant advantages over traditional frame-based cameras, such as higher dynamic range, higher temporal resolution, and lower power consumption, and they do not suffer from
motion blur.
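As a concrete illustration of what such an event stream looks like, the sketch below (a generic representation and an illustrative assumption, not tied to any particular vendor's sensor) treats each event as a (timestamp, x, y, polarity) tuple and accumulates a short slice of the stream into an image, which is the usual way event data is visualized:

import numpy as np

# A generic event record: timestamp in microseconds, pixel coordinates,
# and polarity (+1 for a brightness increase, -1 for a decrease).
# Event = (t_us, x, y, polarity)

def accumulate_events(events, width, height, t_start_us, t_end_us):
    # Render the events falling in [t_start_us, t_end_us) into a signed 2D image.
    # Positive pixels saw mostly ON events, negative pixels mostly OFF events,
    # and untouched pixels stay at zero: static parts of the scene generate
    # no data at all, which is the key difference from a frame-based camera.
    img = np.zeros((height, width), dtype=np.int32)
    for t, x, y, polarity in events:
        if t_start_us <= t < t_end_us:
            img[y, x] += polarity
    return img

# Example: three events in a small pixel neighbourhood.
stream = [(1_000, 10, 12, +1), (1_450, 11, 12, +1), (2_300, 10, 12, -1)]
frame = accumulate_events(stream, width=640, height=480,
                          t_start_us=0, t_end_us=5_000)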
[Figure: Output of an event-based camera for an object moving in circles, comparing typical motion, no motion, and rapid motion plotted against X, Y, and time]
Event-based cameras, therefore, offer significant potential for driving the vision systems in autonomous vehicles (offering low latency, HDR object detection, and low memory storage), robotics, IoT (for low-power, always-on devices), augmented reality/virtual reality (AR/VR) (low-power and low-latency tracking), and other industrial automation use cases.(1)
The comparison below highlights the differences between the output from frame-based
and event-based approaches.(2)
Frame-based Vision
•	 Records data at the same time
•	 Pre-defined architecture
•	 Architecture acquires large
volumes of data causing
undersampling
•	 Highly redundant
•	 Continuous frames increase
latency and lower dynamic
range
Event-based Vision
•	 Works asynchronously
•	 Operates on the brightness
intensities and acquires only
scene dynamics
•	 Minimal redundancy
•	 Higher dynamic range and low
latencies
•	 Intelligent activation of pixels
Event-based vision technologies have been in development for over 15 years. They evolved through most of the 2000s and have picked up pace during the current decade. The scale of opportunity in the coming years is significant. Going forward,
bio-inspired and event-based vision sensors will be a natural choice for applications that
require highly efficient processing.
A better alternative to frame-based solutions has long been needed, and event-based vision techniques fit the bill. The advantages offered by these technologies are immense, but there are quite a few implementation challenges, given the complex nature of their operation.
The event-based vision systems community is still limited to a few startups and even fewer established companies. However, certain academic research groups, universities, and research labs are actively pursuing event-based vision systems. These R&D groups dominate the technology area and, of late, have started collaborating to fill gaps and bundle solutions, accelerating development cycles. Some prominent researchers from these groups are starting new companies with the aim of delivering event-based vision solutions commercially.
From the commercialization point of view,
event-based camera technology is still at
a nascent stage with some prototypes and
proofs of concept implemented for various
use cases. A few companies have also
released their event-based sensors and
related camera products commercially.
However, the technology is not yet mature and has yet to overcome some of the key hurdles that keep it from achieving full commercial status.
From a technology point of view, neuromorphic cameras carry significant disruption potential in upcoming markets. These cameras will continue to grow out of a small community into a much broader spectrum of use cases, to the point where they become indispensable and start to dominate in the decades to come.
It is important to understand the innovation and R&D space of event-based cameras in order to determine technology readiness in terms of the key challenges that participants are tackling to make the technology commercially viable. Studying the growth of patenting activity delivers a clear picture of the state of innovation and R&D in the domain. Event-based cameras are just around the corner, and the focus areas of patent filings and the distribution of assignees across these areas will set the course for the next revolution in the intelligent vision landscape.
[Figure: Rise in the Scope and Opportunity for Event-based Vision Systems, showing Decade 1 (2000-09), Decade 2 (2009-19), and beyond 2020, when the technology gains acceleration for testing and real-life implementation]

METHODOLOGY OF THE STUDY
The present report is a comprehensive analysis covering patent filings, the startup
ecosystem, the activities of reputed research institutes and laboratories, and
projects related to the development of event-based vision systems.
The term “Event-based Vision System” has been used in the report for event-
based cameras, event-based vision sensors, neuromorphic vision sensors, and
neuromorphic cameras.
The patent analysis was performed using Derwent Innovation, with priority years 2010-
2019 in consideration. Unique patent publications were considered for the study.
Different trends related to the patent study were analyzed. The information related
to target challenges in event-based vision systems and different scenarios in the
automotive industry where the system would be deployed are also included.
Secondary research was conducted to profile the different companies constituting the ecosystem. Data related to the companies was collated from company websites, datasheets, relevant third-party websites, and videos.
Universities and research institutes were also covered to capture the R&D aspect of
the technology.
ENTITIES ACTIVE IN THE
EVENT-BASED VISION SYSTEM
EVENT-BASED CAMERA COMPANIES
AUTOMOTIVE COMPANIES
SEMICONDUCTOR COMPANIES
RESEARCH LABS/INSTITUTES
ELECTRONICS TECHNOLOGY GIANTS
UNIVERSITIES
OTHER ORGANIZATIONS
LiDAR
PATENT TREND ANALYSIS
52
Exploration of neuromorphic engineering has led to research in vision technologies.
FILING TREND
Analysis of patent data from 2010 to 2019 provides several insights into the event-based
vision systems landscape:
Patent Filing Trend (2010-2019)
xxx relevant patents were identified.
The patent filings have been increasing every year since 2010, with 2017 representing the
peak in the number of patents filed.
---------------------------------------------------------------------------------
---------------------------------------------------------------------------------
---------------------------------------
---------------------------------------------------------------------------------
---------------------------------------------------------------------------------
--------------------------------------------------------------
---------------------------------------------------------------------------------
---------------------------------------------------------------------------------
------------------------------------------
[Chart: Patent Filing Trend (2010-2019), showing the number of patent publications by earliest priority year, with yearly shares of 1%, 1%, 2%, 2%, 4%, 6%, 8%, 10%, 19%, and 47%]
ASSIGNEE LANDSCAPE
XXXX companies hold a 47% share of the total unique patent filings in the event-based
camera domain. Out of this, XXXX is the leading filer, with 42% of the patent distribution.
Other prominent companies in this category include XXXX, XXXX, XXXX, and XXXX.
XXXX hold the second position amongst the patent assignees as the technology is in the
R&D stage and has garnered interest from universities and research labs globally. Tianjin
University holds the maximum number of patent publications in the category. Other
universities include the XXXX, XXXX, XXXX, and XXXX.
Startups hold about 10% of all patent filings between 2010-2019. These include entities such
as CelePixel, Prophesee, Qelzal, Insightness, XXXX, XXXX, and XXXX. Further, ----------------
--------------------------------------------------------- form 8% of the patent filings
related to event-based image sensors.
Automotive companies including manufacturers and suppliers have also filed patents for
the advantages offered by event-based cameras in the field. These companies include
XXXX, XXXX, XXXX, XXXX, XXXX, XXXX, and XXXX.
Additionally, technology giants Facebook, Apple, and Microsoft own one ------------------
---- related to the event-based vision system.
Overall, the top five assignees in the space are XXXX, XXXX, XXXX, XXXX, and XXXX. Samsung
is the leader with maximum filings during 2015. While XXXX started filing patents related to
event-based vision systems as early as 2010, XXXX and XXXX started filing in 2013 and 2014
respectively. On the other hand, XXXX filed maximum patents in 2017, and XXXX in 2018.
This suggests that new companies have been entering this space lately and might face
difficulties in gaining a competitive edge over XXXX.
[Chart: Types of Assignees, showing the distribution of patent publications across assignee categories]
[Chart: Patent Assignee Distribution (2010-2019)]
[Chart: Distribution of the Top 5 Assignees Over the Years (2010-2019): Samsung, Tianjin University, CelePixel, Qualcomm, Intel]
[Chart: Filing Trend for Startups, by earliest priority year, with callouts noting Prophesee's filing on a pixel circuit and that CelePixel filed maximum patents in 2018]
[Chart: Distribution of Startups in the Patent Trend]
PATENTING ACTIVITIES BY STARTUPS
Prophesee first filed its patent on the pixel circuit in 2013. The filing trend has been increasing since then, and other startups are coming into the picture. Prominent event-based camera startups include CelePixel, Insightness, and Qelzal. These startups have mainly filed patents for event-based sensor modules or pixel arrays. Apart from these, the other highlights from the patent trend of these companies include:
•	 Prophesee's focus on -------------------------------------------------------
•	 CelePixel's interest in ---------------------------------------------------------------------------------------------------------
•	 Qelzal's focus on --------------------------------------------------
•	 Insightness' emphasis on 3D ICs and an in-pixel subtraction method for a high signal-to-noise ratio.

Apart from these event-camera startups, several new companies are also filing patents in the domain. XXX, a MEMS-focused startup, owns a patent publication on a 3D detection method using event-based cameras. XXXX is another startup focusing on localization problems for self-driving cars. The company has also filed a patent publication on vehicle localization using event-based cameras. XXXX, a Chinese computer vision startup, has also filed a patent on event processing in a camera system.
PATENT TREND FOCUSED ON KEY CHALLENGES

Challenge 1: Accuracy in Optical Flow Calculation

Overview
Current optical flow algorithms are complex and computationally intensive, and they were designed for conventional frame-based cameras. New optical flow techniques need to be developed for event-based cameras, which are driven by brightness changes. Additionally, for fast-moving objects, the pixels generated by the events cannot provide adequate information for calculating optical flow, specifically for finer structures.(3)(4)

Entities Targeting the Challenge
•	The University of Zurich has filed a patent related to a block matching algorithm for optical flow.(5)
•	CelePixel has launched a DVS product based on optical flow computations, detailed in the startup section of the report.

Challenge 2: Background Noise

Overview
The inherent asynchronous nature of DVS adds noise to the address-event flow of data. Factors contributing to this noise include thermal noise, hardware jitter, and fixed-pattern noise. Noise filtering mechanisms are needed to reduce error events and, ultimately, power consumption.(6)(7)

Entities Targeting the Challenge
•	Samsung owns patent publications on de-noising circuits and difference amplifiers with reduced noise for event-based cameras.(8)
•	Under the ECOMODE project, event-driven sensors that eliminate noise have been developed.
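To make the kind of noise filtering targeted above concrete, the following is a minimal sketch (an illustrative assumption, not the approach of any cited patent or of the ECOMODE project) of a common background-activity filter for DVS data: an event is kept only if one of its neighbouring pixels also fired within a short time window.

import numpy as np

def filter_background_activity(events, width, height, dt_us=5000):
    # events: iterable of (t_us, x, y, polarity) tuples, sorted by timestamp.
    # Keeps an event only if one of its 8 neighbours fired within dt_us.
    last_ts = np.full((height, width), -np.inf)   # last event time per pixel
    kept = []
    for t, x, y, p in events:
        y0, y1 = max(0, y - 1), min(height, y + 2)
        x0, x1 = max(0, x - 1), min(width, x + 2)
        window = last_ts[y0:y1, x0:x1].copy()
        window[y - y0, x - x0] = -np.inf          # ignore the firing pixel itself
        if (t - window).min() <= dt_us:           # a neighbour was recently active
            kept.append((t, x, y, p))
        last_ts[y, x] = t
    return kept

Isolated events with no recent spatial support are discarded, which is how thermal and fixed-pattern noise events are typically suppressed before further processing.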
PATENT PUBLICATIONS MAPPED TO AUTOMOTIVE APPLICATIONS

1. Collision Avoidance
Automotive supplier Magna Electronics owns a patent, US10356337B2, on DVS for collision avoidance applications. The invention highlights the use of a vision system with a sensing array comprising event-based, gray-level-transition-sensitive pixels to capture images of the exterior of a vehicle. The camera combines an infrared, near-infrared, and/or visible-wavelength-sensitive (IV-pixel) imager with the gray-level-transition-sensitive DVS pixel imager. The end objective of the invention is to raise an alert to the driver and overlay the detected object on the displayed image, making the driver aware of the detected object while driving the vehicle.(18)

2. Monitoring of Parked Vehicles
Renault has filed a WIPO publication, WO2019158287A1, focused on a method for monitoring the environment of a parked motor vehicle using an asynchronous camera. Conventional cameras are power intensive, and it is therefore not possible to leave them on for several days to monitor a parked vehicle. The invention leverages the low power consumption of the asynchronous camera in a static environment, monitoring while the engine is stopped and the vehicle is parked. The asynchronous cameras scan the vehicle's environment for motion and alert the owner of the vehicle by means of wireless communication when an object is detected in the vehicle's surroundings.(19)
OTHER HIGHLIGHTS

Combination of Frame & Event Systems
Semiconductor companies are moving towards combining frame-based and event-based systems. Qualcomm is addressing the compatibility issue between frame- and event-based processing systems in US10147024B2, which focuses on a method of interfacing the two diverse systems. Intel, on the other hand, is focusing on a hybrid system for motion estimation with higher temporal resolution and efficiency.

Feature Extraction
The next trend is companies filing patent publications on event-based feature extraction for enabling depth imaging. Convolutional layers, spiking neural networks, and other layered network structures have been explored to perform feature extraction operations. Samsung has filed a patent on feature mapping for slow-moving objects using DVS.

Gesture Recognition
Lastly, gesture recognition has also been a trend in event-based cameras. Error detection, data processing and communication, and processing gesture input based on elliptical arcs and rotation direction are just some of the key areas in focus.
KEY TAKEAWAYS
XXXX holds the highest number of patent filings in the event-based camera domain.
Event-based cameras are gaining interest among automotive companies, especially with the
growing focus on autonomous vehicles.
Spin-offs from universities are -------------------------------------------------------------
-------------------------------------------------------
Collaboration with ------------------------------------------------------------------------
-------------------------------
XXXX and XXXX are gearing up for the next-generation event-based vision technologies.
COMPETITIVE LANDSCAPE

[Map: Locations of the profiled companies: Montreal, Canada; Manchester, UK; Zurich, Switzerland; Seoul, South Korea; Tokyo, Japan; Australia; Shanghai, China; California, USA; Paris, France]
Prophesee
Founded in 2014, Prophesee has its headquarters in Paris, France. Formerly known as Chronocam, it was started as a part of the iBionext Startup Studio by Ryad Benosman, Bernard Gilly, Christoph Posch, and Luca Verre. The company is building a strong technology foundation across image sensing, neuromorphic computing, and VLSI design.(22)
Prophesee uses a combinational approach of its patented neuromorphic vision sensor and
AI algorithms to develop a hyper-fast, event-based vision system. This integrated approach
markedly reduces the energy consumption and computational power requirements for
smart machines.(23)
The company has raised USD 68 million in funding from a host of key investors including Intel Capital, iBionext Network, Supernova Invest, Renault, 360 Capital Partners, and Robert Bosch Venture Capital. Prophesee has received over thirty global recognitions, the most recent being the Innovation Award from the Italian Government in June 2019.(24)
It is also a
member of the Embedded Vision Alliance, a global industry group dedicated to developing
computer vision technologies.(25)
TECHNOLOGY STACK

Prophesee uses a combination of bio-inspired CMOS sensor design and AI algorithms.(26) It leverages ATIS, which independently samples different parts of the scene at different rates depending on the optical changes taking place. Every object in the field of view (FOV) is counted by triggering individual pixels as the object moves across the FOV.
The output of these image sensors is a continuous stream of pixel data. Each pixel autonomously controls its sampling and transmits without clock inputs, depending on the visual scene. The event-based vision sensor possesses a dynamic range greater than 120 dB.(27)
This technique eliminates redundancy in the data and achieves pixel acquisition and readout times in microseconds. Since the temporal resolution of the image sampling process is independent of a fixed driving clock, data volume depends on the dynamics of the scene. The whole embedded system is based on processing algorithms, which detect and track motion and segment data into groups of spatiotemporal events.(28)
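The segmentation of an event stream into groups of spatiotemporal events described above can be illustrated with a small sketch; this is a toy stand-in (fixed time windows plus connected components), not Prophesee's proprietary pipeline, and the window length is an arbitrary assumption.

import numpy as np
from scipy import ndimage

def spatiotemporal_groups(events, width, height, window_us=10_000):
    # events: array-like of (t_us, x, y, polarity) rows, sorted by timestamp.
    # Slices the stream into fixed windows and labels connected blobs of
    # active pixels in each slice. Returns (window_start_us, labels, n_blobs).
    events = np.asarray(events)
    groups = []
    t0, t_end = events[0, 0], events[-1, 0]
    while t0 <= t_end:
        in_window = (events[:, 0] >= t0) & (events[:, 0] < t0 + window_us)
        activity = np.zeros((height, width), dtype=bool)
        xs = events[in_window, 1].astype(int)
        ys = events[in_window, 2].astype(int)
        activity[ys, xs] = True
        labels, n_blobs = ndimage.label(activity)   # 4-connected components
        groups.append((t0, labels, n_blobs))
        t0 += window_us
    return groups

Each labelled blob approximates one moving object within its time slice and can then be handed to a tracker or classifier.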
SOLUTIONS AND OFFERINGS
Prophesee, along with Imago Technologies,
has developed the IMAGO VisionCam. In this,
Prophesee has embedded its event-based
vision sensor and AI algorithms. Prophesee’s
approach allows the VisionCam to provide
a significant counting speed and an accuracy greater than 90%.(29)
Further, Prophesee revealed its third
generation CIS Metavision Sensor, available
for Industry 4.0 applications. The 640×480 VGA event-based sensor has a dynamic range greater than 120 dB and a maximum bandwidth of 66 mega-events per second (Meps). It offers minimal 1-pixel amplitude detection and a throughput of 1,000 objects
per second. The product can tackle
machine vision challenges with better
results in speed, precision, fast counting,
vibration measurement, and kinematic
monitoring. Such high-performance devices
can enable smarter and more robust
predictive maintenance strategies.
Prophesee has also introduced its event-
based reference system – Onboard, a
combination of a VGA resolution camera
integrated with Prophesee’s ASIC,
Qualcomm’s quad-core Snapdragon
processor, and 6-axis inertial measurement
unit. Onboard is compatible with
connectivity units including Ethernet, USB,
HDMI, and Wi-Fi. The reference system is
intended to be a guide for implementing
event-based vision techniques for various
use cases. The complete architecture
runs on a Linux operating system, which executes Prophesee's proprietary AI algorithms. Onboard's embedded processing capabilities, including vibration measurement, high-speed counting, and area monitoring, make it ideal for Industry 4.0 applications.(30)
Additionally, the company has also
introduced a Robot Operating System
(ROS) driver for its event-based vision
solutions. The software libraries will enhance
performance in terms of speed, robustness, and efficiency for SLAM, navigation, obstacle detection, and other bio-inspired robotic operations.(31)
Prophesee also offers N-CARS, a detection
dataset for vehicle classification released in 2019. The dataset was collected from various driving sessions in which an ATIS camera was mounted behind the windshield of a car.(32)
The company also
offers datasets for corner detection.(33)
[Images: Metavision VisionCam; Metavision Sensor]
PARTNERSHIPS AND ALLIANCES
Huawei has partnered with Prophesee to leverage its technology in the development of AR/
VR and IoT programs.(34)
Prophesee also has partnerships with Renault-Nissan, Intel, DARPA, Bosch, and the Institut de la Vision in Paris across various domains.
Prophesee has collaborated with Imago
Technologies for developing industrial
grade event-based vision system
prototypes.(35)
Prophesee has partnered with GenSight
Biologics to restore vision to the blind using
event-based sensors and optogenetics.(36)
The two companies shared the ISCAS Best
Demo Award for 2018.
KEY PERSONNEL

Christoph Posch, Co-founder and CTO
Christoph has a doctorate from the Technical University of Vienna. As a senior scientist at the Austrian Institute of Technology, he worked on analog signal processing, neuromorphic engineering, and CMOS VLSI design. Prior to co-founding Prophesee, Christoph served as a Principal Investigator at the Institut de la Vision, France. He was also a co-founder and Scientific Advisor of Pixium Vision S.A., where he worked on developing bionic vision solutions.(37) Christoph's areas of interest include the development of vision techniques, and he has several patents assigned in this area.(38) Christoph has also authored several research papers on the ATLAS experiment.(39)
Ryad Benosman, Co-founder
Ryad has a keen interest in robotic navigation, object recognition, neuromorphic vision and computation, artificial retinas, and event-based sensing and computing. A PhD in robotics from the University of Pierre and Marie Curie, Sorbonne, France,(40) he has been developing event-based SLAM technology using neuromorphic time-based techniques.(41) Ryad is currently a Professor of Ophthalmology at the University of Pittsburgh School of Medicine. He is also an Adjunct Faculty Member at the Robotics Institute of Carnegie Mellon University. An author with over 100 publications and 9 patents to his credit, Ryad is also a co-founder of GrAI Matter Labs and several other companies. In addition, Ryad continues to be an active participant in the Telluride and CapoCaccia workshops of the neuromorphic community.(42)

Luca Verre, Co-founder and CEO
Luca, an MBA from INSEAD, has a background in the automotive and electronics industries. He holds a double honors degree in Physics, Electronics, and Industrial Engineering from the Polytechnic University of Milan and Ecole Centrale.(43) Prior to founding Prophesee, Luca worked as an engineer at Toyota Motor Corporation and Altis Semiconductor, and as a strategy and business development manager at Schneider Electric. Luca was also a Research Assistant in Photonics at Imperial College London during 2016.(44)
FUTURE ROADMAP
Prophesee's bio-inspired technology aims to speed up data acquisition while reducing the volume of data to be processed by exploring the possibility of applying its event-driven approach to other sensors such as LiDAR and RADAR.(45)
The company has joined the IRT Nanoelec
consortium to help broaden the field of
potential applications for 3D hybrid wafer-
to-wafer bonding with fine interconnect
pitches. The goal of the program is to
develop technologies for new uses and new
applications using wafer-to-wafer direct
hybrid technology.(46)
Prophesee is targeting expansion in the US
and Asian markets. Recently, the company
signed five partnerships with global
distributors and is targeting the production
of its vision sensors at a wider scale. The
company will also aim at enhancing parameters such as light intensity, color contrast, pre-processing of movements, and depth.(47)
In October 2019, Prophesee raised USD 28
million from a funding round led by the
European Investment Bank. This will help
Prophesee successfully launch its first off
the shelf event-based sensor, which was
previously only available through product
integrations like the one with Imago
Technologies. In addition, Prophesee can
now address the broader market adoption
requirements moving past the industrial
sector, catering to automotive and
consumer markets including autonomous
driving, ADAS, VR/AR, and IoT.(48)
LIMITATIONS

Although event-based cameras are efficient compared to the traditional frame-based approach, they may be limited in terms of dynamic range and power efficiency for certain fast processing operations.(49)
Prophesee may also face LiDAR resolution limits, which act as a barrier to expanding event-based machine vision systems that use LiDARs and RADARs.
WHY PROPHESEE?

Prophesee is a key company in the event-
based vision processing space, with its
Metavision sensors already integrated with
camera products. In addition, the launch of
its industrial-grade neuromorphic sensors in
the commercial market has opened doors
for camera technology providers across
verticals to harness the benefits of event-
based vision including area monitoring,
high-speed counting, real time inspection,
and high frequency vibration measurement.
Strong investor backing will help accelerate
the company’s commercialization plans for
this sensor technology.
Further, Prophesee’s patented event-based
technology unleashes new machine vision
capabilities for several industries including
automotive, healthcare, robotics, industrial
automation, and consumer markets.
Leveraging the advanced processing
algorithms and event-based signal
processing techniques of Prophesee, visual
data acquisition is becoming faster and sparser, which in turn results in lower power
consumption, lower transmission bandwidth
requirements, and minimal memory
requirements.
Additionally, Prophesee’s flexible ROS
framework is encouraging collaborative
robotics software development. Such
frameworks help simplify complex tasks
across various domains.(50)
The event-based
approach also enables higher dynamic
ranges usually related to high-speed vision.
These systems are therefore emerging
as an asset for systems-based predictive
maintenance.
Prophesee is also working with Intel and IBM to combine AI with brain-inspired (neuromorphic) processing, targeting high speed, low latency, and power efficiency.(51)
XXXX
XXXX
XXXX(131)
XXXX(129)
XXXX
XXXX
XXXX
XXXX
XXXX
XXXX
2018
-
Announced
in 2018
2016
2018
Still in R&D
phase
9
15
1.5
18
9.8
4.95
90
120
96
>120
>120
90
640 x 480
2 x 256
2560 x
1536
768 x 640
1280 x 800
1280 x 960
-
20
-
200
100
-
100%
-
-
9%
-
-
Not Listed
Not Listed
Not
Released
Not Listed
Not Listed
Not
Released
Benchmarking of the Commercialized/In-pipeline Event-based Vision Products
Company Dynamic
Range (dB)
Released Resolution
XXXX124)
Prophesee
XXXX(126)
XXXX
Metavision
CMOS Image
Sensor
XXXX
XXXX
XXXX
2018
2019
-
2018
Expected
in 2020
18.5
15
18.5
13
7.2
120
>120
120
>100
>100
346 x 260
640 x 480
240 x 180
320 × 262
60 x 480 for
VGA, 1024 x
768 for XGA
12
66
12
>50
>80
22%
-
22%
22%
~100%
~6,600(125)
~4,000
~3,000
6,000
Not
Released
Product Bandwidth
(MEPS)
Pixel Size
(µm)
Fill Factor
Cost
(USD)
KEY TAKEAWAYS
A majority of the startups active in the domain are focusing on ------------------------
-------------------------------------------------------------
Samsung’s technology stack is ------------------------------------------------------
----------------------------------
XXXX and XXXX are developing stacked technologies for pixel circuits.
XXXX has entered into the maximum number of partnerships and plans to use the recent
funding amount to move towards a broader market adoption of event-based cameras
with the focus on Industry 4.0, automotive, and IoT.
The European region has a greater number of startups in comparison to the USA, Asia,
and others.
PROJECTS
DVS technology research is in focus, and several projects are currently underway across various institutes, especially in the area of ready-to-deploy models. These initiatives approach DVS with various innovative methods and architectures and are significant for a range of applications including autonomous vehicles, robotics, smart cities, and security. Developing improved architectures can advance the current state of event-based cameras.
PROJECT 1
Ultra-Low Power Event-Based Camera (ULPEC)
Principal Investigator: Dr. Sylvain Saïghi(133)

Start Date: 1st January 2017
End Date: 31st December 2020(132)
Status: Active
Region: Europe
Funding:
•	 The project is funded under the Smart System Integration call of the European Commission Horizon 2020 research and innovation program, under grant agreement No 732642
•	 Maximum grant amount: USD 5.3 million

Overview:
ULPEC aims at developing advanced vision cameras with ultra-low power requirements and ultra-low latency. It connects the neuromorphic camera to a high-speed, ultra-low-power asynchronous visual data processing system and spiking neural networks with memristive synapses. ULPEC caters to autonomous driving and the recognition of traffic events, where power budget and heat dissipation are prime concerns. This combination of bio-inspired optical sensors and neural networks helps enhance the cameras present in autonomous vehicles, including drones.
Low latency further addresses the security concerns in autonomous driving. Bosch, a project partner, is investigating these self-driving applications. The ULPEC device is targeting technology readiness level 4 (TRL 4).
Apart from the automotive industry, the project also focuses on advanced data processing in
hardware-native neural networks. It also focuses on developing interoperability for integration
in “systems of systems”. Additionally, memristive neural networks and novel vision systems
targeting perception tasks in computer vision offer the potential to explore related areas
like robotics, unmanned military transportation, health care and bio-medicals, security, and
environmental monitoring.(134)
The project provides participants such as Prophesee, Toshiba Samsung Storage Technology Corporation (TSST), and Bosch with an opportunity to increase their market share and competitiveness.
ULPEC recently published “Speed Invariant Time Surface for Learning to Detect Corner Points
with Event-Based Cameras”. The research applies a Speed Invariant Time Surface to corner detection in scene data from event-based cameras.(135)
Furthermore,
the ULPEC team from CNRS-Thales and the University of Bordeaux featured in Nature
Communications for its research on artificial synapses, which can learn autonomously.
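A basic (non-speed-invariant) time surface of the kind that the corner detection publication builds on can be sketched in a few lines; the exponential decay constant here is an illustrative assumption, and the speed-invariant variant in the ULPEC paper normalizes the surface differently.

import numpy as np

def time_surface(events, width, height, t_now_us, tau_us=50_000):
    # events: iterable of (t_us, x, y, polarity) with t_us <= t_now_us.
    # Returns an (H x W) map in [0, 1]; recently active pixels are close to 1.
    last_ts = np.full((height, width), -np.inf)
    for t, x, y, _ in events:
        last_ts[y, x] = max(last_ts[y, x], t)
    return np.exp((last_ts - t_now_us) / tau_us)  # exp(-(t_now - t_last) / tau)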
RESEARCH LABORATORIES
LAB 1: ROBOTICS AND PERCEPTION GROUP
The Robotics and Perception Group
was started in 2012 and is a part of
the Department of Informatics at the
University of Zurich, and the Institute of
Neuroinformatics, a joint institute affiliated
with both the University of Zurich and ETH
Zurich.(154)
This group is developing DVS that
can instantaneously react to the changes
in light. Their objective is the creation of
autonomous machines that can navigate
using only onboard cameras, without
depending on external infrastructure (GPS
or motion capture systems).
The DVS design follows the neuromorphic
approach of emulating parameters of
biological retinas. It observes challenges such as pixel size, low dynamic range, motion blur, and high latencies.(155) The Robotics and Perception Group is currently exploring different aspects of event-based cameras. One of the group's research efforts has released the Color Event Camera Dataset (CED), which contains some 50 minutes of footage with both color frames and events. The dataset was recorded with a Color DAVIS346 and is intended to accelerate research in the field.
Additionally, the research group is working on an Open Event Camera Simulator that can also simulate standard cameras and an IMU. Its code is open source and it works in real time.(156) It is a useful prototyping tool for visual-odometry and event-based feature tracking algorithms.(157)
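The core idea behind such a simulator can be sketched with the standard per-pixel contrast-threshold model often used to synthesize events from ordinary video; this is an illustrative simplification, not the RPG simulator's actual implementation (which, among other things, interpolates timestamps between frames).

import numpy as np

def events_from_frames(frames, timestamps_us, contrast_threshold=0.2):
    # frames: sequence of grayscale frames (H x W arrays with values > 0).
    # timestamps_us: frame timestamps in microseconds.
    # Returns a list of (t_us, x, y, polarity) events, polarity in {+1, -1}.
    # Simplification: at most one event per pixel per frame interval, all
    # stamped with the frame timestamp.
    log_ref = np.log(np.asarray(frames[0], dtype=np.float64) + 1e-6)
    events = []
    for frame, t in zip(frames[1:], timestamps_us[1:]):
        log_i = np.log(np.asarray(frame, dtype=np.float64) + 1e-6)
        diff = log_i - log_ref
        for polarity in (1, -1):
            ys, xs = np.where(polarity * diff >= contrast_threshold)
            events.extend((int(t), int(x), int(y), polarity) for x, y in zip(xs, ys))
            # Move the reference level of the pixels that just fired.
            log_ref[ys, xs] += polarity * contrast_threshold
        # Pixels that did not cross the threshold keep their reference level,
        # so slow drifts accumulate until they eventually trigger an event.
    events.sort(key=lambda e: e[0])
    return events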
The research group has also developed datasets recorded with an event-based camera for high-speed robotics. The datasets include image intensities, inertial measurements, and ground truth from a motion-capture system, and were generated using a DAVIS240C from iniLabs. The research group is
also working on other aspects such as
photometric mapping, 6-DoF camera
tracking, etc.(158)
The present research topics include:
•	 Event-based motion segmentation by motion compensation
•	 Comprehensive learning of representations for asynchronous event-based data
•	 High speed and high dynamic range video with an event camera
•	 Event-based, direct camera tracking from a photometric 3D map using nonlinear optimization
•	 Semi-dense 3D reconstruction with a stereo event camera
•	 Visual and inertial state estimation and mapping
•	 Collaboration of aerial and ground robots
•	 Autonomous navigation of flying robots
•	 Event-based vision for agile flight(159)
•	 Event-based vision meets deep learning on steering prediction for self-driving cars
Funding(160)
KEY TAKEAWAYS

The projects are from Europe and have been funded by bodies such as the XXXX, XXXX,
and XXXX.
Target application areas of the projects undertaken include a focus on applications such
as ----------------------------------------------------------
------------------------------------------------------- have been the key areas of
interest in the projects.
XXXX, XXXX, XXXX, and XXXX are the leading automotive companies that have participated in
the research efforts.
As a part of the research, event-based vision systems have been implemented in ---------
------------------------------
Algorithms for event-based cameras are a ------------
RESEARCH INSTITUTES
FOCUSING ON EVENT CAMERAS
1. An Improved Approach for Visualizing Dynamic Vision Sensor and its Video De-noising (Xidian University)
The publication proposes an approach of overlapped events for improved visualization in dynamic vision sensors. Additionally, the paper focuses on shared dictionaries for video denoising. The proposed framework allows the events to be visualized at high speed and with less noise.(172)

2. Event-based Visual Inertial Odometry (GRASP Laboratory, University of Pennsylvania)
The research describes an asynchronous algorithm that can successfully track camera motion on event-camera datasets. It fuses an event-based tracking algorithm with an IMU to ensure accurate metric tracking of the camera's 6-DoF pose. The algorithm selects features in the image plane and tracks spatiotemporal windows within the event streams. An Extended Kalman Filter with a structureless measurement model is used to fuse the feature tracks with the output of the IMU. The study proposes a data association scheme in which multiple spatially neighboring events are associated in a soft manner with one feature, whose motion is computed using the weighted event positions.(171)

3. Event-based Face Detection and Tracking in the Blink of an Eye (CNRS and Sorbonne University)
This paper is a collaboration with the University of Pittsburgh. The higher temporal resolution of an event-based camera is used for detecting eye blinks. Additionally, the organizations have released annotated data for future work.(173)
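The "soft" data association described in the visual inertial odometry publication (item 2 above) can be illustrated with a short sketch: each tracked feature is moved to the weighted mean of the event positions in its neighbourhood. The Gaussian weighting and the search radius are illustrative choices, not the exact scheme of the cited paper.

import numpy as np

def soft_feature_update(feature_xy, event_xy, radius=5.0, sigma=2.0):
    # feature_xy: current (x, y) estimate of a tracked feature.
    # event_xy: (N, 2) array of event coordinates in the current time window.
    # Events within `radius` pixels contribute with Gaussian weights; the
    # feature moves to their weighted mean, or stays put if none are close.
    feature_xy = np.asarray(feature_xy, dtype=np.float64)
    event_xy = np.asarray(event_xy, dtype=np.float64)
    dist = np.linalg.norm(event_xy - feature_xy, axis=1)
    near = dist < radius
    if not np.any(near):
        return feature_xy
    weights = np.exp(-0.5 * (dist[near] / sigma) ** 2)
    return (weights[:, None] * event_xy[near]).sum(axis=0) / weights.sum()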
INSIGHTS AND RECOMMENDATIONS
As an emerging technology, event-based vision has the potential to transform traditional vision sensing architectures. The technology is currently in the early stages of commercialization and awaits adoption in real-life applications across sectors.
The competitive scenario of the technology is restricted to a few startups. Samsung
is the only major company active in this space. Sony is slowly making an entry with
patents and a product announcement.
Region Analysis
European universities and research institutes are leading
the event-based vision space and have given birth to
startups such as iniVation and Insightness from ETH Zurich,
and Prophesee from the Institut de la Vision, France. Most
of the research-based projects in the domain are --------
--------------------------------------------
-----------------------------------------------------
-----------------------------------------------------
-----------------------------------------------------
Opportunities for the Automotive Industry

The autonomous vehicle market is going to witness a huge
transformation by combining the potential benefits of
existing sensors with intelligent event-based concepts and
algorithms.
Automotive companies should consider -----------------
-----------------------------------------------------
-----------------------------------------------------
-----------------------------------------------------
-----------------------------------------------------
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--------------------------------------------------------
------------------------------------------------------------------------
--------------------------------------------------------------------------------
---------------------------------------------------------------------------------
---------------------------------------------------------------------------------
---------------------------------------------------------------------------------
Additionally, funding projects in the domain can help automotive companies to become
a part of the event-based camera ecosystem. Companies such as XXXX, XXXX, and XXXX
have already become part of projects with research establishments like XXXX or XXXX.
LiDAR in combination with event-based camera could be -------------------------------
-----------------------------------------------------------------------------------
----------------------------------------------. XXXX is one of the companies that aims
to use its event-driven approach to sensors such as LiDARs and RADARs.
Collaboration and Investment Opportunities
Companies across sectors can consider partnership
opportunities with startups focusing on event-based
cameras and other enablers who are working on --------
----------------------------------------------------
-------. A few startups like XXXX and XXXX have released
their own event-based datasets. These steps are aiming to
address ---------------------------------------------
-----------------------------------------------------
-----.
The event-based vision domain has a greater number of universities working on R&D stages
than it has companies working on a commercial scale. This increases -------------------
----------------------------------------------------------------------------------
--------------------------------------------. XXXX, XXXX, XXXX, XXXX, XXXX, XXXX, XXXX,
and XXXX are some of the proactive participants in the field of asynchronous vision systems.
CONCLUDING REMARKS
It is highly likely that event-based vision systems will have promising implications in the future
with applications across dynamic environments. While the early generation of the technology
started with helping the visually impaired, over the next 2-3 years it is estimated to have PoCs
for self-driving cars, factory automation devices, and VR/AR applications. New companies,
partnerships, and investments will drive the development and commercialization of the
event-based camera systems. The different benefits of this new concept over the traditional
camera system make it suitable for object tracking, pose estimation, 3D reconstruction of
scenes, depth estimation, feature tracking and other perception tasks that are the major
requirements for the connected systems of tomorrow.
Vision systems are getting adopted on a large scale at both the consumer and enterprise
level. Event-based approach has the potential to -----------------------------------
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
-------------------------------------
The patent filings in the domain -----------------------------------------------
---------------------------------------------- can potentially contribute to the
technology adoption.
Many research centers have emerged with -------------------------------------
-----------------------------------------------------------------------------
------------------------------------------------------- onboard for leading the
research operations.
The event-based camera companies started with a back-side illumination process for fabricating pixel circuits and are now moving towards new 3D/stacked IC technologies that allow shrinkage of pixel size and independent control of wafer layers. XXXX, XXXX, and XXXX are some of the companies taking steps in this direction.
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
---------------
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
----------
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
--------------------
ACRONYMS
Abbreviation: Explanation
AI: Artificial Intelligence
IoT: Internet of Things
FPS: Frames Per Second
DVS: Dynamic Vision Sensor
ATIS: Asynchronous Time-based Image Sensor
DAVIS: Dynamic and Active Pixel Vision Sensor
CMOS: Complementary Metal Oxide Semiconductor
HDR: High Dynamic Range
APS: Active Pixel Sensor
AR: Augmented Reality
VR: Virtual Reality
ROI: Region of Interest
SPAD: Single Photon Avalanche Diode
AER: Address Event Representation
MIMCAP: Metal-Insulator-Metal Capacitor
Cu-Cu: Copper-Copper
ROS: Robot Operating System
jAER: Java Address Event Representation
YARP: Yet Another Robot Platform
TRL: Technology Readiness Level
G-AER: Group Address Event Representation
IMU: Inertial Measurement Unit
N-MNIST: Neuromorphic Modified National Institute of Standards and Technology database
N-Caltech 101: Neuromorphic California Institute of Technology dataset
N-CARS: Neuromorphic CARS dataset
vSLAM: Visual Simultaneous Localization and Mapping
CIS: CMOS Image Sensor
EDC: Event-Driven Compressive sensing
HMI: Human Machine Interface
DLS: Dual Line Sensor
IoSiRe: Internet of Silicon Retinas
LiDAR: Light Detection and Ranging
RADAR: Radio Detection and Ranging
GPS: Global Positioning System
Meps: Mega-events Per Second
REFERENCES
1.	 http://rpg.ifi.uzh.ch/docs/scaramuzza/2019.07.11_Scaramuzza_Event_Cameras_Tutorial.pdf
2.	 http://rpg.ifi.uzh.ch/docs/scaramuzza/2019.07.11_Scaramuzza_Event_Cameras_Tutorial.pdf
3.	 https://patents.google.com/patent/CN109377516A/en?oq=CN109377516A
4.	 https://patents.google.com/patent/CN108961318A/en?oq=CN108961318A
5.	 https://patents.google.com/patent/WO2018219931A1/en?oq=WO2018219931A1
6.	 https://patents.google.com/patent/CN109726356A/en?oq=CN109726356A
7.	 https://patents.google.com/patent/US9739660B2/en?oq=US9739660B2
8.	 https://patents.google.com/patent/US9739660B2/en?oq=US9739660B2
9.	 https://patents.google.com/patent/US20170094249A1/en?oq=US20170094249A1
10.	 https://patents.google.com/patent/US20160094814A1/en?oq=US10237481B2
11.	 https://patents.google.com/patent/US10295669B2/en?oq=US10295669B2
12.	 https://patents.google.com/patent/US20170094249A1/en?oq=US20170094249A1
13.	 https://worldwide.espacenet.com/publicationDetails/biblio?CC=CN&NR=108038888A&KC=A&FT=D
14.	 https://patents.google.com/patent/US20180308253A1/en?oq=US20180308253A1
15.	 https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5b61e1831&appId=PPGMS
16.	 https://patents.google.com/patent/US9001220B2/en?oq=US9001220B2
17.	 https://patents.google.com/patent/WO2018152214A1/en?oq=WO2018152214A1
18.	 https://patents.google.com/patent/US10356337B2/en?oq=US10356337B2
19.	 https://patents.google.com/patent/WO2019158287A1/en?oq=WO2019158287A1
20.	 https://patents.google.com/patent/US8825306B2/en?oq=US8825306B2
21.	 https://patents.google.com/patent/CN106127800A/en?oq=CN106127800A
22.	 https://www.prophesee.ai/wp-content/uploads/2018/12/Prophesee-Grenoble-Press-Release-EN.pdf
23.	 https://www.prophesee.ai/
24.	 https://www.prophesee.ai/recognition/
25.	 https://www.prophesee.ai/2019/04/30/prophesee-joins-embedded-vision-alliance__trashed/
26.	 https://www.prophesee.ai/2016/11/28/chronocam-receives-15-million-funding-led-by-intel/
27.	 https://www.engineering.com/AdvancedManufacturing/ArticleID/17968/Machine-Vision-System-Can-Track-Vibrations-for-Production-Monitoring.aspx
28.	 https://www.imveurope.com/analysis-opinion/innovative-imaging-tech-shortlisted-vision-stuttgart-award
29.	 https://www.prophesee.ai/2019/02/14/imago-prophesee/
About Netscribes
Netscribes is a global market intelligence and content services provider that helps
corporations achieve strategic objectives through a wide range of offerings. Our
solutions rely on a unique combination of qualitative and quantitative primary
research, secondary/desk research, social media analytics, and IP research.
For more than 15 years, we have helped our clients across a range of industries,
including technology, financial services, healthcare, retail, and CPG. Fortune 500
companies, as well as small- to mid-size firms, have benefited from our partnership
with relevant market and competitive insights to drive higher growth, faster
customer acquisition, and a sustainable edge in their business.
APPENDIX
DISCLAIMER
This report is prepared by Netscribes (India) Private Limited (”Netscribes”), a market
intelligence and content service provider.
The content of this report is developed in accordance with Netscribes’ professional standards.
Accordingly, the information provided herein has been obtained from sources which are
reasonably believed to be reliable. All information provided in this report is on an “as-is” and
an “as-available” basis, and no representations are made about the completeness, veracity,
reliability, accuracy, or suitability of its content for any purpose whatsoever. All statements of
opinion and all projections, forecasts, or statements relating to expectations regarding future
events represent ROGM’s own assessment and interpretation of information available to it. All
liabilities, however arising, in each of the foregoing respects are expressly disclaimed.
This report is intended for general information purposes only. This report does not constitute
an offer to sell or issue securities, an invitation to purchase or subscribe for securities, or a
recommendation to purchase, hold, sell, or abstain from purchasing, any securities. This
report is not intended to be used as a basis for making an investment in securities. This report
does not form a fiduciary relationship or constitute investment advice. Nothing in this report
constitutes legal advice.
The information and opinions contained in this report are provided as of the date of the report
and are subject to change. Reports may or may not be revised in the future. Any liability to
revise any out-of-date report, or to inform recipients about an updated version of such report,
is expressly disclaimed.
A bonafide recipient is hereby granted a worldwide, royalty-free, enterprise-wide limited
license to use the content of this report, subject to the condition that any citation from this
report is properly referenced and credited to Research On Global Markets. Nothing herein
conveys to the recipients, by implication or by way of estoppel, any intellectual property rights
in the report (other than the foregoing limited license) or impairs Netscribes’ intellectual
property rights, including but not limited to any rights available to Netscribes under any law or
contract.
To the maximum extent permitted by law, all liabilities in respect of this report and any related
material is expressly disclaimed. Netscribes does not assume any liability or duty of care for
any consequences of any person acting, or refraining to act, by placing reliance on the basis
of information contained in this report.
All disputes and claims arising in relation to this report will be submitted to arbitration, which
shall be held in Mumbai, India under the Indian Arbitration and Conciliation Act. The exclusive
jurisdiction of the courts in Mumbai, India, applies to all disputes concerning this report and
the interpretation of these terms, and the same shall be governed by and construed in
accordance with Indian law without reference to the principles of conflict of laws.
GET IN TOUCH WITH US
Singapore
10 Dover Rise, #20-11,
Heritage View,
Singapore - 138680
Phone: +65 31580712
Gurugram
806, 8th Floor, Tower B,
Unitech Cyber Park, Sector 39,
Gurugram - 122001, Haryana,
India +91-124-491-4800
Mumbai
Office no.504, 5th Floor,
Lodha Supremus, Lower Parel
Mumbai 400013, Maharashtra,
India +91-22-4098-7600
New York
41 East, 11th Street,
New York NY10003,
USA +1-917-885-5983
Kolkata
3rd Floor, Saberwal House 55B
Mirza Ghalib Street,
Kolkata - 700 016, West Bengal,
India +91-33-4027-6200
US toll free: 1-888-448-4309 India: +91-22-4098-7690 subscriptions@netscribes.com

  • 6.
    For an object movingin circles Typical Motion Y Y X Time Time No Motion Rapid Motion Event-Based Vision Systems 6 Event-based cameras therefore, offer significant potential for driving the vision systems in autonomous vehicles (offering low latency, HDR object detection, and low memory storage), robotics, IoT (for low-power, always on devices), augmented reality/virtual reality (AR/VR) (low-power and low-latency tracking), and other industrial automation use cases.(1) The comparison below highlights the differences between the output from frame-based and event-based approaches.(2) Frame-based Vision • Records data at the same time • Pre-defined architecture • Architecture acquires large volumes of data causing undersampling • Highly redundant • Continuous frames increase latency and lower dynamic range Event-based Vision • Works asynchronously • Operates on the brightness intensities and acquires only scene dynamics • Minimal redundancy • Higher dynamic range and low latencies • Intelligent activation of pixels Event-based vision technologies have been in development for over 15 years now. They have been evolving for most parts of the 2000s and have started to pick up the pace during the current decade. The scale of opportunity is significant in the coming years. Going forward, bio-inspired and event-based vision sensors will be a natural choice for applications that require highly efficient processing.
  • 7.
    Event-Based Vision Systems7 The need for developing a better alternative to frame-based solutions has always been on the cards and event-based vision techniques perfectly fit the bill. The advantages offered by these technologies are immense, but there are quite a few challenges in the implementation, given the complex nature of the operation. The event-based vision systems community is still limited to a few startups and fewer established companies. However, certain academic research groups, universities, and research labs are actively pursuing event-based vision systems. These R&D groups dominate the technology area, and of late, they have started working in collaboration to fill gaps and bundled solutions for accelerating the development cycles. Some of the prominent research people from these groups are coming forward and starting new companies with the aim to deliver event-based vision solutions at a commercial level. From the commercialization point of view, event-based camera technology is still at a nascent stage with some prototypes and proofs of concept implemented for various use cases. A few companies have also released their event-based sensors and related camera products commercially. However, the technology is not yet mature and has still to overcome some of the key hurdles that are keeping it away from achieving full commercial status. From a technology point of view, neuromorphic cameras have a huge disruption value in the upcoming markets. These cameras will continue to grow out of the small community into a much bigger spectrum of use cases to the point where they become mandatory and start dominating in the decades to come. It is important to understand the innovation and R&D space of event-based cameras for determining the level of readiness in terms of the prominent challenges that the participants are trying to tackle for making the technology commercially viable. Studying the growth of the patenting activities can be expected to deliver a clear picture of the state of innovation and R&D in the technology domain. Event-based cameras are just around the corner, and the focus areas of the patent filings and the distribution of assignees across these areas will set the course for the next revolution in the intelligent vision landscape. 2000-09 Decade 1 2009-19 Decade 2 >2020 Technology gaining acceleration for testing & real-life implementation Rise in the Scope and Opportunity for Event-based Vision Systems
  • 8.
    Event-Based Vision Systems8 The present report is a comprehensive analysis covering patent filings, the startup ecosystem, the activities of reputed research institutes and laboratories, and projects related to the development of event-based vision systems. The term “Event-based Vision System” has been used in the report for event- based cameras, event-based vision sensors, neuromorphic vision sensors, and neuromorphic cameras. The patent analysis was performed using Derwent Innovation, with priority years 2010- 2019 in consideration. Unique patent publications were considered for the study. Different trends related to the patent study were analyzed. The information related to target challenges in event-based vision systems and different scenarios in the automotive industry where the system would be deployed are also included. Secondary research was conducted to profile the different companies constituting the ecosystem. Data related to the companies was collated from the company’s website, datasheets, relevant third-party websites, and videos. Universities and research institutes were also covered to capture the R&D aspect of the technology. METHODOLOGY OF THE STUDY
  • 9.
    Event-Based Vision Systems9 ENTITIES ACTIVE IN THE EVENT-BASED VISION SYSTEM EVENT-BASED CAMERA COMPANIES AUTOMOTIVE COMPANIES SEMICONDUCTOR COMPANIES RESEARCH LABS/INSTITUTES LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGOLOGO LOGO LOGO LOGO LOGOLOGO LOGO LOGO LOGO LOGO LOGO
  • 10.
    Event-Based Vision Systems10 ELECTRONICS TECHNOLOGY GIANTS UNIVERSITIES LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO
  • 11.
    Event-Based Vision Systems11 OTHER ORGANIZATIONS LiDAR LOGO LOGO LOGO LOGO LOGO LOGO LOGOLOGOLOGO
  • 12.
  • 13.
    52 Exploration of neuromorphicengineering has lead to research in vision technologies Event-Based Vision Systems 13 FILING TREND Analysis of patent data from 2010 to 2019 provides several insights into the event-based vision systems landscape: Patent Filing Trend (2010-2019) xxx relevant patents were identified. The patent filings have been increasing every year since 2010, with 2017 representing the peak in the number of patents filed. --------------------------------------------------------------------------------- --------------------------------------------------------------------------------- --------------------------------------- --------------------------------------------------------------------------------- --------------------------------------------------------------------------------- -------------------------------------------------------------- --------------------------------------------------------------------------------- --------------------------------------------------------------------------------- ------------------------------------------ Earliest Priority Year No.ofPatentPublications 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019
  • 14.
    1% 1% 2% 2% 4% 6% 8% 10% 19% 47% Event-Based VisionSystems 14 ASSIGNEE LANDSCAPE XXXX companies hold a 47% share of the total unique patent filings in the event-based camera domain. Out of this, XXXX is the leading filer, with 42% of the patent distribution. Other prominent companies in this category include XXXX, XXXX, XXXX, and XXXX. XXXX hold the second position amongst the patent assignees as the technology is in the R&D stage and has garnered interest from universities and research labs globally. Tianjin University holds the maximum number of patent publications in the category. Other universities include the XXXX, XXXX, XXXX, and XXXX. Startups hold about 10% of all patent filings between 2010-2019. These include entities such as CelePixel, Prophesee, Qelzal, Insightness, XXXX, XXXX, and XXXX. Further, ---------------- --------------------------------------------------------- form 8% of the patent filings related to event-based image sensors. Automotive companies including manufacturers and suppliers have also filed patents for the advantages offered by event-based cameras in the field. These companies include XXXX, XXXX, XXXX, XXXX, XXXX, XXXX, and XXXX. Additionally, technology giants Facebook, Apple, and Microsoft own one ------------------ ---- related to the event-based vision system. Overall, the top five assignees in the space are XXXX, XXXX, XXXX, XXXX, and XXXX. Samsung is the leader with maximum filings during 2015. While XXXX started filing patents related to event-based vision systems as early as 2010, XXXX and XXXX started filing in 2013 and 2014 respectively. On the other hand, XXXX filed maximum patents in 2017, and XXXX in 2018. This suggests that new companies have been entering this space lately and might face difficulties in gaining a competitive edge over XXXX. Types of Assignees
  • 15.
    101 12 1111 11 87 5 5 4 4 4 3 3 54 Event-Based Vision Systems15 Patent Assignee Distribution (2010-2019) Distribution of the Top 5 Assignees Over the Years 0 5 0 5 0 5 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 Samsung Tianjin University Celepixel Qualcomm Intel 20100 5 10 15 20 25 2011 2012 2013 2014 2015 2016 2017 2018 2019 XXXX XXXX XXXX XXXX XXXX
  • 16.
    0 2 4 6 No.ofPublications Prophesee’s filing on pixel circuit Celepixelfiled maximum patents in 2018 Earliest Priority Year 8 10 12 2010 2016 2017 2018 Filing Trend for Startups 2019 1 1 Prophesee 1 1 1 1 4 5 1111 Distribution of Startups in the Patent Trend Event-Based Vision Systems 16 PATENTING ACTIVITIES BY STARTUPS Prophesee first filed its patent on the pixel circuit in 2013. The filing trend has been increasing since then, and other startups are coming into the picture. Prominent event-based camera startups include CelePixel, Insightness, and Qelzal. These startups have majorly filed patents for event-based sensor modules or a pixel array. Apart from these, the other highlights from the patent trend of these companies include: Apart from these event-camera startups, several new companies are also filing patents in the domain. XXX, a MEMS-focused startup owns a patent publication on a 3D detection method using event- based cameras. XXXX is another startup focusing on localization problems for self-driving cars. The company has also filed a patent publication on vehicle localization using event-based cameras. XXXX, a Chinese computer vision startup, has also filed a patent on event processing in a camera system. Prophesee’s focus on ------------------------------------------------------- CelePixel’s interest in ---------------------------------------------------------------- ------------------------- Qelzal’s focus on -------------------------------------------------- Insightness’ emphasis on 3D ICs and in-pixel subtraction method for high signal-to-noise ratio.
  • 17.
    Event-Based Vision Systems17 PATENT TREND FOCUSED ON KEY CHALLENGES Overview Current optical flow algorithms are complex and computationally intensive. These are used in conventional frame-based cameras. There is a requirement for developing new optical flow techniques for event-based cameras as these are driven by brightness. Additionally, in fast-moving objects, pixels generated by the events cannot provide adequate information for calculation of light streams, specifically for the finer ones.(3)(4) Overview The inherent asynchronous nature of DVS adds noise to the address event flow of data. Different factors accounting for noise include thermal noise, hardware jitter, and fixed-pattern noise. There is a need for noise filtering mechanisms to reduce error events and ultimately reduce power consumption.(6)(7) Challenge 1 Accuracy in Optical Flow Calculation Challenge 2 Background Noise Companies Targeting the Challenge Companies Targeting the Challenge Entities Targeting the Challenge Entities Targeting the Challenge • The University of Zurich has filed a patent related to block matching algorithm for optical flow.(5) • CelePixel has launched a DVS product based on optical flow computations detailed in the startup section of the report. • Samsung owns patent publications on de-noising circuits and difference amplifiers with reduced noise for event-based cameras.(8) • Under ECOMODE project event-driven sensors that eliminate noise have been developed.
  • 18.
    Event-Based Vision Systems18 Automotive supplier Magna Electronics owns a patent US10356337B2 on DVS for collision avoidance applications. The invention highlights the use of a vision system with a sensing array comprising event-based gray level transition sensitive pixels to capture images of the exterior of a vehicle. The camera combines an infrared, near infrared and/or visible wavelength sensitive or IV-pixel imager with the gray level transition sensitive DVS pixel imager. The end objective of the invention is to raise an alert to the driver and an overlay at the displayed image for enhancing the display of the detected object. This makes the driver aware of the detected object while driving the vehicle.(18) 1. 2. Renault has filed a WIPO publication WO2019158287A1 focused on a method for monitoring the environment of a parked motor vehicle using an asynchronous camera. Conventional cameras are power intensive and therefore, it is not possible to leave them on for several days for monitoring parked vehicles. The invention leverages the low power ability of the asynchronous camera in a static environment to monitor whether the engine is stopped when the vehicle is parked. The asynchronous cameras scan the vehicle environment for motion detection and alert the owner of the vehicle by means of wireless communication when some object is detected in the vehicle’s surroundings.(19) Collision Avoidance Monitoring of Parked Vehicles PATENT PUBLICATIONS MAPPED TO AUTOMOTIVE APPLICATIONS
  • 19.
    Event-Based Vision Systems19 OTHER HIGHLIGHTS Semiconductor companies are moving towards a trend of combining frame and event- based systems. Qualcomm is focusing on addressing the compatibility issue between frame and event-based processing systems in US10147024B2, where the focus is on a method of interfacing the two diverse systems. On the other hand, Intel is focusing on the hybrid system for motion estimation with a higher temporal resolution and efficiency. LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO LOGO The next trend is where companies are filing patent publications for event-based feature extraction for enabling depth imaging. Convolutional layers, spiking neural networks, and other layered network structures have been explored to perform feature extraction operations. Samsung has filed a patent on feature mapping for slow moving objects using DVS. Lastly, gesture recognition has also been a trend in event-based cameras. Error detection, data processing, and communication and processing gesture input based on elliptical arc and rotation direction are just some of the key areas in focus. Combination of Frame & Event Systems Feature Extraction Gesture Recognition KEY TAKEAWAYS XXXX holds the highest number of patent filings in the event-based camera domain. Event-based cameras are gaining interest among automotive companies, especially with the growing focus on autonomous vehicles. Spin-offs from universities are ------------------------------------------------------------- ------------------------------------------------------- Collaboration with ------------------------------------------------------------------------ ------------------------------- XXXX and XXXX are gearing up for the next-generation event-based vision technologies.
  • 20.
    Event-Based Vision Systems20 COMPETITIVE LANDSCAPE Montreal, Canada Manchester, UK Zurich, Switzerland Seoul, South Korea Tokyo, Japan Australia Shanghai, China California, USA Paris, France
  • 21.
    Event-Based Vision Systems21 Prophesee Founded in 2014, Prophesee has its headquarters in Paris, France. Formerly known as Chronocam, it was started as a part of the iBionext Startup Studio by Ryad Bensosman, Bernard Gilly, Christoph Posch, and Luca Verre. The company is building a strong technology foundation across image sensing, neuromorphic computing, and VLSI design.(22) Prophesee uses a combinational approach of its patented neuromorphic vision sensor and AI algorithms to develop a hyper-fast, event-based vision system. This integrated approach markedly reduces the energy consumption and computational power requirements for smart machines.(23) The company raised a funding of USD 68 million, from a host of key investors including Intel Capital, iBionext Network, Supernova Invest, Renault, 360 Capital partners, and Robert Bosch Venture Capital. Prophesee has received over thirty global recognitions, the Innovation Award from the Italian Government in June 2019 being the most recent.(24) It is also a member of the Embedded Vision Alliance, a global industry group dedicated to developing computer vision technologies.(25) Prophesee uses a combination of bio-inspired CMOS sensor design and AI algorithms.(26) It leverages ATIS, which independently samples different scenes at different rates depending on the optical changes taking place. Every object in the field of view (FOV) is counted by triggering individual pixels, as the object moves by the FOV. The output of these image sensors is a continuous stream of pixel data. Each pixel then autonomously controls the sampling, and transmits without clock inputs, depending on the visual scenes. The event-based vision sensor has an optical format and possesses a dynamic range greater than 120 db.(27) This technique eliminates redundancy in the data and achieves pixel acquisition and readout times in microseconds. Since the temporal resolution of the image sampling process is independent of fixed driving clock pixels, data volume now depends on the dynamic stream of scenes. The whole embedded system is based on processing algorithms, which detect and track motion, and segment data into groups of spatiotemporal events.(28) TECHNOLOGY STACK
  • 22.
    Event-Based Vision Systems22 SOLUTIONS AND OFFERINGS Prophesee, along with Imago Technologies, has developed the IMAGO VisionCam. In this, Prophesee has embedded its event-based vision sensor and AI algorithms. Prophesee’s approach allows the VisionCam to provide a significant counting speed, and an accuracy greater than 90%.(29) Further, Prophesee revealed its third generation CIS Metavision Sensor, available for Industry 4.0 applications. The 640×480 VGA event-based sensor, has a dynamic range greater than 120 dB, and a maximum bandwidth of 66 mega-events per second (Meps). It has a minimal, 1-pixel amplitude detection and a throughput of 1000 objects per second. The product can tackle machine vision challenges with better results in speed, precision, fast counting, vibration measurement, and kinematic monitoring. Such high-performance devices can enable smarter and more robust predictive maintenance strategies. Prophesee has also introduced its event- based reference system – Onboard, a combination of a VGA resolution camera integrated with Prophesee’s ASIC, Qualcomm’s quad-core Snapdragon processor, and 6-axis inertial measurement unit. Onboard is compatible with connectivity units including Ethernet, USB, HDMI, and Wi-Fi. The reference system is intended to be a guide for implementing event-based vision techniques for various use cases. The complete architecture works through a Linux operating system, which tracks its proprietary AI algorithms. Onboard’s embedded processing capabilities including vibration measurement, high-speed counting, and area monitoring, make it ideal for industry 4.0 applications.(30) Additionally, the company has also introduced a Robot Operating System (ROS) driver for its event-based vision solutions. The software libraries will enhance performance in terms of speed, robustness, and efficiency of SLAM; navigation, obstacle detection, and other bio-inspired robotic operations.(31) Prophesee also offers N-CARS, a detection dataset for vehicle classification which was released in 2019. The dataset was collected from various driving sessions where ATIS camera was mounted behind the windshield of a car.(32) The company also offers datasets for corner detection.(33) Metavision VisionCam Metavision Sensor
  • 23.
    Event-Based Vision Systems23 PARTNERSHIPS AND ALLIANCES KEY PERSONNEL Huawei has partnered with Prophesee to leverage its technology in the development of AR/ VR and IoT programs.(34) Prophesee also has partnerships with Renault Nissan, Intel, DARPA, Bosch, and Institute of Vision at Paris in various domains. Prophesee has collaborated with Imago Technologies for developing industrial grade event-based vision system prototypes.(35) Prophesee has partnered with GenSight Biologics to restore vision to the blind using event-based sensors and optogenetics.(36) The two companies shared the ISCAS Best Demo Award for 2018. Christoph has a doctorate from the Technical University of Vienna. As a senior scientist to the Austrian Institute of Technology, he has worked on analog signal processing, neuromorphic engineering, and CMOS VLSI design. Prior to his stint as the co-founder of Prophesee, Christoph Christoph Posch, Co-founder and CTO served as a Principal Investigator in the Institute of Vision, France. He was also a co-founder and Scientific Advisor of Pixium Vision S.A., where he worked on developing bionic vision solutions.(37) Christoph’s areas of interest include development of vision techniques and he has several patents assigned for the same.(38) Christoph has also authored several research papers on the ATLAS experiment.(39)
  • 24.
    Event-Based Vision Systems24 Ryad has a keen interest in robotic navigation, object recognition, neuromorphic vision and computation, artificial retinas, and event-based sensing and computing. A PhD in robotics from the University of Pierre and Marie Curie, Sorbonne, France (40) , he has been developing event-based SLAM technology using neuromorphic time-based techniques.(41) Luca, an MBA from INSEAD, has a background in the automotive and electronics industries. He holds a double honors in Physics, Electronics, and Industrial Engineering from the Polytechnic University of Milan and Ecole Centrale.(43) Prior to founding Prophesee, Luca worked as an engineer at the Ryad Benosman, Co-founder Luca Verre, Co-founder and CEO Ryad is currently a Professor of Ophthalmology in the University of Pittsburgh School of Medicine. He is also an Adjunct Faculty Member at the Robotics Institute of the Carnegie Mellon University. An author with over 100 publications and 9 patents to his credit, Ryad is also a co-founder at GrAI Matter Labs, and several other companies. Besides, Ryad continues to be an active participant of Telluride and CapoCaccia workshops of the neuromorphic community.(42) Toyota Motor Corporation and Altis Semiconductor and as a strategy and business development manager at Schneider Electric. Luca was also a Research Assistant in Photonics at the Imperial College of London during 2016.(44)
  • 25.
    Event-Based Vision Systems25 FUTURE ROADMAP LIMITATIONS Prophesee’s bio-inspired technology aims to speed up data acquisition while reducing data volume processing by exploring the possibility of applying its event-driven approach to other sensors such as LiDAR and RADAR.(45) The company has joined the IRT Nanoelec consortium to help broaden the field of potential applications for 3D hybrid wafer- to-wafer bonding with fine interconnect pitches. The goal of the program is to develop technologies for new uses and new applications using wafer-to-wafer direct hybrid technology.(46) Prophesee is targeting expansion in the US and Asian markets. Recently, the company signed five partnerships with global distributors and is targeting the production of its vision sensors at a wider scale. The company will also aim at enhancing the parameters such as light intensity, color contrasts, pre-process movements, and depths.(47) In October 2019, Prophesee raised USD 28 million from a funding round led by the European Investment Bank. This will help Prophesee successfully launch its first off the shelf event-based sensor, which was previously only available through product integrations like the one with Imago Technologies. In addition, Prophesee can now address the broader market adoption requirements moving past the industrial sector, catering to automotive and consumer markets including autonomous driving, ADAS, VR/AR, and IoT.(48) Although, event-based cameras are efficient as compared to the traditional approach of frame-based networks, they may be limited in terms of dynamic range and power efficiency for certain fast processing operations.(49) Prophesee may also face the limitation of LiDAR’s resolution limit boundaries that act as a barrier in the expansion of event-based machine vision systems using LiDARs and RADARs.
  • 26.
    Prophesee is akey company in the event- based vision processing space, with its Metavision sensors already integrated with camera products. In addition, the launch of its industrial-grade neuromorphic sensors in the commercial market has opened doors for camera technology providers across verticals to harness the benefits of event- based vision including area monitoring, high-speed counting, real time inspection, and high frequency vibration measurement. Strong investor backing will help accelerate the company’s commercialization plans for this sensor technology. Further, Prophesee’s patented event-based technology unleashes new machine vision capabilities for several industries including automotive, healthcare, robotics, industrial automation, and consumer markets. Leveraging the advanced processing algorithms and event-based signal processing techniques of Prophesee, visual data acquisition is becoming faster and sparse, which in turn results in lower power consumption, lower transmission bandwidth requirements, and minimal memory requirements. Additionally, Prophesee’s flexible ROS framework is encouraging collaborative robotics software development. Such frameworks help simplify complex tasks across various domains.(50) The event-based approach also enables higher dynamic ranges usually related to high-speed vision. These systems are therefore emerging as an asset for systems-based predictive maintenance. Prophesee is also working with Intel and IBM to combine AI with an artificial brain, which will ensure high speed, low latency, and power efficiency.(51) Event-Based Vision Systems 26 WHY PROPHESEE?
  • 27.
    Event-Based Vision Systems27 XXXX XXXX XXXX(131) XXXX(129) XXXX XXXX XXXX XXXX XXXX XXXX 2018 - Announced in 2018 2016 2018 Still in R&D phase 9 15 1.5 18 9.8 4.95 90 120 96 >120 >120 90 640 x 480 2 x 256 2560 x 1536 768 x 640 1280 x 800 1280 x 960 - 20 - 200 100 - 100% - - 9% - - Not Listed Not Listed Not Released Not Listed Not Listed Not Released Benchmarking of the Commercialized/In-pipeline Event-based Vision Products Company Dynamic Range (dB) Released Resolution XXXX124) Prophesee XXXX(126) XXXX Metavision CMOS Image Sensor XXXX XXXX XXXX 2018 2019 - 2018 Expected in 2020 18.5 15 18.5 13 7.2 120 >120 120 >100 >100 346 x 260 640 x 480 240 x 180 320 × 262 60 x 480 for VGA, 1024 x 768 for XGA 12 66 12 >50 >80 22% - 22% 22% ~100% ~6,600(125) ~4,000 ~3,000 6,000 Not Released Product Bandwidth (MEPS) Pixel Size (µm) Fill Factor Cost (USD)
  • 28.
    Event-Based Vision Systems28 KEY TAKEAWAYS A majority of the startups active in the domain are focusing on ------------------------ ------------------------------------------------------------- Samsung’s technology stack is ------------------------------------------------------ ---------------------------------- XXXX and XXXX are developing stack technologies for pixel circuits. XXXX has entered into the maximum number of partnerships and plans to use the recent funding amount to move towards a broader market adoption of event-based cameras with the focus on Industry 4.0, automotive, and IoT. The European region has a greater number of startups in comparison to the USA, Asia, and others.
  • 29.
  • 30.
    Event-Based Vision Systems30 DVS technology research is in focus and several projects are currently underway across various institutes, especially in the area of ready to deploy models. These initiatives are approaching DVS with various innovative methods and architectures and are of significance for a range of applications including autonomous vehicles, robotics, smart cities, and security. Developing improved architectures can unleash the current scenario of event-based cameras. Overview: ULPEC aims at developing advanced vision cameras with ultra-low power requirement and ultra-low latency. It connects the neuromorphic camera to a high speed ultra-low power consumption asynchronous visual data processing system and spiking neural networks with memristive synapses. ULPEC caters to autonomous driving and recognition of traffic events where issues of power budget and heat dissipation are prime concerns. This combination of bio-inspired optical sensors and neural networks help enhance the cameras present in the autonomous vehicles, including drones. Low latency further addresses the security concerns in autonomous driving. Bosch, a project partner, is investigating these self-driving applications. The ULPEC device is targeting to reach technology readiness level 4 (TRL 4). Apart from the automotive industry, the project also focuses on advanced data processing in PROJECT 1 Principal Investigators: Dr. Sylvain Saigi(133) Ultra-Low Power Event-Based Camera (ULPEC) Start Date Collaborations: End Date Status Region 1st January 2017 31st December 2020(132) Active • The project is funded under the Smart System Integration of the European Commission Horizon 2020 research and innovation program under grant agreement No 732642 • Maximum grant amount: USD 5.3 million Europe Funding
  • 31.
    Event-Based Vision Systems31 hardware-native neural networks. It also focuses on developing interoperability for integration in “systems of systems”. Additionally, memristive neural networks and novel vision systems targeting perception tasks in computer vision offer the potential to explore related areas like robotics, unmanned military transportation, health care and bio-medicals, security, and environmental monitoring.(134) The project is providing participants such as Prophesee, Toshiba Samsung Storage Technology Corporation (TSST), and Bosch with an opportunity to increase their market share and increase competitiveness. ULPEC recently published “Speed Invariant Time Surface for Learning to Detect Corner Points with Event-Based Cameras”. The research is based on the application of Speed Invariant Time Surface for corner detection of scene data from the event-based cameras.(135) Furthermore, the ULPEC team from CNRS-Thales and the University of Bordeaux featured in Nature Communications for its research on artificial synapses, which can learn autonomously.
  • 32.
  • 33.
    Event-Based Vision Systems33 ROBOTICS AND PERCEPTION GROUPLAB 1 The Robotics and Perception Group was started in 2012 and is a part of the Department of Informatics at the University of Zurich, and the Institute of Neuroinformatics, a joint institute affiliated with both the University of Zurich and ETH Zurich.(154) This group is developing DVS that can instantaneously react to the changes in light. Their objective is the creation of autonomous machines that can navigate using only onboard cameras, without depending on external infrastructure (GPS or motion capture systems). The DVS design follows the neuromorphic approach of emulating parameters of biological retinas. It observes challenges such as pixel size, low dynamic range, motion blur, and high latencies.(155) The Robot and Perception Group is currently exploring different aspects of the event- based cameras. One of the group’s research has released a Color Event Camera Dataset (CED), which contains some 50 minutes of footage with both color frames and events. The research uses Color DAVIS346 and provides color event data sets for accelerating research in the field. Additionally, the research group is also working on an Open Event Camera Simulator that can simulate standard cameras and IMU. It has an open source code and works in real time.(156) It is a useful prototype for visual-odometry or event- based feature tracking algorithms.(157) The research group has developed the datasets with an event-based camera for high-speed robotics. The datasets include the intensity of images, inertial measurements, and ground truth from a motion-capture system. These datasets are generated from using a DAVIS240C from iniLabs. The research group is also working on other aspects such as photometric mapping, 6-DoF camera tracking, etc.(158) Event-based motion segmentation by motion compensation Comprehensive learning of representations for asynchronous event-based data High speed and high dynamic range video with an event camera The present research topics include: Event-based, direct camera tracking from a photometric 3D map using nonlinear optimization
  • 34.
    Event-Based Vision Systems34 Semi-dense 3D reconstruction with a stereo event camera Visual and inertial state estimation and mapping Collaboration of aerial and ground robots Autonomous navigation of flying robots Event-based vision for agile flight(159) Funding(160) Event-based vision meets deep learning on steering prediction for self-driving cars
  • 35.
    Event-Based Vision Systems35 KEY TAKEAWAYS The projects are from the Europe and have been funded by bodies such as the XXXX, XXXX, and XXXX. Target application areas of the projects undertaken include a focus on applications such as ---------------------------------------------------------- ------------------------------------------------------- have been the key areas of interest in the projects. XXXX, XXXX, XXXX, and XXXX are the leading automotive companies that have participated in the research efforts. As a part of the research, event-based vision systems have been implemented in --------- ------------------------------ Algorithms for event-based cameras are a ------------
  • 36.
  • 37.
    Event-Based Vision Systems37 An Improved Approach for Visualizing DynamicVision Sensor and its Video De-noising: The publication proposes an approach of overlapped events for improved visualization in dynamic vision sensors. Additionally, the paper also focuses on shared dictionaries for video denoising. The proposed framework allows for the visualization of the events with high speed and less noise.(172) GRASP Laboratory, University of Pennsylvania Xidian University CNRS and Sorbonne University Event-based Visual Inertial Odometry: The research describes an asynchronous algorithm which can successfully track camera motions based on event-camera datasets. It fuses event-based tracking algorithm with an IMU to ensure an accurate metric tracking of camera’s 6 DoF pose. The algorithm selects features in the image plane and tracks spatiotemporal windows within the event streams. An Extended Kalman Filter with a structureless is used to fuse the feature tracks with the output of the IMU. The study proposes a data association scheme where multiple spatially neighboring events are associated in a soft manner with one feature, whose motion is computed using the weighted event positions.(171) Event-based Face Detection and Tracking in the Blink of an Eye: This paper is in collaboration with University of Pittsburgh. The higher temporal resolution of an event-based camera has been used for detecting eye blinks. Additionally, the organizations have released annotated data for future work(173) 1. 2. 3.
  • 38.
  • 39.
    Event-Based Vision Systems39 As an emerging technology, event-based vision has the potential to transform traditional vision sensing architectures. The technology is currently in early stages of commercialization and awaits adoption in real life applications across sectors. The competitive scenario of the technology is restricted to a few startups. Samsung is the only major company active in this space. Sony is slowly making an entry with patents and a product announcement. Region Analysis Opportunities for the Automotive Industry European universities and research institutes are leading the event-based vision space and have given birth to startups such as iniVation and Insightness from ETH Zurich, and Prophesee from the Institut de la Vision, France. Most of the research-based projects in the domain are -------- -------------------------------------------- ----------------------------------------------------- ----------------------------------------------------- ----------------------------------------------------- The autonomous vehicle market is going to witness a huge transformation by combining the potential benefits of existing sensors with intelligent event-based concepts and algorithms. Automotive companies should consider ----------------- ----------------------------------------------------- ----------------------------------------------------- ----------------------------------------------------- ----------------------------------------------------- ----------------------------------------------------------------------------------- ----------------------------------------------------------------------------------- -------------------------------------------------------- ------------------------------------------------------------------------ -------------------------------------------------------------------------------- --------------------------------------------------------------------------------- --------------------------------------------------------------------------------- ---------------------------------------------------------------------------------
  • 40.
    Event-Based Vision Systems40 Additionally, funding projects in the domain can help automotive companies to become a part of the event-based camera ecosystem. Companies such as XXXX, XXXX, and XXXX have already become part of projects with research establishments like XXXX or XXXX. LiDAR in combination with event-based camera could be ------------------------------- ----------------------------------------------------------------------------------- ----------------------------------------------. XXXX is one of the companies that aims to use its event-driven approach to sensors such as LiDARs and RADARs. Collaboration and Investment Opportunities Companies across sectors can consider partnership opportunities with startups focusing on event-based cameras and other enablers who are working on -------- ---------------------------------------------------- -------. A few startups like XXXX and XXXX have released their own event-based datasets. These steps are aiming to address --------------------------------------------- ----------------------------------------------------- -----. The event-based vision domain has a greater number of universities working on R&D stages than it has companies working on a commercial scale. This increases ------------------- ---------------------------------------------------------------------------------- --------------------------------------------. XXXX, XXXX, XXXX, XXXX, XXXX, XXXX, XXXX, and XXXX are some of the proactive participants in the field of asynchronous vision systems.
  • 41.
    Event-Based Vision Systems41 CONCLUDING REMARKS It is highly likely that event-based vision systems will have promising implications in the future with applications across dynamic environments. While the early generation of the technology started with helping the visually impaired, over the next 2-3 years it is estimated to have PoCs for self-driving cars, factory automation devices, and VR/AR applications. New companies, partnerships, and investments will drive the development and commercialization of the event-based camera systems. The different benefits of this new concept over the traditional camera system make it suitable for object tracking, pose estimation, 3D reconstruction of scenes, depth estimation, feature tracking and other perception tasks that are the major requirements for the connected systems of tomorrow. Vision systems are getting adopted on a large scale at both the consumer and enterprise level. Event-based approach has the potential to ----------------------------------- ----------------------------------------------------------------------------- ----------------------------------------------------------------------------- ------------------------------------- The patent filings in the domain ----------------------------------------------- ---------------------------------------------- can potentially contribute to the technology adoption. Many research centers have emerged with ------------------------------------- ----------------------------------------------------------------------------- ------------------------------------------------------- onboard for leading the research operations. The event-based camera companies started with back side illumination process for fabricating pixel circuits and are now moving towards new 3D/stacked IC technologies that allow shrinkage of pixel size and independent control of wafer layers. XXXX, XXXX and XXXX are some of the companies taking steps in this direction. ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- --------------- ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- ---------- ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- --------------------
  • 42.
    Event-Based Vision Systems42 ACRONYMS Abbreviation AI IoT FPS DVS ATIS DAVIS CMOS HDR APS AR VR ROI SPAD AER MIMCAP Cu-Cu ROS jAER YARP TRL G-AER IMU N-MNIST N-Caltech 101 N-CARS vSLAM CIS EDC HMI DLS IoSiRe LiDAR RADAR EDC GPS Meps Artificial Intelligence Internet of Things Frames Per Second Dynamic Vision Sensor Asynchronous Time Image Sensor Dynamic and Active Pixel Vision Sensor Complementary Metal Oxide Semiconductor High Dynamic Range Active Pixel Sensor Augmented Reality Virtual Reality Region of Interest Single Photon Avalanche Diode Address Event Representation Metal Insulator Metal capacitor Copper Copper Robot Operating System Java Address Event Representation Yet Another Robotics Program Technology Readiness Level Group Address Event Representation Inertial Measurement Unit Neuromorphic Modified National Institute of Standards and Technology database Neuromorphic California Institute of Technology dataset Neuromorphic CARS datasets Visual Simultaneous Localization AND Mapping CMOS Image Sensor Event Driven Compressive sensing Human Machine Interface Dual line Sensor Internet of Silicon Retinas Light Detection and Ranging Radio Detection and Ranging Event Driven Compressive sensing Global Positioning System Maximum Event Per Second Explanation
  • 43.
    Event-Based Vision Systems43 REFERENCES 1. http://rpg.ifi.uzh.ch/docs/scaramuzza/2019.07.11_Scaramuzza_Event_Cameras_Tutorial.pdf 2. http://rpg.ifi.uzh.ch/docs/scaramuzza/2019.07.11_Scaramuzza_Event_Cameras_Tutorial.pdf 3. https://patents.google.com/patent/CN109377516A/en?oq=CN109377516A 4. https://patents.google.com/patent/CN108961318A/en?oq=CN108961318A 5. https://patents.google.com/patent/WO2018219931A1/en?oq=WO2018219931A1 6. https://patents.google.com/patent/CN109726356A/en?oq=CN109726356A 7. https://patents.google.com/patent/US9739660B2/en?oq=US9739660B2 8. https://patents.google.com/patent/US9739660B2/en?oq=US9739660B2 9. https://patents.google.com/patent/US20170094249A1/en?oq=US20170094249A1 10. https://patents.google.com/patent/US20160094814A1/en?oq=US10237481B2 11. https://patents.google.com/patent/US10295669B2/en?oq=US10295669B2 12. https://patents.google.com/patent/US20170094249A1/en?oq=US20170094249A1 13. https://worldwide.espacenet.com/publicationDetails/biblio?CC=CN&NR=108038888A&KC=A&FT=D 14. https://patents.google.com/patent/US20180308253A1/en?oq=US20180308253A1 15. https://ec.europa.eu/research/participants/documents/ downloadPublic?documentIds=080166e5b61e1831&appId=PPGMS 16. https://patents.google.com/patent/US9001220B2/en?oq=US9001220B2 17. https://patents.google.com/patent/WO2018152214A1/en?oq=WO2018152214A1 18. https://patents.google.com/patent/US10356337B2/en?oq=US10356337B2 19. https://patents.google.com/patent/WO2019158287A1/en?oq=WO2019158287A1 20. https://patents.google.com/patent/US8825306B2/en?oq=US8825306B2 21. https://patents.google.com/patent/CN106127800A/en?oq=CN106127800A 22. https://www.prophesee.ai/wp-content/uploads/2018/12/Prophesee-Grenoble-Press-Release-EN.pdf 23. https://www.prophesee.ai/ 24. https://www.prophesee.ai/recognition/ 25. https://www.prophesee.ai/2019/04/30/prophesee-joins-embedded-vision-alliance__trashed/ 26. https://www.prophesee.ai/2016/11/28/chronocam-receives-15-million-funding-led-by-intel/ 27. https://www.engineering.com/AdvancedManufacturing/ArticleID/17968/Machine-Vision-System-Can- Track-Vibrations-for-Production-Monitoring.aspx 28. https://www.imveurope.com/analysis-opinion/innovative-imaging-tech-shortlisted-vision-stuttgart- award 29. https://www.prophesee.ai/2019/02/14/imago-prophesee/
  • 44.
    Event-Based Vision Systems44 About Netscribes Netscribes is a global market intelligence and content services provider that helps corporations achieve strategic objectives through a wide range of offerings. Our solutions rely on a unique combination of qualitative and quantitative primary research, secondary/desk research, social media analytics, and IP research. For more than 15 years, we have helped our clients across a range of industries, including technology, financial services, healthcare, retail, and CPG. Fortune 500 companies, as well as small- to mid-size firms, have benefited from our partnership with relevant market and competitive insights to drive higher growth, faster customer acquisition, and a sustainable edge in their business. APPENDIX
  • 45.
    Event-Based Vision Systems45 DISCLAIMER This report is prepared by Netscribes (India) Private Limited (”Netscribes”), a market intelligence and content service provider. The content of this report is developed in accordance with Netscribes’ professional standards. Accordingly, the information provided herein has been obtained from sources which are reasonably believed to be reliable. All information provided in this report is on an “as-is” and an “as-available” basis, and no representations are made about the completeness, veracity, reliability, accuracy, or suitability of its content for any purpose whatsoever. All statements of opinion and all projections, forecasts, or statements relating to expectations regarding future events represent ROGM’s own assessment and interpretation of information available to it. All liabilities, however arising, in each of the foregoing respects are expressly disclaimed. This report is intended for general information purposes only. This report does not constitute an offer to sell or issue securities, an invitation to purchase or subscribe for securities, or a recommendation to purchase, hold, sell, or abstain from purchasing, any securities. This report is not intended to be used as a basis for making an investment in securities. This report does not form a fiduciary relationship or constitute investment advice. Nothing in this report constitutes legal advice. The information and opinions contained in this report are provided as of the date of the report and are subject to change. Reports may or may not be revised in the future. Any liability to revise any out-of-date report, or to inform recipients about an updated version of such report, is expressly disclaimed. A bonafide recipient is hereby granted a worldwide, royalty-free, enterprise-wide limited license to use the content of this report, subject to the condition that any citation from this report is properly referenced and credited to Research On Global Markets. Nothing herein conveys to the recipients, by implication or by way of estoppel, any intellectual property rights in the report (other than the foregoing limited license) or impairs Netscribes’ intellectual property rights, including but not limited to any rights available to Netscribes under any law or contract. To the maximum extent permitted by law, all liabilities in respect of this report and any related material is expressly disclaimed. Netscribes does not assume any liability or duty of care for any consequences of any person acting, or refraining to act, by placing reliance on the basis of information contained in this report. All disputes and claims arising in relation to this report will be submitted to arbitration, which shall be held in Mumbai, India under the Indian Arbitration and Conciliation Act. The exclusive jurisdiction of the courts in Mumbai, India, applies to all disputes concerning this report and the interpretation of these terms, and the same shall be governed by and construed in accordance with Indian law without reference to the principles of conflict of laws.
  • 46.
    Event-Based Vision Systems46 GET IN TOUCH WITH US Singapore 10 Dover Rise, #20-11, Heritage View, Singapore - 138680 Phone: +65 31580712 Gurugram 806, 8th Floor, Tower B, Unitech Cyber Park, Sector 39, Gurugram - 122001, Haryana, India +91-124-491-4800 Mumbai Office no.504, 5th Floor, Lodha Supremus, Lower Parel Mumbai 400013, Maharashtra, India +91-22-4098-7600 New York 41 East, 11th Street, New York NY10003, USA +1-917-885-5983 Kolkata 3rd Floor, Saberwal House 55B Mirza Ghalib Street, Kolkata - 700 016, West Bengal, India +91-33-4027-6200 US toll free: 1-888-448-4309 India: +91-22-4098-7690 subscriptions@netscribes.com