Computer Architecture for
Vision Systems
Presented by:
Ritesh Thakur ( 30 )
Vishal Thoke ( 32 )
Utsav Patel ( 36 )
Sushant Vaidkar ( 38 )
Vedant Valsangkar ( 40 )
Bansilal Ramnath Agarwal Charitable Trust's
Vishwakarma Institute of Technology
The Evolution of Vision
Technology
Computer vision: research and fundamental
technology for extracting meaning from images
Machine vision: factory applications
Embedded vision: thousands of applications
1) Consumer, automotive, medical, defense, retail,
gaming, security, education, transportation, …
2) Embedded systems, mobile devices, PCs and the
cloud
What is Computer Vision?
• Of the five senses, vision is the richest source of data
for human beings; we cannot carry out our daily tasks
without our eyes.
• Our eyes are our major source of information. What if
computers could also analyse visual data using cameras?
• Computer vision is an interdisciplinary scientific field that
deals with how computers can gain high-level understanding
from digital sources such as images and videos.
Every Computer Vision System Looks Something
Like This
Camera → Local Processor → Network Connection → Cloud Backend
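The diagram above can be read as a simple four-stage pipeline. The sketch below illustrates that structure in Python, assuming OpenCV for capture and local preprocessing and a hypothetical HTTP endpoint standing in for the cloud backend; it is an illustration of the flow, not any specific product's implementation.

```python
# A minimal sketch of the camera -> local processor -> network -> cloud
# pipeline. The endpoint URL and function names are hypothetical
# placeholders used only for illustration.

import cv2          # OpenCV for capture and local processing
import requests     # network connection to a cloud backend

CLOUD_URL = "https://example.com/analyze"   # hypothetical backend endpoint

def capture_frame(camera_index: int = 0):
    """Camera stage: grab one frame from a local camera."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

def local_process(frame):
    """Local processor stage: cheap preprocessing before upload
    (downscale and JPEG-encode to reduce bandwidth)."""
    small = cv2.resize(frame, (320, 240))
    ok, jpeg = cv2.imencode(".jpg", small)
    return jpeg.tobytes() if ok else None

def send_to_cloud(jpeg_bytes):
    """Network connection + cloud backend stage: upload the frame and
    return the backend's (hypothetical) JSON result."""
    resp = requests.post(CLOUD_URL, files={"image": jpeg_bytes}, timeout=5)
    return resp.json()

if __name__ == "__main__":
    frame = capture_frame()
    if frame is not None:
        payload = local_process(frame)
        print(send_to_cloud(payload))
```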
ARCHITECTURE
Analyzing the processing stages of a computer vision application, one
can see that the different tasks place very different demands on
computational resources. A single processor architecture is therefore
unlikely to carry out all of these operations efficiently; a hybrid
processing configuration is needed, with a specific architecture for
each level of processing.
Architectures for low-level operations are heavily explored because of
their regular, data-parallel characteristics and the large amount of
data involved. The trend is to use SIMD (single instruction, multiple
data) parallel architectures for low-level processing and a second
architecture for medium- and high-level operations. Digital signal
processors (DSPs) have also been used for low-level operations with
good performance.
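As a concrete illustration of why SIMD suits low-level operations, the sketch below applies the same per-pixel threshold two ways: an explicit scalar loop and a vectorized NumPy expression, which typically compiles down to SIMD instructions on modern CPUs. The image size and threshold are arbitrary choices for the example; this is an illustration, not a benchmark of any particular architecture.

```python
# Minimal illustration of why low-level (per-pixel) operations map well
# to SIMD-style execution: the same threshold applied with a scalar loop
# and with a vectorized NumPy expression.

import time
import numpy as np

# A synthetic 8-bit grayscale "image"
image = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)

def threshold_scalar(img, t=128):
    """Low-level operation written as an explicit per-pixel loop."""
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 255 if img[y, x] > t else 0
    return out

def threshold_vectorized(img, t=128):
    """Same operation expressed over whole arrays: one instruction,
    many data elements."""
    return np.where(img > t, 255, 0).astype(np.uint8)

start = time.perf_counter()
a = threshold_scalar(image)
print("scalar loop: %.3f s" % (time.perf_counter() - start))

start = time.perf_counter()
b = threshold_vectorized(image)
print("vectorized:  %.3f s" % (time.perf_counter() - start))

assert np.array_equal(a, b)   # identical results, very different speeds
```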
Types of Camera Sensors:
Both CCD and CMOS sensors consist of pixels that use the photoelectric effect to generate electric signals.
CCD
(Charge-Coupled Device)
● Passive-pixel device
● Less noise in pixel data
● Expensive, requires more power
● Used in high-quality video cameras and
satellites
CMOS
(Complementary Metal-Oxide-Semiconductor)
● Active-pixel device
● More noise
● Affordable, low power consumption
● Used in smartphones and DSLRs
Types of Camera Sensors:
CCD
(Charge-Coupled Device)
● No electronics at pixel level
● All electric signals must be transferred to
external electronics for conversion into voltage
● Hence, the sensor is comparatively slow
● Provides better quantum efficiency
● Ideal for poor lighting conditions
CMOS
(Complementary Metal-Oxide-Semiconductor)
● Each pixel contains its own electronics, e.g. an
amplifier
● The signal from each pixel can be read out
directly
● Provides a higher frame rate
● The image is scanned row by row, which can cause a
rolling-shutter effect (see the sketch below)
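To make the row-wise readout concrete, the sketch below simulates a rolling shutter in NumPy: each row of the output frame is sampled at a slightly later moment, so a horizontally moving square ends up skewed. The frame size, row delay, and moving-square scene are made up for the example; this is a simplified model, not a description of any particular sensor's readout circuitry.

```python
# Simplified rolling-shutter simulation: row y of the captured frame is
# sampled at a later time than row y-1, so a moving object appears skewed.

import numpy as np

H, W = 240, 320          # frame size in pixels (assumed)
ROW_DELAY = 0.5          # horizontal motion (pixels) per row read out (assumed)

def scene_at(t):
    """Return the 'true' scene at time t: a bright square moving right."""
    frame = np.zeros((H, W), dtype=np.uint8)
    x = int(t) % (W - 40)
    frame[100:140, x:x + 40] = 255
    return frame

def rolling_shutter_capture(t0):
    """Capture one frame: row y is sampled at time t0 + y * ROW_DELAY."""
    out = np.zeros((H, W), dtype=np.uint8)
    for y in range(H):
        out[y] = scene_at(t0 + y * ROW_DELAY)[y]
    return out

frame = rolling_shutter_capture(t0=0)
# The square's left edge now shifts with the row index (the skew):
rows_with_object = np.where(frame.any(axis=1))[0]
top, bottom = rows_with_object[0], rows_with_object[-1]
print("left edge at top row:   ", np.argmax(frame[top] > 0))
print("left edge at bottom row:", np.argmax(frame[bottom] > 0))
```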
Applications of Computer Vision Technology
1. Automotive Safety:
● Vision systems can help ensure the safety of
vehicles operating in autopilot mode.
● Using cameras, nearby objects can be detected and
obstacles avoided (see the sketch below).
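As one hedged example of camera-based obstacle detection, the sketch below uses OpenCV's built-in HOG + linear-SVM people detector to find pedestrians in a single frame. The camera index and output filename are assumptions for the example; production driver-assistance systems rely on far more robust and redundant detection pipelines.

```python
# Minimal sketch: detect pedestrians (one class of obstacle) in a camera
# frame with OpenCV's bundled HOG people detector. Illustration only.

import cv2

def detect_pedestrians(frame):
    """Return bounding boxes (x, y, w, h) of pedestrians found in the frame."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return boxes

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)      # assumed front-facing camera
    ok, frame = cap.read()
    cap.release()
    if ok:
        boxes = detect_pedestrians(frame)
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("detections.jpg", frame)   # assumed output path
        print("pedestrians detected:", len(boxes))
```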
Applications of Computer Vision Technology
2. Tracking Objects:
● Using surveillance cameras, we can keep track of
household items.
● We can keep an eye on our valuables (a simple
motion-detection sketch follows below).
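The sketch below is a very simple motion detector based on frame differencing, used here as a crude stand-in for full object tracking: it only flags when something in the monitored scene changes. The camera index, pixel-change threshold, and frame count are arbitrary assumptions.

```python
# Frame-differencing motion detector: flag frames where many pixels have
# changed since the previous frame. A crude stand-in for object tracking.

import cv2

cap = cv2.VideoCapture(0)                    # assumed surveillance camera
ok, prev = cap.read()
if not ok:
    raise RuntimeError("could not read from camera")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(300):                         # monitor roughly 10 s at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)      # pixels changed since last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask)
    if changed > 5000:                       # arbitrary sensitivity threshold
        print("motion detected:", changed, "pixels changed")
    prev_gray = gray

cap.release()
```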
Applications of Computer Vision Technology
3. Hazardous Area Scanning:
● Cameras and drones can go where the human eye
cannot reach.
● Using drones, humans can inspect hazardous areas
remotely.
Applications of Computer Vision Technology
4. Biological Applications:
● Small cameras are used in surgeries to visualize
the area of the body being operated on.
● Vision systems can also analyse samples of
microbes and DNA.
Development: Future
• Heterogeneity of hardware becomes hidden
• OpenVX: abstracts the hardware, not the algorithm
• Higher-level APIs: abstract both the algorithm and the hardware
• Higher-level deep learning abstractions (illustrated in the sketch below)
• Automated optimization of neural networks
• Automated design and training of neural networks
• Development shifts from implementation to integration
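As an illustration of what a higher-level deep learning abstraction looks like in practice, the sketch below uses PyTorch and torchvision to load a pretrained image classifier and run inference, with the underlying hardware (CPU or GPU) selected behind a single device handle. The model choice and dummy input are assumptions for the example; OpenVX and vendor-specific APIs expose similar ideas at other levels of the stack.

```python
# A few lines of a high-level framework hide both the algorithm details
# and the hardware: load a pretrained classifier and run one inference.

import torch
import torchvision

# The framework hides hardware details behind a single "device" handle.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A pretrained classifier in one call; weights download on first use.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").to(device).eval()

# A dummy batch standing in for a real camera frame (1 image, 3x224x224).
dummy = torch.randn(1, 3, 224, 224, device=device)

with torch.no_grad():
    logits = model(dummy)
print("predicted class index:", int(logits.argmax(dim=1)))
```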
Conclusions
• Computer vision will become ubiquitous and invisible
• It will be a huge creator of value, both for suppliers and for those who leverage
the technology in their applications
• Deep learning will become a dominant technique (but not the only technique)
• Computation will be distributed between the cloud and the edge
• Heterogeneity in hardware becomes increasingly hidden
• Development shifts from implementation to integration