Agenda:-
Measurement System using
Computer Vision &
Camera as a Sensor
1
Dr Tan Guan Hong
Consultant @ Singapore Technology Land System & Electronics
Vice President of Singapore Industrial Automation Association
Technology Partner @ RekaNext
Technology Partner @ Pestech
RekaNext's Director @ XjeraLabs
A*STAR’s Director @ Sentient.io
Business Advisor @ ACKCIO
2
• Measurement Science
• Computer Vision Accuracy challenges
• Camera as a Sensor: a Contactless Sensor
with rich Metadata
• High Accuracy (>99%) for outdoor People
Counting
• PMD Speed Measurement Technique
3
Two-dimensional (2D) camera: These sensors capture data over time frames.
Using various video analytics algorithms, these 2D camera sensors can provide
different information. For example, within the same image, the algorithms can
extract information such as (i) people count, (ii) number and colour of cars, (iii) lighting
condition, etc. Over time, processed metadata can yield further insights such as
tracking of (iv) people’s movement, (v) dwell time, etc.
Measurement
Measurement Systems
Slow Sensor Data: Temperature, Humidity, Hydrostatic pressure, Strain Gauge, Tilt and
Infra-red sensors acquire data in minutes or hours. These are Quasi-static sensors.
Dynamic (Fast) Sensor Data: An accelerometer provides acceleration in G (m/s²) at millisecond or
faster intervals. An acoustic sound sensor provides voltage signals over time. When these sensor
data are processed in the Frequency Domain using the Fast Fourier Transform, they
can provide the Peak Vibration Level at various Frequencies.
4
Type of Signals from Sensors: Static, Quasi-Static, Dynamic Periodic, Dynamic Transient
Measurement
The four combinations of sensor performance:
• High Repeatability, High Accuracy
• High Repeatability, Low Accuracy
• Low Repeatability, High Accuracy
• Low Repeatability, Low Accuracy
7
Which sensor data do you trust,
and which is faster to process in Real Time?
Getting Data that is accurate
and repeatable is important.
Measurement
With sufficient Data points, Signal Processing
(e.g. Averaging and Offset correction)
can help improve Accuracy.
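As a minimal illustration of the averaging idea, here is a short Python sketch; the depth, offset and noise values are assumptions for illustration, not figures from the slides.

```python
# Minimal sketch (assumed numbers): averaging repeated readings reduces random
# noise by roughly 1/sqrt(N); a known calibration offset can also be removed.
import numpy as np

rng = np.random.default_rng(0)
true_depth_m = 10.0      # quantity being measured (assumed)
offset_m = 0.05          # known calibration offset (assumed)
noise_sd_m = 0.10        # random sensor noise, 1 sigma (assumed)

readings = true_depth_m + offset_m + rng.normal(0.0, noise_sd_m, size=100)

single = readings[0] - offset_m        # one reading, offset removed
averaged = readings.mean() - offset_m  # average of 100 readings, offset removed

print(f"single reading  : {single:.3f} m")
print(f"100-pt average  : {averaged:.3f} m")
```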
6
Accuracy of Data depends on:
Accuracy of the Sensor
Maintenance & Calibration of the Sensor (a function of Time, Drift, Deterioration)
Video Analytics, i.e. the Processing of Image Data into Structured Information:
its Accuracy and Repeatability hold only in a controlled environment
Installation of the Sensor
Use of the Sensor in its context (monitoring & control function)
Expected functional accuracy for decision making
ICT view: Sensor data is Protected in Cyber Space, Stable, Error-free and
Maintenance-free!
Electronics view: the Sensor operates in a Physical environment, with Accuracy,
Repeatability, Drift and Noise
Measurement
Accuracy of Sensors affected by Environmental conditions
Average Water Depth of 10 m
Water Depth variation of +/- 0.5 m @ 0.1
Hz in flowing canals
+ 0.5 m
- 0.5 m
Acceptable Accuracy is then +/- 0.25 m
The Sensor's accuracy needs to be 2x better than the
expected Physical Accuracy to be cost effective
10 m
A Sensor used outdoors deteriorates over time and needs
regular Cleaning maintenance, validation and re-calibration.
For example, the sensor diaphragm membrane gets stiffened by barnacles,
which affects the readings
7
Measurement
Design for Data Quality and NOT just
Availability of Data alone
You could also be Sensing unwanted Noise!
SQL
Physical Sensor output can be affected by
Data corruption from
EMI Noise, Humidity, Temperature, Pressure,
Vibration (Loose connections)
Output data is
taken from a
Database, and many
usually trust this data!
When retrieved from SQL dB, the data is Highly
Repeatable and Accurate !
Cyber
Physical
System is
Secured and
Auditable.
Computers don't lie!
8
Measurement
https://www.isixsigma.com/tools-templates/capability-indices-process-capability/process-capability-cp-cpk-and-process-performance-pp-ppk-what-difference/
Sensor measures the physical parameters.
Need to consider Process Capability and
Statistical Measurement Spread Concept of ± 3σ.
When a program
interrogates an SQL database,
the data returned
is always at 0 σ,
every time!
Measurement Capability
9
Measurement
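A minimal Python sketch of the process-capability idea referenced above (Cp and Cpk against a ± 3σ spread); the readings and the ± 0.25 m spec limits are assumed for illustration.

```python
# Minimal sketch (assumed numbers): process capability Cp / Cpk for a sensor
# channel, following the +/- 3-sigma measurement-spread concept cited above.
import numpy as np

readings = np.array([9.95, 10.02, 9.98, 10.05, 9.97, 10.01, 9.99, 10.03])  # m (assumed)
lsl, usl = 9.75, 10.25   # lower/upper spec limits, i.e. +/- 0.25 m (assumed)

mu, sigma = readings.mean(), readings.std(ddof=1)

cp = (usl - lsl) / (6 * sigma)               # tolerance vs. the 6-sigma spread
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # also penalises an off-centre mean

print(f"mean={mu:.3f} m  sigma={sigma:.4f} m  Cp={cp:.2f}  Cpk={cpk:.2f}")
```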
Physical World Sensor data have Statistical Variations, while
SQL-extracted data is always consistent.
Measurement chain: Physical Object under measurement → Sensor → Processing into Information,
each with its own number of Sensing Parameters. Each sensing point has reading variations
(Sensing Repeatability).
• Camera + Video Analytics: < 20 Facial Sensor Markers
• Temperature sensor: converts µV to T °C, 1 Temperature Reading
• RFID Tag information + RFID Reader: 1 item of Digital information
Need to know where the possible Statistical Sensing Errors are and mitigate the risks.
An SQL System usually takes one snapshot reading and stores it in the dB:
Always Repeatable @ +/- 0 σ
10
Measurement
9
Sensor Measurement Error
due to aliasing
Measurement
Understanding Measurement Principle is important !
Actual
Temperature
Sampled
Temperature
Displayed
Temperature
Nyquist
Frequency:-
Sample at
least Twice
the Highest
frequency
If sampled too slowly, the displayed
Temperature doesn't appear to change at all,
even though the Temperature
is actually fluctuating
12
Measurement
Quasi
Static
Dynamic
Transient
A slow Sensor sampling speed causes Errors
Sample x 20 faster
Theory says
Nyquist
Frequency:-
Sample at
Least Twice
the Highest
frequency
only.
Risk trade-off with cost:
too slow, spikes are missed;
too fast, huge data to analyse
Expand
the
Time
scale
13
Measurement
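A minimal Python sketch of the aliasing effect described above; the 0.5 Hz temperature fluctuation and the two sampling rates are assumptions chosen for illustration.

```python
# Minimal sketch (assumed signal): sampling below the Nyquist rate makes a
# fluctuating temperature look constant, while sampling ~20x faster recovers it.
import numpy as np

f_signal = 0.5                      # Hz, true temperature fluctuation (assumed)
fs_slow = 0.5                       # Hz, below the 2 * f_signal Nyquist rate
fs_fast = 20 * f_signal             # Hz, "sample x 20 faster"

for fs in (fs_slow, fs_fast):
    t = np.arange(0, 20, 1 / fs)
    samples = 25 + 2 * np.sin(2 * np.pi * f_signal * t)     # degC
    swing = samples.max() - samples.min()
    print(f"fs = {fs:4.1f} Hz  ->  observed swing = {swing:.2f} degC (true swing 4.00)")
```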
14
Facial
Recognition is
Identification
8.7.6 Defining and
Measuring Fields of
View for CCTV
Systems
Monitor: Not Less than 5%R
Detect: Not Less than 10%R
Recognition: Not Less than 50%R
Identification: Not Less than 120%R
The Guidelines for Enhancing Building
Security in Singapore (GEBSS)
is a follow-up from the earlier ‘Enhancing
Building Security’ booklet and has been
prepared by
Homefront Security Division - Ministry of
Home Affairs
in consultation with:
Singapore Police Force;
Internal Security Department;
Singapore Civil Defence Force;
Building and Construction Authority;
Urban Redevelopment Authority;
8.8 Security Lights
for CCTV Systems
For Human viewing CV Accuracy
15
FHD Camera
(1920x1080)
Accuracy of VA is affected by Lighting, Image Size & Viewing
Angle covering a face from a distance
Face image size vs distance:
Ideal: 1920x1080 pixels at 1 m, 192x108 pixels at 10 m
Realistic: 960x540 pixels at 1 m, 96x54 pixels at 10 m
Eye-to-eye spacing is about 50 pixels;
hence 10 m is the maximum limit, but without stating the accuracy
Number of
Pixels
Viewing
angle of
Camera
Passport
Photo Size
CV Accuracy
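A minimal Python sketch of the 1/distance pixel-size scaling behind this slide; the 0.16 m face width and the assumption that the face fills the frame at 1 m are mine, chosen so the output reproduces the slide's 1920-pixel and 192-pixel figures.

```python
# Minimal sketch (assumed geometry): pixels across a face vs distance for an
# FHD camera, using simple pinhole 1/distance scaling.
FRAME_W_PX = 1920         # FHD horizontal resolution
FACE_W_M = 0.16           # typical face width (assumed)
SCENE_W_AT_1M = 0.16      # scene width at 1 m if the face fills the frame (assumed)

def face_pixels(distance_m: float) -> float:
    """Pixels spanning the face at a given distance (FoV widens linearly with range)."""
    scene_width_m = SCENE_W_AT_1M * distance_m
    return FRAME_W_PX * FACE_W_M / scene_width_m

for d in (1, 2, 5, 10, 20):
    print(f"{d:>2} m: ~{face_pixels(d):6.0f} px across the face")
# ~1920 px at 1 m and ~192 px at 10 m, as on the slide; past roughly 10 m the
# eye-to-eye spacing drops below the ~50 px quoted as the practical limit.
```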
Ideal Controlled
Conditions
Outdoor Complex
background scene
in bright lighting
conditions
Same Outdoor
Complex background
scene with lower
lighting conditions
Consistent
People Counting
Accuracies
using VA under
different
conditions
Standard VA
+ Customised
VA = Higher
VA accuracy
16
CV Accuracy
Out-door affected by shadows
In-door with controlled lighting
Out-door
lighting
changes at
different
time of the
year
17
CV Accuracy
18
Street Level and NOT at Laboratory Condition
Internet Images or Pre-stored database
Most VAs are designed for controlled environments
Use other Attributes:-
• Cap Colour
• Backpack colour
• Shirt colour
• Pants Colour
• Shoe colour
• Walking style
Beyond Facial
Recognition !
The Boston bomber was identified by a victim,
NOT by computer technology!
CV Accuracy
19
Parameters
• Use of sharp images is recommended
• Use images with at least 64 grey scales within the face area is recommended
• Use several images during enrolment, as this improves facial template quality,
which in turn improves recognition Quality and Reliability
• Robust against typical gestures and expression changes
• Robust against partial face occlusions, beard and hairstyle changes
• Robust against use of glasses (except reflecting sunglasses)
• Robust against lighting changes that do not cause strong shadows
Performance accuracy
• Laboratory: 100%
• Real deployment: To be determined in each scenario conditions
Algorithm Developer’s Specification
Face Recognition Module
In-door controlled environment only
Performance cannot be determined up front
Need to pre-
register for
different facial
angles
Can’t work on Dark skin tones
CV Accuracy
Image Pixels from
distance
20
Colour contains Red, Green and
Blue components, and each
component has 8 bits,
i.e. 256 levels
Grey Scale
Every Square is a
Pixel and you only
know its x,y
position and
colour (RGB)
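A minimal Python sketch of the pixel description above; the ITU-R BT.601 luma weights used for the grey-scale conversion are a common convention, not something stated on the slide, and the example pixel values are assumed.

```python
# Minimal sketch: a pixel carries only its (x, y) position and an RGB colour,
# 8 bits (256 levels) per channel; grey scale is a weighted mix of R, G and B.
def to_grey(r: int, g: int, b: int) -> int:
    """One 8-bit RGB pixel (0-255 per channel) to an 8-bit grey level (BT.601 weights)."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

x, y = 120, 45              # pixel position (assumed)
r, g, b = 180, 140, 110     # pixel colour (assumed)
print(f"pixel ({x},{y}) RGB=({r},{g},{b}) -> grey level {to_grey(r, g, b)} of 0-255")
```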
21
Caucasian
22
Grey Scale Pixel Analysis: Afro-American vs Caucasian
Dynamic Range and Density Distribution
23
Skin complexions of three skin groups: Pale, Yellowish, and Dark. Their reflectances are
smooth and similar in shape, mainly separated by their levels. Measured skin reflectances are
available, for example, in physics-based face databases.
Chart: Relative Reflectance (%) vs Visible Light Wavelength (nm) for different skin surface
complexions, showing how Visible Light Reflectance is affected by skin complexion
(Dynamic Ranges of 41% and 15%).
CV Accuracy
24
Grey Scale Pixel Analysis: Afro-American vs Caucasian
Dynamic Range (Histogram spread) and Density Distribution
Chart: histograms of grey-scale values (300 face pixels each, 16 grey-scale bins) comparing
the spread of the two samples.
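A minimal Python sketch of the grey-scale histogram analysis shown above; the 300 synthetic pixel values stand in for a real face crop and are assumptions.

```python
# Minimal sketch (assumed pixels): 16-bin grey-scale histogram of 300 face
# pixels plus the dynamic range (spread of occupied grey levels).
import numpy as np

def grey_histogram(grey_pixels: np.ndarray, bins: int = 16):
    """Histogram over 0-255 and the pixel-value dynamic range."""
    counts, edges = np.histogram(grey_pixels, bins=bins, range=(0, 255))
    dynamic_range = int(grey_pixels.max()) - int(grey_pixels.min())
    return counts, edges, dynamic_range

rng = np.random.default_rng(1)
face_pixels = rng.normal(150, 40, 300).clip(0, 255).astype(np.uint8)  # stand-in data

counts, edges, drange = grey_histogram(face_pixels)
print("pixels per bin:", counts.tolist())
print("dynamic range :", drange, "grey levels out of 255")
```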
SAME camera for multiple purposes
Challenges: Huge variations in environments, undefined events
Unstructured Data (Video)
Structured Data
Contextual Information
Detection of events
Detection of objects
Number of cars, humans, events etc.
Sharing of resources, multiple applications with same data
Camera as a Sensor
25
26
Potential Real Time Object Classification Metadata
beyond CCTV Recording for Forensic only In 10 mins interval
1. Number of Cars
2. Average Dwell Time of Cars
3. Number of People
4. Average Dwell Time of
People
5. Number of Bicycles
6. Average Dwell Time of
Bicycles
7. Average Light Intensity
8. Number of Debris on Road
(LTA) / Sidewalk (NEA)
9. People crossing Roads
Hence, within One Camera, it is possible to extract 9 extra sensing parameters (a possible record layout is sketched below)
Camera as a Sensor
If Flooding occurs on 36 out of 365 days, the
CCTVs deliver high value yet are used only
10% of the time. The other 90% can be
used for other agencies' purposes
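A minimal sketch of what one such per-interval metadata record could look like; the field names and example values are hypothetical, chosen to mirror the nine parameters listed above.

```python
# Minimal sketch (hypothetical layout): the metadata one camera could emit
# every 10 minutes, mirroring the nine parameters listed above.
from dataclasses import dataclass

@dataclass
class IntervalMetadata:
    interval_start: str          # e.g. "2024-01-01T08:00:00+08:00"
    car_count: int
    car_avg_dwell_s: float
    people_count: int
    people_avg_dwell_s: float
    bicycle_count: int
    bicycle_avg_dwell_s: float
    avg_light_intensity: float   # e.g. mean luma over the interval
    road_debris_count: int       # road (LTA) / sidewalk (NEA)
    people_crossing_road: int

record = IntervalMetadata("2024-01-01T08:00:00+08:00", 42, 35.0, 118, 60.5,
                          7, 20.0, 142.3, 1, 15)
print(record)
```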
27
A*STAR POC to show that same PUB Flood CCTV can extract Metadata of
Cars and Humans
Other possibilities are:-
• Lighting Metadata (due to Smoke, Haze, Rain) for LTA's SML system
• LTA ITS Vehicle Dwell Time, Fallen Trees, Large objects on roads
• LTA & SPF sudden Human Crowding, Density
Camera as a Sensor
28
PUB Road Surface Flood Monitoring
Automated analysis of large amounts of video data
Over 130 CCTVs
24x7
Same data,
different
applications?
Detected What other
information
Camera as a Sensor
29
Camera as a Sensor
PUB: Auto Surveillance to reduce water pollution
Contractors, developers and
professionals of construction sites
have to play their part to keep our
waterways free from mud.
CCTVs installed to monitor for silty discharges
Over 500 construction sites
To automatically analyze video data
Alerts when silt discharges detected
Silt
30
Camera as a Sensor
Extended Application: Water level detection, debris
detection
Water level
detection Debris
detection
Detection of leakages during dry weather
Detection of chokes in drains
31
Camera as a Sensor
Silt Alert Water Level
Water Flow
32
Camera as a Sensor
33
Camera as a Sensor
Detection of floating debris (flotsam)
Assist in monitoring and event driven maintenance
Extended Application: Detection of rain
Presence of debris indicates heavy rainfall upstream
34
Camera as a Sensor
35
Dynamic Crane Measurement System using Photogrammetry:
• Battery-Powered High-Speed Strain Gauge Data Loggers: 16 simultaneous SG channels
acquired @ 100 samples/sec/channel, with storage for 60 minutes of streamed data
• Mains-Powered Digital Image Recording Systems: 5 frames per sec per logger,
1024 x 768 pixels per frame
• Battery-Powered X-Z Accelerometers with Data Loggers: ±3 G range with 10 mG resolution,
200 samples/sec
• Synchronization: 1 s pulse FM broadcaster to ensure accurate time stamping
for all measured data recordings
Measurement
36
Measurement
Chart: Displacement (mm) vs Time along the Measurement Axes.
Chart: Acceleration (G) vs Time along the Measurement Axes.
Measurement
Pixel Design Calculations
need to be done for
Measurement considerations
before even considering
Video Analytics
Planning the Optics with the actual Scene for Measurement:
Sensor Pixels, FoV, Focal Length, Width, Height, Velocity, Distance
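A minimal Python sketch of such a pixel-design calculation; the 4 m working distance and 74° horizontal FoV are assumptions, chosen so the scene width comes out near the 6000 mm used in the later PMD example.

```python
# Minimal sketch (assumed values): scene width from distance and horizontal FoV,
# then the ground coverage of each sensor pixel, before any video analytics.
import math

def scene_width_m(distance_m: float, hfov_deg: float) -> float:
    """Width of the scene covered at a given distance (pinhole model)."""
    return 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)

def mm_per_pixel(scene_w_m: float, sensor_pixels: int) -> float:
    """Ground coverage of one pixel across the field of view."""
    return scene_w_m * 1000 / sensor_pixels

distance_m = 4.0    # camera-to-scene distance (assumed)
hfov_deg = 74.0     # horizontal field of view of the lens (assumed)
pixels = 1920       # FHD pixels along the measurement direction

w = scene_width_m(distance_m, hfov_deg)
print(f"scene width {w:.2f} m  ->  {mm_per_pixel(w, pixels):.2f} mm per pixel")
```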
PMD Measurement Approach
• Maximum Speed of Measurement, Accuracy & Repeatability
• Camera FoV and Orientation (1920x1080)
• Video Frames Sampling Rate (Processing in between frames)
• Security Encryption and Decryption can cause frame
dropouts as well as inconsistent frame sampling timings,
which affect time-measurement accuracy.
• Post Process is better than Real Time
• Radar uses the Doppler frequency shift to measure moving
objects; humans present a more water-based surface than metallic
cars, so this measurement accuracy needs to be checked for humans
on PMDs.
Ideal FoV with Ideal Lighting condition:
The PMD crosses the FoV Distance D (m) between trip points S0 (Start Time, t = 0)
and S1 (End Time, tD), and the PMD Speed Sp (km/h) is derived from D and tD.
The Camera Capturing Speed Cs (Frames/Sec) gives an Acquisition Uncertainty
Window of +/- 1 sampling frame at each trip point.
6000 mm S1
S0
• PMD is moving @ 20 km/h across FoV.
• Camera at side view @ 20 fps, covering
6000 mm.
• PMD crosses at 1.08s.
• Frame to Frame PMD covers 280 mm
• Full HD camera is 1920 x 1080 pixels;
the camera has to be positioned for maximum
pixel coverage across the 6000 mm
PMD @ 20 km/h
1920 x
1080
pixels
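A minimal Python sketch that reproduces the slide's numbers and adds the effect of the ±1-frame acquisition uncertainty at each trip point; the uncertainty-bound arithmetic is my reading of that idea, not a formula given on the slides.

```python
# Minimal sketch: PMD at 20 km/h crossing a 6000 mm FoV filmed at 20 fps,
# with up to +/- 1 frame of timing uncertainty at each trip point.
FOV_MM, SPEED_KMH, FPS = 6000, 20, 20

speed_mm_s = SPEED_KMH * 1_000_000 / 3600
crossing_s = FOV_MM / speed_mm_s          # ~1.08 s to cross the FoV
per_frame_mm = speed_mm_s / FPS           # ~278 mm moved from frame to frame

dt = 2 / FPS                              # +/- 1 frame at each end -> 2 frames total
speed_low = FOV_MM / (crossing_s + dt) * 3600 / 1_000_000
speed_high = FOV_MM / (crossing_s - dt) * 3600 / 1_000_000

print(f"crossing time   : {crossing_s:.2f} s")
print(f"per-frame move  : {per_frame_mm:.0f} mm")
print(f"speed estimate  : {speed_low:.1f} to {speed_high:.1f} km/h (true {SPEED_KMH} km/h)")
```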
PMD Speed Sp (km/h)
FoV at 45o
with Ideal
Lighting
condition
A1,45
S2
S1 / S2 = sin(45°) = 1/1.414
From the Camera's interpretation the PMD travels
1 m along S1, but the PMD has actually
travelled 1.414 m along the S2 axis,
so the actual motion is interpreted as faster.
The Camera measures 1 m/s in the S1 direction, when the actual speed
along S2 ranges from 1.15 m/s to 1.74 m/s, depending on the viewing angle
x1.4 x1.15
x1.74
h1
l1
D
A2 A3
S1
S0
The Camera must be rotated
90° so that the 1920 pixels run
along the length of
measurement!
1080 x
1920
pixels
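A minimal Python sketch of the angular correction described above; actual speed = apparent speed / sin(viewing angle), which reproduces the x1.41, x1.15 and x1.74 factors on the slide.

```python
# Minimal sketch: project an apparent image-plane speed (along S1) back onto the
# actual travel direction (S2) for a camera viewing the path at angle A.
import math

def actual_speed(apparent_m_s: float, view_angle_deg: float) -> float:
    """Correct the apparent speed for the viewing angle: actual = apparent / sin(A)."""
    return apparent_m_s / math.sin(math.radians(view_angle_deg))

for angle in (90, 60, 45, 35):
    print(f"A = {angle:>2} deg: 1.00 m/s apparent -> {actual_speed(1.0, angle):.2f} m/s actual")
```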
Using the diagram for a camera at an angled view, the Ground pixels =
Camera Pixels / sin(A1, the viewing Angle).
The camera still picks up only 100 pixels:
at S0 (A2 = 35°) the camera image would need 174 pixels, but has only 100 pixels;
at S1 (A3 = 60°) the camera image would need 115 pixels, but has only 100 pixels.
Pixel Density Function at Start and End Position from Camera View
S1
Trip Wire
S0
Trip Wire t0
t1
Use linear interpolation to determine t0
y = m x + c
Use linear interpolation
to determine t1
Capture every frame and its time record.
Analyse the Tip of the PMD with respect to the start point of the FoV.
Plot the Distance vs Time chart.
Determine the points around the Trip-wire locations
and then interpolate for the time at the crossover.
Error Reduction by Interpolation of parameters
Using Linear Interpolation to increase accuracy, with tracked points (s1, t1) and (s2, t2)
bracketing the trip-wire position sx at unknown time tx:
(sx - s1) / (s2 - s1) = (tx - t1) / (t2 - t1)
hence tx = (sx - s1) / (s2 - s1) x (t2 - t1) + t1
Ensure that this is NOT divide by Zero !
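A minimal Python sketch of this interpolation, including the divide-by-zero guard; the example positions and times are assumed.

```python
# Minimal sketch: estimate the trip-wire crossing time from two tracked points
# (s1, t1) and (s2, t2) that bracket the trip-wire position sx.
def crossing_time(s1: float, t1: float, s2: float, t2: float, sx: float) -> float:
    """Linear interpolation: tx = (sx - s1) / (s2 - s1) * (t2 - t1) + t1."""
    ds = s2 - s1
    if ds == 0:
        # Ensure this is NOT a divide by zero: no movement between the two frames.
        raise ValueError("s2 == s1: the object did not move between frames")
    return (sx - s1) / ds * (t2 - t1) + t1

# Assumed frame data: positions in mm along the FoV, times in seconds, trip wire at 3000 mm.
tx = crossing_time(s1=2890, t1=0.50, s2=3168, t2=0.55, sx=3000)
print(f"trip wire crossed at t = {tx:.3f} s")
```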
Large to Small: scale-down features are more predictable,
since scale-down modelling works with fewer pixels.
Small to Large: scale-up features are not predictable,
going from fewer pixels to predicted pixels; it is like a pixelation effect.
Image scaling accuracy for measurement
PMD moving away from Camera (the image size gets smaller moving away from the camera),
tracked between the S0 and S1 Trip Wires:
• The image starts at a larger size, which is used to analyse and track its features;
scaling from a large size to a smaller size is easier to track.
• Capture all frames, then post-process the image position per frame for the time
information.
• The image is linearly scaled to the expected scaled-down size and features as the
number of pixels for that image reduces.
• This lowers the image-matching time within the scanning ROI at each image frame.
• Then interpolate for the time at which the image crosses the Trip wire.
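A minimal sketch of this scale-down tracking idea using OpenCV template matching; the function, the 0.98 per-frame scale factor and the ROI handling are my assumptions, not the deck's actual implementation.

```python
# Minimal sketch (assumed design): track the PMD in each frame by matching a
# template that is scaled DOWN as the PMD moves away, inside a scanning ROI,
# and record (position, time) pairs for the later trip-wire interpolation.
import cv2

def track_positions(frames, times, template_grey, roi, scale_per_frame=0.98):
    """Return a list of (x_position_px, time_s), one entry per frame."""
    x0, y0, x1, y1 = roi
    track = []
    for frame, t in zip(frames, times):
        search = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
        scores = cv2.matchTemplate(search, template_grey, cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)
        track.append((x0 + best[0], t))
        # scale the template down for the next frame, since the PMD is moving away
        h, w = template_grey.shape
        template_grey = cv2.resize(template_grey,
                                   (max(8, int(w * scale_per_frame)),
                                    max(8, int(h * scale_per_frame))))
    return track
```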
PMD moving towards the Camera (the image size gets larger moving towards the camera),
tracked between the S0 and S1 Trip Wires:
• The image starts from a small size and grows to a larger size.
• Capture all image frames, then post-process the image position per frame to get
the time information.
• Reverse the time sequence so that it runs from the larger image size to the smaller
image size; scaling from a large size to a smaller size is easier to track.
• This lowers the image-matching time within the scanning ROI at each image frame.
• Then interpolate the time at which the image crossed the Trip wire.
51
Thank you for your attention
