Generation and Weighting of 3D Point Correspondences
for Improved Registration of RGB-D Data
Kourosh Khoshelham
Daniel Dos Santos
George Vosselman
MAPPING BY RGB-D DATA
• RGB-D cameras like Kinect have great potential for indoor mapping;
• Kinect captures depth + color images @ ~30 fps = a sequence of colored point clouds.

[Figure: the Kinect sensor: IR emitter, RGB camera, IR camera]
REGISTRATION OF RGB-D DATA
• Mapping requires registration of consecutive frames;
• Registration: transforming all point clouds into one coordinate system (usually that of the first frame).
Point i in frame j−1 relates to point i in frame j by:

    X_i,j−1 = R_j^(j−1) X_i,j + t_j^(j−1)

where R_j^(j−1) and t_j^(j−1) are the rotation and translation from frame j to frame j−1.
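The transformation above can be sketched as a minimal NumPy illustration; the rotation R and translation t are assumed to come from the pairwise registration:

```python
import numpy as np

def transform_to_previous_frame(points_j, R, t):
    """Map points of frame j into the coordinate system of frame j-1:
    X_{i,j-1} = R X_{i,j} + t, applied row-wise to an (N, 3) array."""
    return points_j @ R.T + t
```

Chaining these pairwise transformations maps every frame into the coordinate system of the first frame.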
REGISTRATION BY VISUAL FEATURES
• Extraction and matching of keypoints is done more reliably in RGB images;
• Two main components:
  • Keypoint extraction and matching → SIFT, SURF, …
  • Outlier detection → RANSAC, M-estimator, …

The processing pipeline: SURF matches → conversion to 3D correspondences (using depth data) → removal of outliers (RANSAC) → least-squares estimation of the registration parameters.
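The final least-squares step can be sketched with the standard SVD-based (Kabsch) solution for a rigid transform between matched 3D points; this is a generic illustration, not necessarily the exact estimator used in the paper:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) minimizing ||Q - (P @ R.T + t)||
    over matched 3D points P (frame j) and Q (frame j-1), via SVD (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # enforce a proper rotation (det = +1)
    t = cq - R @ cp
    return R, t
```

In practice this would be run on the inlier set that RANSAC keeps.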
CHALLENGES AND OBJECTIVES
• Challenge:
  • Pairwise registration errors accumulate → deformed point cloud
• Objective:
  • More accurate pairwise registration by:
    i. accurate generation of 3D correspondences from 2D points;
    ii. weighting 3D point pairs based on the random error of depth.
GENERATION OF 3D POINT CORRESPONDENCES
• 2D keypoints → 3D point correspondences? (ill-posed)
• Do RGB image coordinates relate to depth image coordinates by a simple shift?
  • Note: the FOVs of the RGB camera and the IR camera are different!
• Our approach:
  • Transform the 2D keypoints from the RGB image to the depth image using the relative orientation between the two cameras;
  • Search along the epipolar line for the correct 3D coordinates.
  • Note: the relative orientation parameters are estimated during calibration.
GENERATION OF 3D POINT CORRESPONDENCES
More formally:
• Given a keypoint in the RGB frame:
  1. calculate the epipolar line in the depth frame using the relative orientation parameters;
  2. define a search band along the epipolar line using the minimum and maximum of the range of depth values (0.5 m and 5 m, respectively).
• For all pixels within the search band:
  1. calculate the 3D coordinates and re-project the resulting 3D point back to the RGB frame;
  2. calculate and store the distance between the re-projected point and the original keypoint.
• Return the 3D point whose re-projection has the smallest distance to the keypoint.
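The search procedure can be sketched as follows; `depth_to_point` and `project_to_rgb` are hypothetical helpers standing in for the calibrated depth-camera back-projection and the depth-to-RGB re-projection:

```python
import numpy as np

def best_3d_point(keypoint_rgb, band_pixels, depth_image,
                  depth_to_point, project_to_rgb):
    """For each candidate depth pixel in the epipolar search band, compute
    its 3D point, re-project it into the RGB image and keep the candidate
    whose re-projection lies closest to the original keypoint."""
    best, best_dist = None, np.inf
    for (u, v) in band_pixels:
        Z = depth_image[v, u]
        if not (0.5 <= Z <= 5.0):        # valid depth range (0.5 m .. 5 m)
            continue
        X = depth_to_point(u, v, Z)      # 3D point in the depth-camera frame
        d = np.linalg.norm(project_to_rgb(X) - keypoint_rgb)
        if d < best_dist:
            best, best_dist = X, d
    return best
```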
GENERATION OF 3D POINT CORRESPONDENCES
Finding 3D points in the depth image (right) corresponding to 2D
keypoints in the RGB image (left) by searching along epipolar
lines (red bands).

ESTIMATING RELATIVE ORIENTATION PARAMETERS
• Relative orientation between the RGB camera and the IR camera can be:
  • approximated by a shift;
  • estimated by stereo calibration;
  • estimated by space resection.

Manually measured markers in the disparity image (left) and colour image (right), used for the estimation of the relative orientation parameters by space resection.
WEIGHTING OF 3D POINT CORRESPONDENCES
• Observation equation in the estimation model:

    v_i = X_i,j−1 − (R_j^(j−1) X_i,j + t_j^(j−1))

• Approximate as:

    v_i ≈ X_i,j−1 − X_i,j

  • Note: because of the high frame rate, the transformation parameters between consecutive frames are quite small.

• Define the weights as:

    w_i = k / σ²(v_i) = k / (σ²(X_i,j−1) + σ²(X_i,j))
WEIGHTING OF 3D POINT CORRESPONDENCES
• We use the random error of depth only:
  • Relation between disparity (d) and depth (Z):

    Z⁻¹ = c₀ + c₁ d        (c₀, c₁: calibration parameters)

  • Propagation of variance:

    σ_Z² = c₁² Z⁴ σ_d²

  • Weight:

    w_i = k / (c₁² σ_d² (Z_i,j−1⁴ + Z_i,j⁴))
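A minimal numeric sketch of the resulting weight; the values of c₁, σ_d and k below are placeholders, not the calibration values used in the paper:

```python
def correspondence_weight(Z_prev, Z_curr, c1=-2.85e-3, sigma_d=0.5, k=1.0):
    """w_i = k / (c1^2 sigma_d^2 (Z_prev^4 + Z_curr^4)).
    c1, sigma_d, k are placeholder calibration/scale constants."""
    return k / (c1 ** 2 * sigma_d ** 2 * (Z_prev ** 4 + Z_curr ** 4))
```

Because the variance grows with Z⁴, nearby correspondences receive far larger weights than distant ones.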
RESULTS: ACCURACY OF 3D POINT CORRESPONDENCES
• Relative orientation approximated by a shift
RESULTS: ACCURACY OF 3D POINT CORRESPONDENCES
• Relative orientation estimated by stereo calibration
RESULTS: ACCURACY OF 3D POINT CORRESPONDENCES
• Relative orientation estimated by space resection
EFFECT OF WEIGHTS IN REGISTRATION
• Six RGB-D sequences of an office environment;
• Trajectories formed closed loops;
• Evaluation by the closing error:

    [ δR  δT ]
    [ 0ᵀ   1 ]  =  H_n→1 · H_1→n

where H_1→n is the chain of pairwise transformations from the first frame to the last, H_n→1 is the transformation from the last frame back to the first, and δR, δT are the closing rotation and closing translation. For error-free registration the product is the identity.
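The closing-error evaluation can be sketched as follows; this is a generic illustration with 4×4 homogeneous matrices, and the residual-extraction details are assumptions:

```python
import numpy as np

def closing_error(pairwise_Hs, H_last_to_first):
    """pairwise_Hs: 4x4 transformations from each frame to the next;
    H_last_to_first: transformation from the last frame back to the first.
    For a perfectly registered closed loop the product is the identity;
    the residual gives the closing angle [deg] and closing distance."""
    H = np.eye(4)
    for Hj in pairwise_Hs:               # compose first frame -> last frame
        H = Hj @ H
    dH = H_last_to_first @ H             # ~identity on a well-closed loop
    cos_a = np.clip((np.trace(dH[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_a)), np.linalg.norm(dH[:3, 3])
```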
EFFECT OF WEIGHTS IN REGISTRATION
• Closing distance for the six sequences registered with and without weights:
EFFECT OF WEIGHTS IN REGISTRATION
• Closing angle for the six sequences registered with and without weights:
EFFECT OF WEIGHTS IN REGISTRATION
• Average closing errors for registrations with and without weights:

    Registration     Average closing distance [cm]   Average closing angle [deg]
    without weight   6.42                            6.32
    with weight      3.80                            4.74
EFFECT OF WEIGHTS IN REGISTRATION
• The trajectory obtained by weighted registration (in blue) is more accurate than the one obtained without weights (in red).
EXAMPLE REGISTRATION RESULTS

EXAMPLE REGISTRATION RESULTS

CONCLUSIONS
• Accurate transformation of keypoints from the RGB space to 3D space → more accurate registration of consecutive frames;
• Assigning weights based on the random error of depth improves the accuracy of pairwise registration and of the sensor pose estimates;
• Using weights → covariance matrices for the pose vectors
  • these can be used to weight the pose vectors in the global adjustment
  • = more accurate loop closure;
• Influence of synchronization errors (between the RGB and IR cameras) → fine registration using point and plane correspondences extracted directly from the point cloud.
SUPPLEMENTARY SLIDES

Measurement principle of Kinect
• Depth measurement by triangulation:
  • The laser source emits a laser beam;
  • A diffraction grating splits the beam to create a pattern of speckles projected onto the scene;
  • The speckles are captured by the infrared camera;
  • The speckle image is correlated with a reference image, obtained by capturing a plane at a known distance from the sensor;
  • The result of the correlation is a disparity value for each pixel, from which depth can be calculated.

[Figure: IR image of the speckle pattern projected onto the scene, and the resulting disparity image]
Depth-disparity relation and calculation of point coordinates
From triangle similarities:

    Z_k = Z_o / (1 + (Z_o / (f b)) d)

and:

    X_k = −(Z_k / f)(x_k − x_o + δx)
    Y_k = −(Z_k / f)(y_k − y_o + δy)

where:

    Z_o       distance of the reference plane
    f         focal length of the IR camera
    d         measured disparity
    b         base length between emitter and IR camera
    x_k, y_k  image coordinates of point k
    x_o, y_o  principal point offsets
    δx, δy    lens distortion corrections
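These relations can be sketched directly; f, b, Z_o and the principal point below are placeholder values, not actual Kinect calibration results:

```python
import numpy as np

f, b, Zo = 580.0, 0.075, 1.0     # placeholder: focal [px], baseline [m], Z_o [m]
xo, yo = 320.0, 240.0            # placeholder principal point [px]

def depth_from_disparity(d):
    """Z_k = Z_o / (1 + (Z_o / (f b)) d)."""
    return Zo / (1.0 + (Zo / (f * b)) * d)

def point_from_pixel(xk, yk, d, dx=0.0, dy=0.0):
    """3D coordinates of pixel (x_k, y_k); dx, dy are distortion corrections."""
    Zk = depth_from_disparity(d)
    Xk = -(Zk / f) * (xk - xo + dx)
    Yk = -(Zk / f) * (yk - yo + dy)
    return np.array([Xk, Yk, Zk])
```

Note that zero disparity returns exactly the reference-plane distance Z_o.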
Calibration
• Calibration procedure:
  • Standard calibration of the IR camera:
    • focal length (f);
    • principal point offsets (x_o, y_o);
    • lens distortion coefficients (in δx, δy);
  • Depth calibration:
    • base length (b);
    • distance of the reference pattern (Z_o).

• Normalization: d = m d′ + n, which gives

    Z_k⁻¹ = (m / (f b)) d′ + (Z_o⁻¹ + n / (f b))

with m and n the depth calibration parameters.
Theoretical model of depth random error
Depth equation:

    Z_k⁻¹ = (m / (f b)) d′ + (Z_o⁻¹ + n / (f b))

Propagation of variance gives the depth random error:

    σ_Z = (m / (f b)) Z_k² σ_d′

Random error is a quadratic function of depth.
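A one-line sketch of the error model, with placeholder values for m/(f b) and σ_d′:

```python
def depth_sigma(Z, m_over_fb=2.85e-5, sigma_dp=0.5):
    """sigma_Z = (m / (f b)) Z^2 sigma_d': quadratic growth with depth.
    m/(f b) and sigma_d' are placeholder calibration values."""
    return m_over_fb * Z ** 2 * sigma_dp
```

Doubling the distance from the sensor quadruples the depth random error.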
Depth random error
• Standard deviation of plane-fitting residuals as a measure of the depth random error;
• As expected, the depth random error increases quadratically with increasing distance from the sensor.

[Plots: plane-fitting residuals at 1.0 m, 2.0 m, 3.0 m, 4.0 m and 5.0 m]
Depth resolution
• Distribution of plane-fitting residuals on the plane at 4 m distance;
• Depth resolution is also proportional to the squared distance from the sensor;
• At the maximum range of 5 m, the depth resolution is 7 cm.

[Figure: side view of the points on the plane at 4 m, showing the effect of depth resolution]

More Related Content

What's hot

Orb feature by nitin
Orb feature by nitinOrb feature by nitin
Orb feature by nitin
NitinMauryaKashipur
 
Hidden Surface Removal using Z-buffer
Hidden Surface Removal using Z-bufferHidden Surface Removal using Z-buffer
Hidden Surface Removal using Z-buffer
Raj Sikarwar
 
Build Your Own 3D Scanner: 3D Scanning with Swept-Planes
Build Your Own 3D Scanner: 3D Scanning with Swept-PlanesBuild Your Own 3D Scanner: 3D Scanning with Swept-Planes
Build Your Own 3D Scanner: 3D Scanning with Swept-Planes
Douglas Lanman
 
Build Your Own 3D Scanner: Surface Reconstruction
Build Your Own 3D Scanner: Surface ReconstructionBuild Your Own 3D Scanner: Surface Reconstruction
Build Your Own 3D Scanner: Surface Reconstruction
Douglas Lanman
 
Hidden lines & surfaces
Hidden lines & surfacesHidden lines & surfaces
Hidden lines & surfaces
Ankur Kumar
 
Computer Graphics - Hidden Line Removal Algorithm
Computer Graphics - Hidden Line Removal AlgorithmComputer Graphics - Hidden Line Removal Algorithm
Computer Graphics - Hidden Line Removal Algorithm
Jyotiraman De
 
CG OpenGL surface detection+illumination+rendering models-course 9
CG OpenGL surface detection+illumination+rendering models-course 9CG OpenGL surface detection+illumination+rendering models-course 9
CG OpenGL surface detection+illumination+rendering models-course 9
fungfung Chen
 
Build Your Own 3D Scanner: Conclusion
Build Your Own 3D Scanner: ConclusionBuild Your Own 3D Scanner: Conclusion
Build Your Own 3D Scanner: Conclusion
Douglas Lanman
 
Neural Scene Representation & Rendering: Introduction to Novel View Synthesis
Neural Scene Representation & Rendering: Introduction to Novel View SynthesisNeural Scene Representation & Rendering: Introduction to Novel View Synthesis
Neural Scene Representation & Rendering: Introduction to Novel View Synthesis
Vincent Sitzmann
 
Hidden surfaces
Hidden surfacesHidden surfaces
Hidden surfacesMohd Arif
 
Sr 01-40 good
Sr 01-40 goodSr 01-40 good
Visible surface determination
Visible  surface determinationVisible  surface determination
Visible surface determinationPatel Punit
 
Visible surface detection in computer graphic
Visible surface detection in computer graphicVisible surface detection in computer graphic
Visible surface detection in computer graphicanku2266
 
Applicability of-tracability-technologies-for-3 d-printing-robust-blind-water...
Applicability of-tracability-technologies-for-3 d-printing-robust-blind-water...Applicability of-tracability-technologies-for-3 d-printing-robust-blind-water...
Applicability of-tracability-technologies-for-3 d-printing-robust-blind-water...
Sirris
 
rural marketing ppt
rural marketing pptrural marketing ppt
rural marketing pptelaya1984
 
Visible surface identification
Visible surface identificationVisible surface identification
Visible surface identification
Pooja Dixit
 
Computer Graphics: Visible surface detection methods
Computer Graphics: Visible surface detection methodsComputer Graphics: Visible surface detection methods
Computer Graphics: Visible surface detection methods
Joseph Charles
 

What's hot (20)

Orb feature by nitin
Orb feature by nitinOrb feature by nitin
Orb feature by nitin
 
Hidden Surface Removal using Z-buffer
Hidden Surface Removal using Z-bufferHidden Surface Removal using Z-buffer
Hidden Surface Removal using Z-buffer
 
Build Your Own 3D Scanner: 3D Scanning with Swept-Planes
Build Your Own 3D Scanner: 3D Scanning with Swept-PlanesBuild Your Own 3D Scanner: 3D Scanning with Swept-Planes
Build Your Own 3D Scanner: 3D Scanning with Swept-Planes
 
Build Your Own 3D Scanner: Surface Reconstruction
Build Your Own 3D Scanner: Surface ReconstructionBuild Your Own 3D Scanner: Surface Reconstruction
Build Your Own 3D Scanner: Surface Reconstruction
 
Hidden lines & surfaces
Hidden lines & surfacesHidden lines & surfaces
Hidden lines & surfaces
 
Computer Graphics - Hidden Line Removal Algorithm
Computer Graphics - Hidden Line Removal AlgorithmComputer Graphics - Hidden Line Removal Algorithm
Computer Graphics - Hidden Line Removal Algorithm
 
CG OpenGL surface detection+illumination+rendering models-course 9
CG OpenGL surface detection+illumination+rendering models-course 9CG OpenGL surface detection+illumination+rendering models-course 9
CG OpenGL surface detection+illumination+rendering models-course 9
 
3 d viewing
3 d viewing3 d viewing
3 d viewing
 
Build Your Own 3D Scanner: Conclusion
Build Your Own 3D Scanner: ConclusionBuild Your Own 3D Scanner: Conclusion
Build Your Own 3D Scanner: Conclusion
 
visible surface detection
visible surface detectionvisible surface detection
visible surface detection
 
Neural Scene Representation & Rendering: Introduction to Novel View Synthesis
Neural Scene Representation & Rendering: Introduction to Novel View SynthesisNeural Scene Representation & Rendering: Introduction to Novel View Synthesis
Neural Scene Representation & Rendering: Introduction to Novel View Synthesis
 
Hidden surfaces
Hidden surfacesHidden surfaces
Hidden surfaces
 
Sr 01-40 good
Sr 01-40 goodSr 01-40 good
Sr 01-40 good
 
Visible surface determination
Visible  surface determinationVisible  surface determination
Visible surface determination
 
Visible surface detection in computer graphic
Visible surface detection in computer graphicVisible surface detection in computer graphic
Visible surface detection in computer graphic
 
Applicability of-tracability-technologies-for-3 d-printing-robust-blind-water...
Applicability of-tracability-technologies-for-3 d-printing-robust-blind-water...Applicability of-tracability-technologies-for-3 d-printing-robust-blind-water...
Applicability of-tracability-technologies-for-3 d-printing-robust-blind-water...
 
Clipping
ClippingClipping
Clipping
 
rural marketing ppt
rural marketing pptrural marketing ppt
rural marketing ppt
 
Visible surface identification
Visible surface identificationVisible surface identification
Visible surface identification
 
Computer Graphics: Visible surface detection methods
Computer Graphics: Visible surface detection methodsComputer Graphics: Visible surface detection methods
Computer Graphics: Visible surface detection methods
 

Similar to Generation and weighting of 3D point correspondences for improved registration of RGB-D data

Visual odometry & slam utilizing indoor structured environments
Visual odometry & slam utilizing indoor structured environmentsVisual odometry & slam utilizing indoor structured environments
Visual odometry & slam utilizing indoor structured environments
NAVER Engineering
 
M.sc. m hassan
M.sc. m hassanM.sc. m hassan
M.sc. m hassan
Ashraf Aboshosha
 
Depth estimation do we need to throw old things away
Depth estimation do we need to throw old things awayDepth estimation do we need to throw old things away
Depth estimation do we need to throw old things away
NAVER Engineering
 
Depth Fusion from RGB and Depth Sensors II
Depth Fusion from RGB and Depth Sensors IIDepth Fusion from RGB and Depth Sensors II
Depth Fusion from RGB and Depth Sensors II
Yu Huang
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
Chetan Hulsure
 
D04432528
D04432528D04432528
D04432528
IOSR-JEN
 
Final Project Report Nadar
Final Project Report NadarFinal Project Report Nadar
Final Project Report NadarMaher Nadar
 
3DSensing.ppt
3DSensing.ppt3DSensing.ppt
3DSensing.ppt
TejaReddy453140
 
[Paper] GIRAFFE: Representing Scenes as Compositional Generative Neural Featu...
[Paper] GIRAFFE: Representing Scenes as Compositional Generative Neural Featu...[Paper] GIRAFFE: Representing Scenes as Compositional Generative Neural Featu...
[Paper] GIRAFFE: Representing Scenes as Compositional Generative Neural Featu...
Susang Kim
 
TransNeRF
TransNeRFTransNeRF
TransNeRF
NavneetPaul2
 
3-d interpretation from single 2-d image for autonomous driving
3-d interpretation from single 2-d image for autonomous driving3-d interpretation from single 2-d image for autonomous driving
3-d interpretation from single 2-d image for autonomous driving
Yu Huang
 
ICRA Nathan Piasco
ICRA Nathan PiascoICRA Nathan Piasco
ICRA Nathan Piasco
Nathan Piasco
 
高解析度面板瑕疵檢測
高解析度面板瑕疵檢測高解析度面板瑕疵檢測
高解析度面板瑕疵檢測
CHENHuiMei
 
3-d interpretation from single 2-d image for autonomous driving II
3-d interpretation from single 2-d image for autonomous driving II3-d interpretation from single 2-d image for autonomous driving II
3-d interpretation from single 2-d image for autonomous driving II
Yu Huang
 
DEEP LEARNING TECHNIQUES POWER POINT PRESENTATION
DEEP LEARNING TECHNIQUES POWER POINT PRESENTATIONDEEP LEARNING TECHNIQUES POWER POINT PRESENTATION
DEEP LEARNING TECHNIQUES POWER POINT PRESENTATION
SelvaLakshmi63
 
EFFICIENT STEREO VIDEO ENCODING FOR MOBILE APPLICATIONS USING THE 3D+F CODEC
EFFICIENT STEREO VIDEO ENCODING FOR MOBILE APPLICATIONS USING THE 3D+F CODECEFFICIENT STEREO VIDEO ENCODING FOR MOBILE APPLICATIONS USING THE 3D+F CODEC
EFFICIENT STEREO VIDEO ENCODING FOR MOBILE APPLICATIONS USING THE 3D+F CODEC
Swisscom
 
Virtual Reality 3D home applications
Virtual Reality 3D home applicationsVirtual Reality 3D home applications
Virtual Reality 3D home applications
slebrun
 
Ray casting algorithm by mhm
Ray casting algorithm by mhmRay casting algorithm by mhm
Ray casting algorithm by mhm
Md Mosharof Hosen
 

Similar to Generation and weighting of 3D point correspondences for improved registration of RGB-D data (20)

Visual odometry & slam utilizing indoor structured environments
Visual odometry & slam utilizing indoor structured environmentsVisual odometry & slam utilizing indoor structured environments
Visual odometry & slam utilizing indoor structured environments
 
M.sc. m hassan
M.sc. m hassanM.sc. m hassan
M.sc. m hassan
 
Depth estimation do we need to throw old things away
Depth estimation do we need to throw old things awayDepth estimation do we need to throw old things away
Depth estimation do we need to throw old things away
 
3d scanning techniques
3d scanning techniques3d scanning techniques
3d scanning techniques
 
Depth Fusion from RGB and Depth Sensors II
Depth Fusion from RGB and Depth Sensors IIDepth Fusion from RGB and Depth Sensors II
Depth Fusion from RGB and Depth Sensors II
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
 
D04432528
D04432528D04432528
D04432528
 
Final Project Report Nadar
Final Project Report NadarFinal Project Report Nadar
Final Project Report Nadar
 
3DSensing.ppt
3DSensing.ppt3DSensing.ppt
3DSensing.ppt
 
[Paper] GIRAFFE: Representing Scenes as Compositional Generative Neural Featu...
[Paper] GIRAFFE: Representing Scenes as Compositional Generative Neural Featu...[Paper] GIRAFFE: Representing Scenes as Compositional Generative Neural Featu...
[Paper] GIRAFFE: Representing Scenes as Compositional Generative Neural Featu...
 
TransNeRF
TransNeRFTransNeRF
TransNeRF
 
3-d interpretation from single 2-d image for autonomous driving
3-d interpretation from single 2-d image for autonomous driving3-d interpretation from single 2-d image for autonomous driving
3-d interpretation from single 2-d image for autonomous driving
 
ICRA Nathan Piasco
ICRA Nathan PiascoICRA Nathan Piasco
ICRA Nathan Piasco
 
高解析度面板瑕疵檢測
高解析度面板瑕疵檢測高解析度面板瑕疵檢測
高解析度面板瑕疵檢測
 
final_presentation
final_presentationfinal_presentation
final_presentation
 
3-d interpretation from single 2-d image for autonomous driving II
3-d interpretation from single 2-d image for autonomous driving II3-d interpretation from single 2-d image for autonomous driving II
3-d interpretation from single 2-d image for autonomous driving II
 
DEEP LEARNING TECHNIQUES POWER POINT PRESENTATION
DEEP LEARNING TECHNIQUES POWER POINT PRESENTATIONDEEP LEARNING TECHNIQUES POWER POINT PRESENTATION
DEEP LEARNING TECHNIQUES POWER POINT PRESENTATION
 
EFFICIENT STEREO VIDEO ENCODING FOR MOBILE APPLICATIONS USING THE 3D+F CODEC
EFFICIENT STEREO VIDEO ENCODING FOR MOBILE APPLICATIONS USING THE 3D+F CODECEFFICIENT STEREO VIDEO ENCODING FOR MOBILE APPLICATIONS USING THE 3D+F CODEC
EFFICIENT STEREO VIDEO ENCODING FOR MOBILE APPLICATIONS USING THE 3D+F CODEC
 
Virtual Reality 3D home applications
Virtual Reality 3D home applicationsVirtual Reality 3D home applications
Virtual Reality 3D home applications
 
Ray casting algorithm by mhm
Ray casting algorithm by mhmRay casting algorithm by mhm
Ray casting algorithm by mhm
 

Recently uploaded

Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems S.M.S.A.
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
Matthew Sinclair
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
Ralf Eggert
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
mikeeftimakis1
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
sonjaschweigert1
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
Neo4j
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Aggregage
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
Dorra BARTAGUIZ
 
National Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practicesNational Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practices
Quotidiano Piemontese
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Paige Cruz
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
James Anderson
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 
Video Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the FutureVideo Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the Future
Alpen-Adria-Universität
 
GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...
ThomasParaiso2
 

Recently uploaded (20)

Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 
National Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practicesNational Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practices
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
Video Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the FutureVideo Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the Future
 
GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...
 

Generation and weighting of 3D point correspondences for improved registration of RGB-D data

  • 1. Generation and Weighting of 3D Point Correspondences for Improved Registration of RGB-D Data Kourosh Khoshelham Daniel Dos Santos George Vosselman
  • 2. MAPPING BY RGB-D DATA  RGB-D cameras like Kinect have great potential for indoor mapping;  Kinect captures: depth + color images @ ~30 fps = sequence of colored point clouds IR emitter RGB camera + IR camera  2
  • 3. REGISTRATION OF RGB-D DATA  Mapping requires registration of consecutive frames;  Registration: transforming all point clouds into one coordinate system (usually of the first frame). Point i in frame j-1 Point i in frame j 𝑗−1 𝐗 𝑖,𝑗−1 = 𝐑 𝑗 𝑗−1 𝐗 𝑖,𝑗 + 𝐭 𝑗 Transformation from frame j to frame j-1 3
  • 4. REGISTRATION BY VISUAL FEATURES  Extraction and matching of keypoints is done more reliably in RGB images;  Two main components:  Keypoint extraction and matching  SIFT, SURF, …  Outlier detection  RANSAC, M-estimator, … SURF matches Conversion to 3D correspondences (using depth data) Removing outliers RANSAC Least-squares estimation of registration parameters 4
  • 5. CHALLENGES AND OBJECTIVES  Challenge:  Pairwise registration errors accumulate  deformed point cloud  Objective:  More accurate pairwise registration by: i. Accurate generation of 3D correspondences from 2D points; ii. Weighting 3D point pairs based on random error of depth. 5
  • 6. GENERATION OF 3D POINT CORRESPONDENCES  2D keypoints  3D point correspondences ? (ill-posed)  RGB image coordinates relate to depth image coordinates by a shift?  Note: the FOV of the RGB camera and IR camera are different!  Our approach:  Transform 2D keypoints from RGB to depth image using relative orientation between the two cameras;  Search along the epipolar line for the correct 3D coordinates.  Note: relative orientation parameters are estimated during calibration. 6
  • 7. GENERATION OF 3D POINT CORRESPONDENCES More formally:  Given a keypoint in the RGB frame: 1. calculate the epipolar line in the depth frame using the relative orientation parameters; 2. define a search band along the epipolar line using the minimum and maximum of the range of depth values (0.5 m and 5 m respectively);  For all pixels within the search band: 1. calculate 3D coordinates and re-project the resulting 3D point back to the RGB frame; 2. calculate and store the distance between the reprojected point and the original keypoint;  Return the 3D point whose re-projection has the smallest distance to the keypoint. 7
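The search procedure above might look as follows in simplified form. This sketch assumes ideal pinhole cameras without lens distortion, and takes the candidate pixels of the search band (with their measured depths) as given; all function names are illustrative.

```python
import numpy as np

def backproject(u, v, Z, K):
    """Pixel (u, v) with depth Z -> 3D point in that camera's frame (pinhole)."""
    x = (u - K[0, 2]) * Z / K[0, 0]
    y = (v - K[1, 2]) * Z / K[1, 1]
    return np.array([x, y, Z])

def project(X, K):
    """3D point in the camera frame -> pixel coordinates (pinhole)."""
    return np.array([K[0, 0] * X[0] / X[2] + K[0, 2],
                     K[1, 1] * X[1] / X[2] + K[1, 2]])

def find_3d_for_keypoint(kp_rgb, candidates, K_d, K_rgb, R, t):
    """For each candidate depth pixel (u, v, Z) in the search band: back-project,
    map into the RGB camera frame (X_rgb = R @ X_d + t), re-project, and return
    the 3D point whose re-projection lies closest to the keypoint."""
    best, best_dist = None, np.inf
    for (u, v, Z) in candidates:
        X_d = backproject(u, v, Z, K_d)
        dist = np.linalg.norm(project(R @ X_d + t, K_rgb) - kp_rgb)
        if dist < best_dist:
            best, best_dist = X_d, dist
    return best
```

In the actual method the candidates come from the epipolar band bounded by the 0.5 m and 5 m depth limits, and the relative orientation (R, t) from calibration.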
  • 8. GENERATION OF 3D POINT CORRESPONDENCES Finding 3D points in the depth image (right) corresponding to 2D keypoints in the RGB image (left) by searching along epipolar lines (red bands). 8
  • 9. ESTIMATING RELATIVE ORIENTATION PARAMETERS  Relative orientation between the RGB camera and IR camera:  approximate by a shift;  estimate by stereo calibration;  estimate by space resection. Manually measured markers in the disparity (left) and colour image (right) used for the estimation of relative orientation parameters by space resection. 9
  • 10. WEIGHTING OF 3D POINT CORRESPONDENCES  Observation equation in the estimation model: v_i = X_{i,j-1} - R_j^{j-1} X_{i,j} - t_j^{j-1}  Approximate as: v_i ≈ X_{i,j-1} - X_{i,j}  Note: because of the high frame rate, the transformation parameters between consecutive frames are quite small.  Define weights as: w_i = k / σ_{v_i}^2, with σ_{v_i}^2 = σ_{X_{i,j-1}}^2 + σ_{X_{i,j}}^2 and k an arbitrary constant. 10
  • 11. WEIGHTING OF 3D POINT CORRESPONDENCES  We use the random error of depth only:  Relation between disparity (d) and depth (Z): Z^{-1} = c_0 + c_1 d, with c_0 and c_1 the calibration parameters;  Propagation of variance: σ_Z^2 = c_1^2 Z^4 σ_d^2  Weight: w_i = k / (c_1^2 σ_d^2 (Z_{i,j-1}^4 + Z_{i,j}^4)) 11
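Putting the two weighting slides together, a small sketch (function names are ours; k is the arbitrary constant from the weight definition):

```python
import numpy as np

def depth_variance(Z, c1, sigma_d):
    """Propagated depth variance: sigma_Z^2 = c1^2 * Z^4 * sigma_d^2."""
    return c1**2 * Z**4 * sigma_d**2

def correspondence_weight(Z_prev, Z_curr, c1, sigma_d, k=1.0):
    """w_i = k / (c1^2 sigma_d^2 (Z_{i,j-1}^4 + Z_{i,j}^4)):
    k over the summed depth variances of the two matched points."""
    return k / (depth_variance(Z_prev, c1, sigma_d) +
                depth_variance(Z_curr, c1, sigma_d))
```

Because the variance grows with Z^4, far correspondences receive much smaller weights than near ones.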
  • 12. RESULTS: ACCURACY OF 3D POINT CORRESPONDENCES  Relative orientation  approximated by a shift 12
  • 13. RESULTS: ACCURACY OF 3D POINT CORRESPONDENCES  Relative orientation  estimated by stereo calibration 13
  • 14. RESULTS: ACCURACY OF 3D POINT CORRESPONDENCES  Relative orientation  estimated by space resection 14
  • 15. EFFECT OF WEIGHTS IN REGISTRATION  Six RGB-D sequences of an office environment;  Trajectories formed closed loops;  Evaluation by closing error: with H_1^n = H_{n-1}^n ⋯ H_1^2 the transformation from the first frame to the last frame, and H_n^1 the transformation from the last frame to the first frame, the closing error is H_n^1 H_1^n = [ΔR, v; 0^T, 1], where ΔR gives the closing rotation and v the closing translation. 15
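The loop-closure evaluation can be sketched as follows, assuming 4x4 homogeneous transformation matrices. Recovering the closing angle from the trace of the residual rotation is our formulation (a standard identity for rotation matrices), not something stated on the slides.

```python
import numpy as np

def compose(H_list):
    """Chain pairwise transforms: H_1^n = H_{n-1}^n @ ... @ H_1^2."""
    H = np.eye(4)
    for Hi in H_list:
        H = Hi @ H
    return H

def closing_errors(H_first_to_last, H_last_to_first):
    """Closing error matrix E = H_n^1 @ H_1^n (identity for a perfect loop).
    Returns (closing angle in radians, closing distance)."""
    E = H_last_to_first @ H_first_to_last
    # Rotation angle from the trace: cos(theta) = (tr(R) - 1) / 2.
    angle = np.arccos(np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    return angle, np.linalg.norm(E[:3, 3])
```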
  • 16. EFFECT OF WEIGHTS IN REGISTRATION  Closing distance for the six sequences registered with and without weights: 16
  • 17. EFFECT OF WEIGHTS IN REGISTRATION  Closing angle for the six sequences registered with and without weights: 17
  • 18. EFFECT OF WEIGHTS IN REGISTRATION  Average closing errors for registrations with and without weights: without weights, average closing distance 6.42 cm and average closing angle 6.32 deg; with weights, 3.80 cm and 4.74 deg. 18
  • 19. EFFECT OF WEIGHTS IN REGISTRATION  The trajectory obtained by weighted registration (in blue) is more accurate than the one without weights (in red). 19
  • 22. CONCLUSIONS  Accurate transformation of keypoints from the RGB space to the 3D space  more accurate registration of consecutive frames;  Assigning weights based on random error of depth improves the accuracy of pairwise registration and sensor pose estimates.  Using weights  covariance matrices for pose vectors  can be used to weight pose vectors in the global adjustment  = more accurate loop closure  Influence of synchronization errors (between RGB and IR cam)  fine registration using point- and plane correspondences extracted directly from the point cloud. 22
  • 25. Measurement principle of Kinect  Depth measurement by triangulation:  The laser source emits a laser beam;  A diffraction grating splits the beam to create a pattern of speckles projected onto the scene;  The speckles are captured by the infrared camera;  The speckle image is correlated with a reference image obtained by capturing a plane at a known distance from the sensor;  The result of correlation is a disparity value for each pixel from which depth can be calculated. (Figures: IR image of the speckle pattern projected onto the scene; resulting disparity image.) 25
  • 26. Depth-disparity relation and calculation of point coordinates From triangle similarities: Z_k = Z_o / (1 + (Z_o / (f b)) d), and: X_k = -(Z_k / f)(x_k - x_o + δx), Y_k = -(Z_k / f)(y_k - y_o + δy), where: Z_o is the distance of the reference plane, f the focal length of the IR camera, d the measured disparity, b the base length between emitter and IR camera, x_k, y_k the image coordinates of point k, x_o, y_o the principal point offsets, and δx, δy the lens distortion corrections. 26
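The depth and coordinate equations on this slide can be written as a small function; a sketch assuming the distortion corrections δx, δy are precomputed (parameter names mirror the slide's notation, default values are ours):

```python
import numpy as np

def kinect_point(xk, yk, d, f, b, Z0, x0=0.0, y0=0.0, dx=0.0, dy=0.0):
    """Disparity to 3D point: Z_k = Z0 / (1 + (Z0 / (f b)) d), then
    X_k = -(Z_k / f)(x_k - x0 + dx), Y_k = -(Z_k / f)(y_k - y0 + dy)."""
    Z = Z0 / (1.0 + (Z0 / (f * b)) * d)
    X = -(Z / f) * (xk - x0 + dx)
    Y = -(Z / f) * (yk - y0 + dy)
    return np.array([X, Y, Z])
```

At zero disparity the point lies on the reference plane (Z_k = Z_o), as the depth equation requires.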
  • 27. Calibration  Calibration procedure:  Standard calibration of the IR camera: focal length (f), principal point offsets (x_o, y_o), lens distortion coefficients (in δx, δy);  Depth calibration: base length (b), distance of the reference pattern (Z_o).  Normalization of the disparity: d = m d' + n, which gives Z_k^{-1} = (m / (f b)) d' + (Z_o^{-1} + n / (f b)), with m and n the depth calibration parameters. 27
  • 28. Theoretical model of depth random error Depth equation: Z_k^{-1} = (Z_o^{-1} + n / (f b)) + (m / (f b)) d'  Propagation of variance gives the depth random error: σ_Z = (m / (f b)) Z_k^2 σ_{d'}  Random error is a quadratic function of depth. 28
  • 29. Depth random error  Standard deviation of plane fitting residuals as a measure of depth random error;  As expected, depth random error increases quadratically with increasing distance from the sensor. (Figure: plane-fitting residuals at 1.0 m, 2.0 m, 3.0 m, 4.0 m and 5.0 m from the sensor.) 29
  • 30. Depth resolution  Distribution of plane fitting residuals on the plane at 4 m distance;  Depth resolution is also proportional to the squared distance from the sensor;  At the maximum range of 5 m the depth resolution is 7 cm. (Figure: side view of the points on the plane at 4 m, showing the effect of depth resolution.) 30