This document discusses methods for improving the registration of RGB-D frames by generating accurate 3D point correspondences from 2D keypoints and weighting the correspondences according to depth uncertainty. It describes estimating the relative orientation of the RGB and depth cameras, searching along epipolar lines to find corresponding 3D points, and assigning lower weights to points with larger random depth errors. The results show that weighted registration outperforms non-weighted registration, producing more accurate trajectories with smaller closing distances and angles between the start and end frames.
Generation and weighting of 3D point correspondences for improved registration of RGB-D data
1. Generation and Weighting of 3D Point Correspondences for Improved Registration of RGB-D Data
Kourosh Khoshelham, Daniel Dos Santos, George Vosselman
2. MAPPING BY RGB-D DATA
RGB-D cameras like the Kinect have great potential for indoor mapping;
The Kinect captures depth + color images at ~30 fps = a sequence of colored point clouds.
[Figure: the sensor combines an IR emitter, an RGB camera, and an IR camera.]
3. REGISTRATION OF RGB-D DATA
Mapping requires registration of consecutive frames;
Registration: transforming all point clouds into one coordinate system (usually that of the first frame).
Point i in frame j-1 is related to point i in frame j by the transformation from frame j to frame j-1:

$$\mathbf{X}_{i,j-1} = \mathbf{R}_{j}^{\,j-1}\,\mathbf{X}_{i,j} + \mathbf{t}_{j}^{\,j-1}$$
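The pairwise transformation above can be chained to bring every frame into the coordinate system of the first frame; a minimal NumPy sketch (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def to_first_frame(clouds, rotations, translations):
    """Chain pairwise transforms X_{i,j-1} = R_j^{j-1} X_{i,j} + t_j^{j-1}
    to express every point cloud in the coordinate system of frame 0.

    clouds:       list of (N_j, 3) arrays, one per frame
    rotations:    rotations[j] maps frame j+1 into frame j (3x3)
    translations: translations[j] maps frame j+1 into frame j (3,)
    """
    registered = [clouds[0]]
    R_acc = np.eye(3)          # accumulated rotation: frame j -> frame 0
    t_acc = np.zeros(3)        # accumulated translation: frame j -> frame 0
    for j in range(1, len(clouds)):
        # compose the next pairwise transform onto the accumulated one
        t_acc = R_acc @ translations[j - 1] + t_acc
        R_acc = R_acc @ rotations[j - 1]
        registered.append(clouds[j] @ R_acc.T + t_acc)
    return registered
```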
4. REGISTRATION BY VISUAL FEATURES
Extraction and matching of keypoints is done more reliably in the RGB images;
Two main components:
Keypoint extraction and matching: SIFT, SURF, ...
Outlier detection: RANSAC, M-estimator, ...
Pipeline: SURF matches → conversion to 3D correspondences (using depth data) → removal of outliers (RANSAC) → least-squares estimation of the registration parameters.
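The outlier-removal and least-squares stage of the pipeline can be sketched as a minimal RANSAC loop around an SVD-based (Kabsch/Horn) rigid fit; thresholds, iteration counts, and names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid transform (Kabsch/Horn): find R, t with Q ≈ R P + t."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_rigid(P, Q, iters=200, tol=0.05, rng=np.random.default_rng(0)):
    """RANSAC over 3D correspondences: sample 3 pairs, fit, count inliers, refit."""
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = fit_rigid(P[idx], Q[idx])
        residuals = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_rigid(P[best_inliers], Q[best_inliers])  # final LS fit on inliers
```

The final least-squares fit over all inliers is what the weighting scheme of the later slides refines.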
5. CHALLENGES AND OBJECTIVES
Challenge:
Pairwise registration errors accumulate → deformed point cloud.
Objective:
More accurate pairwise registration by:
i. accurate generation of 3D correspondences from 2D keypoints;
ii. weighting 3D point pairs based on the random error of depth.
6. GENERATION OF 3D POINT CORRESPONDENCES
2D keypoints → 3D point correspondences? (ill-posed)
Do RGB image coordinates relate to depth image coordinates by a shift?
Note: the FOVs of the RGB camera and the IR camera are different!
Our approach:
Transform 2D keypoints from the RGB image to the depth image using the relative orientation between the two cameras;
Search along the epipolar line for the correct 3D coordinates.
Note: the relative orientation parameters are estimated during calibration.
7. GENERATION OF 3D POINT CORRESPONDENCES
More formally, given a keypoint in the RGB frame:
1. calculate the epipolar line in the depth frame using the relative orientation parameters;
2. define a search band along the epipolar line using the minimum and maximum of the range of depth values (0.5 m and 5 m, respectively).
Then, for all pixels within the search band:
1. calculate the 3D coordinates and re-project the resulting 3D point back to the RGB frame;
2. calculate and store the distance between the re-projected point and the original keypoint.
Return the 3D point whose re-projection has the smallest distance to the keypoint.
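The search described above can be sketched as follows, with the two camera models abstracted into hypothetical callbacks (backproject_depth, project_to_rgb are illustrative names, not the authors' code):

```python
import numpy as np

def search_epipolar_band(keypoint_rgb, band_pixels, depth_image,
                         backproject_depth, project_to_rgb):
    """For each candidate depth pixel in the epipolar search band, lift it to 3D,
    re-project into the RGB frame, and keep the candidate whose re-projection
    lands closest to the original RGB keypoint.

    band_pixels:       iterable of (u, v) depth-image pixels in the search band
    backproject_depth: (u, v, depth) -> 3D point in the depth camera frame
    project_to_rgb:    3D point -> (x, y) pixel in the RGB image
    """
    best_point, best_dist = None, np.inf
    for (u, v) in band_pixels:
        depth = depth_image[v, u]
        if depth <= 0:          # skip invalid depth measurements
            continue
        X = backproject_depth(u, v, depth)
        reproj = project_to_rgb(X)
        dist = np.hypot(reproj[0] - keypoint_rgb[0], reproj[1] - keypoint_rgb[1])
        if dist < best_dist:
            best_point, best_dist = X, dist
    return best_point, best_dist
```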
8. GENERATION OF 3D POINT CORRESPONDENCES
[Figure: finding 3D points in the depth image (right) corresponding to 2D keypoints in the RGB image (left) by searching along epipolar lines (red bands).]
9. ESTIMATING RELATIVE ORIENTATION PARAMETERS
The relative orientation between the RGB camera and the IR camera can be:
approximated by a shift;
estimated by stereo calibration;
estimated by space resection.
[Figure: manually measured markers in the disparity image (left) and the colour image (right), used for the estimation of the relative orientation parameters by space resection.]
10. WEIGHTING OF 3D POINT CORRESPONDENCES
Observation equation in the estimation model:

$$\mathbf{v}_i = \mathbf{X}_{i,j-1} - \left(\mathbf{R}_{j}^{\,j-1}\,\mathbf{X}_{i,j} + \mathbf{t}_{j}^{\,j-1}\right)$$

Approximate as:

$$\mathbf{v}_i \approx \mathbf{X}_{i,j-1} - \mathbf{X}_{i,j}$$

Note: because of the high frame rate, the transformation parameters between consecutive frames are quite small.
Define the weights as:

$$w_i = \frac{k^2}{\sigma_{\mathbf{v}_i}^2} = \frac{k^2}{\sigma_{\mathbf{X}_{i,j-1}}^2 + \sigma_{\mathbf{X}_{i,j}}^2}$$
11. WEIGHTING OF 3D POINT CORRESPONDENCES
We use the random error of depth only.
Relation between disparity (d) and depth (Z), with calibration parameters $c_0$ and $c_1$:

$$Z = \frac{1}{c_0 + c_1 d}$$

Propagation of variance:

$$\sigma_Z^2 = c_1^2\, Z^4\, \sigma_d^2$$

Weight:

$$w_i = \frac{k}{c_1^2\,\sigma_d^2\left(Z_{i,j-1}^4 + Z_{i,j}^4\right)}$$
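Under this model a per-correspondence weight can be computed directly from the two depths; the calibration values c1 and sigma_d below are illustrative placeholders, not the paper's calibration:

```python
import numpy as np

def depth_weights(Z_prev, Z_curr, c1=2.85e-3, sigma_d=0.5, k=1.0):
    """Weights w_i = k / (c1^2 sigma_d^2 (Z_{i,j-1}^4 + Z_{i,j}^4)).

    Points measured far from the sensor have a large (quadratically growing)
    random depth error and therefore a small weight. c1 and sigma_d are
    illustrative calibration values.
    """
    Z_prev, Z_curr = np.asarray(Z_prev), np.asarray(Z_curr)
    return k / (c1**2 * sigma_d**2 * (Z_prev**4 + Z_curr**4))
```

For example, a correspondence observed at 4 m in both frames receives a weight 256 times smaller than one observed at 1 m, since the weight falls off with the fourth power of depth.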
12. RESULTS: ACCURACY OF 3D POINT CORRESPONDENCES
[Figure: relative orientation approximated by a shift.]
13. RESULTS: ACCURACY OF 3D POINT CORRESPONDENCES
[Figure: relative orientation estimated by stereo calibration.]
14. RESULTS: ACCURACY OF 3D POINT CORRESPONDENCES
[Figure: relative orientation estimated by space resection.]
15. EFFECT OF WEIGHTS IN REGISTRATION
Six RGB-D sequences of an office environment;
Trajectories formed closed loops;
Evaluation by the closing error: the transformation from the last frame to the first frame, composed with the chain of pairwise transformations from the first frame to the last frame, should equal the identity for a perfectly closed loop. Writing the composed transformation as

$$\begin{bmatrix}\mathbf{R} & \mathbf{v}\\ \mathbf{0}^{T} & 1\end{bmatrix} = \mathbf{H}_{1}^{n}\,\bar{\mathbf{H}}_{n}^{1}$$

defines the closing rotation $\mathbf{R}$ and the closing translation $\mathbf{v}$, where $\mathbf{H}_{1}^{n}$ is the transformation from the last frame to the first frame and $\bar{\mathbf{H}}_{n}^{1}$ is the composed transformation from the first frame to the last frame.
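The closing error can be computed by composing the chain of pairwise 4x4 transforms around the loop and measuring the deviation from the identity; a small sketch with illustrative names:

```python
import numpy as np

def closing_error(pairwise_H):
    """Compose the pairwise 4x4 transforms around a closed loop and report the
    deviation from identity: closing distance = norm of the residual translation,
    closing angle = rotation angle of the residual rotation (degrees)."""
    H = np.eye(4)
    for Hj in pairwise_H:       # chain the frame-to-frame transforms
        H = H @ Hj
    R, t = H[:3, :3], H[:3, 3]
    distance = np.linalg.norm(t)
    # rotation angle from the trace: cos(theta) = (trace(R) - 1) / 2
    angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
    return distance, angle
```

A perfectly closed loop yields zero for both quantities; the values reported on the next slides are the deviations actually observed.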
16. EFFECT OF WEIGHTS IN REGISTRATION
[Figure: closing distance for the six sequences registered with and without weights.]
17. EFFECT OF WEIGHTS IN REGISTRATION
[Figure: closing angle for the six sequences registered with and without weights.]
18. EFFECT OF WEIGHTS IN REGISTRATION
Average closing errors for registrations with and without weights:

Registration      Average closing distance [cm]   Average closing angle [deg]
without weight    6.42                            6.32
with weight       3.80                            4.74
19. EFFECT OF WEIGHTS IN REGISTRATION
[Figure: the trajectory obtained by weighted registration (in blue) is more accurate than the one obtained without weights (in red).]
22. CONCLUSIONS
Accurate transformation of keypoints from the RGB space to 3D space → more accurate registration of consecutive frames;
Assigning weights based on the random error of depth improves the accuracy of pairwise registration and of the sensor pose estimates;
Using weights → covariance matrices for the pose vectors, which can be used to weight the pose vectors in a global adjustment = more accurate loop closure;
Influence of synchronization errors (between the RGB and IR cameras);
Fine registration using point and plane correspondences extracted directly from the point cloud.
25. Measurement principle of Kinect
Depth measurement by triangulation:
The laser source emits a laser beam;
A diffraction grating splits the beam to create a pattern of speckles projected onto the scene;
The speckles are captured by the infrared camera;
The speckle image is correlated with a reference image obtained by capturing a plane at a known distance from the sensor;
The result of the correlation is a disparity value for each pixel, from which depth can be calculated.
[Figure: IR image of the pattern of speckles projected onto the scene, and the resulting disparity image.]
26. Depth-disparity relation and calculation of point coordinates
From triangle similarities:

$$Z_k = \frac{Z_o}{1 + \dfrac{Z_o}{f\,b}\, d}$$

and:

$$X_k = \frac{Z_k}{f}\,(x_k - x_o + \delta x), \qquad Y_k = \frac{Z_k}{f}\,(y_k - y_o + \delta y)$$

where:
Z_o        distance of the reference plane
f          focal length of the IR camera
d          measured disparity
b          base length between the emitter and the IR camera
x_k, y_k   image coordinates of point k
x_o, y_o   principal point offsets
δx, δy     lens distortion corrections
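These relations translate directly into code; the calibration values below (f, b, Z_o) are illustrative placeholders, not the sensor's actual calibration:

```python
def point_from_disparity(x_k, y_k, d, f=580.0, b=0.075, Z_o=1.0,
                         x_o=0.0, y_o=0.0, dx=0.0, dy=0.0):
    """Depth from disparity via Z_k = Z_o / (1 + (Z_o/(f b)) d), then lateral
    coordinates X_k = (Z_k/f)(x_k - x_o + dx), Y_k = (Z_k/f)(y_k - y_o + dy).

    f [px], b [m], Z_o [m] are illustrative calibration values; dx, dy are
    lens distortion corrections (zero here for simplicity).
    """
    Z_k = Z_o / (1.0 + (Z_o / (f * b)) * d)
    X_k = (Z_k / f) * (x_k - x_o + dx)
    Y_k = (Z_k / f) * (y_k - y_o + dy)
    return X_k, Y_k, Z_k
```

At zero disparity the point lies on the reference plane (Z_k = Z_o); positive or negative disparity moves it closer or farther.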
27. Calibration
Calibration procedure:
Standard calibration of the IR camera:
focal length (f);
principal point offsets (x_o, y_o);
lens distortion coefficients (in δx, δy).
Depth calibration:
base length (b);
distance of the reference pattern (Z_o).
Normalization of the raw disparity d': d = m d' + n, so that

$$\frac{1}{Z_k} = \frac{m}{f\,b}\, d' + \left(\frac{1}{Z_o} + \frac{n}{f\,b}\right)$$

with m and n the depth calibration parameters.
28. Theoretical model of depth random error
Depth equation:

$$\frac{1}{Z_k} = \frac{m}{f\,b}\, d' + \left(\frac{1}{Z_o} + \frac{n}{f\,b}\right)$$

Propagation of variance gives the depth random error:

$$\sigma_{Z_k} = \frac{m}{f\,b}\, Z_k^2\, \sigma_{d'}$$

The random error is a quadratic function of depth.
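A quick numeric illustration of the quadratic growth (m, f, b, and sigma_dp are placeholder values; only the Z² dependence matters):

```python
def depth_random_error(Z_k, m=-0.125, f=580.0, b=0.075, sigma_dp=0.5):
    """sigma_Z = (m/(f b)) Z_k^2 sigma_d': the depth random error grows with
    the square of the distance. Calibration values are illustrative."""
    return abs(m / (f * b)) * Z_k**2 * sigma_dp
```

Doubling the distance quadruples the random error, so the error at 4 m is sixteen times the error at 1 m, consistent with the plane-fitting residuals on the next slide.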
29. Depth random error
Standard deviation of plane-fitting residuals as a measure of the depth random error;
As expected, the depth random error increases quadratically with increasing distance from the sensor.
[Figure: plane-fitting residuals at distances of 1.0 m, 2.0 m, 3.0 m, 4.0 m, and 5.0 m.]
30. Depth resolution
Depth resolution is also proportional to the squared distance from the sensor;
At the maximum range of 5 m the depth resolution is 7 cm.
[Figures: distribution of plane-fitting residuals on the plane at 4 m distance; side view of the points on the plane at 4 m, showing the effect of depth resolution.]