Visual odometry presentation material. This presentation covers two papers: "Omnidirectional visual odometry for a planetary rover" by Peter Corke and "Visual odometry for ground vehicle applications" by David Nistér.
3. Motivation
This laser scanner is good enough to obtain the position (x, y, θ, z) of the quadrotor at 10 Hz. This data is provided by the ROS canonical scan matcher package.
[Plot: measured x position (m) vs. y position (m), both axes spanning −0.5 to 0.5 m]
4. Motivation
This laser scanner is good enough to obtain the position (x, y, θ, z) of the quadrotor at 10 Hz. This data is provided by the ROS canonical scan matcher package.
- Relatively high accuracy.
- ROS device driver support.
[Plot: measured x position (m) vs. y position (m), both axes spanning −0.5 to 0.5 m]
5. Motivation
This laser scanner is good enough to obtain the position (x, y, θ, z) of the quadrotor at 10 Hz. This data is provided by the ROS canonical scan matcher package.
- Relatively high accuracy.
- ROS device driver support.
- Expensive: USD 2375.
- Low frequency: 10 Hz.
- Only 2D.
[Plot: measured x position (m) vs. y position (m), both axes spanning −0.5 to 0.5 m]
7. Motivation
The Kinect 3D depth camera can provide not only 2D RGB images but also 3D depth images at 30 Hz. (Image: http://www.ifixit.com)
8. Motivation
Pros:
- Reasonable price: AUD 180.
- 3-dimensional information.
- OpenNI Kinect ROS device driver and point cloud library support.
- Usable for visual odometry, object recognition, 3D SLAM, and so on.
9. Motivation
Cons:
- Relatively low accuracy and a lot of noise.
- Heavy: the original Kinect weighs over 500 g.
- Requires high computational power.
- Narrow field of view: H = 57°, V = 43°.
21.
\( \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} a \\ b \\ 1 \end{bmatrix}, \qquad a = \tan\{\alpha \tan^{-1}(u/f)\}\cos\beta, \qquad b = \tan\{\alpha \tan^{-1}(v/f)\}\sin\beta \)
where u and v are the x- and y-coordinates of the point on the image plane.
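As a sketch, the slide's pixel-to-ray mapping can be evaluated directly. Note that the slide does not define β, so taking it as the pixel's azimuth atan2(v, u) is an assumption made here:

```python
import math

def pixel_to_ray(u, v, f, alpha):
    """Map an image-plane point (u, v) to a ray direction (x, y, z) = (a, b, 1),
    following the mapping on slide 21.  beta is taken as the azimuth of the
    pixel, atan2(v, u) -- an assumption, since the slide leaves beta undefined."""
    beta = math.atan2(v, u)
    a = math.tan(alpha * math.atan(u / f)) * math.cos(beta)
    b = math.tan(alpha * math.atan(v / f)) * math.sin(beta)
    return (a, b, 1.0)
```

For a point on the image x-axis (v = 0) with α = 1, the mapping reduces to a = tan(atan(u/f)) = u/f, which is a quick sanity check.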
22. Predicting the optical flow induced by a motion \( (\Delta x, \Delta y, \Delta\theta) \):
\( (\widehat{du}, \widehat{dv}) = P(u, v, \{u_0, v_0, f, \alpha\}, \{\Delta x, \Delta y, \Delta\theta\}) \)
where P is the optical flow function of the feature coordinate.
[Figure: feature at (u, v) in frame t and (u′, v′) in frame t+1, with corresponding ground points (x, y) and (x′, y′)]
23. \( e_1 = \operatorname{med}_i \left[ (du_i - \widehat{du}_i)^2 + (dv_i - \widehat{dv}_i)^2 \right] \)
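The robust error e1 (median over features of the squared flow residual) is straightforward to compute; a minimal sketch with plain Python lists:

```python
import statistics

def median_flow_error(du, dv, du_hat, dv_hat):
    """e1: median over features of the squared residual between observed
    optical flow (du, dv) and model-predicted flow (du_hat, dv_hat)."""
    residuals = [(a - ah) ** 2 + (b - bh) ** 2
                 for a, b, ah, bh in zip(du, dv, du_hat, dv_hat)]
    return statistics.median(residuals)
```

The median (rather than the mean) keeps a few grossly mismatched features from dominating the error.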
26. Solar-powered robot, Hyperion, developed by CMU.
The parameter estimates are somewhat noisy but agree closely with those determined using a CMU calibration method.
[Plot legend: estimates = (Value); calibration method = (True)]
27-28. Using the following equation, the observed robot-frame velocity can be calculated:
\( \begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}^{R} = R_Z(\theta) \begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}^{W} \)
Then integrating the robot velocity over the sample time produces the position of the robot, as shown in the left image:
\( \begin{bmatrix} x \\ y \end{bmatrix}^{R} \leftarrow \begin{bmatrix} x \\ y \end{bmatrix}^{R} + \begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}^{R} \Delta t \)
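The rotate-then-integrate step can be sketched as follows. The sign convention of R_Z(θ) used here (world-to-robot as [cos θ, sin θ; −sin θ, cos θ]) is an assumption, since the slide does not spell it out:

```python
import math

def integrate_position(theta_seq, vel_world_seq, dt):
    """Dead-reckoning sketch: rotate each world-frame velocity into the
    robot frame with R_Z(theta), then accumulate position over sample time dt.
    theta_seq: heading per sample; vel_world_seq: (vx, vy) per sample."""
    x, y = 0.0, 0.0
    for theta, (vx_w, vy_w) in zip(theta_seq, vel_world_seq):
        c, s = math.cos(theta), math.sin(theta)
        # R_Z(theta) applied to the world-frame velocity (assumed convention)
        vx_r = c * vx_w + s * vy_w
        vy_r = -s * vx_w + c * vy_w
        x += vx_r * dt
        y += vy_r * dt
    return x, y
```

With θ = 0 throughout, the integration reduces to summing velocity × dt, which is an easy sanity check.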
29.
30.
31. State: 6 DOF of camera pose + 3 DOF per feature position.
Observation vector: the projection data for the current image.
Process noise covariance: should be known.
Measurement noise covariance: should be known; isotropic with variance (4.0 pixels).
Error covariance.
Kalman gain.
Observation matrix.
32. \( \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-) \)
The measurement is the re-projection of a point:
\( z_j = R(\rho)^{\top} Z_j + t \)
ρ, t are the camera-to-world rotation Euler angles and translation of the camera; \( Z_j \) is the position of point j in the 3D world coordinate system.
This measurement is nonlinear in the estimated parameters, which motivates the use of the iterated extended Kalman filter.
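For illustration, here is the Kalman update line above in its simplest (scalar, linear) form. In the paper the measurement model is nonlinear, so H would be the Jacobian of the reprojection and the update would be iterated (IEKF); this sketch only shows the shape of the update:

```python
def kalman_update(x_prior, P_prior, z, H, Rn):
    """Scalar sketch of the update on slide 32:
    K = P-  H / (H P- H + R);  x_hat = x_hat- + K (z - H x_hat-).
    All quantities are scalars here purely for illustration."""
    K = P_prior * H / (H * P_prior * H + Rn)
    x_post = x_prior + K * (z - H * x_prior)
    P_post = (1.0 - K * H) * P_prior  # posterior error covariance
    return x_post, P_post
```

With equal prior and measurement variance, the update lands halfway between prediction and measurement, as expected.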
34. The initial state estimate distribution is obtained using a batch algorithm [1] to get the mean and covariance. This estimates the initial 6D camera positions corresponding to several images in the sequence.
29.2 m traveled; average error = 22.9 cm, maximum error = 72.7 cm.
43. \( \lambda_1 \) = large, \( \lambda_2 \) = small (an edge). [Slide credit: Robert Collins, CSE486, Penn State]
44. \( \lambda_1 \) = small, \( \lambda_2 \) = small (a flat region).
45. \( \lambda_1 \) = large, \( \lambda_2 \) = large (a corner).
46.
\( E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u, y+v) - I(x,y)]^2 \)
\( \approx \sum_{x,y} w(x,y)\,[I(x,y) + uI_x + vI_y - I(x,y)]^2 \)
\( = \sum_{x,y} w(x,y)\,(u^2 I_x^2 + 2uv\,I_x I_y + v^2 I_y^2) \)
\( = \sum_{x,y} w(x,y) \begin{bmatrix} u & v \end{bmatrix} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} \)
\( E(u,v) \cong \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \)
47.
\( R = \det M - k(\operatorname{trace} M)^2 = \alpha\beta - k(\alpha + \beta)^2 \)
\( \det M = \lambda_1 \lambda_2, \quad \operatorname{trace} M = \lambda_1 + \lambda_2, \quad \alpha = I_x^2, \quad \beta = I_y^2 \)
\( I_x = G_x^{\sigma} * I, \quad I_y = G_y^{\sigma} * I \)
k is an empirically determined constant in the range 0.04-0.06.
\( M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \)
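A minimal Harris-response sketch in NumPy, using finite-difference gradients and a box window in place of the Gaussian derivatives \( G_x^{\sigma}, G_y^{\sigma} \) and weight w(x, y) on the slides:

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2.
    Uses np.gradient for image derivatives and a simple box window;
    the slides use Gaussian-smoothed derivatives instead."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # axis 0 = rows (y), axis 1 = cols (x)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):
        # Sum over a win x win neighbourhood by shifting and adding
        # (np.roll wraps at the borders; fine for this interior-only sketch).
        out = np.zeros_like(a)
        r = win // 2
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

On a white square over a black background, R is positive at the square's corner, negative along its edges, and zero in flat regions, matching slides 43-45.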
48. Source: [3]
49.
50. For each detected feature, search all features within a certain disparity limit (10% of the image size) in the next image.
[Figure: frames (t−1) and (t)]
51. For each detected feature, calculate the normalized correlation using an 11×11 window.
\( A = \sum_{x,y} I, \quad B = \sum_{x,y} I^2, \quad C = \frac{1}{\sqrt{nB - A^2}}, \quad D = \sum_{x,y} I_1 I_2, \quad n = 121 = 11 \times 11 \)
The normalized correlation between two patches is
\( NC_{1,2} = (nD - A_1 A_2)\,C_1 C_2 \)
Find the highest value of NC (mutual consistency check).
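The normalized-correlation formula on this slide can be sketched directly in NumPy. It reduces to the Pearson correlation of the two patches; the sketch assumes non-constant patches, so that \( nB - A^2 > 0 \):

```python
import numpy as np

def normalized_correlation(p1, p2):
    """Normalized correlation between two equal-size patches, following
    slide 51: A = sum(I), B = sum(I^2), C = 1/sqrt(n*B - A^2),
    D = sum(I1*I2), NC = (n*D - A1*A2) * C1 * C2.
    Assumes non-constant patches (otherwise n*B - A^2 = 0)."""
    n = p1.size
    A1, A2 = p1.sum(), p2.sum()
    B1, B2 = (p1 ** 2).sum(), (p2 ** 2).sum()
    C1 = 1.0 / np.sqrt(n * B1 - A1 ** 2)
    C2 = 1.0 / np.sqrt(n * B2 - A2 ** 2)
    D = (p1 * p2).sum()
    return (n * D - A1 * A2) * C1 * C2
```

Because the measure is invariant to affine brightness changes of either patch, identical patches score 1 even under a gain/offset change, and inverted patches score −1.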
52. Circles show the current feature locations, and lines are feature tracks over the images.
53. Track matched features and estimate the relative pose using the 5-point algorithm; RANSAC refines the pose.
54. Construct 3D points from the first and last observations and estimate the scale factor.
55. Track an additional number of frames and compute the camera pose from the known 3D points using the 3-point algorithm; RANSAC refines the poses.
57. Triangulate the observed matches into 3D points.
http://en.wikipedia.org/wiki/File:TriangulationReal.svg
\( = |y_1 - y_1'| \)
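Triangulation itself can be sketched with the standard linear (DLT) method; this is a generic two-view triangulation, not necessarily the exact method used in the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points.
    Solves A X = 0 for the homogeneous 3D point via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A
    return X[:3] / X[3]        # dehomogenize
```

A quick check: with cameras P1 = [I | 0] and P2 = [I | t], projecting a known 3D point and triangulating its two images recovers the point.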
58. Triangulate the observed matches into 3D points.
Track features for a certain number of frames, calculate the pose of the stereo rig, and refine it with RANSAC and the 3-point algorithm.
\( E\{(p_1, p_1'), (p_2, p_2'), (p_3, p_3')\} \)
From this equation, we can get the R, T matrices.
[Figure: points p1, p2, p3 at time t and p1′, p2′, p3′ at time t−1]
61. Triangulate the observed matches into 3D points.
Track features for a certain number of frames, calculate the pose of the stereo rig, and refine it with RANSAC and the 3-point algorithm.
Triangulate all new feature matches and repeat the previous step a certain number of times.
63. Note: in this paper, a firewall refers to a mechanism for avoiding error propagation. The idea is to never triangulate 3D points using observations from beyond the most recent firewall.
[Figure: projection error over time; the firewall is set at the frame where the error spikes, and triangulation then uses only frames from that point onward.]
69. Visual odometry's frame processing rate is around 13 Hz.
No a priori knowledge of the motion is used.
The 3D trajectory is estimated.
DGPS accuracy in RG-2 mode is 2 cm.
75. Frame-to-frame error analysis of the vehicle heading estimates. The approximately zero mean suggests that the estimates are not biased.
76.
77.
78. Official runs to report visual odometry results to DARPA. "Remote" means manual control by a person who is not a member of the VO team. Distance from the true DGPS position at the end of each run (unit = metres):
Autonomous run: GPS − (Gyro+Wheel) = 0.29 m; GPS − (Gyro+Vis) = 0.77 m
Remote control: GPS − (Gyro+Wheel) = −6.78 m; GPS − (Gyro+Vis) = 3.5 m
Speaker notes: This is our quadrotor. Currently we use the laser scanner to get the position. Stdev for x = 0.13 m and y = 0.09 m; the graph is 1 m × 1 m for 2D. Explain the advantages and disadvantages. Let's look at vision sensors for visual odometry.
Speaker notes: principal point (u0, v0), focal length f, elevation gain α. P = a
Speaker notes: The different approach proposed in this paper is structure from motion. x_hat = posteriori state estimate; x_hat_minus = priori state estimate.
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
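The a priori / a posteriori distinction above is the predict/update split of a Kalman filter. A minimal 1-D sketch (a generic random-walk filter for illustration, not the paper's actual filter; Q and R values are assumptions):

```python
def kalman_step(x_hat, P, z, Q=1e-4, R=0.1):
    """One predict/update cycle of a scalar Kalman filter."""
    # Predict: a priori estimate x_hat_minus and its covariance
    # (random-walk motion model, so the state carries over unchanged).
    x_hat_minus = x_hat
    P_minus = P + Q
    # Update: the Kalman gain blends the prior with measurement z,
    # giving the a posteriori estimate x_hat.
    K = P_minus / (P_minus + R)
    x_hat = x_hat_minus + K * (z - x_hat_minus)
    P = (1.0 - K) * P_minus
    return x_hat, P

x_hat, P = 0.0, 1.0  # initial guess and covariance
for z in [0.9, 1.1, 1.0, 0.95]:  # noisy measurements of a value near 1.0
    x_hat, P = kalman_step(x_hat, P, z)
print(x_hat)  # a posteriori estimate converges toward ~1.0
```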
\n
\n
Basic idea: we can detect a corner point by looking at the intensity values inside a small window.
Shift the window in any direction and find the points where the shift yields a large change in appearance.
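The window-shift idea above is what the Harris response measures: a corner is a pixel whose gradient structure matrix has two large eigenvalues. A minimal NumPy sketch on a synthetic image (window size and k are illustrative choices):

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    img = img.astype(float)
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    # Sum the gradient products over a win x win window (box filter).
    pad = win // 2
    def box(a):
        out = np.zeros_like(a)
        for dy in range(-pad, pad + 1):
            for dx in range(-pad, pad + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    # Large response: both eigenvalues of M are large (a corner);
    # edges give one large eigenvalue and a negative response.
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic image: a bright square, whose corners should peak the response.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)
print(y, x)  # the peak lands at one of the square's four corners
```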