For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/cadence/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-gadkari
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Shrinivas Gadkari, Design Engineering Director at Cadence, presents the "Fundamentals of Monocular SLAM" tutorial at the May 2019 Embedded Vision Summit.
Simultaneous Localization and Mapping (SLAM) refers to a class of algorithms that enables a device with one or more cameras and/or other sensors to create an accurate map of its surroundings, to determine the device’s location relative to its surroundings and to track its path as it moves through this environment. This is a key capability for many new use cases and applications, especially in the domains of augmented reality, virtual reality and mobile robots.
Monocular SLAM is a type of SLAM that relies exclusively on a monocular image sequence captured by a moving camera. In this talk, Gadkari introduces the fundamentals of monocular SLAM algorithms, from input images to 3D map. He takes a close look at key components of monocular SLAM algorithms, including Oriented FAST and Rotated BRIEF (ORB) features, fundamental matrix-based pose estimation, stitching together poses using translation estimation, and loop closure. He also discusses implementation considerations for these components, including the arithmetic precision required to achieve acceptable mapping and tracking accuracy.
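The components named above map naturally onto standard library routines. As a minimal sketch (not the presenter's implementation), the snippet below uses OpenCV's ORB detector and fundamental-matrix estimation to recover the relative pose between two monocular frames; the intrinsics matrix K is a placeholder assumption standing in for a real camera calibration.

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics; replace with calibrated values.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def two_view_pose(img1, img2):
    """Estimate the relative camera pose between two grayscale frames."""
    # 1. Detect ORB keypoints (oriented FAST corners) and compute their
    #    rotated-BRIEF binary descriptors in both frames.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2. Match the binary descriptors with Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 3. Robustly estimate the fundamental matrix with RANSAC; it encodes
    #    the epipolar geometry relating the two views.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2,
                                            cv2.FM_RANSAC, 1.0, 0.999)

    # 4. Upgrade to the essential matrix E = K^T F K using the intrinsics,
    #    then decompose it into a rotation R and a unit translation t.
    E = K.T @ F @ K
    keep = inlier_mask.ravel() == 1
    _, R, t, _ = cv2.recoverPose(E, pts1[keep], pts2[keep], K)
    return R, t
```

Note that the recovered translation t has unit norm: a single monocular pair fixes direction but not scale. That is why stitching pairwise poses into a trajectory requires a separate translation (scale) estimation step, and why loop closure is needed to correct the drift that accumulates along the way, exactly the remaining components the talk covers.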