For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/09/unifying-computer-vision-and-natural-language-understanding-for-autonomous-systems-a-presentation-from-verizon/
Mumtaz Vauhkonen, Lead Distinguished Scientist and Head of Computer Vision for Cognitive AI in AI&D at Verizon, presents the “Unifying Computer Vision and Natural Language Understanding for Autonomous Systems” tutorial at the May 2022 Embedded Vision Summit.
As the applications of autonomous systems expand, many such systems need to perceive coherently using both vision and language. For example, some systems need to translate a visual scene into language. Others may need to follow language-based instructions while operating in environments they understand visually. Still others may need to combine visual and language inputs to understand their environments.
In this talk, Vauhkonen introduces popular approaches to joint language-vision perception. She also presents a unique deep learning rule-based approach utilizing a universal language object model. This new model derives rules and learns a universal language of object interaction and reasoning structure from a corpus, which it then applies to the objects detected visually. She shows that this approach works reliably for frequently occurring actions. She also shows that this type of model can be localized for specific environments and can communicate with humans and other autonomous systems.
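The following is a minimal, hypothetical sketch (not the model presented in the talk) of the general idea described above: taking objects produced by a visual detector and applying interaction rules of the kind that might be mined from a text corpus to generate language about the scene. All class labels, rules, and function names here are invented for illustration.

```python
# Hypothetical illustration: apply corpus-style object-interaction rules
# to visually detected objects to produce simple scene descriptions.
# Labels, rules, and thresholds are invented for this sketch.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str   # class label from an upstream object detector
    box: tuple   # (x1, y1, x2, y2) bounding box in pixels

# Toy stand-in for a "universal language object model": interaction verbs
# keyed by (subject label, object label), as might be learned from text.
INTERACTION_RULES = {
    ("person", "bicycle"): "riding",
    ("person", "cup"): "holding",
    ("car", "road"): "driving on",
}

def boxes_overlap(a: DetectedObject, b: DetectedObject) -> bool:
    """Rough spatial test: do the two bounding boxes intersect?"""
    ax1, ay1, ax2, ay2 = a.box
    bx1, by1, bx2, by2 = b.box
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def describe_scene(detections: list) -> list:
    """Emit a sentence for each object pair that matches a learned rule
    and is spatially related in the image."""
    sentences = []
    for subj in detections:
        for obj in detections:
            if subj is obj:
                continue
            verb = INTERACTION_RULES.get((subj.label, obj.label))
            if verb and boxes_overlap(subj, obj):
                sentences.append(f"A {subj.label} is {verb} a {obj.label}.")
    return sentences

if __name__ == "__main__":
    scene = [
        DetectedObject("person", (100, 50, 200, 300)),
        DetectedObject("bicycle", (120, 150, 260, 320)),
    ]
    print(describe_scene(scene))  # -> ['A person is riding a bicycle.']
```

A real system would replace the hand-written rule table with rules and reasoning structure learned from a corpus, and would use richer spatial and temporal cues than simple box overlap; the sketch only shows where detection outputs and language-derived rules meet.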