Deep learning has evolved not linearly but through a series of step functions: sudden, unexpected outbreaks of capability that fundamentally changed the envelope of what computers can do. At TwentyBN, we have created spatio-temporal video models and data infrastructure that allowed us to grow a collection of approximately one million labeled videos showing everyday common-sense scenes and situations, many of them extremely subtle. This allowed us to successfully train neural networks end-to-end on a wide range of action-understanding tasks that neither hand-engineering nor neural networks had appeared anywhere near solving just a few months earlier. I will show how these recognition tasks now drive commercial value at TwentyBN, and how they drive our long-term AI agenda of learning common-sense world knowledge through video.
Roland Memisevic at AI Frontiers: Using Video to Make Your Assistant See
In this talk, I will introduce an AI system that interacts with you while "looking" at you, to understand your behaviour, your surroundings and the full context of the engagement. At the core of this technology is a crowd-acting platform that allows humans to engage with and teach the system about everyday aspects of our lives and of our physical world. Combining this with deep neural networks makes it possible to generate a high degree of human-like "awareness" of everyday scenes and situations. I will describe how this technology allows devices, ranging from information kiosks to cars, to engage with humans more naturally and instinctively, and how TwentyBN uses this ability to create commercial value for our customers.
Xiaofeng Ren at AI Frontiers: The Quest for Video Understanding
In this talk I will briefly discuss the ubiquitous need for video and video understanding across Alibaba, and the challenges being addressed and solved at iDST, Alibaba's AI R&D division. Examples include mobile shopping on Taobao, video search and recommendation on Youku and Tudou, and real-time systems for Cainiao Logistics and City Brain.
Rajarshi Gupta at AI Frontiers: Security is AI’s biggest challenge, AI is Se...
The progress of AI in the last decade has seemed almost magical. But we will discuss the unique challenges posed by Security and what makes this domain the biggest challenge for AI. Reporting from the frontlines, we will describe the deployment of large-scale production-grade AI systems to combat security breaches, using lessons learned at Avast from defending over 400 million consumers every single day. Topics will cover the recent AI advancements in file-based anti-malware solutions, behavior-based on-device solutions, and network-based IoT security solutions.
Liu Ren at AI Frontiers: Sensor-aware Augmented Reality
Successful Human Machine Interaction (HMI) solutions need to feature three 'I's (Intuitive, Interactive, and Intelligent), as these are key success factors in ensuring a superior user experience for future products. Augmented Reality (AR), a core HMI topic, is on its way to becoming more practical. In this talk, Liu discusses the real-world HMI challenges for industrial AR applications and presents recent advances at Bosch to address the needs of these three 'I's. Bosch sees that many of these HMI challenges (e.g. dynamic occlusion handling, robust tracking, and easy content generation) are closely related to typical AI tasks such as scene perception and understanding. Sensor-aware approaches that leverage sensor knowledge and machine learning methods are effective in addressing these challenges.
Jian Liang (HiScene): AR for Industry in China: From Concepts to Real Applications
A talk from the XR Enablement Track at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Jian Liang (HiScene): AR for Industry in China: From Concepts to Real Applications
The AI/AR industry has attracted unprecedented attention from academia and industry, and numerous talents and resources have been invested in it. However, academic achievements do not automatically translate into products, which must be adjusted and optimized in technology, engineering, product design, etc. according to specific application scenarios. This talk shares some of the difficulties, misconceptions and lessons learned in commercializing AR, based on HiScene’s practice.
https://awexr.com
Qualcomm: How to take advantage of XR over 5G in 2019: Understanding XR Viewers
A talk from the XR Enablement Track at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Qualcomm: How to take advantage of XR over 5G in 2019: Understanding XR Viewers
Prince Gupta | Qualcomm
Hiren Bhinde | Qualcomm
The opportunity for mobile XR is very strong, and with 5G networks being deployed this year, XR experiences and devices will become more ubiquitous, creating a bigger impact on society. XR viewers (AR or VR head-worn devices connected to smartphones or other compute accessories through USB-C) allow for lighter and smaller designs while offering immersive and powerful computing and performance. There is already great ecosystem momentum behind this new category of devices. In this session, learn about the use cases for XR viewers, their different form factors, architecture challenges, technology considerations and more. Learn how you can enable great XR experiences over 5G now.
https://awexr.com
Preparing your team for a new XR platform; 7 key take-aways
A talk from the Enterprise Track at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Preparing your team for a new XR platform; 7 key take-aways
Hans Wernke | Inhance Digital
Rodrick Lekey | Inhance Digital
The launch of a new XR platform often creates a great deal of excitement among institutional users and developers alike. After all, products such as Oculus, HoloLens and Magic Leap help us significantly improve the way we tell stories, train employees and address persistent maintenance challenges. However, for developers, each product launch presents its own unique challenges. The team has to familiarize itself with a new SDK, a new device and often a completely different way of structuring content. In this session, Rodrick Lekey and Hans Wernke, of Inhance Digital, an LA-based interactive marketing agency with 21+ years of experience, will share the first-hand perspective of such a team of developers. What did we learn? What challenges did we experience? What would we do differently the next time around?
https://awexr.com
AWE USA 2019: 2 Partners sharing 1 vision for smart operators
A talk from the Enterprise Track at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
2 Partners sharing 1 vision for smart operators
Mark Fleischer | AMA XPERTEYE INC.
Peter Verstraeten | Proceedix
The partnership between XpertEye and Proceedix offers a best-in-class solution to empower smart operators and technicians. The combination leverages both mobile technology and smart glasses for remote assistance, instruction, and inspection execution.
https://awexr.com
AWE Tel Aviv Startup Pitch: Dor Zepeniuk with Inuitive
A Startup Pitch from the Main Stage at AWE Tel Aviv 2018 - the World's #1 XR Conference & Expo in Tel Aviv, Israel, November 5, 2018.
http://AugmentedWorldExpo.com
This presentation by Dov Nimratz (Solution Architect, Consultant, GlobalLogic, Lviv) and Roman Chobik (Software Engineer, Engineering Consultant, GlobalLogic, Lviv) was delivered at GlobalLogic Kharkiv Embedded Conference 2019 on July 7, 2019.
During this talk, we discussed features of Embedded AI solutions, compared different hardware devices from Google and Intel, and showed a real-time Embedded AI demonstration.
Conference materials: https://www.globallogic.com/ua/events/kharkiv-embedded-conference-2019/
Mobile Extended Reality (XR) is likely to become one of the world’s most disruptive computing platforms. It is expected to transform the way we interact with the world around us every day, delivering unprecedented new experiences and the potential to exponentially increase productivity. XR is inherently meant to be mobile, intuitive and always connected. Many new technologies in the areas of low power visual processing, cognition, and connectivity are required for this vision to become reality. This presentation discusses:
• A view of the evolution of XR from today to the future
• Examples of unprecedented experiences that XR is expected to enable
• Necessary technology advancements required in areas such as 3D graphics, computer vision, next-gen displays, machine learning, and wireless connectivity to support a new class of intelligent and personalized XR experiences
https://www.qualcomm.com/invention/extended-reality
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/8tree/embedded-vision-training/videos/pages/feb-2017-member-meeting
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Arun Chhabra of 8tree delivers the presentation "Designing Vision Systems for Human Operators and Workflows" at the February 2017 Embedded Vision Alliance Member Meeting. Chhabra explains how his company is deploying computer vision to enhance existing workflows in industries such as aircraft maintenance.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-plenary-session
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Jeff Bier, founder of the Embedded Vision Alliance, presents the "Computer Vision 2.0: Where We Are and Where We're Going" plenary session at the May 2016 Embedded Vision Summit.
Computer vision has rapidly transitioned from a research topic with few commercial applications to a mainstream technology with applications in virtually every sector of our economy. But what we are seeing today is just the beginning. In this presentation, Embedded Vision Alliance founder Jeff Bier presents an insider's view of the state of computer vision technology and applications today, and predictions on how the field will evolve in the next few years. Jeff explores the impact of game-changing technologies such as deep neural networks, ultra-low-power processors, and cloud-based vision services. He highlights new products and applications that illuminate what we can expect from visually intelligent devices in the near future.
Phil LaFond (Bosch Automotive Service Solutions Inc.): Bosch Technical Training Supported by AR
A talk from the Main Stage at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Phil LaFond (Bosch Automotive Service Solutions Inc.): Bosch Technical Training Supported by AR
Learn how Bosch is using Augmented Reality to facilitate technical training.
https://awexr.com
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/sept-2014-member-meeting-linley
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Linley Gwennap, founder and principal analyst of The Linley Group, delivers the presentation "Processors for Embedded Vision: Technology and Market Trends" at the September 2014 Embedded Vision Alliance Member Meeting.
The path to personalized, on-device virtual assistant | Qualcomm Research
Machine learning has ignited the voice UI and virtual assistant revolution as machine speech recognition approaches the accuracy of humans. The AI powering key voice UI components, such as automatic speech recognition and natural language processing, has traditionally run in the cloud due to computing, storage, and power constraints. However, on-device processing of voice UI provides unique benefits, such as instant response, reliability, and privacy. And fusing multiple on-device sensor inputs, such as cameras and accelerometers, in addition to microphones, adds a level of personalization that will take us closer to a true personal assistant.
A talk from the Main Stage at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Charlie Fink (XR Consultant, Forbes): Convergence
This is a historic moment. We are on the cusp of a new generation of mobile computing. Latency-free 5G broadband networks, Artificial Intelligence (AI) and Augmented Reality (AR) technologies will converge in the next five years to change the world as we know it. Our devices will change dramatically and change us in ways no one can fully predict. Convergence tells the story of Augmented Reality, a new technology that’s seeping into every smartphone and every workplace. But the smartphone is just the beginning. We will soon wonder how we put up with its miserable form for so long. In this presentation of key ideas from his new book, author and Forbes columnist Charlie Fink will discuss how the convergence will lead to head-worn, interoperable AR/VR glasses and, ultimately, to wearable, invisible computing. The book uses a kind of AR called "marker AR" to allow readers to use their smartphones to bring pages to life in surprising and entertaining ways, illustrating how the world, and everyone in it, will be painted with data. More than a book about technology, this is about an evolutionary change in humankind.
Dedi Gadot (Magic Leap): An Introduction to Magic Leap
A talk from the Develop/Create at AWE Tel Aviv 2018 - the World's #1 XR Conference & Expo in Tel Aviv, Israel, November 5, 2018.
Dedi Gadot (Magic Leap): An Introduction to Magic Leap
We will introduce Magic Leap, talk about our research work and (some) plans ahead.
http://AugmentedWorldExpo.com
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Jeff Bier, Founder of the Embedded Vision Alliance, welcomes attendees to the May 2016 Embedded Vision Summit on May 3, 2016 (Day 2). Bier provides an overview of the embedded vision market opportunity, challenges, solutions and trends, in the context of reviewing the presentation highlights and take-aways from the previous day. He also introduces the Embedded Vision Alliance and the resources it offers for both product creators and potential members, and reviews the day's agenda and other logistics.
Scott Montgomerie (Scope AR): AR’s Influence on the Workforce of Tomorrow: Job Eliminator or Creator?
A talk from the Main Stage at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Scott Montgomerie (Scope AR): AR’s Influence on the Workforce of Tomorrow: Job Eliminator or Creator?
As the speed of technology continues to accelerate automation in the manufacturing world, the inevitable question of whether or not the “human touch” will become obsolete is top of mind for workers. The fear that robots and smart technologies will take everyone's jobs is prevalent, but not necessarily true. AR has the power to be a job creator, not a job eliminator. Its ability to make anyone an instant expert can in fact increase job security by quickly helping workers become more proficient in tasks. Learn about real-world use cases where AR is making people better, and safer, at their jobs, and explore why enterprises that create a workplace that’s augmented, not automated, will be the leaders of tomorrow.
https://awexr.com
A talk from the Gaming & Entertainment Track at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Verizon Media: Entertainment, XR + 5G
JR Dawkins | Verizon Media
Nigel Tierney | RYOT // Verizon Media
We will talk about the changing landscape of entertainment and the effect that XR and 5G will have on it.
https://awexr.com
Dilek Hakkani-Tur at AI Frontiers: Conversational machines: Deep Learning for...
In this talk, I will present recent developments in Google Research for end-to-end goal-oriented dialogue systems, with components for language understanding, dialogue state tracking, policy, and language generation. The talk will summarize novel aspects of each component, and highlight novel approaches where dialogue is viewed as a collaborative game between a user and an agent: the user has a goal in mind, and the agent has access to the data that the user is interested in and can perform actions in order to realize the user’s goal. The two engage in a conversation so that the agent can help the user complete the task.
AWE Tel Aviv Startup Pitch: Dor Zepeniuk with InuitiveAugmentedWorldExpo
A Startup Pitch from the Main Stage at AWE Tel Aviv 2018 - the World's #1 XR Conference & Expo in Tel Aviv, Israel, November 5, 2018.
http://AugmentedWorldExpo.com
This presentation by Dov Nimratz (Solution Architect, Consultant, GlobalLogic, Lviv) and Roman Chobik (Software Engineer, Engineering Consultant, GlobalLogic, Lviv) was delivered at GlobalLogic Kharkiv Embedded Conference 2019 on July 7, 2019.
During this talk, we were discussed Features of Embedded AI solutions, compared different hardware devices from Google and Intel and showed real-time Embedded AI demonstration.
Conference materials: https://www.globallogic.com/ua/events/kharkiv-embedded-conference-2019/
Mobile Extended Reality (XR) is likely to become one of the world’s most disruptive computing platforms. It is expected to transform the way we interact with the world around us every day, delivering unprecedented new experiences and the potential to exponentially increase productivity. XR is inherently meant to be mobile, intuitive and always connected. Many new technologies in the areas of low power visual processing, cognition, and connectivity are required for this vision to become reality. This presentation discusses:
• A view of the evolution of XR from today to the future
• Examples of unprecedented experiences that XR is expected to enable
• Necessary technology advancements required in areas such as 3D graphics, computer vision, next-gen displays, machine learning, and wireless connectivity to support a new class of intelligent, and personalized XR experiences
https://www.qualcomm.com/invention/extended-reality
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/8tree/embedded-vision-training/videos/pages/feb-2017-member-meeting
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Arun Chhabra of 8tree delivers the presentation "Designing Vision Systems for Human Operators and Workflows" at the February 2017 Embedded Vision Alliance Member Meeting. Chhabra explains how his company is deploying computer vision to enhance existing workflows in industries such as aircraft maintenance.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-plenary-session
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Jeff Bier, founder of the Embedded Vision Alliance, presents the "Computer Vision 2.0: Where We Are and Where We're Going" plenary session at the May 2016 Embedded Vision Summit.
Computer vision has rapidly transitioned from a research topic with few commercial applications to a mainstream technology with applications in virtually every sector of our economy. But what we are seeing today is just the beginning. In this presentation, Embedded Vision Alliance founder Jeff Bier presents an insider's view of the state of computer vision technology and applications today, and predictions on how the field will evolve in the next few years. Jeff explores the impact of game-changing technologies such as deep neural networks, ultra-low-power processors, and cloud-based vision services. He highlights new products and applications that illuminate what we can expect from visually intelligent devices in the near future.
Phil LaFond (Bosch Automotive Service Solutions Inc.): Bosch Technical Traini...AugmentedWorldExpo
A talk from the Main Stage at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Phil LaFond (Bosch Automotive Service Solutions Inc.): Bosch Technical Training Supported by AR
Learn how Bosch is using Augmented Reality to facilitate technical training.
https://awexr.com
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/sept-2014-member-meeting-linley
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Linley Gwennap, founder and principal analyst of The Linley Group, delivers the presentation "Processors for Embedded Vision: Technology and Market Trends" at the September 2014 Embedded Vision Alliance Member Meeting.
The path to personalized, on-device virtual assistantQualcomm Research
Machine learning has ignited the voice UI and virtual assistant revolution as machine speech recognition approaches the accuracy of humans. The AI powering key voice UI components, such as automatic speech recognition and natural language processing, has traditionally run in the cloud due to computing, storage, and power constraints. However, on-device processing of voice UI provides unique benefits, such as instant response, reliability, and privacy. And fusing multiple on-device sensor inputs, such as camera and accelerometers, in addition to microphones adds a level of personalization that will take us closer to a true personal assistant.
biggest technology trends
Artificial Intelligence
Data Science
Internet of Things
Nanotechnology
Robotic Process Automation (RPA)
Virtual Reality
Edge Computing
Intelligent apps
More Technology Trends
A talk from the Main Stage at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Charlie Fink (XR Consultant, Forbes): Convergence
This is a historic moment. We are on the cusp of a new generation of mobile computing. Latency-free 5G broadband networks, Artificial Intelligence (AI) and Augmented Reality (AR) technologies will converge in the next five years to change the world as we know it. Our devices will change dramatically and change us in ways no one can fully predict. Convergence tells the story of Augmented Reality, a new technology that’s seeping into every smartphone and every workplace. But the smartphone is just the beginning. We will soon wonder how we put up with its miserable form for so long. In this presentation of key ideas from his new book, author and Forbes columnist Charlie Fink will discuss how the convergence will lead to head-worn, interoperable AR/VR glasses and, ultimately, to wearable, invisible, computing. The book uses a kind of AR called "marker AR," to allow readers to use their smartphone to bring pages to life in surprising and entertaining ways to illustrate how the world, and everyone it, will be painted with data. More than a book about technology, this is about an evolutionary change in humankind.
Dedi Gadot (Magic Leap): An Introduction to Magic LeapAugmentedWorldExpo
A talk from the Develop/Create at AWE Tel Aviv 2018 - the World's #1 XR Conference & Expo in Tel Aviv, Israel, November 5, 2018.
Dedi Gadot (Magic Leap): An Introduction to Magic Leap
We will introduce Magic Leap, talk about our research work and (some) plans ahead.
http://AugmentedWorldExpo.com
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Jeff Bier, Founder of the Embedded Vision Alliance, welcomes attendees to the May 2016 Embedded Vision Summit on May 3, 2016 (Day 2). Bier provides an overview of the embedded vision market opportunity, challenges, solutions and trends, in the context of reviewing the presentation highlights and take-aways from the previous day. He also introduces the Embedded Vision Alliance and the resources it offers for both product creators and potential members, and reviews the day's agenda and other logistics.
Scott Montgomerie (Scope AR): AR’s Influence on the Workforce of Tomorrow: Jo...AugmentedWorldExpo
A talk from the Main Stage at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Scott Montgomerie (Scope AR): AR’s Influence on the Workforce of Tomorrow: Job Eliminator or Creator?
As the speed of technology continues to accelerate automation in the manufacturing world, the inevitable question of whether or not the “human touch” will become obsolete is top of mind for workers. The fear that robots and smart technologies will take everyone's jobs is prevalent, but not necessarily true. AR has the power to be a job creator, not a job eliminator. Its ability to make anyone an instant expert can in fact increase job security by quickly helping workers become more proficient in tasks. Learn about real-world use cases where AR is making people better, and safer, at their jobs, and explore why enterprises who create a workplace that’s augmented, not automated, will be the leaders of tomorrow.
https://awexr.com
A talk from the Gaming & Entertainment Track at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Verizon Media: Entertainment, XR + 5G
JR Dawkins | Verizon Media
Nigel Tierney | RYOT // Verizon Media
We will talk about the changing landscape of entertainment and the effect that XR and 5G will have on it.
https://awexr.com
Dilek Hakkani-Tur at AI Frontiers: Conversational machines: Deep Learning for... - AI Frontiers
In this talk, I will present recent developments in Google Research on end-to-end goal-oriented dialogue systems, with components for language understanding, dialogue state tracking, policy, and language generation. The talk will summarize novel aspects of each component and highlight approaches where dialogue is viewed as a collaborative game between a user and an agent: the user has a goal in mind, and the agent has access to the data the user is interested in and can perform actions to realize the user’s goal. The two engage in a conversation so that the agent can help the user complete the task.
How should startups embrace the trend of IoT and Big Data - Ruvento Ventures
This presentation, prepared by Ruvento Ventures, gives comprehensive coverage of the state of the IoT, Big Data and AI industries. It covers the latest trends and most successful investments in consumer hardware, and offers advice to startups working at the intersection of IoT, Big Data and AI.
Omar Tawakol at AI Frontiers: The Rise Of Voice-Activated Assistants In The W... - AI Frontiers
The market is already demonstrating strong value in the home for voice-activated AI, but the work environment has yet to catch up. Omar will explain why voice-activated AI is the most important development to come to the workplace. He will pull from his experience creating Eva, the first enterprise voice assistant focused on making meetings more actionable, and dive specifically into the challenges of ASR (Automatic Speech Recognition), NLP and neural networks in creating these kinds of voice-activated assistants. He will share how his team has overcome these challenges.
Dekang Lin at AI Frontiers: Adding Conversation to GUIs - AI Frontiers
Most AI assistants on mobile phones use a conversational user interface (CUI) that mimics a chat app and translates user requests into API calls to backend services. I will present the Conversational GUI (CGUI), which provides a thin layer of conversational interaction on top of the existing GUI of mobile apps by translating user requests into sequences of GUI actions, such as clicks and swipes, that the user would otherwise have to perform themselves. CGUI avoids rebuilding existing user experiences in a chat window. More importantly, it makes it possible for end users, instead of software engineers, to create new skills by providing pairs of natural language expressions and demonstrations of the corresponding GUI actions.
Investing in Artificial Intelligence - AIBE Talk, London Feb 2017 - Carlos Espinal
These are the slides to the talk I gave during the AIBE Summit in Feb 2017, focusing on Artificial Intelligence investment by Venture Capital firms and how we, at Seedcamp, focus on investing in the sector.
The audio file to these slides can be found here:
https://soundcloud.com/carloseduardoespinal/talk-at-the-aibe-summit-feb-2017-on-venture-capital-in-ai
More on the AIBE Summit from their website (https://aibesummit.com/):
The AIBE Summit is a conference on artificial intelligence in business & entrepreneurship. It will be the largest event of its kind ever to be held, with a capacity of up to 800 participants.
Our mission is to increase public understanding and intellectual discussion on the implications of AI for the business world, to raise the technological literacy of students, entrepreneurs, and professionals alike, and to recognise London as one of the world’s major digital capitals for the future of AI.
It is an initiative pioneered by the LSE Entrepreneurs Society, driven to celebrate the newly formed Partnership on AI between Google, Facebook, Amazon, IBM, and Microsoft.
Learn the fundamentals of Deep Learning, Machine Learning, and AI, how they've impacted everyday technology, and what's coming next in Artificial Intelligence technology.
A report providing an overview of the Artificial Intelligence (AI) technology startup landscape. Includes a sector overview, graphical trends with insights, and recent funding/exit events. Contact info@venturescanner.com or visit www.venturescanner.com to learn more!
by Dan Romuald Mbanga, Business Development Manager, AWS
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding and recommendation engines. One of the key reasons for this progress is the availability of highly flexible and developer-friendly deep learning frameworks. In this workshop, we will provide an overview of deep learning, focusing on getting started with the TensorFlow and Keras frameworks on AWS. Level 100
Rahul Sukthankar at AI Frontiers: Large-Scale Video Understanding: YouTube an... - AI Frontiers
This talk will present some recent advances in video understanding at Google. It will cover the technology behind progress in applications such as large-scale video annotation for YouTube, video summarization and Motion Stills, as well as our research in weakly-supervised learning, domain adaptation from YouTube to Google Photos and action recognition. I will also give my perspective on promising directions for future research in video.
Magnus Nordin at AI Frontiers: Deep Learning for Game Development - AI Frontiers
The number of applications of deep neural networks has multiplied in the last couple of years. Neural nets have enabled significant breakthroughs in everything from computer vision, voice generation, voice recognition and translation to self-driving cars. They will also be a powerful enabler for future game development. This presentation will give an overview of the potential of neural nets in game development, as well as an in-depth look at how neural nets combined with reinforcement learning can power new types of game AI.
James Manyika at AI Frontiers: Sizing up the promise of AI - AI Frontiers
This presentation will draw on new findings from the McKinsey Global Institute's ongoing research on the economic and business impact of AI. It will explore four key questions for AI today: who is investing and where, who is adopting AI and how, where can AI improve corporate performance, and what do business leaders need to know tomorrow morning.
Yuandong Tian at AI Frontiers: AI in Games: Achievements and Challenges - AI Frontiers
Recently, substantial progress in AI has been made in applications that require advanced pattern recognition, including computer vision, speech recognition and natural language processing. However, it remains an open problem whether AI will make the same level of progress in tasks that require sophisticated reasoning, planning and decision making in complicated game environments resembling the real world. In this talk, I present state-of-the-art approaches to building such an AI, our recent contributions in designing more effective algorithms and building extensive, fast, general environments and platforms, as well as open issues and challenges.
Deep-Dive into Deep Learning Pipelines with Sue Ann Hong and Tim Hunter - Databricks
Deep learning has shown tremendous successes, yet it often requires a lot of effort to leverage its power. Existing deep learning frameworks require writing a lot of code to run a model, let alone in a distributed manner. Deep Learning Pipelines is a Spark Package library that makes practical deep learning simple, based on the Spark MLlib Pipelines API. Leveraging Spark, Deep Learning Pipelines scales out many compute-intensive deep learning tasks. In this talk we dive into:
- the various use cases of Deep Learning Pipelines, such as prediction at massive scale, transfer learning, and hyperparameter tuning, many of which can be done in just a few lines of code;
- how to work with complex data such as images in Spark and Deep Learning Pipelines;
- how to deploy deep learning models through familiar Spark APIs such as MLlib and Spark SQL, to empower everyone from machine learning practitioners to business analysts.
Finally, we discuss integration with popular deep learning frameworks.
Frank Chen at AI Frontiers: Startups and AI - AI Frontiers
Isn't AI going to be dominated by the big companies like Google and Amazon and Microsoft and Baidu? What can startups do to thrive in this ecosystem? What are investors looking for when they meet AI-powered startups? Should startups with AI inside think about their go-to-market process any differently from other startups? Frank Chen from Andreessen Horowitz will tackle these and other AI startup questions in this session.
Future of AI: Blockchain and Deep Learning - Melanie Swan
First point: considering blockchain and deep learning together suggests the emergence of a new class of global network computing system. These systems are self-operating computation graphs that make probabilistic guesses about reality states of the world.
Second point: blockchain and deep learning are facilitating each other’s development. This includes using deep learning algorithms for setting fees and detecting fraudulent activity, and using blockchains for secure registry, tracking, and remuneration of deep learning nets as they go onto the open Internet (in autonomous driving applications, for example). Blockchain peer-to-peer nodes might provide deep learning services, as they already provide transaction hosting and confirmation, news hosting, and banking (payment, credit flow-through) services. Further, there are similar functional emergences within the two systems; for example, LSTM units (long short-term memory in RNNs) are analogous to payment channels.
Third point: AI smart network thesis. We are starting to run more complicated operations through our networks: information (past), money (present), and brains (future). There are two fundamental eras of network computing: simple networks for the transfer of information (all computing to date from mainframe to mobile) and now smart networks for the transfer of value and intelligence. Blockchain and deep learning are built directly into smart networks so that they may automatically confirm authenticity and transfer value (blockchain) and predictively identify individual items and patterns.
Robotic design: Frontiers in visual and tactile sensingDesign World
Speakers Goksel Dedeoglu of PercepTonic and Gerald Loeb of SynTouch LLC will share their insights on the engineering challenges of designing robots that process visual and tactile data. Join them for a discussion of the latest advances and what the future holds for robotic sensing.
Emerging Experiences - More Personal Computing (MPC) - Tim Huckaby - ITCamp
How are natural and intuitive interactive emerging experiences designed into software? How do you design inspirational emerging experiences in new scenarios across the broadest range of devices, from big screens to small screens to no screens at all? How do you build software for a world that is more mobile, natural and intuitive?
Join Tim in a demo-heavy, entertaining and technical discussion of the future of More Personal Computing and emerging experiences. Touch, gesture, voice recognition, demographic profiling, facial recognition, emotion recognition, holographic experiences and more: all the bad, all the good, privacy law, real customer demos and stories, and the tools, tips and tricks learned along the way.
This demo-heavy session will show you a number of real emerging-experience solutions, from proprietary solutions to broadcast television solutions you see every day. Tim will show you the use cases where these types of solutions are happening today, and those coming in the immediate future and beyond.
Presentation of the Meetup 'Augmented Reality Barcelona', held on December 12th at Campus La Salle Barcelona. Isidro Navarro - CEO at INAR, organizer of the meetup - Introduction to AR & HCI
List of speakers:
William Provancher - founder of Tactical Haptics - Utah -creators of Reactive Grip™ Touch Feedback for VR, gaming, and medical apps
Joseph Rampolla – co-founder of AR Meetup NY & founder of the Augmented Reality Dirt Podcast & Blog – New York - Presentation of AR references in global scenario
Richard Hebert – Director of BLOOM – Girona - 3D center and emerging technologies
Brian Wassom - attorney and co-founder of AR Meetup Detroit - Augmented Reality Games - Legal Concerns
David Miralles – DTM Enginyeria La Salle - Strategy Advisor on Interaction at La Salle BCN presents research projects
IDenTV: The Next Evolution in Big Video Data - Amro Shihadah
“To make video data into useful big data, we need to leap beyond this (human intervention). We need true video analytics, powered by computer vision.” - Wired
Check out the latest demonstration of our real-time logo and brand detection and identification engine.
Our mission: to create powerful and transformative video analytics capabilities. We assembled a global team of top computer vision and imaging scientists and engineers to transform the way large-scale video can be understood and analyzed.
The result of this pioneering work is the Intelligent Video Platform (IVP), a commercial-ready, breakthrough technology that enables high-speed visual content recognition and indexing, combined with real-time search and verification of massive amounts of video with extreme accuracy, efficiency, and scalability, offering true video big-data analytics. State-of-the-art machine learning and artificial intelligence techniques make the IVP highly accurate and efficient.
Published July 26th, 2017
A slightly edited version of the Wearables slide deck.
Presented to entire Liquid Studio Team as part of the weekly studio sessions.
Accenture | Liquid Studio
Wearables Team
Summer 2016 Intern
---
FVCproductions
https://fvcproductions.com
Presentation on the topic of screenless display, a type of display in which no screen is used.
It covers three sub-topics: visual image, retinal direct display and synaptic interface.
Virtual reality - What you see is what you believe - Kaishik Gundu
Virtual reality is one of the most talked-about technologies in the world today, with compelling applications in modern life. This is a small slide show on the topic.
Divya Jain at AI Frontiers: Video Summarization - AI Frontiers
As video content becomes mainstream, video summarization is becoming a hot research topic in academia and industry. Video thumbnail generation and summarization have been worked on for years, but deep learning and reinforcement learning are changing the landscape and emerging as the winners for optimal frame selection. Recent advances in GANs are improving the quality, aesthetics and relevancy of the frames selected to represent the original videos. Join this session to get an understanding of the various challenges and emerging solutions around video summarization.
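A toy illustration of the frame-selection problem: greedy max-min diversity selection over frame feature vectors. The vectors and numbers below are made-up stand-ins for real frame embeddings; this is a sketch of the general idea, not any system mentioned in the talk:

```python
def select_keyframes(frames, k):
    """Greedy max-min diversity selection.

    frames: list of feature vectors (e.g. color histograms or
    embeddings); returns indices of k diverse representatives.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    chosen = [0]  # start from the first frame
    while len(chosen) < k:
        # Pick the frame whose nearest already-chosen frame is farthest away
        best = max(
            (i for i in range(len(frames)) if i not in chosen),
            key=lambda i: min(dist(frames[i], frames[j]) for j in chosen),
        )
        chosen.append(best)
    return sorted(chosen)

# Three near-duplicate "scenes" of two frames each
frames = [[0, 0], [0.1, 0], [5, 5], [5, 5.1], [9, 0], [9.1, 0]]
print(select_keyframes(frames, 3))  # -> [0, 3, 5]
```

Note how the heuristic picks one representative per scene; real summarizers replace both the features and the selection objective with learned models, as the talk describes.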
Training at AI Frontiers 2018 - LaiOffer Data Session: How Spark Speeds Up AI - AI Frontiers
Topic: How to use big data to enhance AI
Outline:
1. Spark ETL
Spark SQL
Spark Streaming
2. Spark ML
Spark ML pipeline
Distributed model tuning
Spark ML model and data lineage management
3. Spark XGBoost
XGBoost introduction
XGBoost with Spark
XGBoost with GPU
4. Spark Deep Learning pipeline
Transfer learning
Build Spark ML pipeline with TensorFlow
Model selection on distributed TF model
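The boosting idea behind XGBoost (item 3 in the outline above) can be sketched in a few lines of pure Python: each round fits a decision stump to the current residuals and adds a damped copy of it to the ensemble. This toy uses squared loss and is purely illustrative; it is not the XGBoost library or its Spark integration:

```python
def fit_stump(xs, residuals):
    """Find the one-feature threshold split minimizing squared error."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, rounds=20, lr=0.3):
    """Gradient boosting for squared loss: each stump fits the residuals."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# A step-shaped dataset; the ensemble learns the step after a few rounds
model = boost([1, 2, 3, 4], [1.0, 1.0, 3.0, 3.0])
```

XGBoost adds regularized second-order objectives, deeper trees, and the distributed execution the session covers, but the residual-fitting loop is the same idea.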
Training at AI Frontiers 2018 - Ni Lao: Weakly Supervised Natural Language Un... - AI Frontiers
In this tutorial I will introduce recent work applying weak supervision and reinforcement learning to Question Answering (QA) systems. Specifically, we discuss the semantic parsing task, in which natural language queries are converted into computation steps over knowledge graphs or data tables that produce the expected answers. State-of-the-art results can be achieved by a novel memory structure for sequence models and improvements in reinforcement learning algorithms. Related code and experiment setup can be found at https://github.com/crazydonkey200/neural-symbolic-machines. Related paper: https://openreview.net/pdf?id=SyK00v5xx.
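The weak-supervision loop described here can be caricatured in pure Python: a softmax policy over a discrete set of candidate programs is trained with REINFORCE, where the only signal is whether the executed program produces the expected answer. The programs and query below are hypothetical toys, not the neural-symbolic-machines code:

```python
import math
import random

def reinforce_toy(programs, execute, expected, steps=500, lr=0.5, seed=0):
    """Train a softmax policy over discrete programs with REINFORCE.

    Weak supervision: only the final answer is labeled, not the program;
    reward is 1 iff executing the sampled program yields that answer.
    """
    rng = random.Random(seed)
    logits = [0.0] * len(programs)
    for _ in range(steps):
        # Sample a program from the softmax policy
        exps = [math.exp(l) for l in logits]
        z = sum(exps)
        probs = [e / z for e in exps]
        i = rng.choices(range(len(programs)), weights=probs)[0]
        reward = 1.0 if execute(programs[i]) == expected else 0.0
        # REINFORCE: grad of log pi(i) w.r.t. logits = one_hot(i) - probs
        for j in range(len(logits)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * reward * grad
    return logits

# Hypothetical candidate "programs" for the query "what is the largest value?"
programs = [lambda: 1 + 1, lambda: max([3, 7, 2]), lambda: min([3, 7, 2])]
logits = reinforce_toy(programs, execute=lambda p: p(), expected=7)
# Only the program that produces the expected answer gets rewarded,
# so its logit ends up the largest.
```

A real semantic parser generates programs token by token with a sequence model; this sketch collapses that to a fixed candidate set to isolate the reward mechanism.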
Training at AI Frontiers 2018 - Udacity: Enhancing NLP with Deep Neural Networks - AI Frontiers
Instructor: Mat Leonard
Outline
1. Text Processing
Using Python + NLTK
Cleaning
Normalization
Tokenization
Part-of-speech Tagging
Stemming and Lemmatization
2. Feature Extraction
Bag of Words
TF-IDF
Word Embeddings
Word2Vec
GloVe
3. Topic Modeling
Latent Variables
Beta and Dirichlet Distributions
Latent Dirichlet Allocation
4. NLP with Deep Learning
Neural Networks
Recurrent Neural Networks (RNNs)
Word Embeddings
Sentiment Analysis with RNNs
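The TF-IDF step in the feature-extraction section above can be sketched in plain Python (an illustrative toy, not the NLTK or scikit-learn implementation):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    tf  = count of term in doc / total terms in doc
    idf = log(N / number of docs containing the term)
    """
    n_docs = len(docs)
    # Document frequency: in how many documents each term appears
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n_docs / df[term])
            for term, count in counts.items()
        })
    return weights

docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
w = tf_idf(docs)
# "the" appears in every document, so its IDF (and hence TF-IDF) is zero,
# while rarer terms like "dog" get positive weight.
```

Library implementations typically add smoothing terms to the IDF; the unsmoothed form above keeps the idea visible.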
Training at AI Frontiers 2018 - Lukasz Kaiser: Sequence to Sequence Learning ... - AI Frontiers
Sequence-to-sequence learning is a powerful way to train deep networks for machine translation and various other NLP tasks, but also for image generation and, recently, video and music generation. We will give a hands-on tutorial showing how to use the open-source Tensor2Tensor library to train state-of-the-art models for translation, image generation, and a task of your choice!
Percy Liang at AI Frontiers: Pushing the Limits of Machine Learning - AI Frontiers
In recent years, machine learning has undoubtedly been hugely successful in driving progress in AI applications. However, as we will explore in this talk, even state-of-the-art systems have "blind spots" which make them generalize poorly out of domain and render them vulnerable to adversarial examples. We then suggest that more unsupervised learning settings can encourage the development of more robust systems. We show positive results on two tasks: (i) text style and attribute transfer, the task of converting a sentence with one attribute (e.g., sentiment) to one with another; and (ii) solving SAT instances (classical problems requiring logical reasoning) using end-to-end neural networks.
Ilya Sutskever at AI Frontiers: Progress towards the OpenAI mission - AI Frontiers
I will present several advances in deep learning from OpenAI. First, I will present OpenAI Five, a neural network that learned to play on par with some of the strongest professional Dota 2 teams in the world in an 18-hero version of the game. Next, I will present Dactyl, a human-like robot hand trained entirely in simulation with reinforcement learning that has achieved unprecedented dexterity on a physical robot. I will also present our results on unsupervised learning in language, that show that pre-training and finetuning can achieve a significant improvement over state of the art. Finally, I will present an overview of the historical progress in the field.
Mario Munich at AI Frontiers: Consumer robotics: embedding affordable AI in ... - AI Frontiers
The availability of affordable electronics components, powerful embedded microprocessors, and ubiquitous internet access and WiFi in the household has enabled a new generation of connected consumer robots. In 2015, iRobot launched the Roomba 980, introducing intelligent visual navigation to its successful line of vacuum cleaning robots. In 2018, iRobot launched the Roomba i7, equipped with the latest mapping and navigation technology that provides spatial information to the broader ecosystem of connected devices in the home. In this talk, I will describe the challenges and the potential of introducing consumer robots capable of developing spatial context by exploring the physical space of the home, and I will elaborate on the impact of AI in the future of robotics applications. Moreover, I will describe our vision of the Smart Home, an AI-powered home that maintains itself and magically just does the right thing in anticipation of occupant needs. This home will be built on an ecosystem of connected and coordinated robots, sensors, and devices that provides the occupants with a high quality of life by seamlessly responding to the needs of daily living – from comfort to convenience to security to efficiency.
Anima Anandkumar at AI Frontiers: Modern ML: Deep, distributed, Multi-dimen... - AI Frontiers
As the data and models scale, it becomes necessary to have multiple processing units for both training and inference. SignSGD is a gradient compression algorithm that only transmits the sign of the stochastic gradients during distributed training. This algorithm uses 32 times less communication per iteration than distributed SGD. We show that signSGD obtains free lunch both in theory and practice: no loss in accuracy while yielding speedups. Pushing the current boundaries of deep learning also requires using multiple dimensions and modalities. These can be encoded into tensors, which are natural extensions of matrices. These functionalities are available in the Tensorly package with multiple backend interfaces for large-scale deep learning.
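The sign-compression idea behind signSGD can be sketched as follows: each worker transmits one bit per coordinate (the gradient sign), and the server aggregates by majority vote. This is a toy, single-machine illustration of the update rule, not the distributed implementation discussed in the talk:

```python
def sign(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def signsgd_step(params, worker_grads, lr=0.1):
    """One signSGD-with-majority-vote step.

    Each worker sends only the sign of its gradient per coordinate
    (1 bit instead of a 32-bit float, hence the 32x communication
    saving); the server applies the sign of the vote tally.
    """
    updated = list(params)
    for i in range(len(params)):
        vote = sum(sign(g[i]) for g in worker_grads)  # majority vote
        updated[i] = params[i] - lr * sign(vote)
    return updated

# Two of three workers agree the first coordinate should decrease;
# all three agree the second should increase.
params = [1.0, -2.0]
grads = [[0.5, -1.0], [0.2, -0.3], [-0.1, -2.0]]
print(signsgd_step(params, grads))  # -> [0.9, -1.9]
```

Every parameter moves by a fixed step of lr regardless of gradient magnitude; the compression discards magnitudes and keeps only directions.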
Sumit Gupta at AI Frontiers: AI for Enterprise - AI Frontiers
The use of AI for voice search and image recognition is talked about often. Enterprises, however, have different challenges and requirements. In this talk, we will focus on use cases in the enterprise and the challenges in building out AI solutions. We will discuss how PowerAI Vision, an auto-machine-learning software for videos and images, enables quick AI model training and deployment for various enterprise use cases.
Yuandong Tian at AI Frontiers: Planning in Reinforcement Learning - AI Frontiers
Deep Reinforcement Learning (DRL) has made strong progress in many tasks, such as board games, robotics, navigation, and neural architecture search. I will present our recently open-sourced DRL frameworks that facilitate game research and development. Our framework is scalable, so we can reproduce AlphaGoZero and AlphaZero using 2000 GPUs, achieving super-human performance with a Go AI that beats four top-30 professional players. We also demonstrate the usability of our platform by training agents in real-time strategy games, which show interesting behaviors with a small amount of resources.
Alex Ermolaev at AI Frontiers: Major Applications of AI in Healthcare - AI Frontiers
The latest AI advances have the potential to massively improve our health and well-being. However, most of the work is yet to be done. In this talk, we will explore the most important opportunities for AI in healthcare. For example, we will explore how AI can diagnose major life-threatening conditions even before those conditions emerge. We will talk about AI's ability to recommend dramatically more effective and less harmful treatment plans based on its understanding of a patient's medical history and current condition. Finally, we will talk about AI's role in making our healthcare system effective and affordable for everyone.
Long Lin at AI Frontiers: AI in Gaming - AI Frontiers
Games have been leveraging AI since the 1950s, when people built a rules-based AI engine that played tic-tac-toe. With technological advances over the years, AI has become increasingly popular and widely used in the gaming industry. The typical characteristics of games and game development make them an ideal playground for practicing and implementing AI techniques, especially deep learning and reinforcement learning. Most games are well scoped; it is relatively easy to generate and use the data; and states, actions and rewards are relatively clear. In this talk, I will show a couple of use cases where ML/AI helps in game development and enhances the player experience. Examples include AI agents playing games and services that provide personalized experiences to players.
Melissa Goldman at AI Frontiers: AI & Finance - AI Frontiers
AI in finance is having wide-ranging impact and solving some of the most critical societal problems. The talk gives an overview of the opportunities for applying AI in finance, with specific examples, and highlights some of the unique challenges financial services firms face in deploying AI at scale.
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in the sophistication of cyberattacks aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to Advanced Persistent Threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Levelwise PageRank with Loop-Based Dead End Handling Strategy: SHORT REPORT ... - Subhajit Sahu
Abstract: Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components and processes them in topological order, one level at a time. This enables ranks to be calculated in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It comes, however, with the precondition that the input graph contain no dead ends. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by the submission of a large number of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
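The baseline the report compares against, power-iteration (Monolithic) PageRank, can be sketched in pure Python. Dead ends are removed here with a loop-based strategy in the spirit of the title, giving each dead-end vertex a self-loop; this is an illustrative reading, not the report's actual code:

```python
def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank over an adjacency list.

    Loop-based dead-end handling: vertices with no out-edges get a
    self-loop, so the graph satisfies the no-dead-ends precondition
    that Levelwise PageRank requires.
    """
    n = len(adj)
    adj = [out if out else [u] for u, out in enumerate(adj)]
    rank = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - damping) / n] * n
        for u, out in enumerate(adj):
            share = damping * rank[u] / len(out)
            for v in out:
                nxt[v] += share
        rank = nxt
    return rank

# Tiny chain 0 -> 1 -> 2; vertex 2 is a dead end and gets a self-loop,
# so rank accumulates at the end of the chain and still sums to 1.
print(pagerank([[1], [2], []]))
```

Levelwise PageRank runs this same iteration per strongly connected component in topological order, so upstream components converge before downstream ones start.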
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... - John Andrews
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
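Point 4 above (automated data validation) can be made concrete with a small rule-based checker that flags bad rows at the source, before they cause downstream issues. The field names and rules below are hypothetical:

```python
def validate_rows(rows, rules):
    """Apply named validation rules to each row.

    Returns (row_index, rule_name) pairs for every failure, so errors
    can be traced and fixed at the source.
    """
    failures = []
    for idx, row in enumerate(rows):
        for name, check in rules.items():
            if not check(row):
                failures.append((idx, name))
    return failures

# Hypothetical rules for an orders feed
rules = {
    "amount_positive": lambda r: r["amount"] > 0,
    "currency_known": lambda r: r["currency"] in {"USD", "EUR"},
}
rows = [
    {"amount": 10.0, "currency": "USD"},
    {"amount": -3.0, "currency": "GBP"},  # fails both rules
]
print(validate_rows(rows, rules))  # -> [(1, 'amount_positive'), (1, 'currency_known')]
```

In practice such checks would run automatically in the ingestion pipeline and feed the data-lineage tracking described in the same point.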
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. for more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
2. Estimated number of devices in 2020, by segment: domestic companions; augmented reality (6M AR glasses); automotive (10M cars); smart cameras (85M); collaborative robotics (150M cobots); smartphone apps (3BN phones). By 2020, consumer videos are projected to account for 80% of Internet traffic. Sources: KPCB, Barclays.
5. 1986: “Neural networks don’t work”
2012: “Neural networks can’t do image classification”
2014: “Neural networks can’t translate text”
2016: “Neural networks can’t play Go”
2017: “Neural networks don’t have common sense”
?
6. At TwentyBN we build the brain that allows cameras to see
Roland Memisevic, CEO & Chief Scientist: 15+ years experience in DL as Professor (MILA Montreal) & PhD student of Geoff Hinton
Moritz Müller-Freitag, COO & Head of Product: experience as data scientist (Eleven) & country manager (Savedo/HitFox Group)
Ingo Bax, CTO: experience as Professor (FH Münster) & principal software architect (XING AG)
Christian Thurau, CBDO: experience as co-founder & CTO (Game Analytics, exit) & researcher (Fraunhofer)
Valentin Haenel, VP Engineering: co-initiator of PyData Berlin; contributor to more than 50 open source projects
Prof. Yoshua Bengio, Scientific Advisor: Professor at MILA Montréal; noted for his pioneering work on deep learning
Nathan Benaich, Advisor: VC investor, technologist, former scientist; organizer of London.ai and RAAIS
+ 13 full-time staff, including AI researchers, engineers and product people
9. Camera-based gesture control
Existing solutions:
● Require depth-sensor devices
● ~5 gestures
● Low accuracy
● Never gained traction
TwentyBN solution:
● RGB only (for example, a cheap, built-in laptop camera)
● Recognizes 25 hand gestures
● Very high accuracy
● Runs in real time on a laptop using RGB camera input
Note: Click picture for video
10. Variations:
● Camera angles and scene layouts
● Multi-person actions and localization
● Interactivity
● Complex object interactions
11. Indoor activity monitoring
● Output: “Person picking [something] up”
● Output: “[Something] falling like a feather or paper”
● Output: “Person leaving through a door”
● Output: “Bending [something] until it breaks”
● Output: “Trying to bend [something unbendable] so nothing happens”
● Output: “[gesture] Zooming Out With Two Fingers”
12.
13. We support all stages of our clients’ product cycles
● Data licensing: high-quality labeled videos, customized to support your video applications
● Software licensing: software that adds video capabilities to your product
● Hardware licensing: softcore IP
14. 20BN-JESTER
A crowd-acted dataset of generic human hand gestures.
Number of videos: 148,094
License: Free for academic use (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International, CC BY-NC-ND 4.0)
https://www.twentybn.com/datasets/jester
15. 20BN-SOMETHING-SOMETHING
A crowd-acted dataset of basic interactions with everyday objects.
Number of videos: 108,499
License: Free for academic use (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International, CC BY-NC-ND 4.0)
https://www.twentybn.com/datasets/something-something
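Both datasets ship their annotations as plain-text files that map a video ID to a templated caption such as “Tearing [something] into two pieces”. As a minimal sketch of working with such annotations — assuming a semicolon-delimited `id;label` layout, which is an assumption about the file format rather than a documented guarantee — one can parse the labels and count examples per class:

```python
import csv
import io
from collections import Counter

def parse_labels(csv_text, delimiter=";"):
    """Map each video id to its templated caption label."""
    reader = csv.reader(io.StringIO(csv_text), delimiter=delimiter)
    return {row[0]: row[1] for row in reader if row}

# Inline sample rows in the assumed id;label format (IDs are made up).
sample = (
    "100218;Tearing [something] into two pieces\n"
    "24413;Pretending to pick [something] up\n"
    "9751;Tearing [something] into two pieces\n"
)
labels = parse_labels(sample)
class_counts = Counter(labels.values())
print(class_counts["Tearing [something] into two pieces"])  # → 2
```

The `[something]` placeholders are part of the class name, so counting label strings directly yields the per-class distribution.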
16. Contrastive classes make learning harder and networks stronger
● Tearing [something] into two pieces vs. Tearing [something] just a little bit: 0.74 (0.52)
● Pretending to pick [something] up vs. Picking [something] up: 0.86 (0.75)
● Pretending to pour vs. Pouring: 0.82 (0.64)
● Pouring with overflow vs. Pouring without: 0.76 (0.54)
● Pretending to put [something] onto vs. Putting [something] onto [something]: 0.82 (0.64)
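Scores like 0.74 above read as per-class accuracies on these contrastive pairs. A generic sketch of how such a number can be computed from ground-truth and predicted labels (this is not TwentyBN’s evaluation code, and the toy labels below are hypothetical):

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Fraction of samples of each ground-truth class that were predicted correctly."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        total[truth] += 1
        correct[truth] += int(truth == pred)
    return {cls: correct[cls] / total[cls] for cls in total}

# Hypothetical toy labels for one contrastive pair.
y_true = ["Pouring", "Pouring", "Pretending to pour", "Pouring"]
y_pred = ["Pouring", "Pretending to pour", "Pretending to pour", "Pouring"]
acc = per_class_accuracy(y_true, y_pred)
print(round(acc["Pouring"], 2))  # → 0.67
```

Confusions between a class and its “pretending” counterpart pull these numbers down, which is exactly what makes the contrastive pairs a hard training signal.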
17. Mistaken “opening” predictions
● Ground truth: Moving [part] of [something] → Prediction: Opening [something]
● Ground truth: Unfolding [something] → Prediction: Opening [something]
● Ground truth: Putting [something] on a flat surface without letting it roll → Prediction: Opening [something]
18. Mistaken “covering” predictions
● Ground truth: Putting [something] in front of [something] → Prediction: Covering [something]
● Ground truth: Turning [something] upside down → Prediction: Covering [something]