Recently, substantial progress in AI has been made in applications that require advanced pattern recognition, including computer vision, speech recognition, and natural language processing. However, it remains an open problem whether AI will make the same level of progress in tasks that require sophisticated reasoning, planning, and decision making in complicated game environments similar to the real world. In this talk, I present the state-of-the-art approaches to building such an AI, our recent contributions in designing more effective algorithms and building extensive, fast, general environments and platforms, as well as open issues and challenges.
Xiaofeng Ren at AI Frontiers: The Quest for Video Understanding (AI Frontiers)
In this talk I will briefly discuss the ubiquitous needs of video and video understanding across Alibaba and the challenges that are being addressed and solved at iDST, Alibaba's AI R&D division. Examples include mobile shopping on Taobao, video search and recommendation on Youku and Tudou, and real-time systems for Cainiao Logistics and City Brain.
Magnus Nordin at AI Frontiers: Deep Learning for Game Development (AI Frontiers)
The number of applications of deep neural networks has multiplied in the last couple of years. Neural nets have enabled significant breakthroughs in everything from computer vision, voice generation, voice recognition, and translation to self-driving cars. Neural nets will also be a powerful enabler for future game development. This presentation will give an overview of the potential of neural nets in game development, as well as provide an in-depth look at how we can use neural nets combined with reinforcement learning for new types of game AI.
Dilek Hakkani-Tur at AI Frontiers: Conversational machines: Deep Learning for... (AI Frontiers)
In this talk, I will present recent developments in Google Research for end-to-end goal-oriented dialogue systems, with components for language understanding, dialogue state tracking, policy, and language generation. The talk will summarize novel aspects of each component, and highlight novel approaches where dialogue is viewed as a collaborative game between a user and an agent: the user has a goal in mind, and the agent has access to the data that the user is interested in and can perform actions in order to realize the user’s goal. The two engage in a conversation so that the agent can help the user complete the task.
James Manyika at AI Frontiers: Sizing up the promise of AI (AI Frontiers)
This presentation will draw on new findings from the McKinsey Global Institute's ongoing research on the economic and business impact of AI. It will explore four key questions for AI today: who is investing and where, who is adopting AI and how, where can AI improve corporate performance, and what do business leaders need to know tomorrow morning.
Dekang Lin at AI Frontiers: Adding Conversation to GUIs (AI Frontiers)
Most AI assistants on mobile phones use a conversational user interface (CUI) that mimics a chat app and translates user requests into API calls to backend services. I will present the Conversational GUI (CGUI), which provides a thin layer of conversational interaction on top of the existing GUIs of mobile apps by translating user requests into sequences of GUI actions, such as clicks and swipes, that the user would otherwise have to perform manually. CGUI avoids rebuilding existing user experiences in a chat window. More importantly, it makes it possible for end users, instead of software engineers, to create new skills by providing pairs of natural language expressions and demonstrations of the corresponding GUI actions.
Omar Tawakol at AI Frontiers: The Rise Of Voice-Activated Assistants In The W... (AI Frontiers)
The market is already demonstrating strong value in the home for voice-activated AI, but the work environment is yet to catch up. Omar will explain why voice-activated AI is the most important development to come to the workplace. He will pull from his experiences creating Eva, the first enterprise voice assistant focused on making meetings more actionable, and dive specifically into the challenges of ASR (automatic speech recognition), NLP, and neural networks in creating these kinds of voice-activated assistants. He will share how his team has overcome these challenges.
Learn the fundamentals of Deep Learning, Machine Learning, and AI, how they've impacted everyday technology, and what's coming next in Artificial Intelligence technology.
Investing in Artificial Intelligence - AIBE Talk, London Feb 2017 (Carlos Espinal)
These are the slides to the talk I gave during the AIBE Summit in Feb 2017, focusing on Artificial Intelligence investment by Venture Capital firms and how we, at Seedcamp, focus on investing in the sector.
The audio file to these slides can be found here:
https://soundcloud.com/carloseduardoespinal/talk-at-the-aibe-summit-feb-2017-on-venture-capital-in-ai
More on the AIBE Summit from their website (https://aibesummit.com/):
The AIBE Summit is a conference on artificial intelligence in business & entrepreneurship. It will be the largest event of its kind ever to be held, with a capacity of up to 800 participants.
Our mission is to increase public understanding and intellectual discussion on the implications of AI for the business world, to raise the technological literacy of students, entrepreneurs, and professionals alike, and to recognise London as one of the world’s major digital capitals for the future of AI.
It is an initiative pioneered by the LSE Entrepreneurs Society, driven to celebrate the newly formed Partnership on AI between Google, Facebook, Amazon, IBM, and Microsoft.
A report providing an overview of the Artificial Intelligence (AI) technology startup landscape. Includes a sector overview, graphical trends with insights, and recent funding/exit events. Contact info@venturescanner.com or visit www.venturescanner.com to learn more!
by Dan Romuald Mbanga, Business Development Manager, AWS
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. One of the key reasons for this progress is the availability of highly flexible and developer-friendly deep learning frameworks. In this workshop, we will provide an overview of deep learning, focusing on getting started with the TensorFlow and Keras frameworks on AWS. Level 100.
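As a flavor of what frameworks like TensorFlow and Keras automate, here is a minimal from-scratch sketch of the core training loop they wrap (stochastic gradient descent on a single linear neuron), in plain Python so it runs anywhere. The data and hyperparameters are illustrative, not from the workshop.

```python
# Minimal gradient-descent training of one linear neuron: y = w*x + b.
# Frameworks like TensorFlow/Keras automate exactly this loop (plus
# autodiff, GPUs, and stacked layers), but the core idea is tiny.

def train(data, lr=0.05, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b
            err = pred - y      # gradient of squared loss w.r.t. pred (up to a factor)
            w -= lr * err * x   # gradient step on the weight
            b -= lr * err       # gradient step on the bias
    return w, b

# Fit y = 2x + 1 from a few noiseless samples.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(data)
```

With noiseless data the loop converges to roughly w = 2, b = 1; the frameworks add everything needed to do this at scale.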
Rahul Sukthankar at AI Frontiers: Large-Scale Video Understanding: YouTube an... (AI Frontiers)
This talk will present some recent advances in video understanding at Google. It will cover the technology behind progress in applications such as large-scale video annotation for YouTube, video summarization and Motion Stills, as well as our research in weakly-supervised learning, domain adaptation from YouTube to Google Photos and action recognition. I will also give my perspective on promising directions for future research in video.
Roland Memisevic at AI Frontiers: Common sense video understanding at TwentyBN (AI Frontiers)
Deep learning has evolved not linearly but through a series of step functions: sudden, unexpected outbreaks of capability that fundamentally changed the envelope of what computers are able to do. At TwentyBN, we have created spatio-temporal video models and data infrastructure that allowed us to collect approximately one million labeled videos showing everyday common-sense scenes and situations, many of them extremely subtle. This allowed us to successfully train neural networks end-to-end on a wide range of action-understanding tasks that neither hand-engineering nor neural networks appeared anywhere near solving just a few months ago. I will show how these recognition tasks now drive commercial value at TwentyBN, and how they drive our long-term AI agenda of learning common-sense world knowledge through video.
Deep-Dive into Deep Learning Pipelines with Sue Ann Hong and Tim Hunter (Databricks)
Deep learning has shown tremendous successes, yet it often requires a lot of effort to leverage its power. Existing deep learning frameworks require writing a lot of code to run a model, let alone in a distributed manner. Deep Learning Pipelines is a Spark Package library that makes practical deep learning simple, based on the Spark MLlib Pipelines API. Leveraging Spark, Deep Learning Pipelines scales out many compute-intensive deep learning tasks. In this talk we dive into:
- the various use cases of Deep Learning Pipelines, such as prediction at massive scale, transfer learning, and hyperparameter tuning, many of which can be done in just a few lines of code;
- how to work with complex data such as images in Spark and Deep Learning Pipelines;
- how to deploy deep learning models through familiar Spark APIs such as MLlib and Spark SQL to empower everyone from machine learning practitioners to business analysts.
Finally, we discuss integration with popular deep learning frameworks.
Frank Chen at AI Frontiers: Startups and AI (AI Frontiers)
Isn't AI going to be dominated by the big companies like Google and Amazon and Microsoft and Baidu? What can startups do to thrive in this ecosystem? What are investors looking for when they meet AI-powered startups? Should startups with AI inside think about their go-to-market process any differently from other startups? Frank Chen from Andreessen Horowitz will tackle these and other AI startup questions in this session.
Future of AI: Blockchain and Deep Learning (Melanie Swan)
The Future of AI: Blockchain and Deep Learning
First point: considering blockchain and deep learning together suggests the emergence of a new class of global network computing system. These systems are self-operating computation graphs that make probabilistic guesses about reality states of the world.
Second point: blockchain and deep learning are facilitating each other’s development. This includes using deep learning algorithms for setting fees and detecting fraudulent activity, and using blockchains for secure registry, tracking, and remuneration of deep learning nets as they go onto the open Internet (in autonomous driving applications, for example). Blockchain peer-to-peer nodes might provide deep learning services as they already provide transaction hosting and confirmation, news hosting, and banking (payment, credit flow-through) services. Further, there are similar functional emergences within the systems; for example, LSTMs (long short-term memory units in RNNs) are like payment channels.
Third point: AI smart network thesis. We are starting to run more complicated operations through our networks: information (past), money (present), and brains (future). There are two fundamental eras of network computing: simple networks for the transfer of information (all computing to date from mainframe to mobile) and now smart networks for the transfer of value and intelligence. Blockchain and deep learning are built directly into smart networks so that they may automatically confirm authenticity and transfer value (blockchain) and predictively identify individual items and patterns.
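To make the "automatically confirm authenticity" half of the smart-network thesis concrete, here is a toy hash chain in plain Python. It is a sketch of the tamper-evidence mechanism only, not a real blockchain (no consensus, no value transfer), and the payload strings are invented for illustration.

```python
import hashlib

# Each block commits to its payload and the previous block's hash, so
# tampering with any block invalidates every later link in the chain.

def block_hash(payload, prev_hash):
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], "0" * 64  # genesis hash
    for p in payloads:
        h = block_hash(p, prev)
        chain.append((p, h))
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for payload, h in chain:
        if block_hash(payload, prev) != h:
            return False  # chain broken: payload or order was altered
        prev = h
    return True

chain = build_chain(["register model A", "pay node 7", "register model B"])
# Altering one payload while keeping the stale hash breaks verification.
tampered = [chain[0], ("pay node 9", chain[1][1]), chain[2]]
```

`verify(chain)` succeeds while `verify(tampered)` fails, which is exactly the property that lets a network confirm authenticity without a central authority.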
Driven by the rapid progress in Artificial Intelligence (AI) research, intelligent machines are gaining the ability to learn, improve and make calculated decisions in ways that will enable them to perform tasks previously thought to rely solely on human experience, creativity, and ingenuity. As a result, we will in the near future see large parts of our lives influenced by AI.
AI innovation will also be central to the achievement of the United Nations' Sustainable Development Goals (SDGs) and will help solve humanity's grand challenges by capitalizing on the unprecedented quantities of data now being generated on sentiment and behavior, human health, commerce, communications, migration and more.
With large parts of our lives being influenced by AI, it is critical that government, industry, academia and civil society work together to evaluate the opportunities presented by AI, ensuring that AI benefits all of humanity. Responding to this critical issue, ITU and the XPRIZE Foundation organized the AI for Good Global Summit in Geneva, 7-9 June 2017, in partnership with a number of UN sister agencies. The Summit aimed to accelerate and advance the development and democratization of AI solutions that can address specific global challenges related to poverty, hunger, health, education, the environment, and others.
The Summit provided a neutral platform for government officials, UN agencies, NGOs, industry leaders, and AI experts to discuss the ethical, technical, societal and policy issues related to AI, offer recommendations and guidance, and promote international dialogue and cooperation in support of AI innovation.
Please visit the AI for Good Global Summit page for more resources: https://www.itu.int/en/ITU-T/AI/Pages/201706-default.aspx
If you would like to speak, partner or sponsor the 2018 edition of the summit, please contact: ai@itu.int
AlphaGo: Mastering the Game of Go with Deep Neural Networks and Tree Search (Karel Ha)
The presentation of the article "Mastering the game of Go with deep neural networks and tree search", given at the Optimization Seminar 2015/2016.
Notes:
- All URLs are clickable.
- All citations are clickable (when hovered over the "year" part of "[author year]").
- To download without a SlideShare account, use https://www.dropbox.com/s/p4rnlhoewbedkjg/AlphaGo.pdf?dl=0
- The corresponding leaflet is available at http://www.slideshare.net/KarelHa1/leaflet-for-the-talk-on-alphago
- The source code is available at https://github.com/mathemage/AlphaGo-presentation
Gary Tarolli's presentation on April 27, 2015 to the Computer Systems Fundamentals class at Middlesex Community College. A great perspective on the history of graphics and Gary's unique role in groundbreaking companies like 3dfx and NVIDIA.
As the 2011 Christmas Holidays quickly approach we are offering some great gift ideas that you can surprise your family and friends with.
NVIDIA at CES 2014: The visual computing revolution continues. At the company's press conference on Sunday, Jan. 5, 2014, NVIDIA CEO Jen-Hsun Huang showcases the new Tegra K1, a 192-core super chip, Tegra K1 VCM, putting supercomputing technology in cars, and next-gen PC gaming with GameStream and G-SYNC.
A grand challenge of AI has fallen - a decade earlier than "experts" predicted. But should we care?
What made AlphaGo, the AI built by DeepMind, so unique?
Dive into AlphaGo's system of deep learning, evaluation, and search algorithms that combined to defeat the reigning Go world champion, and draw your own conclusions.
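The search half of that combination rests on Monte Carlo tree search, which repeatedly picks the most promising child node to expand. As a sketch of the selection rule only, here is the generic UCB1 formula in Python; AlphaGo's actual variant (PUCT) additionally weights the exploration bonus by a policy-network prior, which is not shown here.

```python
import math

# UCB1 child selection in Monte Carlo tree search: choose the child that
# maximizes mean value plus an exploration bonus that shrinks as the
# child is visited more, balancing exploitation against exploration.

def ucb1(wins, visits, parent_visits, c=1.41):
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select(children):
    """children: list of (wins, visits) pairs; returns index of best child."""
    parent_visits = sum(v for _, v in children)
    scores = [ucb1(w, v, parent_visits) for w, v in children]
    return scores.index(max(scores))

# A heavily visited solid child, a promising rarer one, and one unvisited.
children = [(60, 100), (6, 8), (0, 0)]
best = select(children)
```

The unvisited child is selected first; once every child has statistics, the bonus term steers search toward under-explored but promising moves.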
Tim Riser presented an analysis of "Mastering the Game of Go with Deep Neural Networks & Tree Search", a paper by Google DeepMind to the Boston/Cambridge chapter of Papers We Love, a computer science discussion group on June 28, 2016.
FGS 2011: Making A Game With Molehill: Zombie Tycoon (mochimedia)
Luc Beaulieu and Jean-Philipe Auclair from Frima Studio share their experience working with Adobe's new Molehill APIs in making their new game "Zombie Tycoon".
We have all enjoyed computer games, but have you ever wondered how they work? How do developers make them? What are the functional parts of a game?
"Computer Games Inner Workings" - a presentation by Ioannis Loukeris, AIT Senior Web Developer and Golden Age CTO.
For this year's keynote at High Performance Graphics 2018, Colin Barré-Brisebois from SEED discussed the state of the art in real-time game ray tracing. He explored some of the connections between offline and real-time game ray tracing and presented some of the open problems. Colin outlined a few potential solutions to those problems, and also proposed a call to arms on topics where the ray tracing research community and the games industry should unite in order to solve such open problems.
Divya Jain at AI Frontiers: Video Summarization (AI Frontiers)
As video content becomes mainstream, video summarization is becoming a hot research topic in academia and industry. Video thumbnail generation and summarization have been worked on for years, but deep learning and reinforcement learning are changing the landscape and emerging as the winners for optimal frame selection. Recent advances in GANs are improving the quality, aesthetics, and relevancy of the frames chosen to represent the original videos. Come join this session to get an understanding of the various challenges and emerging solutions around video summarization.
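To pin down what "optimal frame selection" optimizes, here is a classical greedy baseline of the kind the learned approaches improve upon: given per-frame feature vectors, repeatedly keep the frame farthest from everything chosen so far. The 2-D toy features are invented for illustration; real systems use deep embeddings.

```python
# Greedy diversity-based keyframe selection (farthest-point heuristic).

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def summarize(frames, k):
    chosen = [0]  # seed with the first frame
    while len(chosen) < k:
        # Pick the unchosen frame whose nearest chosen frame is farthest.
        best = max((i for i in range(len(frames)) if i not in chosen),
                   key=lambda i: min(dist(frames[i], frames[j]) for j in chosen))
        chosen.append(best)
    return sorted(chosen)

# Three near-duplicate shots followed by two visually distinct ones.
frames = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (9, 0)]
keyframes = summarize(frames, 3)
```

The near-duplicates are skipped in favor of the two distinct shots, which is the behavior a learned selector optimizes directly from data.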
Training at AI Frontiers 2018 - LaiOffer Data Session: How Spark Speedup AI (AI Frontiers)
Topic: How to use big data to enhance AI
Outline:
1. Spark ETL
Spark SQL
Spark Streaming
2. Spark ML
Spark ML pipeline
Distributed model tuning
Spark ML model and data lineage management
3. Spark XGboost
XGboost introduction
XGboost with Spark
XGboost with GPU
4. Spark Deep Learning pipeline
Transfer learning
Build Spark ML pipeline with TensorFlow
Model selection on distributed TF model
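The "distributed model tuning" step in the outline above can be sketched on a single machine: score every hyperparameter combination in parallel workers, the same fan-out pattern Spark ML's tuning utilities run across a cluster of executors. The loss function and grid below are toy stand-ins, not Spark APIs.

```python
from concurrent.futures import ThreadPoolExecutor
import itertools

# Toy validation loss, minimized at lr=0.1, reg=0.01 (a stand-in for
# fitting and evaluating a real model on held-out data).
def loss(params):
    lr, reg = params
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def grid_search(grid):
    # Enumerate every combination, score them in parallel workers,
    # then keep the best-scoring one.
    candidates = list(itertools.product(*grid.values()))
    with ThreadPoolExecutor(max_workers=4) as pool:
        scores = list(pool.map(loss, candidates))
    best = min(range(len(candidates)), key=scores.__getitem__)
    return dict(zip(grid.keys(), candidates[best]))

grid = {"lr": [0.01, 0.1, 1.0], "reg": [0.001, 0.01, 0.1]}
best = grid_search(grid)
```

Spark replaces the thread pool with cluster executors and the toy loss with real model fitting, but the shape of the computation is the same.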
Training at AI Frontiers 2018 - Ni Lao: Weakly Supervised Natural Language Un... (AI Frontiers)
In this tutorial I will introduce recent work applying weak supervision and reinforcement learning to question answering (QA) systems. Specifically, we discuss the semantic parsing task, in which natural language queries are converted to computation steps on knowledge graphs or data tables that produce the expected answers. State-of-the-art results can be achieved by a novel memory structure for sequence models and improvements in reinforcement learning algorithms. Related code and experiment setup can be found at https://github.com/crazydonkey200/neural-symbolic-machines. Related paper: https://openreview.net/pdf?id=SyK00v5xx.
Training at AI Frontiers 2018 - Udacity: Enhancing NLP with Deep Neural Networks (AI Frontiers)
Instructor: Mat Leonard
Outline
1. Text Processing
Using Python + NLTK
Cleaning
Normalization
Tokenization
Part-of-speech Tagging
Stemming and Lemmatization
2. Feature Extraction
Bag of Words
TF-IDF
Word Embeddings
Word2Vec
GloVe
3. Topic Modeling
Latent Variables
Beta and Dirichlet Distributions
Latent Dirichlet Allocation
4. NLP with Deep Learning
Neural Networks
Recurrent Neural Networks (RNNs)
Word Embeddings
Sentiment Analysis with RNNs
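The feature-extraction step in section 2 of the outline can be sketched from scratch: TF-IDF weights a term by its frequency in a document, discounted by how many documents contain it. The tiny corpus below is invented for illustration.

```python
import math

# From-scratch TF-IDF over pre-tokenized documents.

def tf_idf(docs):
    n = len(docs)
    # Document frequency: number of documents containing each term.
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term in doc:
            tf = doc.count(term) / len(doc)       # term frequency
            idf = math.log(n / df[term])          # inverse document frequency
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

docs = [["deep", "learning", "for", "nlp"],
        ["deep", "neural", "networks"],
        ["nlp", "with", "nltk"]]
vecs = tf_idf(docs)
```

A term appearing in only one document ("for") scores higher than one shared across documents ("deep"), which is exactly the discriminative weighting that makes TF-IDF a useful bag-of-words refinement.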
Training at AI Frontiers 2018 - Lukasz Kaiser: Sequence to Sequence Learning ... (AI Frontiers)
Sequence-to-sequence learning is a powerful way to train deep networks not only for machine translation and various NLP tasks, but also for image generation and, recently, video and music generation. We will give a hands-on tutorial showing how to use the open-source Tensor2Tensor library to train state-of-the-art models for translation, image generation, and a task of your choice!
Percy Liang at AI Frontiers : Pushing the Limits of Machine LearningAI Frontiers
In recent years, machine learning has undoubtedly been hugely successful in driving progress in AI applications. However, as we will explore in this talk, even state-of-the-art systems have "blind spots" which make them generalize poorly out of domain and render them vulnerable to adversarial examples. We then suggest that more unsupervised learning settings can encourage the development of more robust systems. We show positive results on two tasks: (i) text style and attribute transfer, the task of converting a sentence with one attribute (e.g., sentiment) to one with another; and (ii) solving SAT instances (classical problems requiring logical reasoning) using end-to-end neural networks.
Ilya Sutskever at AI Frontiers : Progress towards the OpenAI missionAI Frontiers
I will present several advances in deep learning from OpenAI. First, I will present OpenAI Five, a neural network that learned to play on par with some of the strongest professional Dota 2 teams in the world in an 18-hero version of the game. Next, I will present Dactyl, a human-like robot hand trained entirely in simulation with reinforcement learning that has achieved unprecedented dexterity on a physical robot. I will also present our results on unsupervised learning in language, which show that pre-training and fine-tuning can achieve a significant improvement over the state of the art. Finally, I will present an overview of the historical progress in the field.
Mario Munich at AI Frontiers : Consumer robotics: embedding affordable AI in ...AI Frontiers
The availability of affordable electronics components, powerful embedded microprocessors, and ubiquitous internet access and WiFi in the household has enabled a new generation of connected consumer robots. In 2015, iRobot launched the Roomba 980, introducing intelligent visual navigation to its successful line of vacuum cleaning robots. In 2018, iRobot launched the Roomba i7, equipped with the latest mapping and navigation technology that provides spatial information to the broader ecosystem of connected devices in the home. In this talk, I will describe the challenges and the potential of introducing consumer robots capable of developing spatial context by exploring the physical space of the home, and I will elaborate on the impact of AI in the future of robotics applications. Moreover, I will describe our vision of the Smart Home, an AI-powered home that maintains itself and magically just does the right thing in anticipation of occupant needs. This home will be built on an ecosystem of connected and coordinated robots, sensors, and devices that provides the occupants with a high quality of life by seamlessly responding to the needs of daily living – from comfort to convenience to security to efficiency.
Anima Anandkumar at AI Frontiers : Modern ML : Deep, distributed, Multi-dimen...AI Frontiers
As data and models scale, multiple processing units become necessary for both training and inference. SignSGD is a gradient compression algorithm that transmits only the sign of the stochastic gradients during distributed training, using 32 times less communication per iteration than distributed SGD. We show that signSGD obtains a free lunch in both theory and practice: no loss in accuracy while yielding speedups. Pushing the current boundaries of deep learning also requires using multiple dimensions and modalities. These can be encoded into tensors, which are natural extensions of matrices. These functionalities are available in the Tensorly package, with multiple backend interfaces for large-scale deep learning.
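The signSGD idea described above is easy to sketch with NumPy. This is an illustrative sketch, not the reference implementation: a real distributed system would transmit only the sign bits (and, in the majority-vote variant, aggregate them server-side):

```python
import numpy as np

def signsgd_step(params, grads, lr=0.01):
    """One signSGD update: each worker would transmit only sign(grad),
    so communication is 1 bit per coordinate instead of 32."""
    return [p - lr * np.sign(g) for p, g in zip(params, grads)]

def majority_vote(worker_grads):
    """Server-side aggregation of workers' sign vectors by majority vote
    (the signSGD-with-majority-vote variant)."""
    return np.sign(np.sum([np.sign(g) for g in worker_grads], axis=0))

# Toy use: minimize ||x||^2 with sign-only updates.
x = np.array([3.0, -2.0])
for _ in range(100):
    x = signsgd_step([x], [2 * x], lr=0.05)[0]   # gradient of ||x||^2 is 2x
# x ends up within one step size of the optimum at 0.
```

Because each coordinate moves by a fixed step regardless of gradient magnitude, the iterate oscillates within one learning rate of the optimum, which is why practical schedules decay the learning rate.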
Sumit Gupta at AI Frontiers : AI for EnterpriseAI Frontiers
The use of AI for voice search and image recognition is talked about often. Enterprises, however, have different challenges and requirements. In this talk, we will focus on use cases in the enterprise and the challenges in building out AI solutions. We will discuss how PowerAI Vision, automated machine-learning software for video and images, enables quick AI model training and deployment for various enterprise use cases.
Yuandong Tian at AI Frontiers : Planning in Reinforcement LearningAI Frontiers
Deep Reinforcement Learning (DRL) has made strong progress in many tasks, such as board games, robotics, navigation, neural architecture search, etc. I will present our recently open-sourced DRL frameworks that facilitate game research and development. Our framework is scalable: we can reproduce AlphaGoZero and AlphaZero using 2000 GPUs, achieving a superhuman Go AI that beat four top-30 professional players. We also demonstrate the usability of our platform by training agents in real-time strategy games, showing interesting behaviors with a small amount of resources.
Alex Ermolaev at AI Frontiers : Major Applications of AI in HealthcareAI Frontiers
The latest AI advances have the potential to massively improve our health and well-being. However, most of the work is yet to be done. In this talk, we will explore the most important opportunities for AI in healthcare. For example, we will explore how AI can diagnose major life-threatening conditions even before those conditions emerge. We will discuss AI's ability to recommend dramatically more effective and less harmful treatment plans based on an understanding of a patient's medical history and current conditions. Finally, we will talk about AI's role in making our healthcare system effective and affordable for everyone.
Long Lin at AI Frontiers : AI in GamingAI Frontiers
Games have been leveraging AI since the 1950s, when people built a rules-based AI engine that played tic-tac-toe. With technological advances over the years, AI has become increasingly popular and widely used in the gaming industry. The typical characteristics of games and game development make them an ideal playground for practicing and implementing AI techniques, especially deep learning and reinforcement learning. Most games are well scoped; it is relatively easy to generate and use the data; and states, actions, and rewards are relatively clear. In this talk, I will show a couple of use cases where ML/AI helps in game development and enhances the player experience, including AI agents that play games and services that provide personalized experiences to players.
Melissa Goldman at AI Frontiers : AI & FinanceAI Frontiers
AI in finance is having wide-ranging impact and solving some of the most critical societal problems. This talk gives an overview of the opportunities of applying AI in finance, with specific examples, and highlights some of the unique challenges financial services firms face in deploying AI at scale.
Techniques to optimize the PageRank algorithm usually fall into two categories: reducing the work per iteration, and reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices that have already converged can save iteration time. Skipping in-identical vertices (those with the same in-links) avoids duplicate computation and can likewise reduce iteration time. Road networks often contain chains that can be short-circuited before the PageRank computation, since the final ranks of chain nodes are easy to calculate; this can reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order, which reduces the iteration time and the number of iterations, and also enables multi-iteration concurrency in the PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in the sophistication of cyberattacks aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to Advanced Persistent Threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Adjusting primitives for graph : SHORT REPORT / NOTESSubhajit Sahu
Compressed Sparse Row (CSR) is an adjacency-list based graph representation used by graph algorithms such as PageRank.
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
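The OpenMP and CUDA comparisons above require native toolchains, but the shape of the experiment is easy to reproduce. As an illustrative stand-in, here is a sequential Python loop versus NumPy's vectorized reduction over the same float32 vector; the exact speedup is machine-dependent:

```python
import time
import numpy as np

x = np.random.rand(1_000_000).astype(np.float32)

def sequential_sum(a):
    """Element-by-element loop, analogous to the sequential baseline."""
    total = 0.0
    for v in a:
        total += v
    return total

t0 = time.perf_counter()
s_seq = sequential_sum(x)
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
s_vec = float(np.sum(x))
t_vec = time.perf_counter() - t0

# The two sums differ slightly: accumulation order and precision differ
# (float64 accumulator in the loop vs NumPy's pairwise float32 reduction).
```

The precision point mirrors the float-vs-bfloat16 experiment in the notes: the storage type and accumulation order of a reduction change both its speed and its rounding error.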
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
19. Case study: AlphaGo
• Computation
• Trained with many GPUs; inference with TPUs.
• Policy network
• Trained with supervised learning on human replays.
• Self-play network trained with RL.
• High-quality playout/rollout policy
• 2 microseconds per move, 24.2% accuracy (~30%).
• Thousands of times faster than DCNN prediction.
• Value network
• Predicts the game outcome from the current situation.
• Trained on 30M self-play games.
“Mastering the game of Go with deep neural networks and tree search”, Silver et al, Nature 2016
23. AlphaGo
• Value Network (trained via 30M self-played games)
• How is the data collected?
[Diagram: from game start, moves are sampled from the SL network (more diverse moves) up to a uniformly sampled state; from that current state, the RL network (higher win rate) plays on until the game terminates.]
27. Our computer Go player: DarkForest
• DCNN as a tree policy
• Predicts the next k moves (rather than only the next move)
• Trained on a 170k-game KGS dataset / 80k GoGoD games, 57.1% accuracy.
• KGS 3d without search (0.1s per move)
• Released 3 months before AlphaGo; used <1% of the GPUs (per Aja Huang)
Yuandong Tian and Yan Zhu, ICLR 2016
32. Our computer Go player: DarkForest
• DCNN+MCTS
• Use top-3/5 moves from DCNN, 75k rollouts.
• Stable KGS 5d. Open source.
• 3rd place in KGS January Tournaments
• 2nd place in the 9th UEC Computer Go Competition (not this time 🙂)
• DarkForest versus Koichi Kobayashi (9p)
https://github.com/facebookresearch/darkforestGo
42. ELF: Extensive, Lightweight and Flexible
Framework for Game Research
• Extensive
• Any games with C++ interfaces can be incorporated.
• Lightweight
• Fast: Mini-RTS runs at 40K FPS per core.
• Minimal resource usage (1 GPU + several CPUs)
• Fast training (a couple of hours for an RTS game)
• Flexible
• Environment-Actor topology
• Parametrized game environments.
• Choice of different RL methods.
Yuandong Tian, Qucheng Gong, Wendy Shang, Yuxin Wu, Larry Zitnick (NIPS 2017 Oral)
Arxiv: https://arxiv.org/abs/1707.01067
https://github.com/facebookresearch/ELF
43. How RL system works
[Diagram: Games 1…N each run in their own process (Process 1…Process N) and feed experience into a replay buffer, from which the Python-side consumers (actor, model, optimizer) train.]
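The replay-buffer piece of this system is simple to sketch. A minimal version in Python (illustrative only, not the actual ELF code): game processes push transitions, and the optimizer samples random minibatches from them:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal replay buffer: producers push transitions, the
    optimizer samples uniform random minibatches."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)   # oldest transitions evicted first

    def push(self, state, action, reward, next_state):
        self.buf.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.buf, batch_size)

    def __len__(self):
        return len(self.buf)

buf = ReplayBuffer(capacity=10_000)
for t in range(100):                        # stand-in for N game processes
    buf.push(t, t % 9, 0.0, t + 1)
batch = buf.sample(32)                      # optimizer-side minibatch
```

The bounded deque decouples producer and consumer rates: games write as fast as they can play, while the optimizer reads at its own pace from a sliding window of recent experience.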
44. ELF design
Plug-and-play; no need to worry about concurrency anymore.
[Diagram: producers (Games 1…N in C++) write into per-game history buffers; a daemon batch-collects them and hands batches with history info to the Python-side consumers (actor, model, optimizer), which reply with actions.]
53. Flexible Environment-Actor topology
[Diagram of Environment-Actor topologies: (a) One-to-One — one environment per actor (vanilla A3C); (b) Many-to-One — many environments share one actor (BatchA3C, GA3C); (c) One-to-Many — one environment feeds many actors (self-play, Monte Carlo Tree Search).]
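The many-to-one case is the interesting one for throughput: many environments submit observations and a single actor answers them with one batched forward pass. A minimal sketch of that pattern (the class and names are illustrative, and a random policy stands in for the network):

```python
import numpy as np

class BatchedActor:
    """One actor serving E environments at once (BatchA3C/GA3C style):
    a single batched decision step replaces E separate ones."""
    def __init__(self, n_actions, seed=0):
        self.n_actions = n_actions
        self.rng = np.random.default_rng(seed)

    def act(self, obs_batch):
        # A real actor would run the policy network over the whole batch;
        # here a random policy stands in for it.
        batch = np.asarray(obs_batch)
        return self.rng.integers(0, self.n_actions, size=len(batch))

observations = [np.zeros(4) for _ in range(8)]   # 8 environments
actor = BatchedActor(n_actions=9)
actions = actor.act(observations)                # one step decides for all 8
```

Batching amortizes the cost of a GPU forward pass across environments, which is exactly what the daemon/batch-collector in the ELF design enables.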
56. A miniature RTS engine
[Screenshot: enemy base, your base, your barracks, workers, enemy units, resources; fog of war.]
Game Name | Description | Avg Game Length
Mini-RTS | Gather resources and build troops to destroy the opponent's base. | 1000-6000 ticks
Capture the Flag | Capture the flag and bring it to your own base. | 1000-4000 ticks
Tower Defense | Build defensive towers to block enemy invasion. | 1000-2000 ticks
58. Training AI
• Network: four Conv + BN + ReLU blocks, with policy and value heads.
• Input: game internal data (respecting fog of war) rather than raw game visualization — channels encode the locations of all workers, all melee tanks, and all range tanks, plus HP portion and resources.
• Trained with A3C on internal game data.
• Reward is only available once the game is over.
60. Training AI
9 discrete actions:
No. | Action name | Description
1 | IDLE | Do nothing.
2 | BUILD WORKER | If the base is idle, build a worker.
3 | BUILD BARRACK | Move a worker (gathering or idle) to an empty place and build a barrack.
4 | BUILD MELEE ATTACKER | If we have an idle barrack, build a melee attacker.
5 | BUILD RANGE ATTACKER | If we have an idle barrack, build a range attacker.
6 | HIT AND RUN | If we have range attackers, move towards the opponent's base and attack; take advantage of their long attack range and high movement speed to hit and run when the enemy counter-attacks.
7 | ATTACK | All melee and range attackers attack the opponent's base.
8 | ATTACK IN RANGE | All melee and range attackers attack enemies in sight.
9 | ALL DEFEND | All troops attack enemy troops near the base and resources.
63. Transfer Learning and Curriculum Training
Win rate of trained AIs (rows) against rule-based opponents (columns):
Trained on | AI_SIMPLE | AI_HIT_AND_RUN | Combined (50% SIMPLE + 50% H&R)
SIMPLE | 68.4 (±4.3) | 26.6 (±7.6) | 47.5 (±5.1)
HIT_AND_RUN | 34.6 (±13.1) | 63.6 (±7.9) | 49.1 (±10.5)
Combined (no curriculum) | 49.4 (±10.0) | 46.0 (±15.3) | 47.7 (±11.0)
Combined | 51.8 (±10.6) | 54.7 (±11.2) | 53.2 (±8.5)

Effect of curriculum training (win rate against each opponent):
 | AI_SIMPLE | AI_HIT_AND_RUN | CAPTURE_THE_FLAG
Without curriculum training | 66.0 (±2.4) | 54.4 (±15.9) | 54.2 (±20.0)
With curriculum training | 68.4 (±4.3) | 63.6 (±7.9) | 59.9 (±7.4)

[Figure: win rate over training time against a mixture of SIMPLE_AI and the trained AI; highest win rate against AI_SIMPLE: 80%.]
64. Monte Carlo Tree Search
Win rate (%) | MiniRTS (AI_SIMPLE) | MiniRTS (AI_HIT_AND_RUN)
Random | 24.2 (±3.9) | 25.9 (±0.6)
MCTS | 73.2 (±0.6) | 62.7 (±2.0)
MCTS uses complete information and perfect dynamics.
MCTS evaluation is repeated over 1000 games, using 800 rollouts.
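MiniRTS's MCTS uses the game's true dynamics for its 800 rollouts, but the select/expand/simulate/backpropagate loop itself is generic. Here is a compact UCT sketch on a toy game (1-2 Nim: remove 1 or 2 stones; whoever takes the last stone wins); this is an illustrative sketch, not the ELF implementation:

```python
import math
import random

random.seed(0)

def moves(stones):                    # Nim: remove 1 or 2 stones
    return [a for a in (1, 2) if a <= stones]

class Node:
    def __init__(self, stones, parent=None):
        self.stones, self.parent = stones, parent
        self.children = {}            # action -> child Node
        self.visits = 0
        self.wins = 0.0               # wins for the player who moved INTO this node

def ucb_child(node, c=1.4):
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones):
    """Random playout: +1 if the player to move now ends up winning."""
    turn = 1
    while True:
        stones -= random.choice(moves(stones))
        if stones == 0:
            return turn               # this player took the last stone
        turn = -turn

def mcts(root_stones, iters=3000):
    root = Node(root_stones)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB.
        while node.stones > 0 and len(node.children) == len(moves(node.stones)):
            node = ucb_child(node)
        # 2. Expansion: add one untried child (unless terminal).
        if node.stones > 0:
            a = random.choice([m for m in moves(node.stones)
                               if m not in node.children])
            node.children[a] = Node(node.stones - a, node)
            node = node.children[a]
        # 3. Simulation: reward from the perspective of the player
        #    who moved into `node`.
        if node.stones == 0:
            reward = 1.0              # that player just took the last stone
        else:
            reward = 1.0 if rollout(node.stones) == -1 else 0.0
        # 4. Backpropagation: flip the perspective at every level.
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    return max(root.children, key=lambda a: root.children[a].visits)

# From 4 stones the winning move is to take 1, leaving a multiple of 3.
best = mcts(4)
```

In 1-2 Nim the losing positions are the multiples of 3, so the search should learn to leave its opponent 3 stones.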
66. Ongoing Work
• One framework for different games.
• DarkForest remastered: https://github.com/facebookresearch/ELF/tree/master/go
• Richer game scenarios for MiniRTS.
• Multiple bases (expand? rush? defend?)
• More complicated units.
• Provide a Lua interface for easier modification of the game.
• Realistic action space
• One command per unit
• Model-based Reinforcement Learning
• MCTS with perfect information and perfect dynamics also achieves ~70% win rate
• Self-Play (Trained AI versus Trained AI)