Integrating Relational Databases with the Semantic Web: A Reflection (Juan Sequeda)
This is a lecture given at the 2017 Reasoning Web Summer School
It has been clear from the beginning that the success of the Semantic Web hinges on integrating the vast amount of data stored in Relational Databases. In 2007, the W3C organized a workshop on RDF Access to Relational Databases. In 2012, two standards were ratified that map relational data to RDF: Direct Mapping and R2RML.
In this lecture, I will reflect on the last 10 years of research results and systems to integrate Relational Databases with the Semantic Web. I will provide an answer to the following question: how and to what extent can Relational Databases be integrated with the Semantic Web? I will review how these standards and systems are being used in practice for data integration and discuss open challenges.
Use machine learning to solve classification problems by building binary and multi-class classifiers.
Does your company face business-critical decisions that rely on dynamic transactional data? If you answered “yes,” you need to attend this free event featuring Microsoft analytics tools. We’ll focus on Azure Machine Learning capabilities and explore the following topics:
- Introduction to two-class classification problems
- Classification algorithms (two-class classification)
- Available algorithms in Azure ML
- Real business problems that are solved using two-class classification
Derivation of Convolutional Neural Network from Fully Connected Network Step-... (Ahmed Gad)
In image analysis, #convolutional neural networks (#CNNs or #ConvNets for short) are more time- and memory-efficient than fully connected (#FC) networks. But why? What are the advantages of ConvNets over FC networks in image analysis? How is a #ConvNet derived from an FC network? Where did the term #convolution in CNNs come from? These questions are answered in this #presentation.
Image analysis has a number of challenges, such as classification, object detection, recognition, and description. If an image classifier, for example, is to be created, it should work with high accuracy even under variations such as occlusion, illumination changes, and viewing angles. The traditional image classification pipeline, with its main step of feature engineering, is not suitable for rich environments. Even experts in the field cannot hand-pick a single feature, or a group of features, that reaches high accuracy under different variations. Motivated by this problem, the idea of feature learning emerged: the features suitable for working with images are learned automatically. This is why artificial neural networks (ANNs) are one of the robust approaches to image analysis. Using a learning algorithm such as gradient descent (GD), an ANN learns the image features automatically. The raw image is fed to the ANN, and the ANN is responsible for generating the features describing it.
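The efficiency claim above can be made concrete with a quick parameter count: a fully connected layer needs one weight per (input pixel, output unit) pair, while a convolutional layer shares a small kernel across all spatial positions. The image size and layer widths below are illustrative assumptions, not figures from the presentation.

```python
# Rough parameter-count comparison between a fully connected (FC) layer
# and a convolutional layer on the same image, illustrating why ConvNets
# are more memory-efficient than FC networks for images.

def fc_params(in_pixels, out_units):
    # every output unit connects to every input pixel, plus one bias each
    return in_pixels * out_units + out_units

def conv_params(kernel_h, kernel_w, in_channels, out_channels):
    # weights are shared across all spatial positions, plus one bias per filter
    return kernel_h * kernel_w * in_channels * out_channels + out_channels

pixels = 224 * 224 * 3           # a 224x224 RGB image, flattened
fc = fc_params(pixels, 256)      # FC layer with 256 hidden units
conv = conv_params(3, 3, 3, 64)  # 64 filters of size 3x3 over 3 channels

print(fc)    # 38535424
print(conv)  # 1792
```

The convolutional layer here uses roughly 20,000x fewer parameters, which is the memory-efficiency argument in a nutshell.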
Chap 8. Optimization for training deep models (Young-Geun Choi)
Internal lab seminar material. Summarized and excerpted from Chapter 8 of Goodfellow et al. (2016), Deep Learning, MIT Press. It introduces the optimization methods commonly used for the objective function when training deep neural network models.
Suggestions:
1) For best quality, download the PDF before viewing.
2) Open at least two windows: one for the YouTube video, one for the screencast (link below), and optionally one for the slides themselves.
3) The YouTube video is shown on the first page of the slide deck; for the slides, just skip to page 2.
Screencast: http://youtu.be/VoL7JKJmr2I
Video recording: http://youtu.be/CJRvb8zxRdE (Thanks to Al Friedrich!)
In this talk, we take Deep Learning to task with real world data puzzles to solve.
Data:
- Higgs binary classification dataset (10M rows, 29 cols)
- MNIST 10-class dataset
- Weather categorical dataset
- eBay text classification dataset (8500 cols, 500k rows, 467 classes)
- ECG heartbeat anomaly detection
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Evolution of Deep Learning and new advancements (Chitta Ranjan)
Deep learning, earlier known as neural networks, saw a remarkable resurgence in the past decade. Neural networks did not find wide adoption in the last century because of their limited accuracy in real-world applications (for various reasons) and difficult interpretation. Many of these limitations were resolved in recent years, and the field was rebranded as deep learning. Now deep learning is widely used in industry and has become a popular research topic in academia. Learning about the course of its evolution and development is intriguing. In this presentation, we will learn how the issues in the last generation of neural networks were resolved, how recent advanced methods grew out of earlier work, and the different components of deep learning models.
Apache Spark - Basics of RDD | Big Data Hadoop Spark Tutorial | CloudxLab (CloudxLab)
Big Data with Hadoop & Spark Training: http://bit.ly/2L4rPmM
This CloudxLab Basics of RDD tutorial helps you understand the basics of RDDs in detail. Below are the topics covered in this tutorial:
1) What is RDD - Resilient Distributed Datasets
2) Creating RDD in Scala
3) RDD Operations - Transformations & Actions
4) RDD Transformations - map() & filter()
5) RDD Actions - take() & saveAsTextFile()
6) Lazy Evaluation & Instant Evaluation
7) Lineage Graph
8) flatMap and Union
9) Scala Transformations - Union
10) Scala Actions - saveAsTextFile(), collect(), take() and count()
11) More Actions - reduce()
12) Can We Use reduce() for Computing Average?
13) Solving Problems with Spark
14) Compute Average and Standard Deviation with Spark
15) Pick Random Samples From a Dataset using Spark
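Topic 12 above asks whether reduce() can compute an average. A plain reduce over the values cannot, because averaging is not associative; the standard trick is to reduce (sum, count) pairs and divide at the end. This is a plain-Python sketch of the idea (PySpark is not assumed here); in Spark the same shape works with `rdd.map(lambda x: (x, 1)).reduce(...)`.

```python
from functools import reduce

values = [10, 20, 30, 40]

# map each value to a (sum, count) pair, then reduce the pairs elementwise
total, count = reduce(
    lambda a, b: (a[0] + b[0], a[1] + b[1]),
    ((v, 1) for v in values),
)
print(total / count)  # 25.0
```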
Apache Hadoop and Spark: Introduction and Use Cases for Data Analysis (Trieu Nguyen)
Growth of big datasets
Introduction to Apache Hadoop and Spark for developing applications
Components of Hadoop, HDFS, MapReduce and HBase
Capabilities of Spark and the differences from a typical MapReduce solution
Some Spark use cases for data analysis
Introduction to Graph Neural Networks: Basics and Applications - Katsuhiko Is... (Preferred Networks)
This presentation explains basic ideas of graph neural networks (GNNs) and their common applications. Primary target audiences are students, engineers and researchers who are new to GNNs but interested in using GNNs for their projects. This is a modified version of the course material for a special lecture on Data Science at Nara Institute of Science and Technology (NAIST), given by Preferred Networks researcher Katsuhiko Ishiguro, PhD.
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
Apache Pig is a high-level platform for creating programs that run on Apache Hadoop. The language for this platform is called Pig Latin. Pig can execute its Hadoop jobs in MapReduce, Apache Tez, or Apache Spark.
Deep Learning: Evolution of ML from Statistical to Brain-like Computing- Data... (Impetus Technologies)
Presentation on 'Deep Learning: Evolution of ML from Statistical to Brain-like Computing'
Speaker: Dr. Vijay Srinivas Agneeswaran, Director, Big Data Labs, Impetus
The main objective of the presentation is to give an overview of our cutting-edge work on realizing distributed deep learning networks over GraphLab. The objectives can be summarized as follows:
- First-hand experience and insights into implementation of distributed deep learning networks.
- Thorough view of GraphLab (including descriptions of code) and the extensions required to implement these networks.
- Details of how the extensions were realized and implemented in the GraphLab source; they have been submitted to the community for evaluation.
- Arrhythmia detection use case as an application of the large scale distributed deep learning network.
This paper reports results of an artificial neural network for robot navigation tasks. Machine learning methods have proven useful in many complex problems concerning mobile robot control. In particular, we deal with the well-known strategy of navigating by “wall-following”. In this study, a probabilistic neural network (PNN) structure was used for robot navigation tasks. The PNN result was compared with the results of the Logistic Perceptron, Multilayer Perceptron, Mixture of Experts, and Elman neural networks, and with the results of previous studies on robot navigation tasks using the same dataset. The PNN achieved the best classification accuracy, 99.635%, on the same dataset.
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORK (csandit)
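The paper's exact PNN configuration and sensor data are not reproduced here, but the core of a PNN is simple enough to sketch: each training example contributes a Gaussian (Parzen-window) kernel, and a test point is assigned to the class with the largest summed kernel response. The toy 2D data and smoothing parameter sigma below are illustrative assumptions.

```python
# Minimal probabilistic neural network (PNN) classifier, Parzen-window style.
import math

def pnn_classify(train, point, sigma=0.5):
    # train: list of (features, label); returns the label with the
    # largest summed Gaussian kernel response at `point`
    scores = {}
    for x, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(x, point))
        scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
    return max(scores, key=scores.get)

train = [((0.0, 0.0), "left"), ((0.2, 0.1), "left"),
         ((1.0, 1.0), "right"), ((0.9, 1.1), "right")]
print(pnn_classify(train, (0.1, 0.0)))  # left
print(pnn_classify(train, (1.0, 0.9)))  # right
```

Because there is no iterative training, a PNN's "learning" is just storing the examples; classification cost grows with the training-set size, which is the usual trade-off for its high accuracy on problems like this one.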
Deep reinforcement learning framework for autonomous driving (GopikaGopinath5)
Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, it is possible to propose a framework for autonomous driving using deep reinforcement learning.
It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios.
In this talk, after a brief overview of AI concepts, in particular machine learning (ML) techniques, some of the well-known computer design concepts for high performance and power efficiency are presented. Subsequently, those techniques that have had a promising impact on computing ML algorithms are discussed. Deep learning has emerged as a game changer for many applications in various fields of engineering and medical sciences. Although the primary computation is matrix-vector multiplication, many competing efficient implementations of this primary function have been proposed and put into practice. This talk will review and compare some of the techniques used for ML computer design.
This slide gives a brief overview of supervised, unsupervised, and reinforcement learning. The algorithms discussed are Naive Bayes, k-nearest neighbour, SVM, decision tree, and the Markov model. It also covers the difference between regression and classification, the difference between supervised and reinforcement learning, the iterative functioning of the Markov model, and machine learning applications.
Overview of the fundamental roles in hydropower generation and the components involved in wider electrical engineering.
This paper presents the design and construction of hydroelectric dams, from the hydrologist’s survey of the valley before construction through all involved disciplines (fluid dynamics, structural engineering, generation and mains-frequency regulation) to the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co-editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx (R&R Consult)
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL) (MdTanvirMahtab2)
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of Bangladesh Chemical Industries Corporation under the Ministry of Industries.
Welcome to WIPAC Monthly, the magazine brought to you by the LinkedIn group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news, to celebrate the 13 years since the group was created we have articles including:
- A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
- A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim around the adoption of digital transformation in the water industry.
Immunizing Image Classifiers Against Localized Adversary Attacks (gerogepatton)
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks (CNNs), to adversarial attacks, and presents a proactive training technique designed to counter them. We introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations. When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves the immunity of models against localized universal attacks, by up to 40%. We evaluate our proposed approach using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10 and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing accuracy improvements over previous techniques. The results indicate that the combination of the volumetric input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating adversarial training.
Deep Learning in Robotics: Robot gains Social Intelligence through Multimodal Deep Reinforcement Learning
1. “Deep Learning in Robotics”
Student: Gabriele Sisinna (516706)
Course: Intelligent Systems
Professor: Beatrice Lazzerini
Authors: Harry A. Pierson, Michael S. Gashler
2. Introduction
• This review discusses the applications, benefits, and limitations of deep learning for robotic systems, using contemporary research as examples.
• Applying deep learning to robotics is an active research area, with at least thirty papers published on the subject from 2014 through the time of this writing (2017).
3. Deep learning
• Deep learning is the science of training large artificial neural networks. Deep neural networks (DNNs) can have hundreds of millions of parameters, allowing them to model complex functions such as nonlinear dynamics.
4. History
• Several important advances have slowly transformed regression into what we now call deep learning. First, the addition of an activation function enabled regression methods to fit nonlinear functions, and it introduced biological similarity with brain cells.
• Next, nonlinear models were stacked in “layers” to create powerful models, called multi-layer perceptrons (MLPs).
5. History
• Multi-layer perceptrons are universal function approximators, meaning they can fit any data, no matter how complex, with arbitrary precision, using a finite number of regression units.
• Backpropagation marked the beginning of the deep learning revolution; however, researchers still mostly limited their neural networks to a few layers because of the problem of vanishing gradients.
6. Application in Robotics
• Neural networks were successfully applied to robotics control as early as the 1980s. It was quickly recognized that nonlinear regression provided the functionality needed for operating dynamical systems in continuous spaces.
7. Biorobotics and Neural networks
• In 2008, neuroscientists made advances in recognizing how animals achieve locomotion, and were able to extend this knowledge to neural networks for experimental control of biomimetic robots.
Infinite Degree of Freedom discretization
• In the soft robotics field, new techniques are needed for the control of continuous systems with a high number of DOFs.
8. Structure A: MLP as function approximator
• DNNs are well suited for use with robots because they are flexible and can be used in structures that other machine learning models cannot support.
• MLPs are trained by presenting a large collection of example training pairs; an optimization method is applied to minimize the prediction loss.
Supervised
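The training loop described in this slide can be sketched in miniature: a one-hidden-layer MLP is shown example (input, target) pairs and gradient descent minimizes the squared prediction loss. The target function (y = x²), layer sizes, and learning rate are illustrative assumptions, not details from the reviewed paper.

```python
# Structure A sketch: an MLP trained as a function approximator.
import math, random

random.seed(0)
H = 8                                     # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

# example training pairs (x, y) for the target function y = x^2
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

def predict(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((predict(x)[0] - y) ** 2 for x, y in data) / len(data)

lr = 0.01
initial = mse()
for _ in range(500):                      # stochastic gradient descent
    for x, y in data:
        y_hat, h = predict(x)
        g = 2 * (y_hat - y)               # dLoss/dy_hat
        for j in range(H):
            dpre = g * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * g * h[j]
            w1[j] -= lr * dpre * x
            b1[j] -= lr * dpre
        b2 -= lr * g
print(mse() < initial)  # True: the prediction loss has been reduced
```

The same shape scales up to DNNs with millions of parameters; only the optimizer and layer count change.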
9. Classification
• These structures also excel at classification tasks, such as determining what type of object lies before the robot, which grasping approach or general planning strategy is best suited to current conditions, or what is the state of a certain complex object with which the robot is interacting.
10. Parallel Computing: training DNNs
• To make effective use of deep learning models, it is important to train on one or more General Purpose Graphical Processing Units (GPGPUs). Many other ways of parallelizing deep neural networks have been attempted, but none of them yet yield the performance gains of GPGPUs.
11. Structure B: Autoencoders
• Auto-encoders are used primarily in cases where high-dimensional observations are available, but the user wants a low-dimensional representation of state.
• It is one common model for facilitating “unsupervised learning.” It requires two DNNs, called an “encoder” and a “decoder.”
Unsupervised
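The encoder/decoder pair above can be sketched with the smallest possible case: a linear autoencoder that compresses 3-dimensional observations to a 1-dimensional code and reconstructs them, trained by gradient descent on the reconstruction error. The synthetic data (points along a line in 3D, so they truly have one degree of freedom), the initial weights, and the learning rate are illustrative assumptions.

```python
# Structure B sketch: a tiny linear autoencoder (encoder + decoder).
enc = [0.1, 0.1, 0.1]    # encoder weights: 3-D observation -> 1-D code
dec = [0.1, 0.1, 0.1]    # decoder weights: 1-D code -> 3-D reconstruction

# high-dimensional observations with a single underlying degree of freedom
data = [[t, 2 * t, 3 * t] for t in [x / 10.0 for x in range(-10, 11)]]

def reconstruction_error():
    err = 0.0
    for x in data:
        z = sum(e * xi for e, xi in zip(enc, x))    # encode
        x_hat = [d * z for d in dec]                # decode
        err += sum((a - b) ** 2 for a, b in zip(x_hat, x))
    return err / len(data)

lr = 0.01
before = reconstruction_error()
for _ in range(500):
    for x in data:
        z = sum(e * xi for e, xi in zip(enc, x))
        x_hat = [d * z for d in dec]
        e_vec = [a - b for a, b in zip(x_hat, x)]   # reconstruction error vector
        dz = sum(ei * di for ei, di in zip(e_vec, dec))
        for i in range(3):
            dec[i] -= lr * e_vec[i] * z             # descent on decoder weights
            enc[i] -= lr * dz * x[i]                # descent on encoder weights
after = reconstruction_error()
print(after < before)  # True: reconstructions improved without any labels
```

No labels were used anywhere, which is what makes this "unsupervised": the data itself is the training target.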
12. Structure C: Recurrent Neural Networks
• They can keep track of the past thanks to feedback loops (discrete-time non-autonomous dynamical systems).
• Structure C is a type of “recurrent neural network,” which is designed to model dynamical systems, including robots. It is often trained with an approach called “backpropagation through time.”
Supervised
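The feedback loop in this slide is just a hidden state that feeds back into itself: h_t = tanh(W_in·x_t + W_rec·h_{t-1}). The scalar weights below are fixed illustrative values (training them would use backpropagation through time, as the slide notes).

```python
# Structure C sketch: the recurrence of an RNN as a discrete-time
# dynamical system with a feedback loop.
import math

W_in, W_rec = 0.5, 0.9   # input weight and recurrent (feedback) weight

def run(inputs):
    h = 0.0              # initial hidden state
    states = []
    for x in inputs:
        h = math.tanh(W_in * x + W_rec * h)   # h_t depends on h_{t-1}
        states.append(h)
    return states

# the same input value produces different hidden states at different
# time steps, because the state carries memory of earlier inputs
states = run([1.0, 1.0, 1.0])
print(states[0] != states[1])  # True
```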
13. Structure D: Deep Reinforcement Learning
• Deep reinforcement learning (DRL) uses deep learning and reinforcement learning principles to create efficient algorithms applied in areas like robotics, video games, healthcare, etc.
• Implementing deep learning architectures (deep neural networks) with reinforcement learning algorithms (Q-learning, actor-critic, etc.) makes it possible to scale to previously unsolvable problems.
14. Exploration and exploitation
• Instead of minimizing prediction error against a training set of samples, deep Q-networks seek to maximize long-term reward.
• This is done by seeking a balance between exploration and exploitation that ultimately leads to an effective policy model.
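The exploration/exploitation balance can be shown in tabular form: epsilon-greedy Q-learning on a tiny corridor environment. A deep Q-network replaces the table below with a DNN, but the update rule and the long-term-reward objective are the same. The 5-state corridor and the hyperparameters are illustrative assumptions, not the environment from the reviewed work.

```python
# Structure D sketch: tabular Q-learning with an epsilon-greedy policy.
import random

random.seed(0)
N_STATES = 5                  # corridor 0..4; reward 1 for reaching state 4
ACTIONS = [-1, +1]            # move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):                          # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:
            a = random.randrange(2)           # explore: random action
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1 # exploit: greedy action
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)  # ['right', 'right', 'right', 'right']
```

Without the random exploration step, the agent could settle on the first rewarded path it stumbles into; with only exploration, it would never exploit what it has learned. The epsilon parameter trades the two off.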
15. Biological analogy
• Doya identified that supervised learning methods (Structures A and C) mirror the function of the cerebellum.
• Unsupervised methods (Structure B) learn in a manner comparable to that of the cerebral cortex, and reinforcement learning (Structure D) is analogous to the basal ganglia.
16. What’s the point?
• Every part of a complex system can be made to “learn.”
• The real power of deep learning comes not from using just one of the structures described in the previous slides as a component in a robotics system, but from connecting parts of all these structures together to form a full system that learns throughout.
• This is where the “deep” in deep learning begins to make its impact: when each part of a system is capable of learning, the system can adapt in sophisticated ways.
17. Limits
• Some remaining barriers to the adoption of deep learning in robotics include the need for large training datasets and long training times. One promising trend is crowdsourcing training data via cloud robotics.
• Distributed computing offers the potential to direct more
computing resources to a given problem but can be limited
by communication speeds.
• DNNs excel at 2D image recognition, but they are known to
be highly susceptible to adversarial samples, and they still
struggle to model 3D spatial layouts.
18. Open challenges for the next years
1. Learning complex, high-dimensional, and novel dynamics
2. Learning control policies in dynamic environments
3. Advanced manipulation
4. Advanced object recognition
5. Interpreting and anticipating human actions (next slides)
6. Sensor fusion & dimensionality reduction
7. High-level task planning
19. Robot gains Social Intelligence
through Multimodal Deep
Reinforcement Learning
Authors
Ahmed Hussain Qureshi, Yutaka Nakamura, Yuichiro Yoshikawa and Hiroshi Ishiguro
20. Pepper Robot
• Designed to be used in professional environments, Pepper
is a humanoid robot that can interact with people, ‘read’
emotions, learn, move and adapt to its environment, and
even recharge on its own. Pepper can perform facial
recognition and develop individualized relationships when it
interacts with people.
• The authors propose a Multimodal Deep Q-Network (MDQN) to enable a robot to learn human-like interaction skills through a trial-and-error method.
21. Reinforcement Learning background
• An agent interacts sequentially with an environment E with the aim of maximizing cumulative reward.
• At each time-step, the agent observes a state Sₜ, takes an action aₜ from the set of legal actions A = {1, · · · , K}, and receives a scalar reward Rₜ from the environment.
• An agent’s behavior is formalized by a policy π, which maps states to actions.
• The goal of an RL agent is to learn a policy π that maximizes the expected total return (reward).
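The interaction loop above can be sketched generically; the toy environment dynamics and the random placeholder policy below are hypothetical stand-ins, not the paper's setup.

```python
import random

ACTIONS = list(range(4))               # A = {1, ..., K} with K = 4

def environment_step(state, action):
    """Toy environment E: returns the next state and a scalar reward R_t."""
    next_state = (state + action) % 10
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward

def policy(state):
    """pi: maps states to actions (random placeholder; RL would learn this)."""
    return random.choice(ACTIONS)

random.seed(0)
state, total_return = 0, 0.0
for t in range(100):                   # sequential interaction with E
    action = policy(state)                            # take a_t
    state, reward = environment_step(state, action)   # observe S_{t+1}, R_t
    total_return += reward             # the agent seeks to maximize this
print(total_return)
```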
22. Deep Q-network
• Further advances in machine learning have merged deep learning with reinforcement learning (RL), leading to the development of the deep Q-network (DQN).
• DQN uses an automatic feature extractor, a deep convolutional neural network (ConvNet), to approximate the action-value function of the Q-learning method.
23. CNN for action-value function approximation
• The structure of the two streams is identical, and each stream comprises eight layers (excluding the input layer).
• Since each stream takes eight frames as input, the last eight frames from the corresponding camera are pre-processed and stacked together to form the input for each stream of the network.
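The per-stream input assembly can be sketched as follows (the frame size and the pre-processing step here are illustrative assumptions; the paper's actual pre-processing may differ): a sliding window keeps the last eight frames from one camera and stacks them into a single network input.

```python
from collections import deque
import numpy as np

N_FRAMES, H, W = 8, 198, 198           # 8 stacked frames; H, W are assumed sizes

def preprocess(raw):
    """Placeholder pre-processing (e.g. normalize to [0, 1])."""
    return raw.astype(np.float32) / 255.0

frames = deque(maxlen=N_FRAMES)        # sliding window of the most recent frames
for _ in range(12):                    # the camera delivers a stream of frames;
    raw = np.random.randint(0, 256, size=(H, W), dtype=np.uint8)
    frames.append(preprocess(raw))     # older frames fall out automatically

stream_input = np.stack(frames)        # shape (8, H, W): one stream's input
print(stream_input.shape)
```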
24. Multimodal Deep Q-Network (MDQN)
• The dual-stream ConvNets process the depth and grayscale images independently.
• The robot learns to greet people using a set of four legal actions: waiting, looking towards the human, waving hand and handshaking.
• The objective of the robot is to learn which action to perform in each situation.
25. Reward and action-value function
• The expected total return is the sum of rewards discounted by a factor γ ∈ [0, 1] at each time-step (γ = 0.99 in the proposed work).
• Given that the optimal Q-function Q*(s′, a′) of the sequence s′ at the next time-step is known for all possible actions a′, the optimal policy is to select the action a′ that maximizes the expected value of r + γQ*(s′, a′).
• In DQN, the parameters θ of the Q-network are adjusted iteratively towards the Bellman target by minimizing the loss function
L(θ) = E[(r + γ max_a′ Q(s′, a′; θ) - Q(s, a; θ))²]
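A sketch of the Bellman-target computation behind this loss, with random arrays standing in for the network's Q-value outputs (all sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.99                           # discount factor used in the paper
batch, n_actions = 32, 4               # batch of transitions (s, a, r, s')

q_current = rng.normal(size=(batch, n_actions))   # Q(s, .; theta), network output
q_next = rng.normal(size=(batch, n_actions))      # Q(s', .; theta), network output
actions = rng.integers(0, n_actions, size=batch)  # a taken in each transition
rewards = rng.normal(size=batch)                  # scalar rewards r

targets = rewards + gamma * q_next.max(axis=1)    # Bellman target r + gamma max_a' Q(s', a')
predicted = q_current[np.arange(batch), actions]  # Q(s, a) for the actions taken
loss = np.mean((targets - predicted) ** 2)        # mean squared TD error
print(loss)
```

Minimizing this loss by gradient descent pulls each Q(s, a) towards its Bellman target.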
26. Parameters and agent behavior
• The current parameters are updated by stochastic gradient descent in the direction of the gradient of the loss function with respect to the parameters.
• The agent’s behavior at each time-step is selected by an ε-greedy policy: the greedy strategy is adopted with probability (1 − ε) and the random strategy with probability ε.
• The robot gets a reward of 1 on a successful handshake, −0.1 on an unsuccessful handshake, and 0 for the other three actions.
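The reward scheme can be written down directly (the action names below paraphrase the paper's four legal actions):

```python
# Reward scheme from the paper: +1 for a successful handshake,
# -0.1 for an unsuccessful handshake, 0 for the other three actions.
ACTIONS = ["wait", "look", "wave", "handshake"]

def reward(action, handshake_successful=False):
    if action == "handshake":
        return 1.0 if handshake_successful else -0.1
    return 0.0

print(reward("handshake", True), reward("handshake", False), reward("wave"))
```

Note the asymmetry: only the handshake carries any reward signal at all, so the agent must learn when attempting one is worth the risk of the −0.1 penalty.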
27. Proposed algorithm
• Data generation phase: the system interacts with the environment using the Q-network Q(s, a; θ). The system observes the current scene, which comprises grayscale and depth frames, and takes an action using the ε-greedy strategy. The environment in return provides a scalar reward. The interaction experience e = (sᵢ, aᵢ, rᵢ, sᵢ₊₁) is stored in the replay memory M.
• Training phase: the system uses the collected data, stored in the replay memory M, for training the networks. The hyperparameter n denotes the number of experience replays. For each experience replay, a mini buffer B of 2000 interaction experiences is randomly sampled from the finite-sized replay memory M. The model is trained on mini-batches sampled from buffer B, and the network parameters are updated iteratively.
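The two phases can be sketched with a deque as the replay memory (the toy transitions stand in for real interaction experiences, and the network update itself is omitted):

```python
import random
from collections import deque

M = deque(maxlen=10000)                # finite-sized replay memory M

random.seed(0)
# --- data generation phase (toy transitions as stand-ins) ---
for i in range(5000):
    s, a, r, s_next = i, i % 4, 0.0, i + 1
    M.append((s, a, r, s_next))        # store experience e = (s_i, a_i, r_i, s_{i+1})

# --- training phase ---
n_replays, buffer_size, batch_size = 3, 2000, 25
for _ in range(n_replays):             # n experience replays
    B = random.sample(list(M), buffer_size)   # mini buffer B sampled from M
    batch = random.sample(B, batch_size)      # mini-batch sampled from B
    # ...update the network parameters on `batch` (omitted)...

print(len(M), len(B), len(batch))
```

Sampling uniformly from M breaks the temporal correlation between consecutive experiences, which stabilizes Q-network training.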
28. Evaluation
• To test the model’s performance, a separate test dataset comprising 4480 grayscale and depth frames not seen by the system during learning was collected.
• If the agent’s decision was considered wrong by the majority, the evaluators were asked to agree on the most appropriate action for that scenario.
29. Results
• The authors evaluated the trained y-channel Q-network, the depth-channel Q-network and the MDQN on the test dataset; Table 1 summarizes the performance measures of these trained Q-networks. In Table 1, accuracy corresponds to how often the predictions by the Q-networks were correct.
• The multimodal deep Q-network achieved a maximum accuracy of 95.3%, whereas the y-channel and depth-channel Q-networks achieved 85.9% and 82.6% accuracy, respectively. The results in Table 1 validate that the fusion of the two streams improves the social cognitive ability of the agent.
30. Performance
• This figure shows the performance of the MDQN on the test dataset over a series of episodes. Episode 0 on the plot corresponds to the Q-network with randomly initialized parameters. The plot indicates that the performance of the MDQN agent on the test dataset improves continuously as the agent gains more interaction experience with humans.
31. Conclusions
• In social physical human-robot interaction, it is very difficult to envisage all the possible interaction scenarios the robot can face in the real world; hence programming a social robot is notoriously hard.
• The MDQN agent has learned to give importance to walking trajectories, head orientation, body language and the activity in progress in order to decide its best action.
• Future aims: i) increase the action space instead of limiting it to just four actions; ii) use a recurrent attention model so that the robot can indicate its attention; iii) evaluate the influence of the three actions other than handshake on human behavior.
33. References
• Deep Learning in Robotics: A Review of Recent Research
(Harry A. Pierson, Michael S. Gashler)
• Robot gains Social Intelligence through Multimodal Deep
Reinforcement Learning (Ahmed Hussain Qureshi, Yutaka
Nakamura, Yuichiro Yoshikawa, Hiroshi Ishiguro)