Recurrent Neural Networks have been shown to be very powerful models, as they can propagate context over several time steps. Because of this they can be applied effectively to several problems in Natural Language Processing, such as language modelling, tagging problems, and speech recognition. In this presentation we introduce the basic RNN model and discuss the vanishing gradient problem. We describe LSTM (Long Short Term Memory) and Gated Recurrent Units (GRU). We also discuss bidirectional RNNs with an example. RNN architectures can be considered deep learning systems, where the number of time steps corresponds to the depth of the network. It is also possible to build an RNN with multiple hidden layers, each having recurrent connections from the previous time steps, representing abstraction both in time and in space.
2. Objective
• Overview of Neural Networks
• Recurrent Neural Networks (RNN)
• Bidirectional Recurrent Neural Networks (BRNN)
• Differences between Recursive and Recurrent Neural Networks
• Challenges in implementing RNN: Vanishing Gradient Problem
• Gated Recurrent Units (GRUs)
• Long Short Term Memory (LSTM)
• Applications
3. References (Abridged list)
• Machine Learning, T. Mitchell
• MOOC courses offered by Prof. Andrew Ng, Prof. Yaser Abu-Mostafa, and Geoff Hinton
• CMU videos, Prof. T. Mitchell
• Alex Graves: Supervised Sequence Labelling with Recurrent Neural Networks
• Andrej Karpathy's blogs
• Stanford course CS224d: R. Socher
• Recurrent Neural Network Based Language Models: Mikolov et al.
• Annotating Expressions of Opinions and Emotions in Language: Wiebe et al.
4. Human Cognition
• Most common human cognitive tasks (such as understanding and speaking natural language, recognizing objects, etc.) are highly non-linear.
• Human cognition tasks are often hierarchical.
• Though our brain receives low-level sensory inputs as impulses, we process the input as a whole, recognizing patterns rather than looking at micro-level data.
• Humans learn continuously, often unaided or unsupervised.
• We recognize the same object or pattern from different perspectives. E.g. we may know "home" and "residence" as two separate words, but we interpret them differently in different contexts: "home maker" versus "residence address".
What does it take to build an autonomous car that can drive itself in Bangalore traffic?
5. Quick recap of last lecture
• ML attempts to approximate real-world applications by mathematical models.
• The underlying process behind the given real-world application (that we are trying to model) is called the unknown target function.
• Linear models approximate the real world using a linear function.
• Most real-world applications are non-linear and hierarchical.
• Artificial Neural Networks (ANNs) are non-linear models and are effective for certain classes of applications.
• Each hidden layer represents a particular level of abstraction.
• ANNs are commonly trained using backpropagation algorithms.
• The model parameters are tunable knobs that determine the output of the machine and signify the degrees of freedom.
• The more parameters, the more easily we can fit the training data, but this impacts generalization. Regularization keeps the model parameters under check.
• Traditional ANNs with a large number of hidden layers are hard to train: problems of local minima and vanishing/exploding gradients.
• Deep learning techniques are breakthroughs that enable the realization of deep architectures.
• Recurrent Neural Networks (RNNs), Recursive Neural Networks and Convolutional Neural Networks are specializations of the ANN architecture that handle problems of different natures. For instance, RNNs are effective for time series prediction problems.
• For a brief 5-slide refresher on DNNs see: http://www.slideshare.net/ananth/deep-learningprimer-7june2014
6. Non-linear Models: Neural Networks
• Motivation
• A large number of classification tasks involve inherently highly non-linear target functions – for example, face recognition.
• Though we can transform the input vector into a non-linear form and perform classification with linear models, the model becomes very complex quickly. For example:
• Consider a 10-dimensional input vector that needs to be transformed into a polynomial of degree 3: O(n³) terms.
• Consider the problem of looking at the image of a building and identifying it (say, 100 by 100 pixels).
• Overfitting problems are common when we train more complex models.
• Illustration (on the blackboard):
• The Boolean functions AND and OR can be modelled effectively by linear models.
• A single logistic regression unit can't model more complex Boolean functions such as XOR.
• Cascading logistic regression units can classify complex Boolean target functions effectively (see the sketch after this list).
• It has been shown that with 2 layers of logistic regression units, one can model many complex Boolean expressions effectively.
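To make the XOR claim concrete, here is a minimal sketch (my own illustration, not from the slides; the weight values are hand-picked and any similarly saturating values would do) showing two cascaded layers of logistic units computing XOR as AND(OR, NAND):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Layer 1: two logistic units approximating OR and NAND.
W1 = np.array([[20.0, 20.0], [-20.0, -20.0]])  # one row of weights per hidden unit
b1 = np.array([-10.0, 30.0])                   # OR fires if any input is 1; NAND unless both are
# Layer 2: one logistic unit approximating AND of the two hidden units -> XOR.
W2 = np.array([20.0, 20.0])
b2 = -30.0

h = logistic(X @ W1.T + b1)   # hidden activations: [OR(x), NAND(x)]
y = logistic(h @ W2 + b2)     # output: AND(OR, NAND) = XOR
print(np.round(y, 3))         # approximately [0, 1, 1, 0]
```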
7. Neural Networks (Fig: courtesy R. Socher)
Neural networks can be built for different input and output types.
- Outputs can be:
- Linear, single output (linear)
- Linear, multiple outputs (linear)
- Single binary output (logistic)
- Multiple binary outputs (logistic)
- 1-of-k multinomial output (softmax)
- Inputs can be:
- A scalar number
- A vector of real numbers
- A vector of binary values
Goal of training: given the training data (inputs, targets) and the architecture, determine the model parameters.
Model parameters for a 3-layer network:
- The weight matrix from the input layer to the hidden layer (Wji)
- The weight matrix from the hidden layer to the output layer (Wkj)
- Bias terms for the hidden layer
- Bias terms for the output layer
Our strategy will be:
- Compute the error at the output
- Determine the contribution of each parameter to the error by taking the derivative of the error with respect to that parameter
- Update each parameter commensurate with the error it contributed
8. Design Choices
• When building a neural network, the designer chooses the following hyperparameters and non-linearities based on the application characteristics (a sketch of such choices follows this list):
• Number of hidden layers
• Number of hidden units in each layer
• Learning rate
• Regularization coefficient
• Number of outputs
• Type of output (linear, logistic, softmax)
• Choice of non-linearity at the output layer and hidden layers (see next slide)
• Input representation and dimensionality
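As an illustration, here is a minimal sketch (mine, not from the slides) of how these design choices map onto code using the Keras API; the input dimensionality, layer sizes, learning rate, and regularization strength are arbitrary placeholder values:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# Each line corresponds to one of the design choices listed above.
model = keras.Sequential([
    keras.Input(shape=(100,)),                                # input representation and dimensionality
    layers.Dense(64, activation="tanh",                       # hidden units and non-linearity
                 kernel_regularizer=regularizers.l2(1e-4)),   # regularization coefficient
    layers.Dense(64, activation="tanh"),                      # number of hidden layers
    layers.Dense(10, activation="softmax"),                   # number and type of outputs (1-of-k softmax)
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),  # learning rate
              loss="categorical_crossentropy")
```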
10. Objective Functions and gradients (derivation of the gradient on the board)
• Linear – mean squared error: $E(w) = \frac{1}{2N} \sum_{n=1}^{N} (t_n - y_n)^2$
• Logistic with binary classification: cross-entropy error
• Logistic with $k$ outputs, $k > 2$: cross-entropy error
• Softmax, 1-of-K multinomial classification: cross-entropy error; minimize the negative log-likelihood (NLL)
• In all the above cases we can show that the gradient is $(y_k - t_k)$, where $y_k$ is the predicted output for output unit $k$ and $t_k$ is the corresponding target. A numeric check of this gradient follows.
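The $(y_k - t_k)$ form can be checked numerically; below is a small sketch (my own, with arbitrary activations) comparing the analytic softmax cross-entropy gradient against central finite differences:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, t):
    return -np.sum(t * np.log(softmax(z)))

z = np.array([0.5, -1.2, 2.0])   # pre-softmax activations (arbitrary)
t = np.array([0.0, 1.0, 0.0])    # 1-of-K target

analytic = softmax(z) - t        # the (y_k - t_k) gradient from the slide
numeric = np.array([
    (cross_entropy(z + eps, t) - cross_entropy(z - eps, t)) / (2 * 1e-6)
    for eps in np.eye(3) * 1e-6  # perturb one activation at a time
])
print(np.allclose(analytic, numeric, atol=1e-5))   # True
```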
11. High Level Backpropagation Algorithm
• Apply the input vector to the network and forward propagate. This yields the activations for the hidden layer(s) and the output layer:
$net_j = \sum_i w_{ji} z_i$ and $z_j = h(net_j)$, where $h$ is your choice of non-linearity; usually it is sigmoid or tanh, and the Rectified Linear Unit (ReLU) is also used.
• Evaluate the error $\delta_k$ for all the output units:
$\delta_k = o_k - t_k$, where $o_k$ is the output produced by the model and $t_k$ is the target provided in the training dataset.
• Backpropagate the $\delta$'s to obtain $\delta_j$ for each hidden unit $j$:
$\delta_j = h'(z_j) \sum_k w_{kj} \delta_k$
• Evaluate the required derivatives:
$\frac{\partial E}{\partial w_{ji}} = \delta_j z_i$
A sketch putting these steps together follows.
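Here is a minimal sketch (my own illustration; sizes are arbitrary and biases are omitted for brevity) of one gradient step for a single-hidden-layer network with a sigmoid hidden layer and linear outputs, following the four steps above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)                # input vector z_i
t = rng.normal(size=3)                # target vector t_k
W1 = rng.normal(size=(4, 5)) * 0.1    # input -> hidden weights w_ji
W2 = rng.normal(size=(3, 4)) * 0.1    # hidden -> output weights w_kj
lr = 0.1

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Forward pass: net_j = sum_i w_ji z_i, z_j = h(net_j)
z = sigmoid(W1 @ x)
o = W2 @ z                            # linear output units

# Error at the output: delta_k = o_k - t_k
delta_k = o - t

# Backpropagate: delta_j = h'(z_j) * sum_k w_kj delta_k, with sigmoid h' = z_j (1 - z_j)
delta_j = z * (1 - z) * (W2.T @ delta_k)

# Derivatives dE/dw = delta * activation, then update each parameter
W2 -= lr * np.outer(delta_k, z)
W1 -= lr * np.outer(delta_j, x)
```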
13. RNN – Some toy applications to evaluate the system
• Toy applications, even if they are contrived, often serve the following purposes:
• Test the correctness of the implementation of the model
• Compare the performance of the new model against older ones
• Example applications for verifying the performance of an RNN:
• Arithmetic progression (will be demoed now)
• Process an input of the form aⁿbʲ and return true if n = j
• Count the number of words in a sequence, ignoring words that are enclosed in parentheses
• Compute the XOR of the bits of a sequence up to each time step t (a data generator for this task is sketched below)
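As one concrete instance, here is a small sketch (my own) that generates training data for the last toy task, the running XOR (prefix parity) of a bit sequence, which an RNN must carry state across time steps to solve:

```python
import numpy as np

def make_xor_sequences(num_seqs, seq_len, seed=0):
    """Inputs: random bit sequences. Targets: XOR of all bits seen so far."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(num_seqs, seq_len))
    y = np.bitwise_xor.accumulate(x, axis=1)   # running parity at each time step
    return x, y

x, y = make_xor_sequences(3, 8)
print(x[0])   # e.g. [1 0 1 1 0 0 1 0]
print(y[0])   # e.g. [1 1 0 1 1 1 0 0]
```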
16. Training Algorithm (Fig: Xiaodong He et al., Microsoft Research)
• Different training procedures exist; we will use Backpropagation Through Time (BPTT).
• Similar to standard backpropagation, BPTT involves applying the chain rule repeatedly and backpropagating the deltas.
• However, one key subtlety is that, for RNNs, the cost function depends on the activation of the hidden layer not only through its influence on the output layer, but also through its influence on the hidden layer of the next time step.
17. A sketch of implementation – Forward pass
Forward propagation – key steps:
for t from 1 to T:
1. Compute the hidden activations for time t from the current input and the hidden activations for (t-1)
2. For all j in the output units, compute netj (the dot product of WS with ht)
3. Apply the softmax function to the netj to get the probability distribution for time t
18. A sketch of implementation – Backpropagation
Backpropagation for an RNN – key steps:
for t from T down to 1:
1. Compute the delta at the output (dy)
2. Compute Δw for the (softmax) weight matrix WS
3. Determine the bias terms
4. Backpropagate and compute the delta for the hidden layer (dhraw)
5. Compute the updates to the weight matrices Whh and Whx
6. Perform BPTT by computing the error to be propagated to the previous time step (dhnext)
A sketch combining the forward and backward passes follows.
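The two sketches above can be made concrete in a few lines; the following is a minimal sketch in the spirit of Andrej Karpathy's min-char-rnn (the names Whx, Whh, WS, dy, dhraw, dhnext follow the slides; the sizes and data are placeholders, and biases are omitted):

```python
import numpy as np

V, H, T = 10, 16, 5                    # vocab size, hidden size, sequence length
rng = np.random.default_rng(0)
Whx = rng.normal(size=(H, V)) * 0.01   # input -> hidden
Whh = rng.normal(size=(H, H)) * 0.01   # hidden -> hidden (recurrent)
WS  = rng.normal(size=(V, H)) * 0.01   # hidden -> softmax output

inputs  = rng.integers(0, V, size=T)   # token ids at each step
targets = rng.integers(0, V, size=T)   # next-token ids

# Forward pass (slide 17)
xs, hs, ps = {}, {-1: np.zeros(H)}, {}
for t in range(T):
    xs[t] = np.eye(V)[inputs[t]]                           # one-hot input
    hs[t] = np.tanh(Whx @ xs[t] + Whh @ hs[t - 1])         # step 1: hidden activation
    net = WS @ hs[t]                                       # step 2: output pre-activations
    ps[t] = np.exp(net - net.max()); ps[t] /= ps[t].sum()  # step 3: softmax distribution

# Backward pass (slide 18)
dWhx, dWhh, dWS = np.zeros_like(Whx), np.zeros_like(Whh), np.zeros_like(WS)
dhnext = np.zeros(H)
for t in reversed(range(T)):
    dy = ps[t].copy(); dy[targets[t]] -= 1    # step 1: softmax delta (y - t)
    dWS += np.outer(dy, hs[t])                # step 2: update for WS
    dh = WS.T @ dy + dhnext                   # gradient from output AND the next time step
    dhraw = (1 - hs[t] ** 2) * dh             # step 4: backprop through tanh
    dWhx += np.outer(dhraw, xs[t])            # step 5: updates for Whx and Whh
    dWhh += np.outer(dhraw, hs[t - 1])
    dhnext = Whh.T @ dhraw                    # step 6: error propagated to step t-1
```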
19. Applications
• Language models (Mikolov et al.)
• The input at time t is the corresponding word vector
• The output is the predicted next word
• Language translation
• Slot filling (see next slide)
• Character-level LM (Andrej Karpathy)
• Image captioning and description
• Speech recognition
• Question answering systems (we are doing a special-topic project on this)
• Semantic role labeling (we are doing a special-topic project on this)
• NER (demo done last week!)
• And many more sequence-based applications
20. Semantic Slot Filling Application Example
Many problems in information extraction require generating a data structure from a natural language input. One possible way to cast this problem is to treat it as a slot filling exercise. This can be viewed as a sequential tagging problem, and an RNN can be used for the tagging.
21. Building an NER with RNN
• Traditional MEMM- or CRF-based NER design techniques require domain expertise when designing the feature vector.
• RNN-based NERs don't need feature engineering; with some minimal text preprocessing (such as removing infrequent words), one can build an NER that provides comparable performance.
• Steps (a sketch follows this list):
• Preprocess the words: tokenization and simple task-dependent preprocessing as needed
• Get word vectors (this helps reduce the dimensionality)
• Form the training dataset
• Train the NER
• Predict
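A minimal sketch of such a tagger in Keras (my own illustration; the vocabulary size, embedding dimension, tag count, and random arrays are placeholders that real preprocessed data would replace):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, embed_dim, num_tags, seq_len = 5000, 50, 9, 30

model = keras.Sequential([
    keras.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, embed_dim),        # word vectors instead of hand-built features
    layers.SimpleRNN(64, return_sequences=True),    # one hidden state per token
    layers.Dense(num_tags, activation="softmax"),   # a tag distribution per token
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Placeholder data: token ids in, one tag id per token out.
x = np.random.randint(0, vocab_size, size=(100, seq_len))
y = np.random.randint(0, num_tags, size=(100, seq_len))
model.fit(x, y, epochs=1, verbose=0)
```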
22. Encoder Decoder Design
• Example: machine translation
• Use two RNNs, one for encoding and the other for decoding
• The activations of the final stage of the encoder are fed to the decoder
• This is useful when the output sequence is of variable length and when the entire input sequence can be processed before generating the output (a sketch follows below)
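A compressed sketch of this design with two LSTMs (my own, following the common Keras seq2seq pattern; the vocabulary sizes and dimensions are placeholders):

```python
from tensorflow import keras
from tensorflow.keras import layers

src_vocab, tgt_vocab, dim = 8000, 8000, 128

# Encoder RNN: only its final states are kept, as a summary of the input sequence.
enc_in = keras.Input(shape=(None,))
enc_emb = layers.Embedding(src_vocab, dim)(enc_in)
_, state_h, state_c = layers.LSTM(dim, return_state=True)(enc_emb)

# Decoder RNN: initialized with the encoder's final activations.
dec_in = keras.Input(shape=(None,))
dec_emb = layers.Embedding(tgt_vocab, dim)(dec_in)
dec_out = layers.LSTM(dim, return_sequences=True)(dec_emb, initial_state=[state_h, state_c])
probs = layers.Dense(tgt_vocab, activation="softmax")(dec_out)  # next target word at each step

model = keras.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```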
24. Clipping
• Key idea: mitigate the exploding gradient problem by choosing a threshold and clipping the gradient to that threshold (see the sketch below).
• While this is a simple workaround, it is crude and might hamper performance.
• Better solutions: LSTMs and variants like GRUs (topic of the next class!)
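A minimal sketch (my own) of the norm-based variant of clipping, which rescales the whole gradient vector rather than clipping each element:

```python
import numpy as np

def clip_gradient(grad, threshold=5.0):
    """Rescale the gradient if its norm exceeds the threshold."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = np.array([30.0, -40.0])   # an exploding gradient (norm 50)
print(clip_gradient(g))       # rescaled to norm 5: [ 3. -4.]
```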
25. Bidirectional RNNs
• Key idea:
• The output at a step t depends not only on the past steps (t-1, ..., 1) but also on the future steps (t+1, ..., T).
• The forward pass abstracts and summarizes the context in the forward direction, while the backward pass does the same from the reverse direction.
• Examples: fill in the blanks below
• I want ______ buy a good book _______ NLP
• I want ______ Mercedes
• Let's illustrate bidirectional RNNs with an application example from: Opinion Mining with Deep Recurrent Nets, Irsoy and Cardie, 2014
26. Problem Statement (Ref: Irsoy and Cardie 2014)
• Given a sentence, classify each word into one of the tags: {O, B-ESE, I-ESE, B-DSE, I-DSE}
• Definitions:
• Direct Subjective Expressions (DSE): explicit mentions of private states, or speech events expressing private states
• Expressive Subjective Expressions (ESE): expressions that indicate sentiment, emotion, etc., without explicitly conveying them
27. Bidirectional RNN Model
• Input: a sequence of words. At each time step t a single token (represented by its word vector) is input to the RNN. (Black dots)
• Output: at each time step t, one of the possible tags from the tagset is output by the RNN. (Red dots)
• Memory: the hidden unit, computed from the current word and the past hidden values; it summarizes the sentence up to that time. (Orange dots) A sketch of such a tagger follows below.
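In Keras terms, a minimal sketch of this tagging model might look as follows (my own illustration; the five tags match the tagset of slide 26, everything else is a placeholder):

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, embed_dim, num_tags = 20000, 100, 5   # {O, B-ESE, I-ESE, B-DSE, I-DSE}

model = keras.Sequential([
    keras.Input(shape=(None,)),                    # a sequence of token ids
    layers.Embedding(vocab_size, embed_dim),       # word vectors (black dots)
    layers.Bidirectional(                          # forward + backward memory (orange dots)
        layers.SimpleRNN(64, return_sequences=True)),
    layers.Dense(num_tags, activation="softmax"),  # one tag distribution per word (red dots)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```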
29. Deep Bidirectional RNNs
• RNNs are deep networks, with depth in time.
• When unfolded, they are multi-layer feedforward neural networks with as many hidden layers as input tokens.
• However, this doesn't represent hierarchical processing of the data across time units, as we still use the same U, V, W.
• A stacked deep learner supports hierarchical computations, where each hidden layer corresponds to a degree of abstraction.
• Stacking simple RNNs on top of one another has the potential to perform hierarchical computations while moving over the time axis (see the sketch below).
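A minimal sketch (mine) of such stacking in Keras; each recurrent layer must return its full sequence so the layer above can consume one hidden state per time step, giving depth in space on top of depth in time:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(None, 100)),                # sequences of 100-dim word vectors
    layers.SimpleRNN(64, return_sequences=True),   # abstraction level 1
    layers.SimpleRNN(64, return_sequences=True),   # abstraction level 2 (depth in space)
    layers.Dense(5, activation="softmax"),         # per-step tag output
])
```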
30. Training the BRNN (Ref: Alex Graves, Supervised Sequence Labelling with Recurrent Neural Networks)
Forward Pass
for t = 1 to T do
  forward pass for the forward hidden layer, storing activations at each time step
for t = T to 1 do
  forward pass for the backward hidden layer, storing activations at each time step
for all t, in any order do
  forward pass for the output layer, using the stored activations from both hidden layers
Backward Pass
for all t, in any order do
  backward pass for the output layer, storing δ terms at each time step
for t = T to 1 do
  BPTT backward pass for the forward hidden layer, using the stored δ terms from the output layer
for t = 1 to T do
  BPTT backward pass for the backward hidden layer, using the stored δ terms from the output layer
31. Long Short Term Memory (LSTM): Motivation (1 of 2)
• Consider the cases below, where a customer is interested in an iPhone 6s Plus, which he needs to gift to his father on his birthday on Oct 2. He goes through a review that reads as below:
• Review 1: Apple has unveiled the iPhone 6s and iPhone 6s Plus – described by CEO Tim Cook as the "most advanced phones ever" – at a special event in San Francisco on Wednesday. Pre-orders for the new iPhone models begin this Saturday, and they have a launch date (start shipping) in twelve countries on September 25. The prices for the iPhone 6s and iPhone 6s Plus remain unchanged compared to their predecessors: $649 for the 16GB iPhone 6s, $749 for the 64GB iPhone 6s and 16GB iPhone 6s Plus, $849 for the 128GB iPhone 6s and 64GB iPhone 6s Plus, and $949 for the 128GB iPhone 6s Plus (all US prices). There's no word yet on the India price or launch date.
• How would we design an RNN that advises him: buy / no buy?
• Suppose the customer doesn't have the time constraint above but has a price constraint, with a budget of around Rs 50K; what would be our decision?
• Suppose there is another review article that reads as below:
• Review 2: Priced at INR 75K for the low-end model, the Apple iPhone boasts an ultra-slim device with an awesome camera. Apple's CEO, while showcasing the device at San Francisco, announced its availability in 12 countries including India. This is the best phone that one can flaunt, if he can afford it!
32. LSTM Motivation (2 of 2)
• Observations from the case studies:
• A product review has many sentences, and the pieces of information we may be interested in for making our buying decision are found at various places in the text.
• Certain aspects are "must haves" that can't be compromised. For instance, if a customer needs an item within a few days, he can't wait for it indefinitely. Similarly, if he has a budget constraint, he can't buy the item even if it is the best fit for his other requirements.
• If we find a sentence implying that a must-have feature can't be met, the rest of the sentences don't contribute to the buying decision.
• Hence the context plays a vital role in the classification decision.
• In a large text (say a 5-page product review) with over 100 sentences, the first sentence alone may determine the decision.
• While an RNN can carry the context, there are 2 limitations:
• Due to the vanishing gradient problem, an RNN's effectiveness is limited when it needs to reach deep into the context.
• There is no finer control over which part of the context needs to be carried forward and how much of the past needs to be "forgotten".
• LSTM is proposed as a solution to address these issues.
33. The Five Key Architectural Elements of LSTM (each appears in the step sketch below)
• Input gate
• Forget gate
• Cell
• Output gate
• Hidden state output
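For concreteness, here is a minimal sketch (my own, following the standard 3-gate LSTM equations without peephole connections) of a single LSTM time step; the five elements above appear as i, f, c, o, and h:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b hold the stacked parameters of the 4 transforms."""
    z = W @ x + U @ h_prev + b
    H = h_prev.size
    i = sigmoid(z[0*H:1*H])   # input gate: how much new content to write
    f = sigmoid(z[1*H:2*H])   # forget gate: how much of the old cell to keep
    o = sigmoid(z[2*H:3*H])   # output gate: how much of the cell to expose
    g = np.tanh(z[3*H:4*H])   # candidate cell content
    c = f * c_prev + i * g    # cell: the preserved memory
    h = o * np.tanh(c)        # hidden state output
    return h, c

# Placeholder sizes and randomly initialized parameters
D, H = 10, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(4*H, D)) * 0.1
U = rng.normal(size=(4*H, H)) * 0.1
b = np.zeros(4*H)
h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, U, b)
```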
34. Effect of LSTM on Sensitivity (Ref: Graves)
• In a simple RNN with sigmoid or tanh neuron units, the later output nodes of the network are less sensitive to the input at time t = 1. This happens due to the vanishing gradient problem.
• An LSTM allows the preservation of gradients: the memory cell remembers the first input as long as the forget gate is open and the input gate is closed.
• The output gate provides finer control, switching the output layer on or off without altering the cell contents.
35. Implementing an LSTM: Notes for Practitioners
• Some points to take into account when choosing an LSTM architecture:
• The LSTM has many variants compared to the architecture proposed in the original paper by Sepp Hochreiter and Jürgen Schmidhuber.
• The LSTM initially didn't have a forget gate; it was added later.
• Most current implementations are based on the 3-gate LSTM model (input, forget, output).
• Some variants adopt a simpler version; e.g. peephole connections may be omitted (a GRU sketch follows after this list).
• Training is a bit more complex than for a feedforward ANN.
• Many training techniques are reported; for BPTT see Alex Graves's thesis.
• See Theano for a Python DL library.
• LSTMs can be stacked vertically to create a deep LSTM network.
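As an example of such a simpler gated variant, here is a minimal sketch (my own, following the standard GRU equations) of a single GRU step: it merges the cell and hidden state and uses only two gates:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev, W, U, b):
    """One GRU step; W, U, b hold the stacked parameters of the 3 transforms."""
    H = h_prev.size
    r = sigmoid(W[0*H:1*H] @ x + U[0*H:1*H] @ h_prev + b[0*H:1*H])        # reset gate
    z = sigmoid(W[1*H:2*H] @ x + U[1*H:2*H] @ h_prev + b[1*H:2*H])        # update gate
    g = np.tanh(W[2*H:3*H] @ x + U[2*H:3*H] @ (r * h_prev) + b[2*H:3*H])  # candidate state
    h = (1 - z) * h_prev + z * g   # interpolate between old state and candidate
    return h

# Placeholder sizes and randomly initialized parameters
D, H = 10, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(3*H, D)) * 0.1
U = rng.normal(size=(3*H, H)) * 0.1
b = np.zeros(3*H)
h = gru_step(rng.normal(size=D), np.zeros(H), W, U, b)
```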