The document summarizes Tomas Mikolov's talk on recurrent neural networks and directions for future research. The key points are:
1) Recurrent networks have seen renewed success since 2010 due to simple tricks like gradient clipping that allow them to be trained more stably. Structurally constrained recurrent networks (SCRNs) provide longer short-term memory than simple RNNs without complex architectures.
2) While RNNs have achieved strong performance on many tasks, they struggle with algorithmic patterns requiring memorization of sequences or counting. Stack augmented RNNs add structured memory to address such limitations.
3) To build truly intelligent machines, we need to focus on developing skills like communication and the ability to learn new tasks quickly from few examples.
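The gradient-clipping trick mentioned in point 1 can be sketched as follows (a minimal NumPy illustration of norm-based clipping, not the exact procedure from the talk; the threshold value is illustrative):

```python
import numpy as np

def clip_gradient(grad, max_norm=5.0):
    """Rescale the gradient if its L2 norm exceeds max_norm.

    This keeps occasional exploding gradients in recurrent nets
    from destabilizing training, while leaving small gradients untouched.
    """
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([30.0, 40.0])            # norm 50, well above the threshold
clipped = clip_gradient(g, max_norm=5.0)
print(np.linalg.norm(clipped))        # 5.0
```

The direction of the gradient is preserved; only its magnitude is capped, which is why this simple trick stabilizes training without changing the descent direction.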
Deep Learning Architectures for NLP (Hungarian NLP Meetup 2016-09-07) – Márton Miháltz
A brief survey of current deep learning/neural network methods currently used in NLP: recurrent networks (LSTM, GRU), recursive networks, convolutional networks, hybrid architectures, attention models. We will look at specific papers in the literature, targeting sentiment analysis, text classification and other tasks.
Word Embeddings, Application of Sequence modelling, Recurrent neural network , drawback of recurrent neural networks, gated recurrent unit, long short term memory unit, Attention Mechanism
Tutorial on Deep Learning in Recommender Systems, LARS Summer School 2019 – Anoop Deoras
I had a fun time giving a tutorial on deep learning in recommender systems at the Latin America School on Recommender Systems (LARS) in Fortaleza, Brazil.
Natural language processing techniques transition from machine learning to de... – Divya Gera
Natural Language processing, its need, business applications, NLP with machine learning, Text data preprocessing for machine learning, NLP with Deep Learning.
Visual-Semantic Embeddings: some thoughts on Language – Roelof Pieters
Language technology is rapidly evolving. A resurgence in the use of distributed semantic representations and word embeddings, combined with the rise of deep neural networks has led to new approaches and new state of the art results in many natural language processing tasks. One such exciting - and most recent - trend can be seen in multimodal approaches fusing techniques and models of natural language processing (NLP) with that of computer vision.
The talk aims to give an overview of the NLP part of this trend. It will start with a short overview of the challenges in creating deep networks for language, what makes for a “good” language model, and the specific requirements of semantic word spaces for multi-modal embeddings.
Deep Learning for Information Retrieval: Models, Progress, & Opportunities – Matthew Lease
Talk given at the 8th Forum for Information Retrieval Evaluation (FIRE, http://fire.irsi.res.in/fire/2016/), December 10, 2016, and at the Qatar Computing Research Institute (QCRI), December 15, 2016.
This is the first lecture on Applied Machine Learning. The course focuses on the emerging and modern aspects of this subject such as Deep Learning, Recurrent and Recursive Neural Networks (RNN), Long Short Term Memory (LSTM), Convolution Neural Networks (CNN), Hidden Markov Models (HMM). It deals with several application areas such as Natural Language Processing, Image Understanding etc. This presentation provides the landscape.
ODSC East: Effective Transfer Learning for NLP – indico data
Presented by indico co-founder Madison May at ODSC East.
Abstract: Transfer learning, the practice of applying knowledge gained on one machine learning task to aid the solution of a second task, has seen historic success in the field of computer vision. The output representations of generic image classification models trained on ImageNet have been leveraged to build models that detect the presence of custom objects in natural images. Image classification tasks that would typically require hundreds of thousands of images can be tackled with mere dozens of training examples per class thanks to the use of these pretrained representations. The field of natural language processing, however, has seen more limited gains from transfer learning, with most approaches limited to the use of pretrained word representations. In this talk, we explore parameter- and data-efficient mechanisms for transfer learning on text, and show practical improvements on real-world tasks. In addition, we demo the use of Enso, a newly open-sourced library designed to simplify benchmarking of transfer learning methods on a variety of target tasks. Enso provides tools for the fair comparison of varied feature representations and target task models as the amount of training data made available to the target model is incrementally increased.
At Return Path, we used a deep learning-inspired machine-learning algorithm called word2vec and the data in our Consumer Data Stream to find interesting relationships between email senders.
Representation Learning of Vectors of Words and Phrases – Felipe Moraes
A talk about representation learning using word vectors such as Word2Vec and Paragraph Vector. It also introduces neural network language models (NNLMs) and shows some applications of NNLMs, such as sentiment analysis and information retrieval.
Deep Learning Models for Question Answering – Sujit Pal
Talk about a hobby project to apply Deep Learning models to predict answers to 8th grade science multiple choice questions for the Allen AI challenge on Kaggle.
Artificial Intelligence, Machine Learning and Deep Learning – Sujit Pal
Slides for talk Abhishek Sharma and I gave at the Gennovation tech talks (https://gennovationtalks.com/) at Genesis. The talk was part of outreach for the Deep Learning Enthusiasts meetup group at San Francisco. My part of the talk is covered from slides 19-34.
[KDD 2018 tutorial] End-to-end goal-oriented question answering systems – Qi He
Version 2.0: an updated version, with references, of the old version (https://www.slideshare.net/QiHe2/kdd-2018-tutorial-end-toend-goaloriented-question-answering-systems).
08/22/2018: the old version was deleted to reduce confusion.
A Simple Introduction to Word Embeddings – Bhaskar Mitra
In information retrieval there is a long history of learning vector representations for words. In recent times, neural word embeddings have gained significant popularity for many natural language processing tasks, such as word analogy and machine translation. The goal of this talk is to introduce basic intuitions behind these simple but elegant models of text representation. We will start our discussion with classic vector space models and then make our way to recently proposed neural word embeddings. We will see how these models can be useful for analogical reasoning as well applied to many information retrieval tasks.
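The analogical-reasoning property mentioned above can be illustrated with a toy example. The 2-D vectors below are hand-picked for illustration, not a trained embedding; a real model would learn them from text:

```python
import numpy as np

# Hypothetical toy vectors: dimension 0 loosely encodes "royalty",
# dimension 1 loosely encodes "male".
vocab = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.1, 0.8]),
    "woman": np.array([0.1, 0.2]),
    "apple": np.array([0.5, 0.5]),
}

def most_similar(target, exclude):
    """Return the word whose vector is closest (by cosine) to target."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(vocab[w], target))

# The classic analogy: king - man + woman ≈ queen
result = most_similar(vocab["king"] - vocab["man"] + vocab["woman"],
                      exclude={"king", "man", "woman"})
print(result)  # queen
```

The arithmetic works here because the offset between "king" and "man" matches the offset between "queen" and "woman", which is exactly the regularity that trained word embeddings tend to exhibit.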
A tutorial on query auto-completion (QAC), drawing on more than 10 search conference papers from recent years. It covers the development of QAC, personalized QAC, time-sensitive QAC, QAC on mobile, and the future of QAC.
SIGIR 2016 presentation slide for paper: Xin Qian, Jimmy Lin, and Adam Roegiest. Interleaved Evaluation for Retrospective Summarization and Prospective Notification on Document Streams. Proceedings of the 39th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016), pages 175-184, July 2016, Pisa, Italy.
Natural Language Processing with Graph Databases and Neo4j – William Lyon
Originally presented at DataDay Texas in Austin, this presentation shows how a graph database such as Neo4j can be used for common natural language processing tasks, such as building a word adjacency graph, mining word associations, summarization and keyword extraction and content recommendation.
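The word adjacency graph mentioned above can be sketched without a graph database (a plain-Python illustration of the idea; the talk itself uses Neo4j and Cypher):

```python
from collections import defaultdict

def word_adjacency_graph(text):
    """Build a weighted, directed graph of words that appear next to
    each other; edge weight is the co-occurrence count."""
    words = text.lower().split()
    graph = defaultdict(lambda: defaultdict(int))
    for a, b in zip(words, words[1:]):
        graph[a][b] += 1
    return graph

g = word_adjacency_graph("the quick brown fox jumps over the quick dog")
print(g["the"]["quick"])   # 2 — "the quick" occurs twice
```

In Neo4j the same structure would be word nodes connected by relationships carrying the count as a property, which then supports association mining and keyword extraction via graph queries.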
Continuous representations of words and documents, recently referred to as word embeddings, have demonstrated large advances in many natural language processing tasks.
In this presentation we will provide an introduction to the most common methods of learning these representations, as well as methods that predate the recent advances in deep learning, such as dimensionality reduction on the word co-occurrence matrix.
Moreover, we will present the continuous bag-of-words model (CBOW), one of the most successful models for word embeddings and one of the core models in word2vec, and briefly survey models that build representations for other tasks, such as knowledge base embeddings.
Finally, we will motivate the potential of using such embeddings for many tasks that could be of importance for the group, such as semantic similarity, document clustering and retrieval.
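The CBOW model mentioned above predicts a centre word from the average of its context-word embeddings. A minimal NumPy sketch of the forward pass (toy vocabulary size and dimensions, random untrained weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 10, 4

W_in = rng.normal(size=(vocab_size, dim))    # input (context) embeddings
W_out = rng.normal(size=(vocab_size, dim))   # output (target) embeddings

def cbow_forward(context_ids):
    """Score every vocabulary word as the centre word, given a context."""
    h = W_in[context_ids].mean(axis=0)       # average context embedding
    scores = W_out @ h
    exp = np.exp(scores - scores.max())      # numerically stable softmax
    return exp / exp.sum()

probs = cbow_forward([2, 3, 5, 6])           # hypothetical context word ids
print(probs.shape, probs.sum())
```

Training adjusts W_in and W_out so that the true centre word gets high probability; the learned rows of W_in are the word vectors that word2vec exposes.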
Recurrent Neural Networks have been shown to be very powerful models, as they can propagate context over several time steps. Because of this, they can be applied effectively to several problems in Natural Language Processing, such as language modelling, tagging problems, and speech recognition. In this presentation we introduce the basic RNN model and discuss the vanishing gradient problem. We describe LSTM (Long Short Term Memory) and Gated Recurrent Units (GRU). We also discuss bidirectional RNNs with an example. RNN architectures can be considered deep learning systems in which the number of time steps is the depth of the network. It is also possible to build an RNN with multiple hidden layers, each having recurrent connections from the previous time steps, representing abstraction both in time and in space.
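The basic RNN recurrence described above, h_t = tanh(W_x x_t + W_h h_{t-1}), can be sketched as follows (toy sizes, random weights; the repeated multiplication by W_h during backpropagation through time is what makes gradients vanish or explode):

```python
import numpy as np

rng = np.random.default_rng(1)
input_dim, hidden_dim = 3, 5

W_x = rng.normal(scale=0.5, size=(hidden_dim, input_dim))
W_h = rng.normal(scale=0.5, size=(hidden_dim, hidden_dim))

def rnn_forward(xs):
    """Run a simple (Elman) RNN over a sequence of input vectors,
    returning the hidden state at every time step."""
    h = np.zeros(hidden_dim)
    states = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h)   # context flows through h
        states.append(h)
    return states

seq = [rng.normal(size=input_dim) for _ in range(4)]
states = rnn_forward(seq)
print(len(states), states[-1].shape)
```

LSTM and GRU cells replace the plain tanh update with gated updates, giving the network an additive path through time that preserves gradients over longer spans.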
Over the last two years, the field of Natural Language Processing (NLP) has witnessed the emergence of transfer learning methods and architectures which significantly improved upon the state of the art on pretty much every NLP task.
The wide availability and ease of integration of these transfer learning models are strong indicators that these methods will become a common tool in the NLP landscape as well as a major research direction.
In this talk, I'll present a quick overview of modern transfer learning methods in NLP and review examples and case studies on how these models can be integrated and adapted in downstream NLP tasks, focusing on open-source solutions.
Website: https://fwdays.com/event/data-science-fwdays-2019/review/transfer-learning-in-nlp
Deep learning: the future of recommendations – Balázs Hidasi
An informative talk about deep learning and its potential uses in recommender systems. Presented at the Budapest Startup Safary, 21 April, 2016.
The breakthroughs of the last decade in neural network research and the rapid increase in computational power resulted in the revival of deep neural networks and of the field focusing on their training: deep learning. Deep learning methods have succeeded in complex tasks where other machine learning methods have failed, such as computer vision and natural language processing. Recently, deep learning has begun to gain ground in recommender systems as well. This talk introduces deep learning and its applications, with emphasis on how deep learning methods can solve long-standing recommendation problems.
Building a Neural Machine Translation System From Scratch – Natasha Latysheva
Human languages are complex, diverse and riddled with exceptions – translating between different languages is therefore a highly challenging technical problem. Deep learning approaches have proved powerful in modelling the intricacies of language, and have surpassed all statistics-based methods for automated translation. This session begins with an introduction to the problem of machine translation and discusses the two dominant neural architectures for solving it – recurrent neural networks and transformers. A practical overview of the workflow involved in training, optimising and adapting a competitive neural machine translation system is provided. Attendees will gain an understanding of the internal workings and capabilities of state-of-the-art systems for automatic translation, as well as an appreciation of the key challenges and open problems in the field.
Thomas Wolf, "An Introduction to Transfer Learning and Hugging Face" – Fwdays
In this talk I'll start by introducing the recent breakthroughs in NLP that resulted from the combination of Transfer Learning schemes and Transformer architectures. The second part of the talk will be dedicated to an introduction of the open-source tools released by Hugging Face, in particular our transformers, tokenizers, and NLP libraries as well as our distilled and pruned models.
Feature selection using Deep Neural Networks – March 18, 2016, CSI 991, Kevin Ham
Neural networks basics: the perceptron (performance equivalent to the least-mean-squares algorithm, i.e. linear regression); activation functions (sigmoid, hyperbolic tangent); multi-layer perceptrons (chains of perceptrons that perform feature extraction); training the network (training set, validation set, generalization set, backpropagation).
Perceptron and activation function: the perceptron is the basic building block of neural networks (1). It computes a weighted sum of its inputs (w1, w2, w3) plus a bias, where the bias shifts the y-intercept; an output is produced when the activation threshold is exceeded. The activation function must be differentiable.
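The perceptron described on this slide — a weighted sum of inputs plus a bias, passed through a differentiable activation — looks like this in code (a minimal sketch with made-up weights and inputs):

```python
import numpy as np

def sigmoid(z):
    """A differentiable activation function, as the slide requires."""
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid.

    The bias shifts the activation threshold (the 'y-intercept'),
    and the sigmoid keeps the unit differentiable for backpropagation.
    """
    return sigmoid(np.dot(weights, inputs) + bias)

out = perceptron(np.array([1.0, 0.5, -1.0]),   # inputs 1..3
                 np.array([0.4, 0.6, 0.2]),    # w1..w3
                 bias=-0.1)
print(out)
```

Chaining layers of such units gives the multi-layer perceptron discussed on the next slide.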
Multi-layer perceptrons and training: classification with a 20-node MLP (4); feature extraction with a 5-layer convolutional neural network (2); feature extraction with an MLP (4).
Article objectives: "we propose a supervised approach for task-aware selection of features using Deep Neural Networks (DNN) in the context of action recognition (e.g. walking, running, jumping)" (1). The selected features are found to give better classification performance than the original high-dimensional features (1), and the classification performance of the proposed feature-selection technique is superior to the low-dimensional representation obtained by principal component analysis (PCA) (1).
Methodology: analyze the contribution of each input dimension to identify the features (inputs) important for classification (1). To correctly analyze the contribution of an input feature, its activation potential (averaged over all training values of the input and hidden neurons) is studied relative to the total activation potential (1). The higher the activation-potential contribution of an input dimension, the more likely it participates in hidden neuronal activity and, consequently, in classification (1).
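The activation-potential criterion described above can be sketched as follows. This is a simplified single-layer reading of the idea; the exact averaging scheme in the paper may differ, so treat the `input_contributions` formula here as illustrative:

```python
import numpy as np

def input_contributions(X, W):
    """Average absolute contribution of each input dimension to the
    first hidden layer's pre-activations, normalized to sum to 1.

    X: (n_samples, n_inputs) training data
    W: (n_inputs, n_hidden) learned first-layer weights
    """
    # |x_i * w_ij| summed over samples n and hidden units j, per input i
    contrib = np.einsum('ni,ij->i', np.abs(X), np.abs(W))
    contrib /= X.shape[0] * W.shape[1]       # average over n and j
    return contrib / contrib.sum()

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
X[:, 3] *= 10                    # make dimension 3 dominate the activations
W = rng.normal(size=(4, 6))
scores = input_contributions(X, W)
print(scores)
```

Inputs with a high share of the total activation are kept; the rest are pruned, which is what makes the selection "task-aware" relative to unsupervised methods like PCA.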
Deep Learning For Practitioners, lecture 2: Selecting the right applications... – ananth
In this presentation we articulate when deep learning techniques yield best results from a practitioner's view point. Do we apply deep learning techniques for every machine learning problem? What characteristics of an application lends itself suitable for deep learning? Does more data automatically imply better results regardless of the algorithm or model? Does "automated feature learning" obviate the need for data preprocessing and feature design?
Similar to "Recurrent networks and beyond" by Tomas Mikolov
Search and Society: Reimagining Information Access for Radical Futures – Bhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Joint Multisided Exposure Fairness for Search and Recommendation – Bhaskar Mitra
(Slides from my talk at SEA: Search Engines Amsterdam)
Online information access systems, like recommender systems and search, mediate what information gets exposure and thereby influence their consumption at scale. There is a growing body of evidence that information retrieval (IR) algorithms that narrowly focus on maximizing ranking utility of retrieved items may disparately expose items of similar relevance from the collection. Such disparities in exposure outcome raise concerns of algorithmic fairness and bias of moral import, and may contribute to both representational harms—by reinforcing negative stereotypes and perpetuating inequities in representation of women and other historically marginalized peoples—and allocative harms, from disparate exposure to economic opportunities. In this talk, we present a framework of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers. Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in retrieval.
What’s next for deep learning for Search? – Bhaskar Mitra
In this talk, I will share some of my personal reflections on the progress in the field of neural IR and some of the ongoing and future research directions that I am personally excited about. This talk will be informed by my own research in this area as well as my experience both as a developer/organizer of the MS MARCO benchmark and the TREC Deep Learning Track and as an applied researcher previously working on web scale search systems at Bing. My goal in this talk would be to move the conversation beyond neural reranking models towards a richer and bolder vision of search powered by deep learning.
So, You Want to Release a Dataset? Reflections on Benchmark Development, Comm... – Bhaskar Mitra
In this talk, I share some of my personal reflections and learnings on benchmark development and community building for making robust scientific progress. This talk is informed by my experience as a developer of the MS MARCO benchmark and as an organizer of the TREC Deep Learning Track. My goal in this talk is to situate the act of releasing a dataset in the context of broader research visions and to draw due attention to considerations of scientific and social outcomes that are invariably salient in the acts of dataset creation and distribution.
Efficient Machine Learning and Machine Learning for Efficiency in Information... – Bhaskar Mitra
Emerging machine learning approaches, including deep learning methods, for information retrieval (IR) have recently demonstrated significant improvements in accuracy of relevance estimation at the cost of increasing model complexity and corresponding rise in computational and environmental costs of training and inference. In web search, these costs are further compounded by the necessity to train on large-scale datasets, consume long documents as inputs, and retrieve relevant documents from web-scale collections within milliseconds in response to high volume query traffic. A typical playbook for developing deep learning models for IR involves largely ignoring efficiency concerns during model development and then later scaling these methods by either finding faster approximations of the same models or employing heuristics to reduce the input space over which these models operate. Domain knowledge about the specific IR task and deeper understanding of system design and data structures in whose context these models are deployed can significantly help with not only model simplification but also to inform data-structure specific machine learning model design. Alternatively, predictive machine learning can also be employed specifically to improve efficiency in large scale IR settings. In this talk, I will cover several case studies for both improving efficiency of machine learning models for IR as well as direct application of machine learning to improve retrieval efficiency, and conclude with a brief discussion on potential future directions for efficiency-sensitive benchmarking of machine learning models for IR.
Multisided Exposure Fairness for Search and Recommendation – Bhaskar Mitra
Online information access systems, like recommender systems and search, mediate what information gets exposure and thereby influence their consumption at scale. There is a growing body of evidence that information retrieval (IR) algorithms that narrowly focus on maximizing ranking utility of retrieved items may disparately expose items of similar relevance from the collection. Such disparities in exposure outcome raise concerns of algorithmic fairness and bias of moral import, and may contribute to both representational harms—by reinforcing negative stereotypes and perpetuating inequities in representation of women and other historically marginalized peoples—and allocative harms, from disparate exposure to economic opportunities. In this talk, we present a framework of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers. Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in retrieval. The development of expected exposure based metrics also opens up new opportunities and challenges for model optimization. We demonstrate how stochastic ranking policies can be optimized towards target expected exposure and highlight the trade-offs that may exist in optimizing for different fairness dimensions.
Learning to rank (LTR) for information retrieval (IR) involves the application of machine learning models to rank artifacts, such as webpages, in response to a user's need, which may be expressed as a query. LTR models typically employ training data, such as human relevance labels and click data, to discriminatively train towards an IR objective. The focus of this lecture will be on the fundamentals of neural networks and their applications to learning to rank.
Neural Information Retrieval: In search of meaningful progress – Bhaskar Mitra
The emergence of deep learning-based methods for search poses several challenges and opportunities not just for modeling, but also for benchmarking and measuring progress in the field. Some of these challenges are new, while others have evolved from existing challenges in IR benchmarking, exacerbated by the scale at which deep learning models operate. Evaluation efforts such as the TREC Deep Learning track and the MS MARCO public leaderboard are intended to encourage research and track our progress, addressing big questions in our field. The goal is not simply to identify which run is "best" but to move the field forward by developing new robust techniques that work in many different settings and are adopted in research and practice. This entails a wider conversation in the IR community about what constitutes meaningful progress, how benchmark design can encourage or discourage certain outcomes, and about the validity of our findings. In this talk, I will present a brief overview of what we have learned from our work on MS MARCO and the TREC Deep Learning track, and reflect on the state of the field and the road ahead.
Conformer-Kernel with Query Term Independence @ TREC 2020 Deep Learning Track – Bhaskar Mitra
We benchmark Conformer-Kernel models under the strict blind evaluation setting of the TREC 2020 Deep Learning track. In particular, we study the impact of incorporating: (i) Explicit term matching to complement matching based on learned representations (i.e., the “Duet principle”), (ii) query term independence (i.e., the “QTI assumption”) to scale the model to the full retrieval setting, and (iii) the ORCAS click data as an additional document description field. We find evidence which supports that all three aforementioned strategies can lead to improved retrieval quality.
Lecture slides presented at Northeastern University (December, 2020).
This report discusses three submissions based on the Duet architecture to the Deep Learning track at TREC 2019. For the document retrieval task, we adapt the Duet model to ingest a "multiple field" view of documents—we refer to the new architecture as Duet with Multiple Fields (DuetMF). A second submission combines the DuetMF model with other neural and traditional relevance estimators in a learning-to-rank framework and achieves improved performance over the DuetMF baseline. For the passage retrieval task, we submit a single run based on an ensemble of eight Duet models.
Benchmarking for Neural Information Retrieval: MS MARCO, TREC, and BeyondBhaskar Mitra
The emergence of deep learning-based methods for information retrieval (IR) poses several challenges and opportunities for benchmarking. Some of these are new, while others have evolved from existing challenges in IR exacerbated by the scale at which deep learning models operate. In this talk, I will present a brief overview of what we have learned from our work on MS MARCO and the TREC Deep Learning track, and reflect on the road ahead.
Deep neural methods have recently demonstrated significant performance improvements in several IR tasks. In this lecture, we will present a brief overview of deep models for ranking and retrieval.
This is a follow-up lecture to "Neural Learning to Rank" (https://www.slideshare.net/BhaskarMitra3/neural-learning-to-rank-231759858)
Learning to rank (LTR) for information retrieval (IR) involves the application of machine learning models to rank artifacts, such as items to be recommended, in response to user's need. LTR models typically employ training data, such as human relevance labels and click data, to discriminatively train towards an IR objective. The focus of this tutorial will be on the fundamentals of neural networks and their applications to learning to rank.
Tutorial presented at ACM SIGIR/SIGKDD Africa Summer School on Machine Learning for Data Mining and Search (AFIRM 2020) conference in Cape Town, South Africa.
A fundamental goal of search engines is to identify, given a query, documents that have relevant text. This is intrinsically difficult because the query and the document may use different vocabulary, or the document may contain query words without being relevant. We investigate neural word embeddings as a source of evidence in document ranking. We train a word2vec embedding model on a large unlabelled query corpus, but in contrast to how the model is commonly used, we retain both the input and the output projections, allowing us to leverage both the embedding spaces to derive richer distributional relationships. During ranking we map the query words into the input space and the document words into the output space, and compute a query-document relevance score by aggregating the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures evidence on whether a document is about a query term in addition to what is modelled by traditional term-frequency based approaches. Our experiments show that the DESM can re-rank top documents returned by a commercial Web search engine, like Bing, better than a term-matching based signal like TF-IDF. However, when ranking a larger set of candidate documents, we find the embeddings-based approach is prone to false positives, retrieving documents that are only loosely related to the query. We demonstrate that this problem can be solved effectively by ranking based on a linear mixture of the DESM and the word counting features.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Enhancing Performance with Globus and the Science DMZGlobus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Welcome to the first live UiPath Community Day Dubai! Join us for this unique occasion to meet our local and global UiPath Community and leaders. You will get a full view of the MEA region's automation landscape and the AI Powered automation technology capabilities of UiPath. Also, hosted by our local partners Marc Ellis, you will enjoy a half-day packed with industry insights and automation peers networking.
📕 Curious on our agenda? Wait no more!
10:00 Welcome note - UiPath Community in Dubai
Lovely Sinha, UiPath Community Chapter Leader, UiPath MVPx3, Hyper-automation Consultant, First Abu Dhabi Bank
10:20 A UiPath cross-region MEA overview
Ashraf El Zarka, VP and Managing Director MEA, UiPath
10:35: Customer Success Journey
Deepthi Deepak, Head of Intelligent Automation CoE, First Abu Dhabi Bank
11:15 The UiPath approach to GenAI with our three principles: improve accuracy, supercharge productivity, and automate more
Boris Krumrey, Global VP, Automation Innovation, UiPath
12:15 To discover how Marc Ellis leverages tech-driven solutions in recruitment and managed services.
Brendan Lingam, Director of Sales and Business Development, Marc Ellis
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
The Metaverse and AI: how can decision-makers harness the Metaverse for their...Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
2. Goals of this talk
• Explain recent success of recurrent networks
• Understand better the concept of (longer) short term memory
• Explore limitations of recurrent networks
• Discuss what needs to be done to build machines that can
understand language
Tomas Mikolov, Facebook, 2016
3. Brief History of Recurrent Nets – 80’s & 90’s
• Recurrent network architectures were very popular in the 80’s and
early 90’s (Elman, Jordan, Mozer, Hopfield, Parallel Distributed
Processing group, …)
• The main idea is very attractive: to re-use parameters and
computation (usually over time)
4. Simple RNN Architecture
• Input layer, hidden layer with recurrent
connections, and the output layer
• In theory, the hidden layer can learn
to represent unlimited memory
• Also called Elman network
(Finding structure in time, Elman 1990)
5. Brief History of Recurrent Nets – 90’s - 2010
• After the initial excitement, recurrent nets vanished from the
mainstream research
• Despite being theoretically powerful models, RNNs were mostly
considered too unstable to train
• Some success was achieved at IDSIA with the Long Short Term
Memory RNN architecture, but this model was too complex for others
to reproduce easily
6. Brief History of Recurrent Nets – 2010 - today
• In 2010, it was shown that RNNs can significantly improve the state of
the art in language modeling, machine translation, data compression
and speech recognition (including a strong commercial speech
recognizer from IBM)
• The RNNLM toolkit was published to allow researchers to reproduce the
results and extend the techniques
• The key novel trick in RNNLM was trivial: clipping gradients to prevent
instability during training
7. Brief History of Recurrent Nets – 2010 - today
• 21-24% reduction of WER on the Wall Street Journal setup
8. Brief History of Recurrent Nets – 2010 - today
• Improvement from RNNLM over n-gram increases with more data!
9. Brief History of Recurrent Nets – 2010 - today
• Breakthrough result in 2011: 11% WER reduction over a large system from IBM
• Ensemble of big RNNLM models trained on a lot of data
10. Brief History of Recurrent Nets – 2010 - today
• RNNs became much more accessible through open-source
implementations in general ML toolkits:
• Theano
• Torch
• PyBrain
• TensorFlow
• …
11. Recurrent Nets Today
• Widely applied:
• ASR (both acoustic and language models)
• MT (language & translation & alignment models, joint models)
• Many NLP applications
• Video modeling, handwriting recognition, user intent prediction, …
• Downside: for many problems RNNs are too powerful, models are
becoming unnecessarily complex
• Often, complicated RNN architectures are preferred for the wrong
reasons (easier to get a paper published and attract attention)
12. Longer short term memory in simple RNNs
• How to add longer memory to RNNs without unnecessary complexity
• Paper: Learning Longer Memory in Recurrent Neural Networks
(Mikolov, Joulin, Chopra, Mathieu, Ranzato, ICLR Workshop 2015)
13. Recurrent Network – Elman Architecture
• Also known as Simple Recurrent Network (SRN)
• Input layer x_t, hidden layer h_t, output y_t
• Weight matrices A, R, U
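The Elman forward pass above can be sketched in a few lines of NumPy. This is a minimal illustration: the sigmoid/softmax choices and the toy dimensions are assumptions, not taken from the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def elman_step(x_t, h_prev, A, R, U):
    """One step of a simple (Elman) RNN:
    h_t = sigmoid(A x_t + R h_{t-1}),  y_t = softmax(U h_t)."""
    h_t = sigmoid(A @ x_t + R @ h_prev)
    y_t = softmax(U @ h_t)
    return h_t, y_t

# Toy dimensions: 5-word vocabulary (one-hot input), 8 hidden units.
rng = np.random.default_rng(0)
A = rng.normal(0, 0.1, (8, 5))
R = rng.normal(0, 0.1, (8, 8))
U = rng.normal(0, 0.1, (5, 8))

h = np.zeros(8)          # hidden state carries the (short-term) memory
x = np.eye(5)[2]         # one-hot vector for word id 2
h, y = elman_step(x, h, A, R, U)
```

Unrolling `elman_step` over a sequence, with the returned `h` fed back in, is what makes the recurrent matrix `R` a (theoretically unlimited) memory.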
15. Simple Recurrent Net Problems
• Backpropagation through time algorithm + stochastic gradient
descent is commonly used for training (Rumelhart et al, 1985)
• Gradients can either vanish or explode (Hochreiter 1991;
Bengio 1994)
16. Simple Recurrent Net: Exploding Gradients
• The gradients explode rarely, but this can have disastrous effects
• A simple “hack” is to clip gradients so that they stay within some range
• This prevents exponential growth (which would otherwise lead to a giant
step in the weight update)
• One can also normalize the gradients, or discard the weight updates
that are too big
17. Simple Recurrent Net: Vanishing Gradients
• Most of the time, the gradients quickly vanish (after 5-10 steps of
backpropagation through time)
• This may not be a problem of SGD, but of the architecture of the SRN
18. Simple Recurrent Net: Vanishing Gradients
• What recurrent architecture would be easier to train to capture
longer term patterns?
• Instead of a fully connected recurrent matrix, we can use an architecture
where each neuron is connected only to the input and to itself
• Old idea (Jordan 1987; Mozer 1989)
19. Combination of both ideas: Elman + Mozer
• Part of the hidden layer is fully connected,
part is diagonal (self-connections)
• Can be seen as RNN with two
hidden layers
• Or as RNN with partially diagonal
recurrent matrix (+ linear hidden units)
20. Combination of both ideas: Elman + Mozer
• The α value can be learned, or kept
fixed close to 1 (we used 0.95)
• The P matrix is optional
(usually helps a bit)
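Putting slides 19 and 20 together, one step of the combined Elman + Mozer network can be sketched as below. This is a hedged reconstruction: the matrix names B and V and the exact output wiring are my assumptions, beyond the A, R, U, P and α mentioned in the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scrn_step(x_t, h_prev, s_prev, A, R, B, P, U, V, alpha=0.95):
    """One step of the combined architecture: s_t is the 'slow' state
    (diagonal self-connections fixed at alpha, linear units), while
    h_t is an ordinary Elman hidden layer that also sees s_t via P."""
    s_t = (1.0 - alpha) * (B @ x_t) + alpha * s_prev   # slow, changes little per step
    h_t = sigmoid(P @ s_t + A @ x_t + R @ h_prev)      # fast, fully connected part
    y_t = U @ h_t + V @ s_t                            # output logits (pre-softmax)
    return h_t, s_t, y_t

# Toy dimensions: 5-word vocabulary, 8 fast units, 4 slow units.
rng = np.random.default_rng(0)
n_in, n_fast, n_slow = 5, 8, 4
A = rng.normal(0, 0.1, (n_fast, n_in))
R = rng.normal(0, 0.1, (n_fast, n_fast))
B = rng.normal(0, 0.1, (n_slow, n_in))
P = rng.normal(0, 0.1, (n_fast, n_slow))
U = rng.normal(0, 0.1, (n_in, n_fast))
V = rng.normal(0, 0.1, (n_in, n_slow))

h, s = np.zeros(n_fast), np.zeros(n_slow)
x = np.eye(n_in)[2]
h, s, y = scrn_step(x, h, s, A, R, B, P, U, V)
```

With α = 0.95, the slow state keeps 95% of its previous value at every step, which is what gives the model its longer short-term memory.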
21. Structurally Constrained Recurrent Net
• Because we constrain the architecture of SRN, we further denote the
model as Structurally Constrained Recurrent Net (SCRN)
• Alternative name is “slow recurrent nets”, as the state of the diagonal
layer changes slowly
Q: Wouldn’t it be enough to initialize the recurrent matrix to be diagonal?
A: No. This would degrade back to normal RNN and not learn longer memory.
22. Results
• Language modeling experiments: Penn Treebank, Text8
• Longer memory in language models is commonly called cache / topic
• Comparison to Long Short Term Memory RNNs (currently popular but
quite complicated architecture that can learn longer term patterns)
• Datasets & code: http://github.com/facebook/SCRNNs
(link is in the paper)
23. Results: Penn Treebank language modeling
• Gain from SCRN / LSTM over simpler recurrent net is similar to gain from cache
• LSTM has 3 gates for each hidden unit, and thus 4x more parameters need to be
accessed during training for the given hidden layer size (=> slower to train)
• SCRN with 100 fully connected and 40 self-connected neurons is only slightly
more expensive to train than SRN
MODEL            # hidden units        Perplexity
N-gram           -                     141
N-gram + cache   -                     125
SRN              100                   129
LSTM             100 (x4 parameters)   115
SCRN             100 + 40              115
24. Results: Text8
• Text8: Wikipedia text (~17M words), much stronger effect from cache
• Big gain for both SCRN & LSTM over SRN
• For small models, SCRN seems to be superior (simpler architecture, better
accuracy, faster training – fewer parameters)
MODEL            # hidden units        Perplexity
N-gram           -                     309
N-gram + cache   -                     229
SRN              100                   245
LSTM             100 (x4 parameters)   193
SCRN             100 + 80              184
25. Results: Text8
• With 500 hidden units, LSTM is slightly better in perplexity (3%) than SCRN, but it
also has many more parameters
MODEL            # hidden units        Perplexity
N-gram           -                     309
N-gram + cache   -                     229
SRN              500                   184
LSTM             500 (x4 parameters)   156
SCRN             500 + 80              161
26. Discussion of Results
• SCRN accumulates longer history in the “slow” hidden layer: the same
as an exponentially decaying cache model
• Empirically, LSTM performance correlates strongly with cache
(weighted bag-of-words)
• For very large (~infinite) training sets, SCRN seems to be the
preferable architecture: it is computationally very cheap
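The exponentially decaying cache that the slow layer is compared to can be sketched as a decaying bag-of-words over word counts. This is an illustrative sketch; the decay of 0.95 mirrors the α used earlier and is not a value stated on this slide.

```python
import numpy as np

def update_cache(cache, word_id, decay=0.95):
    """Exponentially decaying unigram cache: old evidence fades by
    `decay` at every step, the current word gets weight (1 - decay)."""
    cache = decay * cache
    cache[word_id] += 1.0 - decay
    return cache

vocab = 5
cache = np.zeros(vocab)
for w in [2, 2, 4, 2]:          # recent word history
    cache = update_cache(cache, w)
# Word 2, seen most often and most recently, now dominates the cache,
# so a language model mixing in this signal would boost its probability.
```

The SCRN slow state behaves analogously: it is a leaky average of recent inputs, i.e. a learned, distributed version of this weighted bag-of-words.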
27. Conclusion
• Simple tricks can overcome the vanishing and exploding gradient
problems
• The state of the recurrent layer can represent longer short-term memory,
but not long-term memory (across millions of time steps)
• To represent true long-term memory, we may need to develop models
with the ability to grow in size (modify their own structure)
28. Beyond Deep Learning
• Going beyond: what can RNNs and deep networks not model
efficiently?
• Surprisingly simple patterns! For example, memorization of a
variable-length sequence of symbols
29. Beyond Deep Learning: Algorithmic Patterns
• Many complex patterns have short, finite description length in natural
language (or in any Turing-complete computational system)
• We call such patterns Algorithmic patterns
• Examples of algorithmic patterns: a^n b^n, sequence memorization,
addition of numbers learned from examples
• These patterns often cannot be learned with standard deep learning
techniques
30. Beyond Deep Learning: Algorithmic Patterns
• Among the myriad of complex tasks that are currently not solvable,
which ones should we focus on?
• We need to set an ambitious end goal, and define a roadmap for how to
achieve it step by step
32. Ultimate Goal for Communication-based AI
Can do almost anything:
• A machine that helps students understand their homework
• Help researchers to find relevant information
• Write programs
• Help scientists in tasks that are currently too demanding (would
require hundreds of years of work to solve)
33. The Roadmap
• We describe a minimal set of components we think the intelligent
machine will consist of
• Then, an approach to construct the machine
• And the requirements for the machine to be scalable
34. Components of Intelligent machines
• Ability to communicate
• Motivation component
• Learning skills (further requires long-term memory), i.e. the ability to
modify itself to adapt to new problems
35. Components of Framework
To build and develop intelligent machines, we need:
• An environment that can teach the machine basic communication skills and
learning strategies
• Communication channels
• Rewards
• Incremental structure
36. The need for new tasks: simulated environment
• There is no existing dataset known to us that would allow us to teach the
machine communication skills
• Careful design of the tasks, including how quickly the complexity is
growing, seems essential for success:
• If we add complexity too quickly, even a correctly implemented intelligent
machine can fail to learn
• By adding complexity too slowly, we may miss the final goals
37. High-level description of the environment
Simulated environment:
• Learner
• Teacher
• Rewards
Scaling up:
• More complex tasks, fewer examples, less supervision
• Communication with real humans
• Real input signals (internet)
38. Simulated environment - agents
• Environment: simple script-based reactive agent that produces signals
for the learner, represents the world
• Learner: the intelligent machine which receives an input signal and a
reward signal, and produces an output signal to maximize the average
incoming reward
• Teacher: specifies tasks for Learner, first based on scripts, later to be
replaced by human users
39. Simulated environment - communication
• Both Teacher and Environment write to Learner’s input channel
• Learner’s output channel influences its behavior in the Environment,
and can be used for communication with the Teacher
• Rewards are also part of the IO channels
40. Visualization for better understanding
• Example of input / output streams and visualization:
41. How to scale up: fast learners
• It is essential to develop a fast learner: we can easily build a machine
today that will “solve” simple tasks in the simulated world using a
myriad of trials, but this will not scale to complex problems
• In general, showing the Learner a new type of behavior and guiding it
through a few tasks should be enough for it to generalize to similar
tasks later
• There should be less and less need for direct supervision through
rewards
42. How to scale up: adding humans
• A Learner capable of fast learning can start communicating with human
experts (us) who will teach it novel behavior
• Later, a pre-trained Learner with basic communication skills can be
used by human non-experts
43. How to scale up: adding real world
• The Learner can gain access to the internet through its IO channels
• This can be done by teaching the Learner how to form a query in its
output stream
44. The need for new techniques
Certain trivial patterns are nowadays hard to learn:
• The a^n b^n context-free language is out of scope for standard RNNs
• Sequence memorization breaks LSTM RNNs
• We show this in a recent paper Inferring Algorithmic Patterns with
Stack-Augmented Recurrent Nets
45. Scalability
To hope the machine can scale to more complex problems, we need:
• Long-term memory
• (Turing-) Complete and efficient computational model
• Incremental, compositional learning
• Fast learning from a small number of examples
• Decreasing amount of supervision through rewards
• Further discussed in: A Roadmap towards Machine Intelligence
http://arxiv.org/abs/1511.08130
46. Some steps forward: Stack RNNs (Joulin & Mikolov, 2015)
• Simple RNN extended with a long term memory module that the
neural net learns to control
• The idea itself is very old (from 80’s – 90’s)
• Our version is very simple and learns patterns with complexity far
exceeding what was shown before (though still very toyish): much
less supervision, scales to more complex tasks
47. Stack RNN
• Learns algorithms from examples
• Add structured memory to RNN:
• Trainable [read/write]
• Unbounded
• Actions: PUSH / POP / NO-OP
• Examples of memory structures:
stacks, lists, queues, tapes, grids, …
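A soft PUSH / POP / NO-OP update of the kind described above can be sketched like this. It is a truncated, fixed-depth illustration: the actual Stack RNN uses an unbounded stack, and the action weights would come from a softmax over the controller's hidden state rather than being passed by hand.

```python
import numpy as np

def stack_update(stack, a_push, a_pop, a_noop, d_t):
    """Differentiable stack update: given continuous action weights
    (summing to 1), each cell becomes a mixture of the pushed value
    d_t, its popped neighbour, and its own unchanged contents."""
    new = np.empty_like(stack)
    # Top cell: pushed value, or the cell below (after a pop), or itself.
    new[0] = a_push * d_t + a_pop * stack[1] + a_noop * stack[0]
    for i in range(1, len(stack) - 1):
        new[i] = a_push * stack[i - 1] + a_pop * stack[i + 1] + a_noop * stack[i]
    # Bottom cell of this truncated sketch has no neighbour below.
    new[-1] = a_push * stack[-2] + a_noop * stack[-1]
    return new

stack = np.zeros(4)
stack = stack_update(stack, 1.0, 0.0, 0.0, d_t=0.7)  # hard PUSH of 0.7
stack = stack_update(stack, 1.0, 0.0, 0.0, d_t=0.3)  # hard PUSH of 0.3
```

Because every action is a weighted mixture, the whole memory stays differentiable and the controller can be trained with ordinary backpropagation.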
48. Algorithmic Patterns
• Examples of simple algorithmic patterns generated by short programs
(grammars)
• The goal is to learn these patterns in an unsupervised way, just by
observing the example sequences
49. Algorithmic Patterns - Counting
• Performance on simple counting tasks
• RNN with sigmoidal activation function cannot count
• Stack-RNN and LSTM can count
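For reference, training data for the a^n b^n counting task can be generated as below. The helper names are my own; the talk does not specify the data-generation code.

```python
import random

def anbn_sequence(n):
    """One example of the a^n b^n pattern: after observing n a's, a model
    that can count must predict exactly n b's and then the sequence end."""
    return "a" * n + "b" * n

def make_dataset(num_examples, max_n, seed=0):
    """Sample variable-length a^n b^n strings for next-symbol prediction."""
    rng = random.Random(seed)
    return [anbn_sequence(rng.randint(1, max_n)) for _ in range(num_examples)]

data = make_dataset(5, max_n=8)
```

Predicting the next symbol in such strings is trivial for a model with a counter, which is why Stack-RNNs and LSTMs succeed here while sigmoidal RNNs fail.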
50. Algorithmic Patterns - Sequences
• Sequence memorization and binary addition are out of scope for
LSTMs
• The expandable memory of stacks allows the model to learn the solution
51. Binary Addition
• No supervision in training, just prediction
• Learns to: store digits, decide when to produce output, and carry
52. Stack RNNs: summary
The good:
• Turing-complete model of computation (with >=2 stacks)
• Learns some algorithmic patterns
• Has long term memory
• Simple model that works for some problems that break RNNs and LSTMs
• Reproducible: https://github.com/facebook/Stack-RNN
The bad:
• The long term memory is used only to store partial computation (i.e. learned skills are not
stored there yet)
• Does not seem to be a good model for incremental learning
• Stacks do not seem to be a very general choice for the topology of the memory
53. Conclusion
To achieve true artificial intelligence, we need:
• AI-complete goal
• New set of tasks
• Develop new techniques
• Motivate more people to address these problems