The document describes an end-to-end memory network model for multi-turn spoken language understanding. The model encodes context from previous utterances with an attention mechanism over a memory of past utterances, then performs slot tagging on the current utterance using that contextual knowledge. Experiments on a Cortana dataset show the model outperforms alternatives, reaching 67.1% accuracy when both history and current utterances are encoded with the memory network.
End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding
1. Yun-Nung (Vivian) Chen
http://vivianchen.idv.tw
with Hakkani-Tur, Tur, Gao, Deng
2. Outline
Introduction
Spoken Dialogue System
Spoken/Natural Language Understanding (SLU/NLU)
Contextual Spoken Language Understanding
Model Architecture
End-to-End Training
Experiments
Conclusion & Future Work
End-to-End Memory Networks for Multi-Turn Spoken Language Understanding | Yun-Nung (Vivian) Chen
4. Spoken Dialogue System (SDS)
• Spoken dialogue systems are intelligent agents that help users finish tasks more efficiently via spoken interactions.
• Spoken dialogue systems are being incorporated into various devices (smartphones, smart TVs, in-car navigation systems, etc.).
Good intelligent assistants help users organize and access information conveniently.
Examples: JARVIS (Iron Man's personal assistant), Baymax (personal healthcare companion)
5. Dialogue System Pipeline
Speech Signal → ASR → Hypothesis: "are there any action movies to see this weekend" (or typed Text Input: "Are there any action movies to see this weekend?")
→ Language Understanding (LU): user intent detection, slot filling
→ Semantic Frame (intents, slots): request_movie, genre=action, date=this weekend
→ Dialogue Management (DM): dialogue state tracking, policy decision
→ System Action: request_location
→ Output Generation: text response "Where are you located?" / screen display: location?
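The pipeline above can be sketched as a chain of functions. Everything here is a hypothetical stub (hard-coded hypothesis, intents, and responses), meant only to make the data flow between the modules concrete:

```python
def asr(speech_signal):
    # Speech signal -> text hypothesis (stubbed)
    return "are there any action movies to see this weekend"

def language_understanding(text):
    # Text -> semantic frame (intent + slots), stubbed
    return {"intent": "request_movie",
            "slots": {"genre": "action", "date": "this weekend"}}

def dialogue_management(frame, state):
    # Semantic frame + dialogue state -> system action
    state.update(frame["slots"])
    if "location" not in state:
        return "request_location"
    return "inform_results"

def output_generation(action):
    # System action -> natural-language response
    responses = {"request_location": "Where are you located?"}
    return responses.get(action, "...")

state = {}
frame = language_understanding(asr(None))
action = dialogue_management(frame, state)
print(output_generation(action))  # -> Where are you located?
```

Each stage consumes the previous stage's output, which is why an LU error propagates downstream, as the next slide notes.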
8. Dialogue System Pipeline (cont.)
SLU usually focuses on understanding single-turn utterances; the understanding result is usually influenced by 1) local observations and 2) global knowledge.
In this pipeline, language understanding is the current bottleneck, and its errors propagate to the downstream components (error propagation).
9. Spoken Language Understanding
SLU comprises three tasks: domain identification, intent prediction, and slot filling.
Single-turn example (S):
U: "just sent email to bob about fishing this weekend"
Slot tags: O O O O B-contact_name O B-subject I-subject I-subject
D (domain): communication; I (intent): send_email
→ send_email(contact_name="bob", subject="fishing this weekend")
Multi-turn example:
U1: "send email to bob" → tags: O O O B-contact_name
S1: send_email(contact_name="bob")
U2: "are we going to fish this weekend" → tags: B-message I-message I-message I-message I-message I-message I-message
S2: send_email(message="are we going to fish this weekend")
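The IOB slot tags above map to a semantic frame by collecting each B-/I- span into a slot value. A minimal sketch of that decoding step (the helper name `iob_to_slots` is illustrative, not from the paper):

```python
def iob_to_slots(tokens, tags):
    """Collect B-/I- tagged token spans into slot values."""
    slots, name, value = {}, None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if name:                      # close any open span
                slots[name] = " ".join(value)
            name, value = tag[2:], [tok]  # open a new span
        elif tag.startswith("I-") and name == tag[2:]:
            value.append(tok)             # continue the open span
        else:
            if name:                      # O tag closes the open span
                slots[name] = " ".join(value)
            name, value = None, []
    if name:
        slots[name] = " ".join(value)
    return slots

tokens = "just sent email to bob about fishing this weekend".split()
tags = ["O", "O", "O", "O", "B-contact_name", "O",
        "B-subject", "I-subject", "I-subject"]
print(iob_to_slots(tokens, tags))
# -> {'contact_name': 'bob', 'subject': 'fishing this weekend'}
```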
11. MODEL ARCHITECTURE
Idea: additionally incorporate contextual knowledge during slot tagging.
1. Sentence Encoding: a sentence encoder (RNNin) encodes the current utterance c into a vector u; a contextual sentence encoder (RNNmem) encodes each history utterance xi into a memory representation mi.
2. Knowledge Attention: the inner products of u with the mi, normalized, give the knowledge attention distribution pi over the memory.
3. Knowledge Encoding: the weighted sum o = Σi pi mi is combined with u through Wkg to form the knowledge encoding representation h, which feeds (via weights M) into the RNN tagger (weights U, V, W) that outputs the slot tagging sequence y for the current utterance.
Chen, et al., "End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding," in Interspeech, 2016.
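A minimal numeric sketch of the knowledge attention and knowledge encoding steps, with random vectors standing in for the RNN encodings (the dimensions and the exact combination h = Wkg(o + u) are assumptions read off the figure; see the paper for the precise formulation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def knowledge_attention(u, memory, W_kg):
    """u: current-utterance encoding; memory: one row m_i per history utterance."""
    p = softmax(memory @ u)   # attention via inner products u . m_i
    o = p @ memory            # weighted sum of memory representations
    h = W_kg @ (o + u)        # knowledge encoding representation
    return p, o, h

rng = np.random.default_rng(0)
d = 4                                  # toy embedding dimension
u = rng.normal(size=d)                 # encoded current utterance
memory = rng.normal(size=(3, d))       # three encoded history utterances
W_kg = rng.normal(size=(d, d))
p, o, h = knowledge_attention(u, memory, W_kg)
print(p)  # attention distribution over the three history utterances (sums to 1)
```

The attention weights p are produced by a differentiable softmax, which is what lets the whole stack, encoder, attention, and tagger, be trained end-to-end.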
12. MODEL ARCHITECTURE (cont.)
The sentence encoder and the contextual sentence encoder can be implemented with either RNNs or CNNs.
13. END-TO-END TRAINING
• Tagging Objective
• RNN Tagger
[Figure: the RNN tagger unrolled over time, with the knowledge encoding o fed into each step through matrix M. The tagging objective maximizes the probability of the slot tag sequence y given the contextual (history) utterances and the current utterance.]
The model automatically figures out the attention distribution without explicit supervision.
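The tagging objective is the standard per-token sequence cross-entropy, so gradients flow back through the tagger into the attention weights with no attention labels. A minimal illustrative sketch:

```python
import numpy as np

def sequence_nll(tag_logits, gold_tags):
    """Average negative log-likelihood of a gold slot tag sequence.

    tag_logits: (T, num_tags) unnormalized scores, one row per token.
    gold_tags:  length-T list of gold tag indices.
    """
    loss = 0.0
    for logits, y in zip(tag_logits, gold_tags):
        logits = logits - logits.max()                 # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum())
        loss -= log_probs[y]
    return loss / len(gold_tags)

logits = np.array([[5.0, 0.0, 0.0],    # token 1: confident in tag 0
                   [0.0, 5.0, 0.0]])   # token 2: confident in tag 1
print(sequence_nll(logits, [0, 1]))    # small loss when predictions match gold
```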
15. EXPERIMENTS
• Dataset: Cortana communication session data
• Setup: GRU for all RNNs, Adam optimizer, embedding dim = 150, hidden units = 100, dropout = 0.5

Model       Training Set  Knowledge Encoding  Sentence Encoder  First Turn  Other  Overall
RNN Tagger  single-turn   x                   x                 60.6        16.2   25.5

The model trained on single-turn data performs worse on non-first turns due to mismatched training data.
16. EXPERIMENTS
• Dataset and setup as in slide 15.

Model       Training Set  Knowledge Encoding  Sentence Encoder  First Turn  Other  Overall
RNN Tagger  single-turn   x                   x                 60.6        16.2   25.5
RNN Tagger  multi-turn    x                   x                 55.9        45.7   47.4

Treating multi-turn data as single-turn for training performs reasonably.
17. EXPERIMENTS
• Dataset and setup as in slide 15.

Model           Training Set  Knowledge Encoding        Sentence Encoder  First Turn  Other  Overall
RNN Tagger      single-turn   x                         x                 60.6        16.2   25.5
RNN Tagger      multi-turn    x                         x                 55.9        45.7   47.4
Encoder-Tagger  multi-turn    current utt (c)           RNN               57.6        56.0   56.3
Encoder-Tagger  multi-turn    history + current (x, c)  RNN               69.9        60.8   62.5

Encoding both history and current utterances improves performance but increases training time.
18. EXPERIMENTS
• Dataset and setup as in slide 15.

Model           Training Set  Knowledge Encoding        Sentence Encoder  First Turn  Other  Overall
RNN Tagger      single-turn   x                         x                 60.6        16.2   25.5
RNN Tagger      multi-turn    x                         x                 55.9        45.7   47.4
Encoder-Tagger  multi-turn    current utt (c)           RNN               57.6        56.0   56.3
Encoder-Tagger  multi-turn    history + current (x, c)  RNN               69.9        60.8   62.5
Proposed        multi-turn    history + current (x, c)  RNN               73.2        65.7   67.1

Applying memory networks significantly outperforms all other approaches with much less training time.
19. EXPERIMENTS
• Dataset and setup as in slide 15.

Model           Training Set  Knowledge Encoding        Sentence Encoder  First Turn  Other  Overall
RNN Tagger      single-turn   x                         x                 60.6        16.2   25.5
RNN Tagger      multi-turn    x                         x                 55.9        45.7   47.4
Encoder-Tagger  multi-turn    current utt (c)           RNN               57.6        56.0   56.3
Encoder-Tagger  multi-turn    history + current (x, c)  RNN               69.9        60.8   62.5
Proposed        multi-turn    history + current (x, c)  RNN               73.2        65.7   67.1
Proposed        multi-turn    history + current (x, c)  CNN               73.8        66.5   68.0

CNN produces comparable results for sentence encoding with shorter training time.
NEW! NOT IN THE PAPER!
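A CNN sentence encoder of the kind swapped in here typically convolves over windows of word embeddings and max-pools over time into a fixed-size vector. A minimal NumPy sketch, with illustrative filter sizes and dimensions:

```python
import numpy as np

def cnn_encode(embeddings, filters):
    """Encode a sentence: 1-D convolution over word windows + max pooling.

    embeddings: (T, d) word embeddings for a T-word sentence.
    filters:    (num_filters, k, d) filters spanning k-word windows.
    """
    T, d = embeddings.shape
    num_filters, k, _ = filters.shape
    conv = np.stack([
        [np.maximum(0.0, (embeddings[t:t + k] * f).sum())   # ReLU activation
         for t in range(T - k + 1)]
        for f in filters
    ])                                    # (num_filters, T - k + 1)
    return conv.max(axis=1)               # max over time -> (num_filters,)

rng = np.random.default_rng(1)
emb = rng.normal(size=(7, 5))             # 7-word sentence, dim-5 embeddings
filt = rng.normal(size=(8, 3, 5))         # 8 trigram filters
vec = cnn_encode(emb, filt)               # fixed-size sentence vector
```

Unlike an RNN, every window can be computed in parallel, which is consistent with the shorter training time reported above.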
21. Conclusion
• The proposed end-to-end memory network stores contextual knowledge, which is exploited dynamically through an attention model to manage knowledge carryover for multi-turn understanding
• The end-to-end model performs a tagging task rather than classification
• The experiments show the feasibility and robustness of modeling knowledge carryover through memory networks
22. Future Work
• Leveraging not only local observation but also global
knowledge for better language understanding
– Syntax or semantics can serve as global knowledge to guide
the understanding model
– “Knowledge as a Teacher: Knowledge-Guided Structural Attention Networks,” arXiv preprint arXiv:1609.03286
23. Q & A
THANKS FOR YOUR ATTENTION!
The code will be available at
https://github.com/yvchen/ContextualSLU