This document describes a method for unsupervised spoken language understanding (SLU) that combines matrix factorization with knowledge graph propagation. It addresses two main issues: 1) adapting generic semantic frames to domain-specific slots, handled by a knowledge graph propagation model; and 2) learning implicit semantics, handled by matrix factorization. Evaluated on a dialogue corpus, the method estimates semantics more accurately than the baselines by modeling implicit semantics.
Yun-Nung (Vivian) Chen - 2015 - Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding
1. Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding
Yun-Nung (Vivian) Chen, William Yang Wang, Anatole Gershman, Alexander I. Rudnicky
Email: yvchen@cs.cmu.edu
Website: http://vivianchen.idv.tw
4. A POPULAR ROBOT - BAYMAX
(Big Hero 6 -- video content owned and licensed by Disney Entertainment, Marvel Entertainment, LLC, etc.)
Baymax is capable of maintaining a good spoken dialogue system and learning new knowledge for better understanding and interaction with people.
5. SPOKEN DIALOGUE SYSTEM (SDS)
Spoken dialogue systems are intelligent agents that help users finish tasks more efficiently via speech interactions.
Spoken dialogue systems are being incorporated into various devices (smart-phones, smart TVs, in-car navigation systems, etc.).
Examples: Apple's Siri, Microsoft's Cortana, Amazon's Echo, Samsung's SMART TV, Google Now, Microsoft's XBOX Kinect.
https://www.apple.com/ios/siri/
http://www.windowsphone.com/en-us/how-to/wp8/cortana/meet-cortana
http://www.xbox.com/en-US/
http://www.amazon.com/oc/echo/
http://www.samsung.com/us/experience/smart-tv/
https://www.google.com/landing/now/
6. CHALLENGES FOR SDS
An SDS in a new domain requires:
1) a hand-crafted domain ontology;
2) utterances labeled with semantic representations;
3) an SLU component for mapping utterances into semantic representations.
With increasing spoken interactions, building domain ontologies and annotating utterances is costly, so the data does not scale up.
The goal is to enable an SDS to automatically learn this knowledge so that open-domain requests can be handled.
7. INTERACTION EXAMPLE
User: find an inexpensive eating place for taiwanese food
Intelligent Agent: Inexpensive Taiwanese eating places include Din Tai Fung, etc. Which one do you want to choose?
Q: How does a dialogue system process this request?
8. SDS PROCESS – AVAILABLE DOMAIN ONTOLOGY
User: find an inexpensive eating place for taiwanese food
[Diagram: organized domain knowledge — a slot graph over the nodes "seeking", "target", "price", and "food", linked by the dependency-based relations AMOD, NN, and PREP_FOR.]
9.–10. The intelligent agent interprets the utterance against this organized domain knowledge via:
• Ontology Induction (semantic slots)
• Structure Learning (inter-slot relations)
11. SDS PROCESS – SPOKEN LANGUAGE UNDERSTANDING (SLU)
User: find an inexpensive eating place for taiwanese food
Spoken language understanding maps the utterance onto the domain knowledge:
seeking="find", target="eating place", price="inexpensive", food="taiwanese food"
12. SDS PROCESS – DIALOGUE MANAGEMENT (DM)
User: find an inexpensive eating place for taiwanese food
SELECT restaurant {
  restaurant.price="inexpensive"
  restaurant.food="taiwanese food"
}
Results: Din Tai Fung, Boiling Point, ...
Intelligent Agent: Inexpensive Taiwanese eating places include Din Tai Fung, Boiling Point, etc. Which one do you want to choose?
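The SELECT-style pseudo-query above can be sketched as ordinary parameterized SQL. This is a minimal illustration, not the system's actual backend: the `restaurants` table, its columns, and the sample rows are all invented for the example.

```python
import sqlite3

# Hypothetical in-memory database standing in for the DM backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE restaurants (name TEXT, price TEXT, food TEXT)")
conn.executemany("INSERT INTO restaurants VALUES (?, ?, ?)", [
    ("Din Tai Fung", "inexpensive", "taiwanese food"),
    ("Boiling Point", "inexpensive", "taiwanese food"),
    ("Chez Pricey", "expensive", "french food"),
])

# Slot fills produced by SLU become WHERE constraints.
slots = {"price": "inexpensive", "food": "taiwanese food"}
where = " AND ".join(f"{k} = ?" for k in slots)  # price = ? AND food = ?
rows = conn.execute(f"SELECT name FROM restaurants WHERE {where}",
                    list(slots.values())).fetchall()
print([r[0] for r in rows])  # → ['Din Tai Fung', 'Boiling Point']
```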
13. GOALS
User: find an inexpensive eating place for taiwanese food
[Diagram: the slot graph (seeking, target, price, food; relations AMOD, NN, PREP_FOR) together with the induced query:
SELECT restaurant {
  restaurant.price="inexpensive"
  restaurant.food="taiwanese food"
}]
• Ontology Induction (semantic slots)
• Structure Learning (inter-slot relations)
• Spoken Language Understanding
14. GOALS
User: find an inexpensive eating place for taiwanese food
Knowledge Acquisition: ontology induction and structure learning.
SLU Modeling: spoken language understanding.
15. SPOKEN LANGUAGE UNDERSTANDING
Input: user utterances
Output: the domain-specific semantic concepts included in each utterance
Example: "can I have a cheap restaurant" → target="restaurant", price="cheap"
[Framework diagram: an unlabeled collection is processed by frame-semantic parsing (ontology induction) and by word and slot relation models over a lexical and a semantic knowledge graph (structure learning); the resulting feature model (Fw, Fs) and knowledge graph propagation model (Rw, Rs) feed SLU modeling by matrix factorization, which outputs the semantic representation.]
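The matrix-factorization step at the end of this pipeline can be sketched on a toy scale. Note this is a stand-in, not the paper's model: the actual objective and matrices differ, and here a plain truncated SVD of an invented observation matrix illustrates how a low-rank reconstruction produces dense scores for entries that were not observed.

```python
import numpy as np

# Toy observation matrix: rows = utterances, columns = word/slot
# features (binary, values invented for illustration).
M = np.array([
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0],
], dtype=float)

# Rank-2 factorization via truncated SVD: M ≈ U @ V.
U_, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
U, V = U_[:, :k] * S[:k], Vt[:k, :]
M_hat = U @ V  # dense reconstruction: scores for implicit semantics

# M has rank 2, so the rank-2 reconstruction recovers it closely.
assert np.allclose(M_hat, M, atol=1e-8)
```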
17. PROBABILISTIC FRAME-SEMANTIC PARSING
FrameNet [Baker et al., 1998]: a linguistic semantic resource based on frame-semantics theory; words/phrases can be represented as frames.
Example: in "low fat milk", "milk" evokes the "food" frame and "low fat" fills the descriptor frame element.
SEMAFOR [Das et al., 2014]: a state-of-the-art frame-semantics parser, trained on manually annotated FrameNet sentences.
Baker et al., "The Berkeley FrameNet project," in Proc. of International Conference on Computational Linguistics, 1998.
Das et al., "Frame-semantic parsing," Computational Linguistics, 2014.
18. FRAME-SEMANTIC PARSING FOR UTTERANCES
"can i have a cheap restaurant"
Frame: capability — FT LU: can; FE LU: i
Frame: expensiveness — FT LU: cheap
Frame: locale_by_use — FT/FE LU: restaurant
(FT: frame target; FE: frame element; LU: lexical unit)
Some evoked frames (e.g. expensiveness, locale_by_use) are good domain slot candidates, while generic frames (e.g. capability) may not fit the domain.
1st issue: adapting generic frames to domain-specific settings for SDSs.
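The parse above can be written down as plain data. The hard-coded filter below is purely illustrative: deciding which frames are genuine domain slots is exactly the adaptation problem that the knowledge graph propagation model addresses later in the deck.

```python
# Frame-semantic parse of "can i have a cheap restaurant",
# transcribed from the slide as plain Python data.
frames = [
    {"frame": "capability",    "target": "can",        "elements": {"i"}},
    {"frame": "expensiveness", "target": "cheap",      "elements": set()},
    {"frame": "locale_by_use", "target": "restaurant", "elements": set()},
]

# Naive stand-in for domain adaptation: drop the generic frame.
# (The real model scores candidates instead of filtering by name.)
candidates = [f["frame"] for f in frames if f["frame"] != "capability"]
print(candidates)  # → ['expensiveness', 'locale_by_use']
```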
19. SPOKEN LANGUAGE UNDERSTANDING
(Same input/output and framework diagram as slide 15: frame-semantic parsing and knowledge graph construction feed the feature model and the knowledge graph propagation model, followed by SLU modeling via matrix factorization.)
Y.-N. Chen et al., "Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding," in Proc. of ACL-IJCNLP, 2015.
21. 1ST ISSUE: HOW TO ADAPT GENERIC SLOTS TO A DOMAIN-SPECIFIC SETTING?
KNOWLEDGE GRAPH PROPAGATION MODEL
Assumption: domain-specific words/slots have more dependencies to each other.
Training utterances: "i would like a cheap restaurant", ..., "find a restaurant with chinese food"; test utterance: "show me a list of cheap restaurants".
[Diagram: a binary word-observation/slot-candidate matrix (words such as "i", "like", "cheap", "restaurant", "food"; slot candidates such as capability, expensiveness, locale_by_use, food, seeking, desiring, relational_quantity) is multiplied by a word relation matrix and a slot relation matrix for slot induction.]
Relation matrices allow each node to propagate scores to its neighbors in the knowledge graph, so that domain-specific words/slots have higher scores after matrix multiplication.
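The propagation effect described above can be sketched with a toy relation matrix. All values are invented; the point is only that nodes with strong connections to many neighbors accumulate higher scores after multiplication.

```python
import numpy as np

# Hypothetical relation matrix over 4 slot candidates; entry [i, j]
# is the relation strength between candidates i and j. Candidates
# 0-2 are densely connected (domain-specific); candidate 3 is not.
R = np.array([
    [1.0, 0.8, 0.7, 0.0],
    [0.8, 1.0, 0.6, 0.0],
    [0.7, 0.6, 1.0, 0.1],
    [0.0, 0.0, 0.1, 1.0],
])
scores = np.ones(4)  # uniform initial slot-candidate scores

propagated = R @ scores  # each node accumulates neighbors' scores
# The densely connected candidate now outscores the isolated one.
assert propagated[0] > propagated[3]
```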
22. KNOWLEDGE GRAPH CONSTRUCTION
Syntactic dependency parsing on utterances, e.g. "can i have a cheap restaurant" (dependency relations: ccomp, nsubj, dobj, det, amod), with induced slots capability, expensiveness, locale_by_use.
Two graphs are constructed:
- Word-based lexical knowledge graph (nodes: can, i, have, a, cheap, restaurant)
- Slot-based semantic knowledge graph (nodes: capability, expensiveness, locale_by_use)
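The word-graph construction can be sketched as follows; the dependency edges are hardcoded here for illustration rather than produced by an actual parser:

```python
# Build a word-based lexical knowledge graph from a (hardcoded)
# dependency parse of "can i have a cheap restaurant".
# Edge list is illustrative; a real system would obtain (head, dependent,
# relation) triples from a syntactic dependency parser.
deps = [
    ("have", "can", "ccomp"),
    ("have", "i", "nsubj"),
    ("have", "restaurant", "dobj"),
    ("restaurant", "a", "det"),
    ("restaurant", "cheap", "amod"),
]

# Undirected adjacency list: each dependency contributes one edge.
graph = {}
for head, dep, _rel in deps:
    graph.setdefault(head, set()).add(dep)
    graph.setdefault(dep, set()).add(head)

print(sorted(graph["restaurant"]))  # → ['a', 'cheap', 'have']
```

The slot-based graph is built the same way after replacing words with their induced slots.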
23. KNOWLEDGE GRAPH CONSTRUCTION
[Figure repeated: the word-based lexical knowledge graph and the slot-based semantic knowledge graph.]
The edge between a node pair is weighted by its relation importance, which is used to propagate scores via a relation matrix.
How do we decide the weights that represent relation importance?
24. WEIGHT MEASUREMENT BY EMBEDDINGS
Train dependency-based embeddings on the parsed utterances:
- Dependency-based word embeddings, e.g. can = [0.8 … 0.24], have = [0.3 … 0.21], …
- Dependency-based slot embeddings, e.g. expensiveness = [0.12 … 0.7], capability = [0.3 … 0.6], …
[Figure: the dependency parse of "can i have a cheap restaurant" (ccomp, nsubj, dobj, det, amod), both at the word level and with words replaced by their slots (capability, expensiveness, locale_by_use).]
Levy and Goldberg, "Dependency-Based Word Embeddings," in Proc. of ACL, 2014.
25. WEIGHT MEASUREMENT BY EMBEDDINGS
Compute edge weights to represent relation importance:
- Word-to-word semantic relation R_w^S: similarity between word embeddings
- Word-to-word dependency relation R_w^D: dependency score between word embeddings
- Slot-to-slot semantic relation R_s^S: similarity between slot embeddings
- Slot-to-slot dependency relation R_s^D: dependency score between slot embeddings
Combined relations: R_w^SD = R_w^S + R_w^D and R_s^SD = R_s^S + R_s^D
[Figure: word-level knowledge graph (nodes w1–w7) and slot-level knowledge graph (nodes s1–s3).]
Y.-N. Chen et al., "Jointly Modeling Inter-Slot Relations by Random Walk on Knowledge Graphs for Unsupervised Spoken Language Understanding," in Proc. of NAACL, 2015.
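For instance, the semantic relation weights can be computed as cosine similarities between embeddings; the embedding values below are illustrative toy numbers, not trained vectors:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy slot embeddings (values made up for illustration).
emb = {
    "expensiveness": np.array([0.12, 0.70]),
    "capability":    np.array([0.30, 0.60]),
    "locale_by_use": np.array([0.50, 0.40]),
}

slots = sorted(emb)
# Semantic relation matrix R_s^S: pairwise similarity of slot embeddings.
R_sS = np.array([[cosine(emb[a], emb[b]) for b in slots] for a in slots])
print(np.round(R_sS, 2))
```

The dependency relation matrices R_w^D and R_s^D would be filled analogously, using a dependency-based score between embedding pairs instead of plain cosine similarity.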
26. KNOWLEDGE GRAPH PROPAGATION MODEL
[Figure: the binary word observation / slot candidate matrix for train and test utterances is multiplied by the word relation matrix R_w^SD and the slot relation matrix R_s^SD for slot induction.]
27. FEATURE MODEL
Ontology induction and structure learning provide the feature matrices F_w (word observations) and F_s (slot candidates) for SLU.
[Figure: binary word observation / slot candidate matrix over training utterances (Utterance 1: "i would like a cheap restaurant"; Utterance 2: "find a restaurant with chinese food") and the test utterance "show me a list of cheap restaurants"; columns cover words (cheap, restaurant, food) and induced slots (expensiveness, locale_by_use, food); estimated probabilities (e.g. .97, .90, .95, .85) fill cells that were not directly observed.]
2nd issue: unobserved hidden semantics may benefit understanding.
29. 2ND ISSUE: HOW TO LEARN IMPLICIT SEMANTICS?
MATRIX FACTORIZATION (MF)
Reasoning with matrix factorization.
[Figure: the word relation model (R_w^SD) and slot relation model (R_s^SD) applied to the word observation / slot candidate matrix for slot induction.]
The MF method completes a partially-missing matrix based on a low-rank latent semantics assumption.
30. MATRIX FACTORIZATION (MF)
The decomposed matrices represent low-rank latent semantics for utterances and for words/slots, respectively.
The product of the two matrices fills in the probabilities of the hidden semantics: the U × (W + S) observation matrix is approximated by the product of a U × d matrix and a d × (W + S) matrix, where d is the latent dimensionality.
[Figure: the word observation / slot candidate matrix with estimated probabilities (e.g. .97, .90, .95, .85) filling previously missing cells.]
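A minimal sketch of the completion idea, using a truncated SVD for the low-rank factorization; the paper learns the factors with a ranking loss instead, and the matrix values here are toy:

```python
import numpy as np

# Partially observed utterance × (word + slot) matrix; zero entries
# may be hidden semantics rather than true negatives.
M = np.array([
    [1.0, 1.0, 0.0, 1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0, 0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0, 0.0, 0.0, 0.0],   # test row: slots unobserved
])

d = 2  # latent dimensionality
U, s, Vt = np.linalg.svd(M, full_matrices=False)
# Keep the top-d latent factors: M ≈ U_d · diag(s_d) · Vt_d.
# The product fills in scores for the unobserved cells.
M_hat = U[:, :d] * s[:d] @ Vt[:d, :]
print(np.round(M_hat, 2))
```

The reconstructed test row assigns nonzero scores to slot columns that were never observed for that utterance, which is exactly the "hidden semantics" being estimated.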
31. BAYESIAN PERSONALIZED RANKING FOR MF
Model implicit feedback:
- do not treat unobserved facts as negative samples (true or false)
- give observed facts higher scores than unobserved facts
Objective: for each utterance x, maximize Σ_{f+} Σ_{f−} ln σ(θ(f+) − θ(f−)), where f+ ranges over observed facts, f− over unobserved facts, and σ is the sigmoid function.
The objective is to learn a set of well-ranked semantic slots per utterance.
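One gradient step of this pairwise objective can be sketched as follows; the scores and learning rate are toy values, whereas in the full model the scores come from the MF latent factors:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Scores for one utterance: an observed slot f+ and an unobserved
# slot f- (toy values; initially mis-ranked).
x_pos, x_neg = 0.4, 0.6

lr = 0.5
for _ in range(20):
    # Gradient ascent on ln σ(x+ − x−): push the observed fact above
    # the unobserved one instead of treating the latter as negative.
    g = 1.0 - sigmoid(x_pos - x_neg)
    x_pos += lr * g
    x_neg -= lr * g

assert x_pos > x_neg  # observed fact now ranked higher
```

Note that only the ranking between the pair matters; the absolute scores are free to drift, which is what distinguishes this implicit-feedback objective from a binary classification loss.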
32. 2ND ISSUE: HOW TO LEARN IMPLICIT SEMANTICS?
MATRIX FACTORIZATION (MF)
[Slide 29 repeated: reasoning with matrix factorization; the MF method completes a partially-missing matrix based on a low-rank latent semantics assumption.]
34. EXPERIMENTAL SETUP
Dataset: Cambridge University SLU corpus [Henderson et al., 2012]
- Restaurant recommendation in an in-car setting in Cambridge
- WER = 37%; vocabulary size = 1,868
- 2,166 dialogues; 15,453 utterances
- Dialogue slots: addr, area, food, name, phone, postcode, price range, task, type
Evaluation uses a mapping table between induced and reference slots.
Henderson et al., "Discriminative spoken language understanding using word confusion networks," in Proc. of SLT, 2012.
35–37. EXPERIMENT 1: QUALITY OF SEMANTICS ESTIMATION
Metric: Mean Average Precision (MAP) of all estimated slot probabilities for each utterance.
(Slides 35–37 build up the results table incrementally: first the explicit baselines, then the implicit baselines and MF rows; the complete table is on slide 38.)
38. EXPERIMENT 1: QUALITY OF SEMANTICS ESTIMATION
Metric: Mean Average Precision (MAP) of all estimated slot probabilities for each utterance.

| Approach | ASR w/o | ASR w/ Explicit | Manual w/o | Manual w/ Explicit |
|---|---|---|---|---|
| Explicit: Support Vector Machine | – | 32.5 | – | 36.6 |
| Explicit: Multinomial Logistic Regression | – | 34.0 | – | 38.8 |
| Implicit baseline: Random | 3.4 | 22.5 | 2.6 | 25.1 |
| Implicit baseline: Majority | 15.4 | 32.9 | 16.4 | 38.4 |
| Implicit MF: Feature Model | 24.2 | 37.6* | 22.6 | 45.3* |
| Implicit MF: Feature Model + Knowledge Graph Propagation | 40.5* (+19.1%) | 43.5* (+27.9%) | 52.1* (+34.3%) | 53.4* (+37.6%) |

The MF approach effectively models hidden semantics to improve SLU.
Adding a knowledge graph propagation model further improves performance.
39. EXPERIMENT 2: EFFECTIVENESS OF RELATIONS
All types of relations are useful for inferring hidden semantics.
(Slide 39 shows the relation-ablation table without the final "Both" row; the complete table is on slide 40.)
40. EXPERIMENT 2: EFFECTIVENESS OF RELATIONS

| Approach | Relations used | ASR | Manual |
|---|---|---|---|
| Feature Model | – | 37.6 | 45.3 |
| Feature + KG Propagation | Semantic: R_w^S, R_s^S | 41.4* | 51.6* |
| Feature + KG Propagation | Dependency: R_w^D, R_s^D | 41.6* | 49.0* |
| Feature + KG Propagation | Word only: R_w^SD | 39.2* | 45.2 |
| Feature + KG Propagation | Slot only: R_s^SD | 42.1* | 49.9* |
| Feature + KG Propagation | Both: R_w^SD, R_s^SD | 43.5* (+15.7%) | 53.4* (+17.9%) |

All types of relations are useful for inferring hidden semantics.
Combining different relations further improves the performance.
42. CONCLUSIONS
Ontology induction and knowledge graph construction enable systems to automatically acquire open-domain knowledge.
MF for SLU provides a principled model that can
- unify the automatically acquired knowledge,
- adapt to a domain-specific setting,
and then allows systems to consider implicit semantics for better understanding.
The work shows the feasibility and potential of improving the generalization, maintenance, efficiency, and scalability of SDSs.
The proposed unsupervised SLU achieves 43% MAP on ASR-transcribed conversations.