Noun Paraphrasing
Based on a Variety of Contexts
Tomoyuki Kajiwara and Kazuhide Yamamoto
Nagaoka University of Technology, Japan
Abstract
We propose a method to paraphrase nouns
in consideration of their contexts.
The Characteristics of Our Proposed Method
–  It paraphrases robustly, without relying on word frequency.
•  Our Number of Differences (NoD) based method is better
than the Co-occurrence Frequency (CoF) based method.
–  It paraphrases depending on the context.
•  e.g. Reduce the burdens on the back.
•  NoD: load, stress, damage, exhaustion, tense, etc.
•  CoF: cost, expense, actual cost, etc. (money-related)
Lexical Paraphrasing, Lexical Substitution
•  Different linguistic expressions that convey the same meaning.
–  e.g. Teacher → Instructor
Application of Lexical Paraphrasing
•  For Reading Assistance (Lexical Simplification)
–  Never judge people by external appearance.
–  Never judge people by outside appearance. ✔
•  For Machine Translation (pre-editing)
–  その本なら書類の下にある
It is under the papers if it is the book.
–  その本は書類の下にある (the particle なら is pre-edited to は)
The book is under the papers. ✔
Difficulty of Lexical Paraphrasing
•  Force someone to shoulder a huge increase in his financial burdens.
–  Force someone to shoulder a huge increase in his financial costs. ✔
–  Force someone to shoulder a huge increase in his financial loads.
•  Reduce the burdens on the back.
–  Reduce the costs on the back.
–  Reduce the loads on the back. ✔
Whether a paraphrase is possible or not
changes depending on the context.
Approach
Input:  Look for the access to the airport.
Output: Look for the way to the airport.
look for the *** / *** to the airport
Candidates: restaurant, market, purpose, transfer, fee, way, bus, transportation, delivery
Sorted by context similarity: 1. way  2. transfer  3. fee
•  To generate a proper sentence
•  To select a suitable paraphrase
Proposed Method
We propose a method to paraphrase nouns
in consideration of their contexts.
1.  Extract candidate words
used in the same context as the input sentence.
2.  Calculate the similarity
between the original word and each candidate word, using:
•  the number of different contexts
of the candidate word, and
•  the number of different contexts shared
by the original word and the candidate word.
3.  Select the candidate word
with the maximum similarity as the paraphrase.
original → paraphrase
To extract candidate words
•  We extract candidate words used in the same context as the input sentence.
•  However, words used in exactly the same context are hardly ever found.
↓
•  On the basis of the target word access,
the input sentence is divided into a pre-context and a post-context.
Look for the access to the airport.
pre-context:  look for the ***   →  restaurant, market, purpose, transfer, fee, way
post-context: *** to the airport →  transfer, fee, way, bus, transportation, delivery
To extract candidate words
•  Words appearing in both the pre-context and post-context lists
(transfer, fee, way) may be used in the input sentence.
•  This is how we can generate a proper sentence.
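The intersection step above can be sketched in Python. The index below is a toy stand-in for the real resource (the experiments use Web Japanese N-grams), so the patterns and word sets are illustrative only:

```python
from typing import Dict, Set

# Toy indexes mapping a context pattern to the words observed in its *** slot.
# In the actual experiments these come from Web Japanese N-grams.
PRE_INDEX: Dict[str, Set[str]] = {
    "look for the ***": {"restaurant", "market", "purpose", "transfer", "fee", "way"},
}
POST_INDEX: Dict[str, Set[str]] = {
    "*** to the airport": {"transfer", "fee", "way", "bus", "transportation", "delivery"},
}

def extract_candidates(pre_context: str, post_context: str) -> Set[str]:
    """Candidates are words observed in BOTH the pre- and the post-context."""
    return PRE_INDEX.get(pre_context, set()) & POST_INDEX.get(post_context, set())

print(sorted(extract_candidates("look for the ***", "*** to the airport")))
# ['fee', 'transfer', 'way']
```

Because every candidate has actually been observed in both halves of the input context, substituting it back tends to yield a proper sentence.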
To calculate similarity between words
1.  The larger the number of different contexts shared by the original
and the candidate word, the greater the paraphrasability.
2.  The larger the number of different contexts of the candidate word,
the smaller the paraphrasability.

similarity(original, candidate) = common(original, candidate) × log(TNC / difference(candidate))

common(A, B): the number of different contexts shared by A and B … factor 1
difference(A): the number of different contexts of A … factor 2
TNC: the total number of different contexts
Analogy with TF-IDF:

TF-IDF:   tf(word) × log(TND / df(word))
Proposed: common(original, candidate) × log(TNC / difference(candidate))

tf(w): the number of occurrences of the word
df(w): the number of documents in which the word occurs
TND: the total number of documents
common(A, B): the number of different contexts shared by A and B
difference(A): the number of different contexts of A
TNC: the total number of different contexts

New statistic:
Number of Occurrences → Number of Differences
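A minimal sketch of this score in Python, assuming each noun's contexts are already collected as a set of predicates (the real resource is the Kyoto University case frames; the sets below are invented for illustration):

```python
import math
from typing import Set

def similarity(original_ctx: Set[str], candidate_ctx: Set[str], tnc: int) -> float:
    """common(original, candidate) * log(TNC / difference(candidate))."""
    common = len(original_ctx & candidate_ctx)   # shared distinct contexts
    difference = len(candidate_ctx)              # distinct contexts of the candidate
    if difference == 0:
        return 0.0
    return common * math.log(tnc / difference)

# Invented context sets, for illustration only.
access = {"look_for", "improve", "restrict"}
way = {"look_for", "improve", "lose"}
fee = {"look_for", "pay", "raise", "cut", "waive"}
TNC = 10  # total number of distinct contexts in the (toy) resource

# "way" shares more contexts with "access" and has fewer contexts overall,
# so it scores higher than "fee".
print(similarity(access, way, TNC) > similarity(access, fee, TNC))  # True
```

The log factor plays the IDF role: a candidate observed in many different contexts (a very generic noun) is penalized, which is what makes the method robust to high-frequency words.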
The characteristics of our proposed method
•  Extraction
–  We can generate a proper sentence
based on the common contexts.
•  Selection
–  We can select a suitable paraphrase
based on the number of different contexts.
We compare experimentally with co-occurrence frequency
and pointwise mutual information methods.
Comparative Methods
•  Marton et al. (2009): Improved Statistical Machine Translation
Using Monolingually-Derived Paraphrases.
•  Bhagat and Ravichandran (2008): Large Scale Acquisition of
Paraphrases for Learning Surface Patterns.
1.  Both methods generate a feature vector
from the contexts of the target word original.
2.  They calculate the cosine similarity
between the feature vectors.
3.  They select the word with the maximum similarity
as the paraphrase.
Comparative Methods
•  [Marton 09]: co-occurrence frequency based method
•  [Bhagat 08]: pointwise mutual information based method
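The comparative scoring can be sketched as follows: each word is represented as a sparse vector over its contexts, weighted by co-occurrence frequency ([Marton 09]) or PMI ([Bhagat 08]), and candidates are ranked by cosine similarity. The weights below are made up for illustration:

```python
import math
from typing import Dict

def cosine(u: Dict[str, float], v: Dict[str, float]) -> float:
    """Cosine similarity between sparse {context: weight} vectors."""
    dot = sum(w * v.get(c, 0.0) for c, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

# Made-up weights (co-occurrence frequencies or PMI values in the real methods).
original = {"look_for": 3.0, "improve": 1.0}
candidates = {
    "way": {"look_for": 2.0, "improve": 2.0},
    "fee": {"pay": 5.0, "look_for": 1.0},
}
best = max(candidates, key=lambda w: cosine(original, candidates[w]))
print(best)  # way
```

Unlike the proposed number-of-differences score, these vectors carry raw frequency (or PMI) weights, which is where the sensitivity to high-frequency words comes from.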
Experimental Setup
•  Japanese
–  In this experiment, we paraphrase Japanese nouns.
–  The approach itself is language-independent.
•  Definition of a context
–  We define as context the content words in the phrase
that is in a dependency relation with the noun.
e.g. Look for the access to the airport.
Experimental Setup
•  Web Japanese N-gram: to extract candidate words
–  Japanese word N-grams (N = 1–7); we use the 7-grams as sentences.
–  Each N-gram appears more than 20 times on the Web.
–  We use 200 sentences of the following pattern out of 1.3M such sentences:
•  Noun … Noun (paraphrase target) … Verb (base form).
* Japanese is an SOV language.
•  Kyoto University case frames: to calculate similarity
–  Pairs of Japanese predicates and nouns collected from the Web.
–  It contains 34k predicates and 824k nouns (we use all of them).
–  We define these predicates as contexts,
and we calculate the similarity between these nouns.
Number of paraphrasable nouns at rank 1 of similarity
•  Highly frequent words (e.g. こと (thing)) have a bad influence.
•  Suffix words have a bad influence.
(e.g. counter words that describe the number of items)
The proposed method is robust
because it does not depend on word frequency.
Relationship between rank of similarity
and number of paraphrasable nouns
•  Many paraphrases appear at rank 1.
•  There are few differences at lower ranks.
Examples of paraphrasing
in consideration of context
•  Assign a maximum penalty of N$.
–  Comparative method: imprisonment, pecuniary penalty, etc.
–  Our method: paying penalty, administrative penalty, etc.
•  imprisonment does not appear as a candidate.
•  Reduce the burdens on the back.
–  Comparative method: cost, expenses, actual cost, etc.
•  All of these are money-related.
•  None of the words in the top 10 are appropriate.
–  Our method: load, stress, damage, exhaustion, tense, etc.
•  All of these are appropriate paraphrases in this context.
Conclusion
We proposed a method to paraphrase nouns
in consideration of their contexts.
The Characteristics of Our Proposed Method
–  It paraphrases robustly, without relying on word frequency.
•  Our Number of Differences (NoD) based method is better
than the Co-occurrence Frequency (CoF) based method.
–  It paraphrases depending on the context.
•  e.g. Reduce the burdens on the back.
•  NoD: load, stress, damage, exhaustion, tense, etc.
•  CoF: cost, expense, actual cost, etc. (money-related)

Nucleic Acid-its structural and functional complexity.Nucleic Acid-its structural and functional complexity.
Nucleic Acid-its structural and functional complexity.
Nistarini College, Purulia (W.B) India
 
3D Hybrid PIC simulation of the plasma expansion (ISSS-14)
3D Hybrid PIC simulation of the plasma expansion (ISSS-14)3D Hybrid PIC simulation of the plasma expansion (ISSS-14)
3D Hybrid PIC simulation of the plasma expansion (ISSS-14)
David Osipyan
 
Thornton ESPP slides UK WW Network 4_6_24.pdf
Thornton ESPP slides UK WW Network 4_6_24.pdfThornton ESPP slides UK WW Network 4_6_24.pdf
Thornton ESPP slides UK WW Network 4_6_24.pdf
European Sustainable Phosphorus Platform
 
ESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptxESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptx
PRIYANKA PATEL
 
Eukaryotic Transcription Presentation.pptx
Eukaryotic Transcription Presentation.pptxEukaryotic Transcription Presentation.pptx
Eukaryotic Transcription Presentation.pptx
RitabrataSarkar3
 
20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx
Sharon Liu
 
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills MN
 
molar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptxmolar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptx
Anagha Prasad
 
BREEDING METHODS FOR DISEASE RESISTANCE.pptx
BREEDING METHODS FOR DISEASE RESISTANCE.pptxBREEDING METHODS FOR DISEASE RESISTANCE.pptx
BREEDING METHODS FOR DISEASE RESISTANCE.pptx
RASHMI M G
 
aziz sancar nobel prize winner: from mardin to nobel
aziz sancar nobel prize winner: from mardin to nobelaziz sancar nobel prize winner: from mardin to nobel
aziz sancar nobel prize winner: from mardin to nobel
İsa Badur
 
8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf
by6843629
 
The debris of the ‘last major merger’ is dynamically young
The debris of the ‘last major merger’ is dynamically youngThe debris of the ‘last major merger’ is dynamically young
The debris of the ‘last major merger’ is dynamically young
Sérgio Sacani
 
Equivariant neural networks and representation theory
Equivariant neural networks and representation theoryEquivariant neural networks and representation theory
Equivariant neural networks and representation theory
Daniel Tubbenhauer
 
What is greenhouse gasses and how many gasses are there to affect the Earth.
What is greenhouse gasses and how many gasses are there to affect the Earth.What is greenhouse gasses and how many gasses are there to affect the Earth.
What is greenhouse gasses and how many gasses are there to affect the Earth.
moosaasad1975
 

Recently uploaded (20)

Medical Orthopedic PowerPoint Templates.pptx
Medical Orthopedic PowerPoint Templates.pptxMedical Orthopedic PowerPoint Templates.pptx
Medical Orthopedic PowerPoint Templates.pptx
 
Shallowest Oil Discovery of Turkiye.pptx
Shallowest Oil Discovery of Turkiye.pptxShallowest Oil Discovery of Turkiye.pptx
Shallowest Oil Discovery of Turkiye.pptx
 
NuGOweek 2024 Ghent programme overview flyer
NuGOweek 2024 Ghent programme overview flyerNuGOweek 2024 Ghent programme overview flyer
NuGOweek 2024 Ghent programme overview flyer
 
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
 
Micronuclei test.M.sc.zoology.fisheries.
Micronuclei test.M.sc.zoology.fisheries.Micronuclei test.M.sc.zoology.fisheries.
Micronuclei test.M.sc.zoology.fisheries.
 
The binding of cosmological structures by massless topological defects
The binding of cosmological structures by massless topological defectsThe binding of cosmological structures by massless topological defects
The binding of cosmological structures by massless topological defects
 
Nucleic Acid-its structural and functional complexity.
Nucleic Acid-its structural and functional complexity.Nucleic Acid-its structural and functional complexity.
Nucleic Acid-its structural and functional complexity.
 
3D Hybrid PIC simulation of the plasma expansion (ISSS-14)
3D Hybrid PIC simulation of the plasma expansion (ISSS-14)3D Hybrid PIC simulation of the plasma expansion (ISSS-14)
3D Hybrid PIC simulation of the plasma expansion (ISSS-14)
 
Thornton ESPP slides UK WW Network 4_6_24.pdf
Thornton ESPP slides UK WW Network 4_6_24.pdfThornton ESPP slides UK WW Network 4_6_24.pdf
Thornton ESPP slides UK WW Network 4_6_24.pdf
 
ESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptxESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptx
 
Eukaryotic Transcription Presentation.pptx
Eukaryotic Transcription Presentation.pptxEukaryotic Transcription Presentation.pptx
Eukaryotic Transcription Presentation.pptx
 
20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx
 
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
 
molar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptxmolar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptx
 
BREEDING METHODS FOR DISEASE RESISTANCE.pptx
BREEDING METHODS FOR DISEASE RESISTANCE.pptxBREEDING METHODS FOR DISEASE RESISTANCE.pptx
BREEDING METHODS FOR DISEASE RESISTANCE.pptx
 
aziz sancar nobel prize winner: from mardin to nobel
aziz sancar nobel prize winner: from mardin to nobelaziz sancar nobel prize winner: from mardin to nobel
aziz sancar nobel prize winner: from mardin to nobel
 
8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf
 
The debris of the ‘last major merger’ is dynamically young
The debris of the ‘last major merger’ is dynamically youngThe debris of the ‘last major merger’ is dynamically young
The debris of the ‘last major merger’ is dynamically young
 
Equivariant neural networks and representation theory
Equivariant neural networks and representation theoryEquivariant neural networks and representation theory
Equivariant neural networks and representation theory
 
What is greenhouse gasses and how many gasses are there to affect the Earth.
What is greenhouse gasses and how many gasses are there to affect the Earth.What is greenhouse gasses and how many gasses are there to affect the Earth.
What is greenhouse gasses and how many gasses are there to affect the Earth.
 

Noun Paraphrasing Based on a Variety of Contexts

  • 1. Noun Paraphrasing Based on a Variety of Contexts
       Tomoyuki Kajiwara and Kazuhide Yamamoto
       Nagaoka University of Technology, Japan
  • 2. Abstract
       We propose a method to paraphrase nouns in consideration of their contexts.
       The characteristics of our proposed method:
       –  It paraphrases robustly without relying on word frequency.
          •  Our Number of Differences (NoD) based method outperforms the
             Co-occurrence Frequency (CoF) based method.
       –  It paraphrases depending on the context.
          •  e.g. "Reduce the burdens on the back."
          •  NoD: load, stress, damage, exhaustion, tense, etc.
          •  CoF: cost, expense, actual cost, etc. (money-related)
  • 3. Lexical Paraphrasing (Lexical Substitution)
       Different linguistic representations that express the same meaning.
       e.g. teacher ↔ instructor
  • 4. Applications of Lexical Paraphrasing
       •  For reading assistance (lexical simplification)
          –  Never judge people by external appearance.
          –  Never judge people by outside appearance. ✔
       •  For machine translation (pre-editing)
          –  その本なら書類の下にある
             It is under the papers if it is the book.
          –  その本は書類の下にある ✔
             The book is under the papers. ✔
  • 5. Difficulty of Lexical Paraphrasing
       •  Force someone to shoulder a huge increase in his financial burdens.
          –  Force someone to shoulder a huge increase in his financial costs. ✔
          –  Force someone to shoulder a huge increase in his financial loads.
       •  Reduce the burdens on the back.
          –  Reduce the costs on the back.
          –  Reduce the loads on the back. ✔
       Whether a paraphrase is possible or impossible changes depending on the context.
  • 6. Approach
       Input:  Look for the access to the airport.
       Output: Look for the way to the airport.
       Contexts: "look for the ***" and "*** to the airport"
       Candidate words: restaurant, market, purpose, transfer, fee, way, bus,
       transportation, delivery
       Sorted by context similarity: 1. way  2. transfer  3. fee
  • 7. Approach (continued)
       Sorting candidates by context similarity lets us both generate a proper
       sentence and select a suitable paraphrase.
  • 8. Proposed Method
       We propose a method to paraphrase nouns in consideration of their contexts.
       1.  Extract candidate words used in the same context as the input sentence.
       2.  Calculate the similarity between the original word and each candidate,
           based on:
           •  the number of different contexts of the candidate word, and
           •  the number of different contexts common to the original and the
              candidate word.
       3.  Select the candidate word with the maximum similarity as the
           paraphrase (original → paraphrase).
  • 9. Proposed Method (step 1: extracting candidate words)
  • 10. Extracting candidate words
        •  We extract candidate words used in the same context.
        •  However, words used in exactly the same context are rarely found.
        ↓
        •  On the basis of the target word "access", the input sentence is
           divided into a pre-context and a post-context.
        Look for the access to the airport.
        Pre-context  "look for the ***":   restaurant, market, purpose, transfer, fee, way
        Post-context "*** to the airport": transfer, fee, bus, transportation, way, delivery
  • 11. Extracting candidate words
        Look for the access to the airport.
        Pre-context  "look for the ***":   restaurant, market, purpose, transfer, fee, way
        Post-context "*** to the airport": transfer, fee, bus, transportation, way, delivery
        •  Words appearing in both contexts can be used in the input sentence.
        •  From these words, we can generate a proper sentence.
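The extraction step above can be sketched as a set intersection. This is a minimal sketch with the toy word lists from the slide, not the authors' corpus:

```python
# Candidate extraction: given an input sentence split at the target noun
# into a pre-context and a post-context, the candidates are the words
# observed in BOTH context slots elsewhere in the corpus.

def extract_candidates(pre_context_words, post_context_words):
    """Words that fill both the pre- and the post-context slot."""
    return set(pre_context_words) & set(post_context_words)

# "Look for the access to the airport."
#   pre-context : "look for the ***"   -> words seen after "look for the"
#   post-context: "*** to the airport" -> words seen before "to the airport"
pre = {"restaurant", "market", "purpose", "transfer", "fee", "way"}
post = {"transfer", "fee", "way", "bus", "transportation", "delivery"}

candidates = extract_candidates(pre, post)
print(sorted(candidates))  # ['fee', 'transfer', 'way']
```

Any word in the intersection can replace the target noun while keeping the sentence well-formed; the later similarity step decides which of them is the best paraphrase.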
  • 12. Proposed Method (step 2: calculating similarity)
  • 13. Calculating similarity between words
        1.  The larger the number of different contexts shared by the original
            and the candidate word, the larger the paraphrasability.
        2.  The larger the number of different contexts of the candidate word
            itself, the smaller the paraphrasability.
        common(A, B): the number of different contexts shared by A and B
        difference(A): the number of different contexts of A
        TNC: the total number of different contexts
        similarity(original, candidate)
          = common(original, candidate) × log(TNC / difference(candidate))
  • 14. A New Statistic: Number of Occurrences → Number of Differences
        TF-IDF:         tf(word) × log(TND / df(word))
        Our similarity: common(original, candidate) × log(TNC / difference(candidate))
        tf(w): the number of occurrences of the word
        df(w): the number of documents in which the word occurs
        TND: the total number of documents
        common(A, B): the number of different contexts shared by A and B
        difference(A): the number of different contexts of A
        TNC: the total number of different contexts
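The TF-IDF-style score above can be written directly. This is a minimal sketch; the context counts are made up for illustration (the real statistics come from the Kyoto University case frame):

```python
import math

# similarity(original, candidate)
#   = common(original, candidate) * log(TNC / difference(candidate))
# common(A, B):  number of distinct contexts shared by A and B
# difference(A): number of distinct contexts A appears in
# TNC:           total number of distinct contexts (toy value below)

TNC = 1000

def similarity(common_contexts, candidate_contexts, tnc=TNC):
    return common_contexts * math.log(tnc / candidate_contexts)

# A candidate that shares many contexts with the original but is itself used
# in few contexts scores higher than a high-frequency word used everywhere,
# mirroring how IDF penalizes words that occur in many documents.
focused = similarity(common_contexts=30, candidate_contexts=50)   # specific word
generic = similarity(common_contexts=30, candidate_contexts=900)  # e.g. こと (thing)
print(focused > generic)  # True
```

The log factor is what keeps over-frequent words like こと (thing) from winning even when they share many contexts with the original.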
  • 15. Proposed Method (step 3: selecting the paraphrase)
  • 16. The characteristics of our proposed method
        •  Extraction
           –  We can generate a proper sentence based on the common contexts.
        •  Selection
           –  We can select a suitable paraphrase based on the number of
              different contexts.
        We compare experimentally with co-occurrence frequency and pointwise
        mutual information.
  • 17. Comparative Methods
        •  Marton et al. (2009): Improved Statistical Machine Translation Using
           Monolingually-Derived Paraphrases.
        •  Bhagat and Ravichandran (2008): Large Scale Acquisition of
           Paraphrases for Learning Surface Patterns.
        1.  Both methods build a feature vector from the contexts of the target
            word (the original).
        2.  They calculate the cosine similarity between the feature vectors.
        3.  They select the word with the maximum similarity as the paraphrase.
  • 18. Comparative Methods
        •  [Marton 09]: co-occurrence frequency based method
        •  [Bhagat 08]: pointwise mutual information based method
  • 19. Experimental Setup
        •  Japanese
           –  In this experiment, we paraphrase Japanese nouns.
           –  The approach itself is language-independent.
        •  Definition of a context
           –  We define as context the content words in the phrases that stand
              in a dependency relation with the noun.
           –  e.g. Look for the access to the airport.
  • 20. Experimental Setup
        •  Web Japanese N-gram: to extract candidate words
           –  Japanese word N-grams (N = 1–7); we use the 7-grams as sentences.
           –  Each N-gram appears more than 20 times on the Web.
           –  We use 200 sentences out of the 1.3M sentences matching the
              pattern: Noun … Noun (paraphrase target) … Verb (original form).
              * Japanese is an SOV language.
        •  Kyoto University case frame: to calculate similarity
           –  Pairs of Japanese predicates and nouns gathered from the Web.
           –  It contains 34k predicates and 824k nouns; we use all of them.
           –  We define the predicates as contexts and calculate similarity
              between the nouns.
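The sentence filter described above can be sketched as a POS-pattern check. The tags and the exact filter here are assumptions for illustration; the authors use Japanese morphological analysis on the 7-grams:

```python
# Keep 7-grams matching: Noun ... Noun (paraphrase target) ... Verb.
# In Japanese (SOV), the verb closes the clause, so a trailing verb in its
# original form is a cheap signal that the 7-gram is a full sentence.

def matches_pattern(tagged):
    """tagged: list of (word, pos) pairs for one N-gram."""
    pos = [p for _, p in tagged]
    return (len(pos) >= 3
            and pos[0] == "Noun"        # a leading noun
            and "Noun" in pos[1:-1]     # a paraphrase-target noun inside
            and pos[-1] == "Verb")      # sentence-final verb (SOV order)

ok = [("駅", "Noun"), ("から", "Particle"), ("空港", "Noun"), ("へ", "Particle"),
      ("の", "Particle"), ("道", "Noun"), ("探す", "Verb")]
bad = ok[:-1] + [("探し", "Noun")]  # no sentence-final verb
print(matches_pattern(ok), matches_pattern(bad))  # True False
```

A filter like this reduces the raw N-gram corpus to sentence-like fragments that contain a paraphrasable noun in context.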
  • 21. Number of nouns paraphrasable at the 1st place of similarity
        (results chart)
  • 22. Number of nouns paraphrasable at the 1st place of similarity
        •  Highly frequent words (e.g. こと (thing)) have a bad influence.
        •  Postfix words (e.g. counter words that describe the number of items)
           have a bad influence.
        The proposed method is robust because it does not depend on word
        frequency.
  • 23. Relationship between similarity rank and number of paraphrasable nouns
        (results chart)
  • 24. Relationship between similarity rank and number of paraphrasable nouns
        •  There are few differences among the methods.
        •  Many paraphrases appear at rank 1.
  • 25. Examples of paraphrasing in consideration of context
        •  Assign a maximum penalty of N$.
           –  Comparative method: imprisonment, pecuniary penalty, etc.
           –  Our method: paying penalty, administrative penalty, etc.
              •  "imprisonment" does not appear as a candidate.
        •  Reduce the burdens on the back.
           –  Comparative method: cost, expenses, actual cost, etc.
              •  All of these are money-related.
              •  None of the words in the top 10 is appropriate.
           –  Our method: load, stress, damage, exhaustion, tense, etc.
              •  All of these are appropriate paraphrases in this context.
  • 26. Conclusion
        We propose a method to paraphrase nouns in consideration of their
        contexts.
        The characteristics of our proposed method:
        –  It paraphrases robustly without relying on word frequency.
           •  Our Number of Differences (NoD) based method outperforms the
              Co-occurrence Frequency (CoF) based method.
        –  It paraphrases depending on the context.
           •  e.g. "Reduce the burdens on the back."
           •  NoD: load, stress, damage, exhaustion, tense, etc.
           •  CoF: cost, expense, actual cost, etc. (money-related)