Framing Few Shot Knowledge Graph Completion with Large Language Models

MODUL Technology GmbH - Innovation in Data and Media Extraction, Annotation and Analysis

FRAMING FEW-SHOT KNOWLEDGE GRAPH COMPLETION WITH LARGE LANGUAGE MODELS

Adrian M.P. Brașoveanu
Lyndon J.B. Nixon
Albert Weichselbraun
Arno Scharl

NLP4KGC@SEMANTICS 2023
LLMs 2020-2023
LARGE LANGUAGE MODELS

Generative AI:
- ChatGPT 3.5/4.0
- Claude 2
- Cohere Chat
- Falcon
- LLaMa2
- Flan-T5

Core innovation - ecosystems:
- Agents
- LangChain
- KGs
- Tools
- Problem solving
- Mixture of Experts (MoE)?

Image Copyright © "Language Models are Few-Shot Learners" (2020) by Tom B. Brown et al., NeurIPS 2020.
LLM Reasoning Strategies (1): CoT

Relation extraction with CoT:
- Explanation is all you need!
- Step-by-step reasoning.
- Augmented text leads to better results!

Image Copyright © "Revisiting Relation Extraction in the Era of Large Language Models" by Wadhwa et al., ACL (1) 2023.
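The CoT relation-extraction idea can be sketched as a one-shot prompt whose worked example includes an explanation. The template wording, the worked example, and the `parse_triples` helper below are illustrative assumptions, not the paper's verbatim prompts:

```python
# Minimal sketch of a CoT-style relation-extraction prompt, in the spirit of
# Wadhwa et al. (2023). The demonstration includes an explanation, nudging
# the model to reason step by step before emitting triples.

COT_TEMPLATE = """Text: {example_text}
Relations: {example_relations}
Explanation: {example_explanation}

Text: {text}
Relations:"""

def build_cot_prompt(text: str) -> str:
    """Build a one-shot CoT prompt for the given input text."""
    return COT_TEMPLATE.format(
        example_text="Vienna is the capital of Austria.",
        example_relations="(Vienna, capitalOf, Austria)",
        example_explanation=("The sentence states that Vienna is the capital "
                             "of Austria, so the relation capitalOf holds "
                             "between Vienna and Austria."),
        text=text,
    )

def parse_triples(completion: str) -> list:
    """Parse '(head, relation, tail)' lines from a model completion."""
    triples = []
    for line in completion.splitlines():
        line = line.strip()
        if line.startswith("(") and line.endswith(")"):
            parts = [p.strip() for p in line[1:-1].split(",")]
            if len(parts) == 3:
                triples.append(tuple(parts))
    return triples
```

The prompt would then be sent to any of the chat models above; `parse_triples` recovers structured triples from the free-text answer.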
LLM Reasoning Strategies (2): ToT

- CoT contains explanations; ToT extends CoT.
- Multiple paths towards an answer.
- CoT-SC: a majority-voting mechanism over sampled chains.
- ToT is more similar to the human selection process.
- ToT allows for parallel exploration of ideas, as opposed to the linear exploration of CoT.

Image Copyright © "Tree of Thoughts: Deliberate Problem Solving with Large Language Models" (2023) by Yao et al.
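The parallel-exploration idea can be sketched as a breadth-first search over candidate "thoughts" with pruning. This is a toy skeleton, not the implementation of Yao et al.; the `expand` and `score` callables stand in for LLM calls:

```python
# Toy ToT skeleton: instead of committing to a single chain of thought (CoT),
# keep several candidate thoughts per step, score them, and expand only the
# best ones. `expand` and `score` are placeholders for LLM calls.

from typing import Callable, List

def tree_of_thoughts(root: str,
                     expand: Callable[[str], List[str]],
                     score: Callable[[str], float],
                     depth: int = 2,
                     beam: int = 2) -> str:
    """Explore up to `beam` thoughts per level in parallel and return the
    best leaf found after `depth` expansion steps."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for t in frontier for c in expand(t)]
        if not candidates:
            break
        # pruning step: keep only the top-scoring thoughts
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)
```

With a deterministic `expand`/`score` pair this behaves like beam search; with sampled LLM continuations and an LLM-based evaluator it matches the ToT selection loop described above.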
Knowledge Graphs (KG)

Sustainability KG:
- Built with Wikidata.
- Missing relations: country-specific, region-specific.

KG Completion (KGC):
- Can we fill the missing relations using LLMs?
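KGC with an LLM can be framed as a fill-in-the-tail task: serialise the known triples as context and ask for the missing tail entity. The Wikidata-style property labels and the prompt wording below are illustrative assumptions:

```python
# KGC framed as tail completion: given a head entity and a relation missing
# from the KG, ask an LLM for the tail. Property labels and wording are
# illustrative, not taken from the actual Sustainability KG.

from typing import List, Tuple

def kgc_prompt(head: str, relation: str,
               known: List[Tuple[str, str, str]]) -> str:
    """Serialise known triples as context, then ask for the missing tail."""
    context = "\n".join(f"({h}, {r}, {t})" for h, r, t in known)
    return (f"Known facts:\n{context}\n"
            f"Complete the missing triple, answering with the tail entity only:\n"
            f"({head}, {relation}, ?)")
```

Country- and region-specific gaps would correspond to relations whose tails (countries, regions) are absent for some heads in the graph.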
Evaluating Large Language Models

Single interface: nat.dev/chat

Included models:
- ChatGPT 3.5/4 (with a 32k context window)
- Claude 1/2 (with a 100k context window)
- Cohere Chat
- MPT-30B
- Falcon-40B
- LLaMa2

Functionality: Playground, Compare, Chat, Metrics.
Evaluating Large Language Models

Evaluation settings:
- Relations: only relations.
- Explanations: CoT.
- Completions: restricted CoT.
- Self-scoring: a truthfulness proxy.
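The four settings can be read as different framings of the same underlying prompt. The instruction texts below are illustrative assumptions, not the verbatim prompts used in the evaluation:

```python
# The four evaluation settings as prompt framings. The exact instruction
# wording is an illustrative assumption.

FRAMINGS = {
    "relations": "Return only the relation triples.",
    "explanations": ("Return the relation triples, each followed by a "
                     "step-by-step explanation (CoT)."),
    "completions": ("Return only triples whose relation comes from the "
                    "allowed relation list (restricted CoT)."),
    "self_scoring": ("After each triple, rate your confidence that it is "
                     "true on a scale from 0 to 10 (truthfulness proxy)."),
}

def frame_prompt(text: str, setting: str) -> str:
    """Attach the instruction for the chosen evaluation setting."""
    return f"Text: {text}\n{FRAMINGS[setting]}"
```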
Evaluating Large Language Models

Tools: GPT-3.5, GPT-4.0, Claude 2, MPT-30B.

Few-shot setup:
- Input: 12-14 annotated texts.
- Output: 50 annotated texts.
- We want all the texts annotated in one large batch, if possible.
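The batching setup can be sketched as follows: the annotated texts go in as demonstrations, and the unannotated texts are appended so that the model ideally labels the whole batch in one call, with a fallback split into smaller chunks when the batch is too large. The function and its chunking policy are assumptions for illustration:

```python
# Sketch of few-shot batch annotation: 12-14 annotated demonstrations plus a
# batch of unannotated texts in one prompt. If the batch exceeds batch_size,
# it is split into several prompts (e.g. to respect a context window).

from typing import List, Tuple

def batched_prompts(shots: List[Tuple[str, str]],
                    texts: List[str],
                    batch_size: int = 50) -> List[str]:
    """Return one prompt per chunk of at most batch_size texts."""
    header = "\n\n".join(f"Text: {t}\nAnnotations: {a}" for t, a in shots)
    prompts = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        body = "\n".join(f"Text {j + 1}: {t}" for j, t in enumerate(batch))
        prompts.append(f"{header}\n\nAnnotate the following texts:\n{body}")
    return prompts
```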
Evaluating Large Language Models

Taxonomy of errors:
- These are only the most frequent errors!

And the winner is?
- ChatGPT and Claude 2 have similar performance.
Conclusion?

Self-scoring:
- Consecutive runs show huge differences.

And the winner is?
- ChatGPT and Claude 2 have similar performance.
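The instability of self-scoring can be quantified by repeating the same query over consecutive runs and summarising the spread of the returned scores. A minimal stdlib-only sketch; the score values in the usage example are placeholders, not measured results:

```python
# Summarise self-scores collected over consecutive runs of the same query.
# A large range/stdev signals the run-to-run instability noted above.

from statistics import mean, stdev

def score_spread(runs):
    """Return mean, standard deviation, and range of per-run self-scores."""
    return {
        "mean": mean(runs),
        "stdev": stdev(runs),
        "range": max(runs) - min(runs),
    }
```

For example, `score_spread([9.0, 4.0, 7.0])` reports a range of 5 points on a 0-10 scale, i.e. the model's confidence in the same triple varies wildly between runs.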
Acknowledgments

PROJECTS
- DWBI Vienna - Vienna Science and Technology Fund (WWTF) [10.47379/ICT20096]
- SDG-HUB - FFG (GA No. 892212)

CONTACT
adrian.brasoveanu@modul.ac.at

THANK YOU!