1. Applications of Large Language Models in Materials Discovery and Design
Anubhav Jain
Lawrence Berkeley National Laboratory
MRS Fall meeting, Nov 2023
Slides (already) posted to hackingmaterials.lbl.gov
3. Today is the 1-year birthday of ChatGPT!
To celebrate the occasion, I used ChatGPT to generate an image of a birthday cake for itself.
The results tell you a lot of what you need to know about the current state of these kinds of models.
5. Prior to LLMs, we trained custom models to perform simple NLP tasks and did just “OK”
• A little over a year ago, even simple tasks like labeling words into categories (named entity recognition, “NER”) required custom models
• The models took time to develop and train
• For example, we tried a custom BERT model that took 1 month to train on 8 NVIDIA V100 GPUs… and got only slightly better performance than simpler models
Weston, L.; Tshitoyan, V.; Dagdelen, J.; Kononova, O.; Trewartha, A.; Persson, K. A.; Ceder, G.; Jain, A. Named Entity Recognition and Normalization Applied to Large-Scale Information Extraction from the Materials Science Literature. J. Chem. Inf. Model. 2019.
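The NER task described above can be sketched in a few lines. The sentence, the BIO tag scheme, and the category names (MAT, APL) below are illustrative assumptions, not the exact label set from the cited work:

```python
# Toy sketch of the NER task: label tokens in a materials-science
# sentence with entity categories, then collapse the BIO tags back
# into entity spans. Labels here are illustrative only.

# Hand-labeled example in the (token, tag) format typically used to
# train token-classification models such as BERT.
labeled = [
    ("Mn-doped", "B-MAT"),
    ("ZnO", "I-MAT"),
    ("thin", "O"),
    ("films", "O"),
    ("show", "O"),
    ("enhanced", "O"),
    ("photocatalytic", "B-APL"),
    ("activity", "I-APL"),
]

def extract_entities(tokens):
    """Collapse BIO tags into (entity_text, category) spans."""
    entities, current, cat = [], [], None
    for tok, tag in tokens:
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), cat))
            current, cat = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append((" ".join(current), cat))
            current, cat = [], None
    if current:
        entities.append((" ".join(current), cat))
    return entities

print(extract_entities(labeled))
# [('Mn-doped ZnO', 'MAT'), ('photocatalytic activity', 'APL')]
```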
6. The NER was also just the first step to more complex data extraction
[Figure: pipeline with per-stage accuracy: NER, about 80%–90% accuracy achieved; next stage, ~60% accuracy (based on our internal testing); final stage, accuracy unclear as good test sets are unavailable, maybe 70%?]
“Structured information extraction from complex scientific text with fine-tuned large language models”, in review, https://arxiv.org/abs/2212.05238
7. Things are much easier today…
• We no longer design the LLMs ourselves
• Training / fine-tuning is done via an API
• We mainly focus on domain-specific labeling and labeling efficiency…
• Others use “zero-shot” LLMs, so they don’t even need to label or fine-tune!
“Structured information extraction from complex scientific text with fine-tuned large language models”, in review, https://arxiv.org/abs/2212.05238
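The zero-shot route mentioned above can be sketched as follows. The JSON schema, the prompt wording, and the mocked model reply are all hypothetical; any chat-completion API could stand in for the actual model call:

```python
import json

# Minimal sketch of zero-shot structured extraction: instead of training
# a custom model, prompt a hosted LLM for JSON and parse its reply.
# The schema and reply below are invented for illustration; no API call
# is made here.

SCHEMA = {"host_material": "string", "dopants": "list of strings"}

def build_prompt(abstract: str) -> str:
    """Assemble an extraction prompt for a chat-completion API."""
    return (
        "Extract doping information from the abstract below as JSON "
        f"matching this schema: {json.dumps(SCHEMA)}\n\n"
        f"Abstract: {abstract}"
    )

def parse_response(text: str) -> dict:
    """Parse the model's JSON reply, tolerating surrounding prose."""
    start, end = text.find("{"), text.rfind("}") + 1
    return json.loads(text[start:end])

# Example with a mocked model reply:
reply = 'Sure! {"host_material": "TiO2", "dopants": ["N", "Fe"]}'
print(parse_response(reply))
# {'host_material': 'TiO2', 'dopants': ['N', 'Fe']}
```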
8. This means we can focus on applications! E.g., doping
• Doping is difficult to calculate, and there is no large doping database
• It is therefore a good application for NLP data extraction
9. Mapping the doping in specific materials
[Figure: host–dopant maps for Mn-doped (52 mentions), Cr-doped (83 mentions), N-doped (46 mentions), and Fe-doped (80 mentions) materials]
• Based on parsing ~350,000 scientific abstracts
• The final data set contains >200,000 host–dopant links, with an F1 score of ~0.8
• Using the data set, we can look up the doping data for any material composition, along with applications tied to that specific dopant
10. Predicting dopants
Given partial information about a material’s dopants, we can predict what other dopants may be likely using collaborative filtering.
[Figure: bar chart of Lu2O3 dopant counts by element, in decreasing frequency: Eu, Yb, Er, Tm, …]
Training: the algorithm sees the known dopants, with n = 3 (three masked solutions).
Prediction: with k = 5 (5 guesses allowed), the recommendations in decreasing strength are Sr, Y, Eu, Ni, Yb. The 3rd and 5th predictions are correct; the 1st, 2nd, and 4th are wrong. That recovers 2 of 3 solutions (66%) within k = 5 guesses.
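As a rough illustration of the collaborative-filtering idea, here is a toy item-recommendation sketch. The hosts, dopant sets, and similarity measure (Jaccard) are illustrative assumptions, not the actual model or data set from the talk:

```python
# Toy collaborative filtering for dopant prediction: represent each host
# by its set of reported dopants, and score candidate dopants for a
# target host by the target's similarity to other hosts. All data below
# are invented for illustration.

host_dopants = {
    "Lu2O3": {"Eu", "Yb", "Er"},
    "Y2O3": {"Eu", "Yb", "Er", "Tm"},
    "ZnO": {"Mn", "Co", "Al"},
    "Gd2O3": {"Eu", "Er", "Tm"},
}

def jaccard(a, b):
    """Set-overlap similarity between two dopant sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target, known, k=5):
    """Rank dopants not yet known for `target` by host-host similarity."""
    scores = {}
    for host, dopants in host_dopants.items():
        if host == target:
            continue
        sim = jaccard(known, dopants)
        if sim == 0:
            continue
        for d in dopants - known:
            scores[d] = scores.get(d, 0.0) + sim
    # Rank by score; break ties alphabetically for reproducibility.
    return sorted(scores, key=lambda d: (-scores[d], d))[:k]

# Given that Lu2O3 is known to take Eu and Yb, suggest further dopants:
print(recommend("Lu2O3", {"Eu", "Yb"}))
# ['Er', 'Tm']
```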
11. Model does OK – although there is room for improvement
If you mask the 3 top known dopants and try to re-predict them in 5 guesses, you recover ~35% of them (about 1 of 3), across data for >2,000 hosts.
We will share the full data set with the community so they can also try to make models.
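The evaluation described above is a recall-at-k measurement, which can be sketched as follows (the masked dopants and guesses below reuse the illustrative example from the previous slide):

```python
# Sketch of the benchmark: mask the top-n known dopants for a host,
# take the model's first k guesses, and measure the fraction of masked
# dopants recovered (recall@k). Data here are illustrative.

def recall_at_k(masked, guesses, k=5):
    """Fraction of masked items recovered within the first k guesses."""
    return len(set(masked) & set(guesses[:k])) / len(masked)

masked = ["Eu", "Yb", "Er"]          # 3 masked dopants
guesses = ["Sr", "Y", "Eu", "Ni", "Yb"]  # model's 5 guesses
print(round(recall_at_k(masked, guesses), 2))
# 0.67  (2 of 3 recovered)
```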
12. Thoughts on the future - RAG
• Previously, ChatGPT tried to answer all questions “from memory”
• This led to hallucination and other issues
• Now, ChatGPT can search the web to answer questions (retrieval augmented generation, or RAG)
• One could also search code documentation, user manuals, long reports, journal articles, etc. to produce answers
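A minimal sketch of the RAG pattern, assuming a toy keyword retriever in place of the embedding search real systems typically use; the handbook snippets are invented examples, and the assembled prompt would be sent to any LLM API:

```python
# RAG in miniature: retrieve the most relevant passage(s) for a
# question, then prepend them to the prompt so the model answers from
# the retrieved text rather than "from memory".

documents = [
    "Group meetings are held every Tuesday at 10am in room 33-204.",
    "New members should request HPC accounts during their first week.",
    "Paper drafts must be circulated two weeks before submission.",
]

def retrieve(question, docs, top_k=1):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(question, docs):
    """Assemble the grounded prompt to send to the LLM."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("When are group meetings held?", documents))
```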
13. Example – turning our group handbook into a Q&A tool in ~1 hour using GPT Apps
Too much reading for most people… so many words!
14. Training GPT (via conversation) to deliver information from the handbook via Q&A
16. How will this change materials science in the next few years?
• One change will be a transformation of user interfaces
• Materials databases will be natively integrated with LLM interfaces
• APIs will be easier to use, since LLMs will help translate human intent into API calls
“Show me materials from Materials Project that contain Ca, have a band gap >1.2 eV, and have a bulk modulus >100 GPa.”
“Also include materials from OQMD, JARVIS, and any other materials databases you are aware of.”
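The intent-to-API translation can be sketched as below. The filter dict an LLM might emit and the field names (`elements`, `band_gap_min`, `bulk_modulus_min`) are hypothetical, not the actual Materials Project API parameters:

```python
# Sketch of "human intent -> API call": an LLM turns the user's sentence
# into a structured filter dict (mocked here), which is then rendered
# into a query string. Field names are illustrative only.

def intent_to_query(llm_output: dict) -> str:
    """Turn an LLM-produced filter dict into a database query string."""
    parts = []
    if "elements" in llm_output:
        parts.append("elements=" + ",".join(llm_output["elements"]))
    if "band_gap_min" in llm_output:
        parts.append(f"band_gap>={llm_output['band_gap_min']}")
    if "bulk_modulus_min" in llm_output:
        parts.append(f"bulk_modulus>={llm_output['bulk_modulus_min']}")
    return "&".join(parts)

# What an LLM might emit for the first example prompt on this slide:
llm_output = {"elements": ["Ca"], "band_gap_min": 1.2, "bulk_modulus_min": 100}
print(intent_to_query(llm_output))
# elements=Ca&band_gap>=1.2&bulk_modulus>=100
```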
17. Acknowledgements
• Alex Dunn
• John Dagdelen
• Nick Walker
• Sanghoon Lee
• Amalie Trewartha
• Leigh Weston
• Kristin Persson
• Gerbrand Ceder
Funded by the Toyota Research Institute and the DOE-BES Materials Project program