This document covers pretraining models for natural language processing: pretraining decoders as language models, pretraining encoders with a masked language modeling objective, and pretraining encoder-decoder architectures. It also covers how pretrained models are finetuned on downstream tasks to improve performance.
2. Lecture Plan
1. A brief note on subword modeling
2. Motivating model pretraining from word embeddings
3. Model pretraining three ways
1. Decoders
2. Encoders
3. Encoder-Decoders
4. Interlude: what do we think pretraining is teaching?
5. Very large models and in-context learning
Reminders:
Assignment 5 is out today! It covers lecture 9 (Tuesday) and lecture 10 (Today)!
It has ~pedagogically relevant math~ so get started!
3. Word structure and subword models
Let’s take a look at the assumptions we’ve made about a language’s vocabulary.
We assume a fixed vocab of tens of thousands of words, built from the training set.
All novel words seen at test time are mapped to a single UNK.
word → vocab mapping → embedding
Common words:
  hat → pizza (index)
  learn → tasty (index)
Variations, misspellings, novel items:
  taaaaasty → UNK (index)
  laern → UNK (index)
  Transformerify → UNK (index)
4. Word structure and subword models
Finite vocabulary assumptions make even less sense in many languages.
• Many languages exhibit complex morphology, or word structure.
• The effect is more word types, each occurring fewer times.
Example: Swahili verbs can have hundreds of conjugations, each encoding a wide variety of information (tense, mood, definiteness, negation, information about the object, ++). Here's a small fraction of the conjugations for ambia – to tell. [Wiktionary]
5. The byte-pair encoding algorithm
Subword modeling in NLP encompasses a wide range of methods for reasoning about structure below the word level. (Parts of words, characters, bytes.)
• The dominant modern paradigm is to learn a vocabulary of parts of words (subword tokens).
• At training and testing time, each word is split into a sequence of known subwords.
Byte-pair encoding is a simple, effective strategy for defining a subword vocabulary.
1. Start with a vocabulary containing only characters and an “end-of-word” symbol.
2. Using a corpus of text, find the most common adjacent characters “a,b”; add “ab” as a subword.
3. Replace instances of the character pair with the new subword; repeat until desired vocab size.
Originally used in NLP for machine translation; now a similar method (WordPiece) is used in pretrained models. (A minimal sketch of the merge loop follows below.)
[Sennrich et al., 2016; Wu et al., 2016]
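To make the merge loop concrete, here is a minimal, illustrative Python sketch of BPE vocabulary learning. It is a toy version, not the exact Sennrich et al. implementation; the example corpus and the `</w>` end-of-word marker are assumptions for illustration.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Toy BPE: repeatedly merge the most frequent adjacent symbol pair."""
    # 1. Start with characters plus an end-of-word symbol.
    corpus = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        # 2. Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in corpus.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # the most common adjacent pair
        merges.append(best)
        # 3. Replace instances of the pair with the new merged subword.
        new_corpus = Counter()
        for symbols, freq in corpus.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_corpus[tuple(out)] += freq
        corpus = new_corpus
    return merges

merges = learn_bpe(["low", "low", "lower", "newest", "newest", "widest"], 5)
print(merges)  # e.g. [('e', 's'), ('es', 't'), ('est', '</w>'), ...]
```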
6. Word structure and subword models
Common words end up being a part of the subword vocabulary, while rarer words are split into (sometimes intuitive, sometimes not) components. In the worst case, words are split into as many subwords as they have characters.
word → vocab mapping → embedding
Common words:
  hat → hat
  learn → learn
Variations, misspellings, novel items:
  taaaaasty → taa## aaa## sty
  laern → la## ern##
  Transformerify → Transformer## ify
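As a companion to the learning sketch above (reusing its `merges`), here is an illustrative way to split a word at training/testing time by replaying the learned merges in order. Real tokenizers are more careful; treat this as a sketch.

```python
def segment(word, merges):
    """Split a word into subwords by replaying BPE merges in learned order."""
    symbols = list(word) + ["</w>"]
    for a, b in merges:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols

print(segment("newest", merges))     # a seen word merges into larger pieces
print(segment("taaaaasty", merges))  # an unseen word falls back to small
                                     # pieces, in the worst case characters
```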
7. Outline
1. A brief note on subword modeling
2. Motivating model pretraining from word embeddings
3. Model pretraining three ways
1. Decoders
2. Encoders
3. Encoder-Decoders
4. Interlude: what do we think pretraining is teaching?
5. Very large models and in-context learning
8. Motivating word meaning and context
Recall the adage we mentioned at the beginning of the course:
“You shall know a word by the company it keeps” (J. R. Firth 1957: 11)
This quote is a summary of distributional semantics, and motivated word2vec. But:
“… the complete meaning of a word is always contextual, and no study of meaning apart from a complete context can be taken seriously.” (J. R. Firth 1935)
Consider I record the record: the two instances of record mean different things.
[Thanks to Yoav Goldberg on Twitter for pointing out the 1935 Firth quote.]
9. Where we were: pretrained word embeddings
Circa 2017:
• Start with pretrained word embeddings (no context!)
• Learn how to incorporate context in an LSTM or Transformer while training on the task.
Some issues to think about:
• The training data we have for our downstream task (like question answering) must be sufficient to teach all contextual aspects of language.
• Most of the parameters in our network are randomly initialized!
[Figure: a model predicts ŷ for “… the movie was …”; only the word embeddings are pretrained, the layers above are not. Recall, movie gets the same word embedding, no matter what sentence it shows up in.]
10. Where we’re going: pretraining whole models
In modern NLP:
• All (or almost all) parameters in NLP networks are initialized via pretraining.
• Pretraining methods hide parts of the input from the model, and train the model to reconstruct those parts.
• This has been exceptionally effective at building strong:
  • representations of language
  • parameter initializations for strong NLP models
  • probability distributions over language that we can sample from
[Figure: the same model predicts ŷ for “… the movie was …”, now pretrained jointly. This model has learned how to represent entire sentences through pretraining.]
11. What can we learn from reconstructing the input?
Stanford University is located in __________, California.
12. What can we learn from reconstructing the input?
I put ___ fork down on the table.
13. What can we learn from reconstructing the input?
The woman walked across the street, checking for traffic over ___ shoulder.
14. What can we learn from reconstructing the input?
I went to the ocean to see the fish, turtles, seals, and _____.
15. What can we learn from reconstructing the input?
Overall, the value I got from the two hours watching it was the sum total of the popcorn and the drink. The movie was ___.
16. What can we learn from reconstructing the input?
Iroh went into the kitchen to make some tea. Standing next to Iroh, Zuko pondered his destiny. Zuko left the ______.
17. What can we learn from reconstructing the input?
I was thinking about the sequence that goes 1, 1, 2, 3, 5, 8, 13, 21, ____
18. The Transformer Encoder-Decoder [Vaswani et al., 2017]
Looking back at the whole model, zooming in on an Encoder block:
[Figure: word embeddings plus position representations feed a stack of Transformer Encoder blocks over the input sequence; a stack of Transformer Decoder blocks over the output sequence attends to the encoder states and produces predictions.]
19. The Transformer Encoder-Decoder [Vaswani et al., 2017]
Looking back at the whole model, zooming in on an Encoder block:
[Figure: as above, with the Encoder block expanded: Multi-Head Attention → Residual + LayerNorm → Feed-Forward → Residual + LayerNorm.]
20. The Transformer Encoder-Decoder [Vaswani et al., 2017]
Looking back at the whole model, zooming in on a Decoder block:
[Figure: the Decoder block expanded: Masked Multi-Head Self-Attention → Residual + LayerNorm → Multi-Head Cross-Attention → Residual + LayerNorm → Feed-Forward → Residual + LayerNorm.]
21. The Transformer Encoder-Decoder [Vaswani et al., 2017]
The only new part is attention from decoder to encoder. Like we saw last week!
[Figure: the same diagram, highlighting the Multi-Head Cross-Attention from the decoder to the encoder states.]
22. Pretraining through language modeling [Dai and Le, 2015]
Recall the language modeling task:
• Model $p_\theta(w_t \mid w_{1:t-1})$, the probability distribution over words given their past contexts.
• There's lots of data for this! (In English.)
Pretraining through language modeling:
• Train a neural network to perform language modeling on a large amount of text.
• Save the network parameters.
[Figure: a Decoder (Transformer, LSTM, ++) reads “Iroh goes to make tasty tea” and predicts “goes to make tasty tea END”.]
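As a concrete illustration of the objective, here is a hedged PyTorch sketch of the language modeling loss. The “decoder” is a bigram-style stand-in (embedding plus linear), not a real Transformer or LSTM; sizes are made up for the example.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32
# Stand-in "decoder": embeds each token and predicts the next one from it.
decoder = nn.Sequential(nn.Embedding(vocab_size, d_model),
                        nn.Linear(d_model, vocab_size))

tokens = torch.randint(0, vocab_size, (1, 8))  # e.g. "Iroh goes to make ..."
logits = decoder(tokens[:, :-1])               # predict each next token
loss = nn.functional.cross_entropy(            # -log p(w_t | past context)
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward()
# After pretraining on lots of text, save decoder.state_dict() for later.
```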
23. The Pretraining / Finetuning Paradigm
Pretraining can improve NLP applications by serving as parameter initialization.
Step 1: Pretrain (on language modeling). Lots of text; learn general things!
[Figure: a Decoder (Transformer, LSTM, ++) reads “Iroh goes to make tasty tea” and predicts “goes to make tasty tea END”.]
Step 2: Finetune (on your task). Not many labels; adapt to the task!
[Figure: the same decoder reads “… the movie was …” and predicts a task label ☺/☹.]
24. Stochastic gradient descent and pretrain/finetune
Why should pretraining and finetuning help, from a “training neural nets” perspective?
• Pretraining provides parameters $\hat\theta$ by approximating $\min_\theta \mathcal{L}_{\text{pretrain}}(\theta)$. (The pretraining loss.)
• Then, finetuning approximates $\min_\theta \mathcal{L}_{\text{finetune}}(\theta)$, starting at $\hat\theta$. (The finetuning loss.)
• The pretraining may matter because stochastic gradient descent sticks (relatively) close to $\hat\theta$ during finetuning.
• So, maybe the finetuning local minima near $\hat\theta$ tend to generalize well!
• And/or, maybe the gradients of the finetuning loss near $\hat\theta$ propagate nicely!
(A toy numerical sketch of this two-stage process follows below.)
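A toy numerical illustration of this intuition, with made-up one-dimensional losses: finetuning starts SGD at the pretrained $\hat\theta$ and, after a few steps, stays relatively close to it.

```python
import torch

theta = torch.zeros(1, requires_grad=True)
L_pretrain = lambda t: (t - 3.0) ** 2  # made-up pretraining loss
L_finetune = lambda t: (t - 3.5) ** 2  # made-up finetuning loss, nearby minimum

opt = torch.optim.SGD([theta], lr=0.1)
for _ in range(100):                    # Step 1: pretrain
    opt.zero_grad()
    L_pretrain(theta).backward()
    opt.step()
theta_hat = theta.detach().clone()      # the pretrained parameters

for _ in range(5):                      # Step 2: a few finetuning steps
    opt.zero_grad()
    L_finetune(theta).backward()
    opt.step()

print(theta_hat.item(), theta.item())   # theta stays close to theta_hat
```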
25. Lecture Plan
1. A brief note on subword modeling
2. Motivating model pretraining from word embeddings
3. Model pretraining three ways
1. Decoders
2. Encoders
3. Encoder-Decoders
4. Interlude: what do we think pretraining is teaching?
5. Very large models and in-context learning
26. Pretraining for three types of architectures
The neural architecture influences the type of pretraining, and natural use cases.
Decoders
• Language models! What we've seen so far.
• Nice to generate from; can't condition on future words
Encoders
• Gets bidirectional context – can condition on future!
• Wait, how do we pretrain them?
Encoder-Decoders
• Good parts of decoders and encoders?
• What's the best way to pretrain them?
27. Pretraining for three types of architectures
The neural architecture influences the type of pretraining, and natural use cases.
Decoders
• Language models! What we've seen so far.
• Nice to generate from; can't condition on future words
Encoders
• Gets bidirectional context – can condition on future!
• Wait, how do we pretrain them?
Encoder-Decoders
• Good parts of decoders and encoders?
• What's the best way to pretrain them?
28. Pretraining decoders
When using language model pretrained decoders, we can ignore that they were trained to model $p(w_t \mid w_{1:t-1})$.
We can finetune them by training a classifier on the last word's hidden state:
$h_1, \dots, h_T = \text{Decoder}(w_1, \dots, w_T)$
$y \sim A h_T + b$
where $A$ and $b$ are randomly initialized and specified by the downstream task. Gradients backpropagate through the whole network.
[Figure: the decoder reads $w_1, \dots, w_T$; a linear layer $A, b$ on the last hidden state predicts a label ☺/☹. Note how the linear layer hasn't been pretrained and must be learned from scratch.]
(A minimal sketch follows below.)
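Here is a hedged PyTorch sketch of that setup: the classifier head $A, b$ is new and randomly initialized, while the decoder underneath carries the pretrained weights. The `toy_decoder` below is just a stand-in so the example runs.

```python
import torch
import torch.nn as nn

class DecoderClassifier(nn.Module):
    def __init__(self, pretrained_decoder, d_model, num_classes):
        super().__init__()
        self.decoder = pretrained_decoder            # pretrained weights
        self.head = nn.Linear(d_model, num_classes)  # A, b: from scratch

    def forward(self, tokens):
        h = self.decoder(tokens)    # h_1, ..., h_T  with shape (batch, T, d)
        return self.head(h[:, -1])  # y ~ A h_T + b  (last hidden state)

toy_decoder = nn.Embedding(100, 32)  # stand-in for a pretrained decoder
model = DecoderClassifier(toy_decoder, d_model=32, num_classes=2)
logits = model(torch.randint(0, 100, (1, 8)))
logits.sum().backward()              # gradients reach the whole network
```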
29. Pretraining decoders
It's natural to pretrain decoders as language models and then use them as generators, finetuning their $p_\theta(w_t \mid w_{1:t-1})$!
This is helpful in tasks where the output is a sequence with a vocabulary like that at pretraining time!
• Dialogue (context = dialogue history)
• Summarization (context = document)
$h_1, \dots, h_T = \text{Decoder}(w_1, \dots, w_T)$
$w_t \sim A h_{t-1} + b$
where $A, b$ were pretrained in the language model!
[Figure: the decoder reads $w_1, \dots, w_5$ and generates $w_2, \dots, w_6$ through the pretrained linear layer $A, b$. Note how the linear layer has been pretrained.]
30. Generative Pretrained Transformer (GPT) [Radford et al., 2018]
2018’s GPT was a big success in pretraining a decoder!
• Transformer decoder with 12 layers.
• 768-dimensional hidden states, 3072-dimensional feed-forward hidden layers.
• Byte-pair encoding with 40,000 merges
• Trained on BooksCorpus: over 7000 unique books.
• Contains long spans of contiguous text, for learning long-distance dependencies.
• The acronym “GPT” never showed up in the original paper; it could stand for “Generative PreTraining” or “Generative Pretrained Transformer”
[Devlin et al., 2018]
31. Generative Pretrained Transformer (GPT) [Radford et al., 2018]
How do we format inputs to our decoder for finetuning tasks?
Natural Language Inference: Label pairs of sentences as entailing/contradictory/neutral
Premise: The man is in the doorway
Hypothesis: The person is near the door
Radford et al., 2018 evaluate on natural language inference.
Here’s roughly how the input was formatted, as a sequence of tokens for the decoder.
[START] The man is in the doorway [DELIM] The person is near the door [EXTRACT]
The linear classifier is applied to the representation of the [EXTRACT] token, predicting a label such as entailment.
33. Increasingly convincing generations (GPT-2) [Radford et al., 2018]
We mentioned how pretrained decoders can be used in their capacities as language models. GPT-2, a larger version of GPT trained on more data, was shown to produce relatively convincing samples of natural language.
34. Pretraining for three types of architectures
The neural architecture influences the type of pretraining, and natural use cases.
Decoders
• Language models! What we've seen so far.
• Nice to generate from; can't condition on future words
Encoders
• Gets bidirectional context – can condition on future!
• Wait, how do we pretrain them?
Encoder-Decoders
• Good parts of decoders and encoders?
• What's the best way to pretrain them?
35. Pretraining encoders: what pretraining objective to use?
So far, we've looked at language model pretraining. But encoders get bidirectional context, so we can't do language modeling!
Idea: replace some fraction of words in the input with a special [MASK] token; predict these words.
$h_1, \dots, h_T = \text{Encoder}(w_1, \dots, w_T)$
$y_i \sim A h_i + b$
Only add loss terms from words that are “masked out.” If $\tilde{x}$ is the masked version of $x$, we're learning $p_\theta(x \mid \tilde{x})$. Called Masked LM.
[Figure: the encoder reads “I [M] to the [M]” and, through a linear layer $A, b$, predicts the masked words “went” and “store”.]
[Devlin et al., 2018]
(A minimal sketch follows below.)
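A minimal sketch of the Masked LM loss, assuming a toy per-token “encoder” (a real encoder would use bidirectional attention) and a hypothetical `MASK_ID`; it assumes at least one position gets masked.

```python
import torch
import torch.nn as nn

vocab_size, d_model, MASK_ID = 100, 32, 0
encoder = nn.Sequential(nn.Embedding(vocab_size, d_model),
                        nn.Linear(d_model, vocab_size))  # stand-in encoder

x = torch.randint(1, vocab_size, (1, 10))  # the original tokens
mask = torch.rand(x.shape) < 0.15          # choose ~15% of positions
x_tilde = x.masked_fill(mask, MASK_ID)     # the masked version of x

logits = encoder(x_tilde)
targets = x.masked_fill(~mask, -100)       # loss only at masked positions
loss = nn.functional.cross_entropy(        # learning p(x | x~)
    logits.reshape(-1, vocab_size), targets.reshape(-1), ignore_index=-100)
```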
36. BERT: Bidirectional Encoder Representations from Transformers
Devlin et al., 2018 proposed the “Masked LM” objective and released the weights of a pretrained Transformer, a model they labeled BERT.
Some more details about Masked LM for BERT:
• Predict a random 15% of (sub)word tokens.
  • Replace input word with [MASK] 80% of the time
  • Replace input word with a random token 10% of the time
  • Leave input word unchanged 10% of the time (but still predict it!)
• Why? Doesn't let the model get complacent and not build strong representations of non-masked words. (No masks are seen at fine-tuning time!)
[Figure: the Transformer Encoder reads “I pizza to the [M]” – went was replaced by the random token pizza, to was left unchanged, store was masked – and must predict “went”, “to”, and “store” at those positions.]
[Devlin et al., 2018]
(A sketch of the 80/10/10 corruption rule follows below.)
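A sketch of that 80/10/10 corruption rule applied to the tokens already selected for prediction; `mask_id` and the inputs are illustrative.

```python
import torch

def corrupt(selected_ids, vocab_size, mask_id):
    """BERT-style corruption of tokens chosen for prediction."""
    r = torch.rand(selected_ids.shape)
    out = selected_ids.clone()
    out[r < 0.8] = mask_id                         # 80%: replace with [MASK]
    rand = (r >= 0.8) & (r < 0.9)                  # 10%: random token
    out[rand] = torch.randint(0, vocab_size, (int(rand.sum()),))
    return out                                     # remaining 10%: unchanged

print(corrupt(torch.tensor([7, 21, 42, 9]), vocab_size=100, mask_id=103))
```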
37. BERT: Bidirectional Encoder Representations from Transformers
• The pretraining input to BERT was two separate contiguous chunks of text.
• BERT was trained to predict whether one chunk follows the other or is randomly sampled.
• Later work has argued this “next sentence prediction” is not necessary.
[Devlin et al., 2018; Liu et al., 2019]
38. BERT: Bidirectional Encoder Representations from Transformers
Details about BERT
• Two models were released:
• BERT-base: 12 layers, 768-dim hidden states, 12 attention heads, 110 million params.
• BERT-large: 24 layers, 1024-dim hidden states, 16 attention heads, 340 million params.
• Trained on:
• BooksCorpus (800 million words)
• English Wikipedia (2,500 million words)
• Pretraining is expensive and impractical on a single GPU.
• BERT was pretrained with 64 TPU chips for a total of 4 days.
• (TPUs are special tensor operation acceleration hardware)
• Finetuning is practical and common on a single GPU
• “Pretrain once, finetune many times.”
[Devlin et al., 2018]
39. BERT: Bidirectional Encoder Representations from Transformers
BERT was massively popular and hugely versatile; finetuning BERT led to new state-of-the-art results on a broad range of tasks.
• QQP: Quora Question Pairs (detect paraphrase questions)
• QNLI: natural language inference over question answering data
• SST-2: sentiment analysis
• CoLA: corpus of linguistic acceptability (detect whether sentences are grammatical)
• STS-B: semantic textual similarity
• MRPC: Microsoft paraphrase corpus
• RTE: a small natural language inference corpus
[Devlin et al., 2018]
40. Limitations of pretrained encoders
Those results looked great! Why not use pretrained encoders for everything?
If your task involves generating sequences, consider using a pretrained decoder; BERT and other pretrained encoders don't naturally lead to nice autoregressive (1-word-at-a-time) generation methods.
[Figure: a Pretrained Encoder fills in “Iroh goes to [MASK] tasty tea” with make/brew/craft, while a Pretrained Decoder reads “Iroh goes to make tasty tea” and generates “goes to make tasty tea END”.]
41. Extensions of BERT
You’ll see a lot of BERT variants like RoBERTa, SpanBERT, +++
Some generally accepted improvements to the BERT pretraining formula:
• RoBERTa: mainly just train BERT for longer and remove next sentence prediction!
• SpanBERT: masking contiguous spans of words makes a harder, more useful pretraining task
[Liu et al., 2019; Joshi et al., 2020]
[Figure: BERT masks individual subwords of “It's irr## esi## sti## bly good”, while SpanBERT masks the whole contiguous span “irr## esi## sti## bly”.]
42. Extensions of BERT
A takeaway from the RoBERTa paper: more compute and more data can improve pretraining even when the underlying Transformer encoder is unchanged.
[Liu et al., 2019; Joshi et al., 2020]
43. Pretraining for three types of architectures
The neural architecture influences the type of pretraining, and natural use cases.
Decoders
• Language models! What we've seen so far.
• Nice to generate from; can't condition on future words
Encoders
• Gets bidirectional context – can condition on future!
• Wait, how do we pretrain them?
Encoder-Decoders
• Good parts of decoders and encoders?
• What's the best way to pretrain them?
44. Pretraining encoder-decoders: what pretraining objective to use?
For encoder-decoders, we could do something like language modeling, but where a prefix of every input is provided to the encoder and is not predicted:
$h_1, \dots, h_T = \text{Encoder}(w_1, \dots, w_T)$
$h_{T+1}, \dots, h_{2T} = \text{Decoder}(w_{T+1}, \dots, w_{2T}, h_1, \dots, h_T)$
$y_i \sim A h_i + b,\ i > T$
The encoder portion benefits from bidirectional context; the decoder portion is used to train the whole model through language modeling.
[Raffel et al., 2018]
[Figure: the encoder reads the prefix $w_1, \dots, w_T$; the decoder reads $w_{T+1}, \dots, w_{2T}$ and predicts $w_{T+2}, \dots$.]
45. Pretraining encoder-decoders: what pretraining objective to use?
What Raffel et al., 2018 found to work best was span corruption. Their model: T5.
Replace different-length spans from the input with unique placeholders; decode out the spans that were removed!
This is implemented in text preprocessing: it's still an objective that looks like language modeling at the decoder side. (A toy sketch follows below.)
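A toy illustration of span corruption as pure text preprocessing. The sentinel tokens `<X0>`, `<X1>` and the hand-picked span indices are illustrative; T5 samples spans and uses its own sentinel vocabulary.

```python
def span_corrupt(tokens, spans):
    """Replace (start, end) spans with sentinels; target emits the spans."""
    inp, tgt, last = [], [], 0
    for k, (s, e) in enumerate(spans):
        inp += tokens[last:s] + [f"<X{k}>"]  # keep prefix, drop the span
        tgt += [f"<X{k}>"] + tokens[s:e]     # decoder must emit the span
        last = e
    inp += tokens[last:]
    return inp, tgt

tokens = "Thank you for inviting me to your party last week".split()
inp, tgt = span_corrupt(tokens, [(2, 4), (8, 9)])
print(" ".join(inp))  # Thank you <X0> me to your party <X1> week
print(" ".join(tgt))  # <X0> for inviting <X1> last
```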
46. Pretraining encoder-decoders: what pretraining objective to use?
Raffel et al., 2018 found encoder-decoders to work better than decoders for their tasks, and span corruption (denoising) to work better than language modeling.
47. Pretraining encoder-decoders: what pretraining objective to use?
A fascinating property of T5: it can be finetuned to answer a wide range of questions, retrieving knowledge from its parameters.
• NQ: Natural Questions
• WQ: WebQuestions
• TQA: Trivia QA
All “open-domain” versions.
[Raffel et al., 2018]
[Figure: open-domain QA results across T5 sizes: 220 million, 770 million, 3 billion, and 11 billion parameters.]
48. Outline
1. A brief note on subword modeling
2. Motivating model pretraining from word embeddings
3. Model pretraining three ways
1. Decoders
2. Encoders
3. Encoder-Decoders
4. Interlude: what do we think pretraining is teaching?
5. Very large models and in-context learning
49. What kinds of things does pretraining learn?
There's increasing evidence that pretrained models learn a wide variety of things about the statistical properties of language. Taking our examples from the start of class:
• Stanford University is located in __________, California. [Trivia]
• I put ___ fork down on the table. [syntax]
• The woman walked across the street, checking for traffic over ___ shoulder. [coreference]
• I went to the ocean to see the fish, turtles, seals, and _____. [lexical semantics/topic]
• Overall, the value I got from the two hours watching it was the sum total of the popcorn and the drink. The movie was ___. [sentiment]
• Iroh went into the kitchen to make some tea. Standing next to Iroh, Zuko pondered his destiny. Zuko left the ______. [some reasoning – this is harder]
• I was thinking about the sequence that goes 1, 1, 2, 3, 5, 8, 13, 21, ____ [some basic arithmetic; they don't learn the Fibonacci sequence]
• Models also learn – and can exacerbate – racism, sexism, and all manner of bad biases.
• More on all this in the interpretability lecture!
50. Outline
1. A brief note on subword modeling
2. Motivating model pretraining from word embeddings
3. Model pretraining three ways
1. Decoders
2. Encoders
3. Encoder-Decoders
4. Interlude: what do we think pretraining is teaching?
5. Very large models and in-context learning
51. GPT-3, In-context learning, and very large models
So far, we've interacted with pretrained models in two ways:
• Sample from the distributions they define (maybe providing a prompt)
• Fine-tune them on a task we care about, and take their predictions.
Very large language models seem to perform some kind of learning without gradient steps simply from examples you provide within their contexts.
GPT-3 is the canonical example of this. The largest T5 model had 11 billion parameters; GPT-3 has 175 billion parameters.
52. GPT-3, In-context learning, and very large models
Very large language models seem to perform some kind of learning without gradient steps simply from examples you provide within their contexts. The in-context examples seem to specify the task to be performed, and the conditional distribution mocks performing the task to a certain extent.
Input (prefix within a single Transformer decoder context):
  thanks -> merci
  hello -> bonjour
  mint -> menthe
  otter ->
Output (conditional generation):
  loutre
(A sketch of building such a prompt follows below.)
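A sketch of how such a prompt is assembled as a plain string; the commented `generate` call is a hypothetical stand-in for sampling from the model, not a real API.

```python
examples = [("thanks", "merci"), ("hello", "bonjour"), ("mint", "menthe")]
prompt = "".join(f"{src} -> {tgt}\n" for src, tgt in examples) + "otter ->"
print(prompt)
# completion = language_model.generate(prompt)  # hypothetical; expect "loutre"
```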
53. GPT-3, In-context learning, and very large models
Very large language models seem to perform some kind of learning without gradient steps simply from examples you provide within their contexts.
54. Parting remarks
These models are still not well-understood.
“Small” models like BERT have become general tools in a wide range of settings.
More on this in later lectures!
Assignment 5 is out today! It covers Tuesday's and today's lectures.