This document covers parsing and context-free grammars. It defines key concepts such as nonterminals, terminals, productions, derivations, ambiguity, left recursion, left factoring, LL(1) parsing, and computing FIRST sets. It also surveys parsing methods, including top-down parsing, bottom-up parsing, backtracking, predictive parsing, LR parsing, operator precedence parsing, and recursive descent parsing.
4. Role of parser
• The parser obtains a string of tokens from the lexical analyzer and reports a syntax error if one exists; otherwise it generates the parse tree.
• There are two types of parsers:
1. Top-down parser
2. Bottom-up parser
[Diagram: Source program → Lexical analyzer → Parser (exchanging "get next token" / token) → Parse tree → Rest of front end → IR; both the lexical analyzer and the parser consult the Symbol table.]
6. Context free grammar
• A context free grammar (CFG) is a 4-tuple 𝐺 = (𝑉, 𝛴, 𝑆, 𝑃) where,
𝑉 is a finite set of nonterminals,
𝛴 is a finite set of terminals, disjoint from 𝑉,
𝑆 ∈ 𝑉 is the start symbol,
𝑃 is a finite set of productions of the form 𝐴 → 𝛼, where 𝐴 ∈ 𝑉 and 𝛼 ∈ (𝑉 ∪ 𝛴)∗
Nonterminal symbol:
The name of a syntax category of a language, e.g., noun, verb, etc.
It is written as a single capital letter, or as a name enclosed between < … >, e.g., A or <Noun>
<Noun Phrase> → <Article><Noun>
<Article> → a | an | the
<Noun> → boy | apple
7. Context free grammar
Terminal symbol:
A symbol in the alphabet of the language.
Denoted by lowercase letters and the punctuation marks used in the language.
<Noun Phrase> → <Article><Noun>
<Article> → a | an | the
<Noun> → boy | apple
8. Context free grammar
Start symbol:
One nonterminal is designated as the start symbol; conventionally it is the nonterminal on the left-hand side of the first production of the grammar.
<Noun Phrase> → <Article><Noun>
<Article> → a | an | the
<Noun> → boy | apple
9. Context free grammar
Production:
A production, also called a rewriting rule, is a rule of the grammar. It has the form
nonterminal symbol → string of terminal and nonterminal symbols
<Noun Phrase> → <Article><Noun>
<Article> → a | an | the
<Noun> → boy | apple
10. Example: Grammar
Write the terminals, nonterminals, start symbol, and productions of the following grammar.
E → E O E | (E) | -E | id
O → + | - | * | / | ↑
Terminals: id + - * / ↑ ( )
Nonterminals: E, O
Start symbol: E
Productions: E → E O E | (E) | -E | id
O → + | - | * | / | ↑
12. Derivation
• A derivation is used to determine whether a string belongs to the language of a given grammar.
• Types of derivations:
1. Leftmost derivation
2. Rightmost derivation
13. Leftmost derivation
• A derivation of a string 𝑊 in a grammar 𝐺 is a leftmost derivation if at every step the leftmost nonterminal is replaced.
• Grammar: S → S+S | S-S | S*S | S/S | a Output string: a*a-a
Leftmost derivation:
S
⇒ S-S
⇒ S*S-S
⇒ a*S-S
⇒ a*a-S
⇒ a*a-a
[Parse tree: the root S expands to S - S; the left S expands to S * S; all remaining S's expand to a. The parse tree represents the structure of the derivation.]
14. Rightmost derivation
• A derivation of a string W in a grammar G is a rightmost derivation if at every step the rightmost nonterminal is replaced.
• It is also called canonical derivation.
• Grammar: S → S+S | S-S | S*S | S/S | a Output string: a*a-a
Rightmost derivation:
S
⇒ S*S
⇒ S*S-S
⇒ S*S-a
⇒ S*a-a
⇒ a*a-a
[Parse tree: the root S expands to S * S; the right S expands to S - S; all remaining S's expand to a.]
15. Exercise: Derivation
1. Perform leftmost derivation and draw the parse tree.
S → A1B
A → 0A | 𝜖
B → 0B | 1B | 𝜖
Output string: 1001
2. Perform leftmost derivation and draw the parse tree.
S → 0S1 | 01 Output string: 000111
3. Perform rightmost derivation and draw the parse tree.
E → E+E | E*E | id | (E) | -E
Output string: id + id * id
17. Ambiguity
• In a formal language grammar, ambiguity can arise if an identical string occurs on the RHS of two or more productions.
• Grammar:
N1 → α
N2 → α
• α can be derived from either N1 or N2, so when α appears it is not clear whether it should be replaced by N1 or N2.
18. Ambiguous grammar
• An ambiguous grammar is one that produces more than one leftmost or more than one rightmost derivation for the same sentence.
• Grammar: S → S+S | S*S | (S) | a Output string: a+a*a
Leftmost derivation 1:
S ⇒ S*S ⇒ S+S*S ⇒ a+S*S ⇒ a+a*S ⇒ a+a*a
Leftmost derivation 2:
S ⇒ S+S ⇒ a+S ⇒ a+S*S ⇒ a+a*S ⇒ a+a*a
• Here, two leftmost derivations for the string a+a*a are possible; hence, the above grammar is ambiguous.
[Parse trees: derivation 1 yields root S → S * S with the left S → S + S; derivation 2 yields root S → S + S with the right S → S * S.]
19. Exercise: Ambiguous Grammar
Check ambiguity in the following grammars:
1. S → aS | Sa | 𝜖 (output string: aaaa)
2. S → aSbS | bSaS | 𝜖 (output string: abab)
3. S → SS+ | SS* | a (output string: aa+a*)
4. <exp> → <exp> + <term> | <term>
<term> → <term> * <letter> | <letter>
<letter> → a|b|c|…|z (output string: a+b*c)
5. Prove that the CFG with productions S → a | Sa | bSS | SSb | SbS is ambiguous. (Hint: choose an output string yourself.)
21. Left recursion
• A grammar is said to be left recursive if it has a nonterminal 𝐴 such that there is a derivation 𝐴 ⇒+ 𝐴𝛼 for some string 𝛼.
Algorithm to eliminate left recursion
1. Arrange the nonterminals in some order 𝐴1, … , 𝐴𝑛
2. for 𝑖 := 1 to 𝑛 do begin
for 𝑗 := 1 to 𝑖 − 1 do begin
replace each production of the form 𝐴𝑖 → 𝐴𝑗𝛾
by the productions 𝐴𝑖 → 𝛿1𝛾 | 𝛿2𝛾 | … | 𝛿𝑘𝛾,
where 𝐴𝑗 → 𝛿1 | 𝛿2 | … | 𝛿𝑘 are all the current 𝐴𝑗 productions;
end
eliminate the immediate left recursion among the 𝐴𝑖 productions
end
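The "eliminate the immediate left recursion" step can be sketched in Python. This is an illustrative sketch, not code from the slides: alternatives are lists of symbols, "ε" stands for the empty string, and the function name is made up.

```python
EPSILON = "ε"

def eliminate_immediate_left_recursion(nt, alternatives):
    """Rewrite A -> A a1 | ... | A am | b1 | ... | bn as
       A  -> b1 A' | ... | bn A'
       A' -> a1 A' | ... | am A' | ε"""
    new_nt = nt + "'"
    # split alternatives into left-recursive tails (A alpha) and the rest (beta)
    recursive = [rhs[1:] for rhs in alternatives if rhs[:1] == [nt]]
    others = [rhs for rhs in alternatives if rhs[:1] != [nt]]
    if not recursive:                       # no immediate left recursion
        return {nt: alternatives}
    a_rules = [([] if beta == [EPSILON] else beta) + [new_nt] for beta in others]
    a_prime = [alpha + [new_nt] for alpha in recursive] + [[EPSILON]]
    return {nt: a_rules, new_nt: a_prime}
```

For example, E → E+T | T becomes E → TE′ and E′ → +TE′ | ε, matching the worked examples on the next slide.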
23. Examples: Left recursion elimination
E → E+T | T becomes:
E → TE’
E’ → +TE’ | ε
T → T*F | F becomes:
T → FT’
T’ → *FT’ | ε
X → X%Y | Z becomes:
X → ZX’
X’ → %YX’ | ε
24. Exercise: Left recursion
1. A → Abd | Aa | a
B → Be | b
2. A → AB | AC | a | b
3. S → A | B
A → ABC | Acd | a | aa
B → Bee | b
4. Exp → Exp+term | Exp-term | term
25. Left factoring
Left factoring is a grammar transformation that is useful for producing a grammar suitable for predictive parsing.
Algorithm to left factor a grammar
Input: Grammar G
Output: An equivalent left factored grammar.
Method:
For each nonterminal A, find the longest prefix 𝛼 common to two or more of its alternatives. If 𝛼 ≠ 𝜖, i.e., there is a non-trivial common prefix, replace all the A-productions 𝐴 → 𝛼𝛽1 | 𝛼𝛽2 | … | 𝛼𝛽𝑛 | 𝛾, where 𝛾 represents all alternatives that do not begin with 𝛼, by
𝐴 → 𝛼𝐴′ | 𝛾
𝐴′ → 𝛽1 | 𝛽2 | … | 𝛽𝑛
Here A′ is a new nonterminal. Repeatedly apply this transformation until no two alternatives for a nonterminal have a common prefix.
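One round of this transformation can be sketched in Python. An illustrative sketch with made-up names, using the same list-of-symbols convention as before; it handles a single nonterminal and would be applied repeatedly in a full implementation.

```python
from collections import defaultdict

EPSILON = "ε"

def longest_common_prefix(seqs):
    """Longest prefix of symbols shared by every sequence in seqs."""
    prefix = []
    for column in zip(*seqs):
        if all(sym == column[0] for sym in column):
            prefix.append(column[0])
        else:
            break
    return prefix

def left_factor_once(nt, alternatives):
    """One round of left factoring: group alternatives by leading symbol,
    pull the shared prefix alpha out into A -> alpha A'."""
    groups = defaultdict(list)
    for rhs in alternatives:
        groups[rhs[0]].append(rhs)
    for group in groups.values():
        if len(group) < 2:
            continue
        alpha = longest_common_prefix(group)
        new_nt = nt + "'"
        gamma = [rhs for rhs in alternatives if rhs not in group]
        # suffixes beta_i; an empty suffix becomes the ε-alternative of A'
        suffixes = [rhs[len(alpha):] or [EPSILON] for rhs in group]
        return {nt: [alpha + [new_nt]] + gamma, new_nt: suffixes}
    return {nt: alternatives}   # no common prefix: nothing to factor
```

For instance, the classic S → iEtS | iEtSeS | a factors into S → iEtSS′ | a with S′ → ε | eS.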
30. Parsing
• Parsing is a technique that takes an input string and produces either a parse tree, if the string is a valid sentence of the grammar, or an error message indicating that the string is not valid.
• Types of parsing:
1. Top-down parsing: the parser builds the parse tree from the root down to the leaves.
2. Bottom-up parsing: the parser starts from the leaves and works up to the root.
31. Classification of parsing methods
Parsing
• Top-down parsing
– Backtracking
– Parsing without backtracking (predictive parsing)
· Recursive descent
· LL(1)
• Bottom-up parsing (shift reduce)
– Operator precedence
– LR parsing
· SLR
· CLR
· LALR
32. Backtracking
• In backtracking, to expand a nonterminal symbol we choose one alternative; if any mismatch occurs, we backtrack and try another alternative.
• Grammar: S → cAd Input string: cad
A → ab | a
Steps:
1. Make prediction: expand S → cAd; c matches the first input symbol.
2. Make prediction: try A → ab; a matches, but b does not match the next input symbol d.
3. Backtrack: undo A → ab and try the alternative A → a; a matches, and then d matches. Parsing done.
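The three steps above can be sketched as a tiny backtracking parser for this grammar. An illustrative sketch, not code from the slides; the function name is made up, and the alternatives of A are tried in order with a failed prediction falling through to the next one.

```python
def parse_S(s):
    """S -> c A d, with A -> ab | a. Returns True iff s is derivable from S."""
    if not s.startswith("c"):
        return False
    for alt in ("ab", "a"):                 # predict A -> ab first
        if s[1:1 + len(alt)] == alt:
            if s[1 + len(alt):] == "d":     # the rest of the input must be exactly d
                return True
            # mismatch after this alternative: backtrack and try the next one
    return False
```

On input "cad" the first prediction A → ab fails, the parser backtracks, and A → a succeeds, exactly as in the trace above.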
34. Parsing Methods
Parsing
• Top-down parsing
– Backtracking
– Parsing without backtracking (predictive parsing)
· Recursive descent
· LL(1)
• Bottom-up parsing (shift reduce)
– Operator precedence
– LR parsing
· SLR
· CLR
· LALR
35. LL(1) parser (predictive parser)
• LL(1) is a non-recursive top-down parser.
1. The first L indicates that the input is scanned from left to right.
2. The second L means it uses a leftmost derivation for the input string.
3. The 1 means it uses one input symbol of lookahead to predict the parsing action.
[Diagram: input buffer a + b $, stack X Y Z $, and a predictive parsing program that consults parsing table M and produces the output.]
36. LL(1) parsing (predictive parsing)
Steps to construct LL(1) parser
1. Remove left recursion / Perform left factoring (if any).
2. Compute FIRST and FOLLOW of non terminals.
3. Construct predictive parsing table.
4. Parse the input string using parsing table.
37. Rules to compute FIRST of a nonterminal
1. If 𝐴 → 𝛼 and 𝛼 is a terminal, add 𝛼 to 𝐹𝐼𝑅𝑆𝑇(𝐴).
2. If 𝐴 → 𝜖, add 𝜖 to 𝐹𝐼𝑅𝑆𝑇(𝐴).
3. If 𝑋 is a nonterminal and 𝑋 → 𝑌1𝑌2 … 𝑌𝑘 is a production, then place 𝑎 in 𝐹𝐼𝑅𝑆𝑇(𝑋) if for some 𝑖, 𝑎 is in 𝐹𝐼𝑅𝑆𝑇(𝑌𝑖) and 𝜖 is in all of 𝐹𝐼𝑅𝑆𝑇(𝑌1), … , 𝐹𝐼𝑅𝑆𝑇(𝑌𝑖−1); that is, 𝑌1 … 𝑌𝑖−1 ⇒∗ 𝜖. If 𝜖 is in 𝐹𝐼𝑅𝑆𝑇(𝑌𝑗) for all 𝑗 = 1, 2, … , 𝑘, then add 𝜖 to 𝐹𝐼𝑅𝑆𝑇(𝑋).
Everything in 𝐹𝐼𝑅𝑆𝑇(𝑌1) is surely in 𝐹𝐼𝑅𝑆𝑇(𝑋). If 𝑌1 does not derive 𝜖, then we do nothing more to 𝐹𝐼𝑅𝑆𝑇(𝑋); but if 𝑌1 ⇒∗ 𝜖, then we add 𝐹𝐼𝑅𝑆𝑇(𝑌2), and so on.
38. Rules to compute FIRST of a nonterminal
Simplification of Rule 3
If 𝐴 → 𝑌1𝑌2 … 𝑌𝑘:
• If 𝑌1 does not derive 𝜖, then 𝐹𝐼𝑅𝑆𝑇(𝐴) = 𝐹𝐼𝑅𝑆𝑇(𝑌1)
• If 𝑌1 derives 𝜖, then 𝐹𝐼𝑅𝑆𝑇(𝐴) = (𝐹𝐼𝑅𝑆𝑇(𝑌1) − {𝜖}) ∪ 𝐹𝐼𝑅𝑆𝑇(𝑌2)
• If 𝑌1 and 𝑌2 derive 𝜖, then 𝐹𝐼𝑅𝑆𝑇(𝐴) = (𝐹𝐼𝑅𝑆𝑇(𝑌1) − {𝜖}) ∪ (𝐹𝐼𝑅𝑆𝑇(𝑌2) − {𝜖}) ∪ 𝐹𝐼𝑅𝑆𝑇(𝑌3)
• If 𝑌1, 𝑌2, and 𝑌3 derive 𝜖, then 𝐹𝐼𝑅𝑆𝑇(𝐴) = (𝐹𝐼𝑅𝑆𝑇(𝑌1) − {𝜖}) ∪ (𝐹𝐼𝑅𝑆𝑇(𝑌2) − {𝜖}) ∪ (𝐹𝐼𝑅𝑆𝑇(𝑌3) − {𝜖}) ∪ 𝐹𝐼𝑅𝑆𝑇(𝑌4)
• If 𝑌1, 𝑌2, …, 𝑌𝑘 all derive 𝜖, then 𝐹𝐼𝑅𝑆𝑇(𝐴) = (𝐹𝐼𝑅𝑆𝑇(𝑌1) − {𝜖}) ∪ (𝐹𝐼𝑅𝑆𝑇(𝑌2) − {𝜖}) ∪ … ∪ 𝐹𝐼𝑅𝑆𝑇(𝑌𝑘) (note: if all the symbols derive 𝜖, then add 𝜖 to FIRST(A))
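Rules 1–3 can be implemented as a fixed-point iteration. A sketch under these assumptions (none of which come from the slides): the grammar is a dict mapping each nonterminal to a list of right-hand sides (lists of symbols), "ε" marks the empty alternative, and any symbol that is not a key of the dict is treated as a terminal.

```python
EPSILON = "ε"

def compute_first(grammar):
    """Iterate the FIRST rules until no set grows any further."""
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, rhss in grammar.items():
            for rhs in rhss:
                before = len(first[nt])
                for sym in rhs:
                    if sym == EPSILON:            # Rule 2: A -> ε
                        first[nt].add(EPSILON)
                        break
                    if sym not in grammar:        # Rule 1: leading terminal
                        first[nt].add(sym)
                        break
                    # Rule 3: add FIRST(Yi) - {ε}; continue only if Yi derives ε
                    first[nt] |= first[sym] - {EPSILON}
                    if EPSILON not in first[sym]:
                        break
                else:                             # every Yi derives ε
                    first[nt].add(EPSILON)
                if len(first[nt]) != before:
                    changed = True
    return first
```

Running this on the grammar of Example-2 below (S → aB | ϵ, B → bC | ϵ, C → cS | ϵ) reproduces the FIRST sets computed on those slides.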
39. Rules to compute FOLLOW of a nonterminal
1. Place $ in 𝐹𝑂𝐿𝐿𝑂𝑊(𝑆). (S is the start symbol)
2. If 𝐴 → 𝛼𝐵𝛽, then everything in 𝐹𝐼𝑅𝑆𝑇(𝛽) except 𝜖 is placed in 𝐹𝑂𝐿𝐿𝑂𝑊(𝐵).
3. If there is a production 𝐴 → 𝛼𝐵, or a production 𝐴 → 𝛼𝐵𝛽 where 𝐹𝐼𝑅𝑆𝑇(𝛽) contains 𝜖, then everything in 𝐹𝑂𝐿𝐿𝑂𝑊(𝐴) is placed in 𝐹𝑂𝐿𝐿𝑂𝑊(𝐵).
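These three rules also admit a fixed-point implementation, using the same grammar conventions as the FIRST sketch and taking the precomputed FIRST sets as input. An illustrative sketch, not code from the slides.

```python
EPSILON = "ε"

def compute_follow(grammar, start, first):
    """Iterate the FOLLOW rules to a fixed point.
    `first` maps each nonterminal to its (precomputed) FIRST set."""
    follow = {nt: set() for nt in grammar}
    follow[start].add("$")                       # Rule 1
    changed = True
    while changed:
        changed = False
        for a_nt, rhss in grammar.items():
            for rhs in rhss:
                for i, b_nt in enumerate(rhs):
                    if b_nt not in grammar:      # FOLLOW is defined for nonterminals only
                        continue
                    before = len(follow[b_nt])
                    beta_nullable = True
                    for sym in rhs[i + 1:]:      # Rule 2: FIRST(beta) - {ε}
                        if sym not in grammar:
                            follow[b_nt].add(sym)
                            beta_nullable = False
                            break
                        follow[b_nt] |= first[sym] - {EPSILON}
                        if EPSILON not in first[sym]:
                            beta_nullable = False
                            break
                    if beta_nullable:            # Rule 3: FOLLOW(A) flows into FOLLOW(B)
                        follow[b_nt] |= follow[a_nt]
                    if len(follow[b_nt]) != before:
                        changed = True
    return follow
```

On the Example-1 grammar below (S → aBa, B → bB | ϵ) this yields FOLLOW(S) = { $ } and FOLLOW(B) = { a }, matching the worked slides.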
40. How to apply rules to find FOLLOW of a nonterminal?
For a production 𝐴 → 𝛼𝐵𝛽:
• If 𝛽 is absent: apply Rule 3.
• If 𝛽 is a terminal: apply Rule 2.
• If 𝛽 is a nonterminal:
– If 𝛽 does not derive 𝜖: apply Rule 2.
– If 𝛽 derives 𝜖: apply Rule 2 + Rule 3.
41. Rules to construct predictive parsing table
1. For each production 𝐴 → 𝛼 of the grammar, do steps 2 and 3.
2. For each terminal 𝑎 in 𝐹𝐼𝑅𝑆𝑇(𝛼), add 𝐴 → 𝛼 to 𝑀[𝐴, 𝑎].
3. If 𝜖 is in 𝐹𝐼𝑅𝑆𝑇(𝛼), add 𝐴 → 𝛼 to 𝑀[𝐴, 𝑏] for each terminal 𝑏 in 𝐹𝑂𝐿𝐿𝑂𝑊(𝐴). If 𝜖 is in 𝐹𝐼𝑅𝑆𝑇(𝛼) and $ is in 𝐹𝑂𝐿𝐿𝑂𝑊(𝐴), add 𝐴 → 𝛼 to 𝑀[𝐴, $].
4. Make each undefined entry of M an error.
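The table construction can be sketched with the same conventions as the earlier snippets; FIRST and FOLLOW are passed in as dicts of sets. This is an illustrative sketch: each cell holds a list of productions, so a cell with more than one entry signals an LL(1) conflict, and absent cells are the error entries.

```python
EPSILON = "ε"

def first_of_string(alpha, grammar, first):
    """FIRST of a sentential form alpha (a list of symbols)."""
    out = set()
    for sym in alpha:
        if sym == EPSILON:
            continue
        if sym not in grammar:              # terminal
            out.add(sym)
            return out
        out |= first[sym] - {EPSILON}
        if EPSILON not in first[sym]:
            return out
    out.add(EPSILON)                        # the whole string can derive ε
    return out

def build_table(grammar, first, follow):
    """M[(A, a)] = list of productions A -> alpha chosen on lookahead a."""
    table = {}
    for nt, rhss in grammar.items():
        for rhs in rhss:
            fa = first_of_string(rhs, grammar, first)
            for a in fa - {EPSILON}:        # Rule 2
                table.setdefault((nt, a), []).append(rhs)
            if EPSILON in fa:               # Rule 3 (FOLLOW includes $ when present)
                for b in follow[nt]:
                    table.setdefault((nt, b), []).append(rhs)
    return table
```

Applied to Example-1 below, it fills exactly the three non-error cells derived on slides 44–46.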
42. Example-1: LL(1) parsing
Grammar: S → aBa
B → bB | ϵ
Step 1: Not required
Step 2: Compute FIRST
• S → aBa: the production begins with the terminal a, so add a to FIRST(S). FIRST(S) = { a }
• B → bB: the production begins with the terminal b, so add b to FIRST(B). B → ϵ: by Rule 2, add ϵ to FIRST(B). FIRST(B) = { b, ϵ }

NT | First
S | { a }
B | { b, ϵ }
43. Example-1: LL(1) parsing
Step 2: Compute FOLLOW
• Rule 1: place $ in FOLLOW(S). FOLLOW(S) = { $ }
• S → aBa: B is followed by the terminal a, so by Rule 2, add FIRST(a) = { a } to FOLLOW(B). FOLLOW(B) = { a }
• B → bB: by Rule 3, FOLLOW(B) flows into FOLLOW(B), which adds nothing new.

NT | First | Follow
S | { a } | { $ }
B | { b, ϵ } | { a }
44. Example-1: LL(1) parsing
Step 3: Prepare predictive parsing table
• S → aBa: a ∈ FIRST(aBa) = { a }, so by Rule 2, M[S, a] = S → aBa

NT | a | b | $
S | S → aBa | |
B | | |
45. Example-1: LL(1) parsing
Step 3: Prepare predictive parsing table
• B → bB: b ∈ FIRST(bB) = { b }, so by Rule 2, M[B, b] = B → bB

NT | a | b | $
S | S → aBa | |
B | | B → bB |
46. Example-1: LL(1) parsing
Step 3: Prepare predictive parsing table
• B → ϵ: ϵ ∈ FIRST(ϵ), and FOLLOW(B) = { a }, so by Rule 3, M[B, a] = B → ϵ
• Make each remaining entry Error.

NT | a | b | $
S | S → aBa | Error | Error
B | B → ϵ | B → bB | Error
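Step 4, parsing an input string with the finished table, is not worked on the slides. A minimal stack-driven sketch (not code from the slides), with the Example-1 table encoded as a dict keyed by (nonterminal, lookahead):

```python
EPSILON = "ε"

def ll1_parse(tokens, table, start):
    """Predictive parsing loop: the stack holds grammar symbols, $ marks bottom."""
    stack = ["$", start]
    buf = list(tokens) + ["$"]
    i = 0
    while stack:
        top = stack.pop()
        a = buf[i]
        if top == "$" and a == "$":
            return True                       # input accepted
        if top == a:                          # terminal on top matches lookahead
            i += 1
        elif (top, a) in table:               # expand nonterminal via M[top, a]
            rhs = table[(top, a)]
            if rhs != [EPSILON]:              # ε-production pushes nothing
                stack.extend(reversed(rhs))   # push rhs with leftmost symbol on top
        else:
            return False                      # error entry of the table
    return False

# Example-1 table: S -> aBa, B -> bB | ε
M = {("S", "a"): ["a", "B", "a"],
     ("B", "b"): ["b", "B"],
     ("B", "a"): [EPSILON]}
```

For instance, "aba" and "abba" are accepted, while "ab" hits the error entry M[B, $].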
47. Example-2: LL(1) parsing
Grammar: S → aB | ϵ
B → bC | ϵ
C → cS | ϵ
Step 1: Not required
Step 2: Compute FIRST
• S → aB: the production begins with the terminal a, so add a to FIRST(S). S → ϵ: by Rule 2, add ϵ to FIRST(S). FIRST(S) = { a, ϵ }
48. Example-2: LL(1) parsing
Step 2: Compute FIRST
• B → bC: the production begins with the terminal b, so add b to FIRST(B). B → ϵ: by Rule 2, add ϵ to FIRST(B). FIRST(B) = { b, ϵ }
49. Example-2: LL(1) parsing
Step 2: Compute FIRST
• C → cS: the production begins with the terminal c, so add c to FIRST(C). C → ϵ: by Rule 2, add ϵ to FIRST(C). FIRST(C) = { c, ϵ }

NT | First
S | { a, ϵ }
B | { b, ϵ }
C | { c, ϵ }
50. Example-2: LL(1) parsing
Step 2: Compute FOLLOW
• Rule 1: place $ in FOLLOW(S). FOLLOW(S) = { $ }
• S → aB: by Rule 3, everything in FOLLOW(S) is placed in FOLLOW(B).
• B → bC: by Rule 3, everything in FOLLOW(B) is placed in FOLLOW(C).
• C → cS: by Rule 3, everything in FOLLOW(C) is placed in FOLLOW(S).
• Hence FOLLOW(S) = FOLLOW(B) = FOLLOW(C) = { $ }

NT | First | Follow
S | { a, ϵ } | { $ }
B | { b, ϵ } | { $ }
C | { c, ϵ } | { $ }
51. Example-2: LL(1) parsing
Step 3: Prepare predictive parsing table
• S → aB: a ∈ FIRST(aB) = { a }, so by Rule 2, M[S, a] = S → aB
52. Example-2: LL(1) parsing
Step 3: Prepare predictive parsing table
• S → ϵ: ϵ ∈ FIRST(ϵ) and FOLLOW(S) = { $ }, so by Rule 3, M[S, $] = S → ϵ
53. Example-2: LL(1) parsing
Step 3: Prepare predictive parsing table
• B → bC: b ∈ FIRST(bC) = { b }, so by Rule 2, M[B, b] = B → bC
54. Example-2: LL(1) parsing
Step 3: Prepare predictive parsing table
• B → ϵ: ϵ ∈ FIRST(ϵ) and FOLLOW(B) = { $ }, so by Rule 3, M[B, $] = B → ϵ
55. Example-2: LL(1) parsing
Step 3: Prepare predictive parsing table
• C → cS: c ∈ FIRST(cS) = { c }, so by Rule 2, M[C, c] = C → cS
56. Example-2: LL(1) parsing
Step 3: Prepare predictive parsing table
• C → ϵ: ϵ ∈ FIRST(ϵ) and FOLLOW(C) = { $ }, so by Rule 3, M[C, $] = C → ϵ
• Make each remaining entry Error. The completed table:

NT | a | b | c | $
S | S → aB | Error | Error | S → ϵ
B | Error | B → bC | Error | B → ϵ
C | Error | Error | C → cS | C → ϵ