Top-Down and Bottom-Up Parsing
Top Down Parsing

Bottom Up Parsing
Top Down Parsing
Things to know:
Top down parsing constructs a parse tree for the input starting from the root, creating the
   nodes of the parse tree in preorder (depth first).
A general form of top down parsing is recursive descent parsing.
Recursive descent parsing is a top down technique that executes a set of recursive
   procedures to process the input; it may involve backtracking (scanning the input
   repeatedly).
Backtracking is time consuming and therefore inefficient. That is why a special case
   of top down parsing, called predictive parsing, was developed, in which no
   backtracking is required.
A problem occurs with a left recursive grammar: even with backtracking, the parser
   can go into an infinite loop.
There are two types of recursion, left recursive and right recursive. As the names
   suggest, a left recursive grammar builds trees that grow down to the left, while a
   right recursive grammar builds trees that grow down to the right.
Top-down parse tree of grammar G (where input = id):
G:  E  -> T E'
    E' -> +T E' | ε
    T  -> F T'
    T' -> *F T' | ε
    F  -> (E) | id
[Figure: four snapshots of the growing parse tree for input "id": E alone; E expanded
to T E'; T expanded to F T'; F expanded to id.]

An example of a simple production with a left recursive grammar:
Consider the grammar: expr -> expr + term
This is a left recursive grammar.
Whenever we call expr, the same procedure is called again, and the parser will loop forever.

By rewriting the grammar carefully, one can eliminate left recursion from it.
expr -> expr + term | term can be rewritten without left recursion as

expr -> term rest
rest -> + term rest | ε

After obtaining a grammar that needs no backtracking, we can use the
PREDICTIVE PARSER
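The rewrite above follows the standard recipe for removing immediate left recursion: A -> Aα | β becomes A -> βA', with A' -> αA' | ε. A minimal sketch of that transformation (the function name and the grammar encoding are illustrative, not from the slides):

```python
def eliminate_left_recursion(head, productions):
    """Remove immediate left recursion from one nonterminal's rules:
    A -> A a | b   becomes   A -> b A'  and  A' -> a A' | epsilon.
    A production is a tuple of symbol strings; () stands for epsilon."""
    recursive = [p[1:] for p in productions if p and p[0] == head]  # the "a" parts
    others = [p for p in productions if not p or p[0] != head]      # the "b" parts
    if not recursive:
        return {head: productions}          # nothing to do
    new = head + "'"
    return {
        head: [beta + (new,) for beta in others],
        new: [alpha + (new,) for alpha in recursive] + [()],
    }

# expr -> expr + term | term
g = eliminate_left_recursion("expr", [("expr", "+", "term"), ("term",)])
# g: expr -> term expr' ;  expr' -> + term expr' | epsilon
```

Applied to expr, it produces exactly the two-rule grammar shown above (with expr' playing the role of rest).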
Top Down Parsing Techniques



    Recursive-Descent Parsing

    Predictive Parsing
Recursive-Descent Parsing
  A recursive-descent parsing program consists of a set of procedures, one for each
      nonterminal. Execution begins with the procedure for the start symbol, which halts
      and announces success if its procedure body scans the entire input string.
  General recursive-descent may require backtracking; that is, it may require repeated
      scans over the input.
  Consider the grammar with input string “cad”:
  S -> c A d
  A -> a b | a

[Figure: three snapshots of the backtracking parse of "cad": S is expanded to c A d;
the alternative A -> a b matches a but fails on b against d; the parser backs up and
tries A -> a, after which d matches and the parse succeeds.]
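The backtracking above can be sketched as one procedure per nonterminal; the helper names are illustrative. Each procedure returns the input position it reached, or None on failure, so the caller can back up and try the next alternative, which is exactly the repeated scanning described above:

```python
# Backtracking recursive descent for  S -> c A d ,  A -> a b | a.
def parse_A(s, i):
    if s[i:i + 2] == "ab":        # first try A -> a b
        return i + 2
    if s[i:i + 1] == "a":         # that failed: back up, try A -> a
        return i + 1
    return None                   # both alternatives failed

def parse_S(s, i=0):
    if s[i:i + 1] != "c":
        return None
    j = parse_A(s, i + 1)
    if j is None or s[j:j + 1] != "d":
        return None
    return j + 1                  # position past the matched d

print(parse_S("cad"))             # 3: the whole input was consumed
```

On "cad", A -> a b is tried and rejected before A -> a succeeds, mirroring the three snapshots in the figure.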
Predictive Parsing - a parsing technique that uses a lookahead symbol to decide,
without backtracking, which production to apply to the current input.

First and Follow
Construction of Predictive Parsing Tables
LL(1) Grammars
Error Recovery
First and Follow

First and Follow aid the construction of a predictive parser.
They allow us to fill in the entries of a predictive parsing table.

If a is any string of grammar symbols, then First(a) is the set of terminals
that begin the strings derived from a. If a can derive the empty string (ε),
then ε is also in First(a).

Follow(A), for a nonterminal A, is the set of terminals a that
can appear immediately to the right of A in some sentential form.
First and Follow
Rules for computing FIRST(A), where A -> X and X can be a terminal, a nonterminal, or
    even ε (the empty string):
1) If X is a terminal, then FIRST(A) = { X }.
2) If X is ε, then FIRST(A) = { ε }.
3) If X is a nonterminal, follow the chain of FIRSTs until the first terminal of a
     production is reached, so that FIRST(nonterminal n) = FIRST(nonterminal n+1).
     For example, with the productions
     A -> X
     X -> Y
     Y -> Za
     Z -> b
     we get FIRST(A) = FIRST(X) = FIRST(Y) = FIRST(Z) = { b }.
4) If X is a nonterminal with two (or more) productions, e.g.
X -> a | b; then FIRST(A) = { a , b }.
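The rules generalize to a fixed-point computation over all productions. A sketch, assuming the grammar is encoded as a dict from nonterminal to a list of production bodies (tuples of symbols, with "" standing for ε, and any symbol that is not a dict key treated as a terminal):

```python
def first_sets(grammar):
    """Compute FIRST for every nonterminal by iterating until no set grows."""
    first = {A: set() for A in grammar}
    changed = True
    while changed:
        changed = False
        for A, prods in grammar.items():
            for prod in prods:
                before = len(first[A])
                if prod == ():                    # A -> epsilon
                    first[A].add("")
                for X in prod:
                    if X not in grammar:          # terminal: add it and stop
                        first[A].add(X)
                        break
                    first[A] |= first[X] - {""}   # rule 3: chain through FIRST(X)
                    if "" not in first[X]:        # X cannot vanish: stop
                        break
                else:
                    if prod:                      # every symbol can vanish
                        first[A].add("")
                if len(first[A]) != before:
                    changed = True
    return first

G = {"E": [("T", "E'")], "E'": [("+", "T", "E'"), ()],
     "T": [("F", "T'")], "T'": [("*", "F", "T'"), ()],
     "F": [("(", "E", ")"), ("id",)]}
print(first_sets(G)["E"])   # {'(', 'id'}
```

Running it on grammar G reproduces the FIRST sets worked out on the next slide.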
First and Follow

Consider again grammar G:
1) E  -> T E'
   E' -> +T E' | ε
   T  -> F T'
   T' -> *F T' | ε
   F  -> ( E ) | id

   ANSWERS (FIRST):
   FIRST(E) = FIRST(T) = FIRST(F) = { ( , id }
   FIRST(E') = { + , ε }
   FIRST(T') = { * , ε }

2) S  -> iEtSS' | a
   S' -> eS | ε
   E  -> b

   ANSWERS (FIRST):
   FIRST(S) = { i , a }
   FIRST(S') = { e , ε }
   FIRST(E) = { b }
First and Follow
Rules for computing FOLLOW(X), where X is a nonterminal:
1) If X appears in a production body followed by a terminal, for example A -> Xa, then
     that terminal is in FOLLOW(X): FOLLOW(X) = { a }.
2) If X is the start symbol of the grammar, for example:
     X -> AB
     A -> a
     B -> b
     then add $ (the end marker) to FOLLOW(X): FOLLOW(X) = { $ }.
3) If X appears in a production body followed by another nonterminal, take the FIRST of
     that succeeding nonterminal.
     Example: A -> XD
              D -> aB
     Then FOLLOW(X) = FIRST(D) = { a }; and if FIRST(D) contains ε
     (e.g. D -> aB | ε), then everything in FOLLOW(A) is also in FOLLOW(X).
4) If X is the last symbol of a production, e.g. S -> abX, then
     everything in FOLLOW(S) is in FOLLOW(X): FOLLOW(X) = FOLLOW(S)
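Rules 1-4 can likewise be iterated to a fixed point. A sketch, reusing the grammar encoding from the FIRST rules ("" is ε, "$" the end marker); the FIRST sets are written out by hand here to keep the example self-contained:

```python
def follow_sets(grammar, first, start):
    """Compute FOLLOW for every nonterminal by iterating rules 1-4."""
    def first_of(seq):                    # FIRST of the tail of a production
        out = set()
        for X in seq:
            f = first[X] if X in grammar else {X}
            out |= f - {""}
            if "" not in f:
                return out
        return out | {""}                 # the whole tail can vanish

    follow = {A: set() for A in grammar}
    follow[start].add("$")                # rule 2: $ follows the start symbol
    changed = True
    while changed:
        changed = False
        for A, prods in grammar.items():
            for prod in prods:
                for i, X in enumerate(prod):
                    if X not in grammar:  # the rules apply to nonterminals only
                        continue
                    tail = first_of(prod[i + 1:])
                    before = len(follow[X])
                    follow[X] |= tail - {""}    # rules 1 and 3
                    if "" in tail:              # rule 4: the tail can vanish
                        follow[X] |= follow[A]
                    if len(follow[X]) != before:
                        changed = True
    return follow

G = {"E": [("T", "E'")], "E'": [("+", "T", "E'"), ()],
     "T": [("F", "T'")], "T'": [("*", "F", "T'"), ()],
     "F": [("(", "E", ")"), ("id",)]}
FIRST = {"E": {"(", "id"}, "E'": {"+", ""}, "T": {"(", "id"},
         "T'": {"*", ""}, "F": {"(", "id"}}
FOLLOW = follow_sets(G, FIRST, "E")
# FOLLOW["E"] == {")", "$"};  FOLLOW["F"] == {"+", "*", ")", "$"}
```

Running it on grammar G reproduces the FOLLOW sets worked out on the next slide.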
First and Follow

Consider again grammar G:
1) E  -> T E'
   E' -> +T E' | ε
   T  -> F T'
   T' -> *F T' | ε
   F  -> ( E ) | id

   ANSWERS (FOLLOW):
   FOLLOW(E) = FOLLOW(E') = { ) , $ }
   FOLLOW(T) = FOLLOW(T') = { + , ) , $ }
   FOLLOW(F) = { + , * , ) , $ }

2) S  -> iEtSS' | a
   S' -> eS | ε
   E  -> b

   ANSWERS (FOLLOW):
   FOLLOW(S) = FOLLOW(S') = { e , $ }
   FOLLOW(E) = { t }
Construction of
  Predictive
Parsing Tables
The general idea is to use FIRST and FOLLOW to construct the parsing table.
Each production A -> α is placed in table entry M[A, a] for every terminal a in
  FIRST(α).
When FIRST(α) contains ε, the production is instead placed in M[A, b] for every
  b in FOLLOW(A).
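These two placement rules can be written directly as a table-building sketch; the grammar encoding is as before, and the FIRST and FOLLOW sets for grammar G are written out by hand:

```python
def build_table(grammar, first, follow):
    """Place each production A -> alpha in M[A, a] for every terminal a in
    FIRST(alpha); if alpha can derive epsilon, place it in M[A, b] for every
    b in FOLLOW(A).  Returns {(nonterminal, terminal): production body}."""
    def first_of(seq):
        out = set()
        for X in seq:
            f = first[X] if X in grammar else {X}
            out |= f - {""}
            if "" not in f:
                return out
        return out | {""}

    table = {}
    for A, prods in grammar.items():
        for prod in prods:
            f = first_of(prod)
            for a in f - {""}:            # rule: a in FIRST(alpha)
                table[(A, a)] = prod
            if "" in f:                   # rule: use FOLLOW(A) for epsilon
                for b in follow[A]:
                    table[(A, b)] = prod
    return table

G = {"E": [("T", "E'")], "E'": [("+", "T", "E'"), ()],
     "T": [("F", "T'")], "T'": [("*", "F", "T'"), ()],
     "F": [("(", "E", ")"), ("id",)]}
FIRST = {"E": {"(", "id"}, "E'": {"+", ""}, "T": {"(", "id"},
         "T'": {"*", ""}, "F": {"(", "id"}}
FOLLOW = {"E": {")", "$"}, "E'": {")", "$"}, "T": {"+", ")", "$"},
          "T'": {"+", ")", "$"}, "F": {"+", "*", ")", "$"}}
M = build_table(G, FIRST, FOLLOW)
# M[("E", "id")] == ("T", "E'");  M[("E'", ")")] == ()  (the epsilon entry)
```

The resulting 13 entries are exactly the filled cells of the table below.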
Construction of Predictive Parsing Tables

Consider again grammar G:
   E  -> T E'
   E' -> +T E' | ε
   T  -> F T'
   T' -> *F T' | ε
   F  -> ( E ) | id
and its First and Follow sets:

FIRST(E) = FIRST(T) = FIRST(F) = { ( , id }     FOLLOW(E) = FOLLOW(E') = { ) , $ }
FIRST(E') = { + , ε }                           FOLLOW(T) = FOLLOW(T') = { + , ) , $ }
FIRST(T') = { * , ε }                           FOLLOW(F) = { + , * , ) , $ }

Nonterminals |   id    |     +     |     *      |    (    |    )   |    $
E            | E->TE'  |           |            | E->TE'  |        |
E'           |         | E'->+TE'  |            |         | E'->ε  | E'->ε
T            | T->FT'  |           |            | T->FT'  |        |
T'           |         | T'->ε     | T'->*FT'   |         | T'->ε  | T'->ε
F            | F->id   |           |            | F->(E)  |        |
STACK     | INPUT            | ACTION
$E        | id + id * id $   |
$E'T      | id + id * id $   | E -> TE'
$E'T'F    | id + id * id $   | T -> FT'
$E'T'id   | id + id * id $   | F -> id
$E'T'     |    + id * id $   |
$E'       |    + id * id $   | T' -> ε
$E'T+     |    + id * id $   | E' -> +TE'
$E'T      |      id * id $   |
$E'T'F    |      id * id $   | T -> FT'
$E'T'id   |      id * id $   | F -> id
$E'T'     |         * id $   |
$E'T'F*   |         * id $   | T' -> *FT'
$E'T'F    |           id $   |
$E'T'id   |           id $   | F -> id
$E'T'     |              $   |
$E'       |              $   | T' -> ε
$         |              $   | E' -> ε
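The moves above can be driven by a short loop over the table. A sketch, with the table just constructed written out by hand (the stack grows to the right, so stack[-1] is the top):

```python
TABLE = {
    ("E", "id"): ("T", "E'"), ("E", "("): ("T", "E'"),
    ("E'", "+"): ("+", "T", "E'"), ("E'", ")"): (), ("E'", "$"): (),
    ("T", "id"): ("F", "T'"), ("T", "("): ("F", "T'"),
    ("T'", "+"): (), ("T'", "*"): ("*", "F", "T'"),
    ("T'", ")"): (), ("T'", "$"): (),
    ("F", "id"): ("id",), ("F", "("): ("(", "E", ")"),
}
NONTERMINALS = {"E", "E'", "T", "T'", "F"}

def predictive_parse(tokens):
    tokens = tokens + ["$"]
    stack, i = ["$", "E"], 0
    while stack[-1] != "$":
        top, a = stack[-1], tokens[i]
        if top == a:                      # terminal on top: match it
            stack.pop()
            i += 1
        elif top in NONTERMINALS and (top, a) in TABLE:
            stack.pop()                   # expand by M[top, a]
            stack.extend(reversed(TABLE[(top, a)]))
        else:
            return False                  # empty table entry: error
    return tokens[i] == "$"               # accept only if all input is consumed

print(predictive_parse(["id", "+", "id", "*", "id"]))   # True
```

Tracing the loop on id + id * id reproduces the stack contents of the table above line by line.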
LL(1)
       Grammars

• What does LL(1) mean?
The first “L” in LL(1) stands for scanning the input from left to right, the second “L”
   is for producing a leftmost derivation, and the “1” for using one input symbol of
   lookahead at each step to make parsing action decisions.
No ambiguous or left recursive grammar is LL(1).
NonTerminals |  a    |   b   |        e        |     i      |  t  |   $
S            | S->a  |       |                 | S->iEtSS'  |     |
S'           |       |       | S'->eS , S'->ε  |            |     | S'->ε
E            |       | E->b  |                 |            |     |

(The entry M[S', e] holds two productions, so the table is multiply defined and this
grammar is not LL(1).)
LL(1)
      Grammars

There remains the question of what should be done when a parsing table has
   multiply defined entries.
One solution is to transform the grammar by eliminating all left recursion and then
   left factoring wherever possible, but not every grammar can be turned into an
   LL(1) grammar this way.

The main difficulty in using predictive parsing is writing a grammar for the
   source language from which a predictive parser can be constructed.
To alleviate some of the difficulty, one can use an operator-precedence parser, or
   even better an LR parser, which provides the benefits of both predictive parsing
   and operator precedence automatically.
Error Recovery

When does an error occur?
An error is detected when the terminal on top of the stack
  does not match the next input symbol, or when nonterminal A
  is on top of the stack, a is the next input symbol, and the
  parsing table entry M[A, a] is empty.
How can we deal with errors?
Panic-mode error recovery is based on the idea of skipping
  symbols in the input until a token in a selected set of
  synchronizing tokens appears.
Error Recovery

How does it work?
Using FOLLOW and FIRST symbols as synchronizing tokens works
  well. The parsing table is filled with "synch" entries
  obtained from the FOLLOW set of each nonterminal.

When the parser looks up entry M[A, a] and finds it blank, the
  input symbol a is skipped. If the entry is "synch", the
  nonterminal on top of the stack is popped in an attempt to
  resume parsing.
Nonterminals |   id    |     +     |     *      |    (    |    )   |    $
E            | E->TE'  |           |            | E->TE'  | synch  | synch
E'           |         | E'->+TE'  |            |         | E'->ε  | E'->ε
T            | T->FT'  | synch     |            | T->FT'  | synch  | synch
T'           |         | T'->ε     | T'->*FT'   |         | T'->ε  | T'->ε
F            | F->id   | synch     | synch      | F->(E)  | synch  | synch

STACK     | INPUT            | ACTION
$E        | ) id * + id $    | Error, skip )
$E        |   id * + id $    | id is in FIRST(E)
$E'T      |   id * + id $    |
$E'T'F    |   id * + id $    |
$E'T'id   |   id * + id $    |
$E'T'     |        * + id $  |
$E'T'F*   |        * + id $  |
$E'T'F    |          + id $  | Error, M[F, +] = synch
$E'T'     |          + id $  | F has been popped
$E'       |          + id $  |
$E'T+     |          + id $  |
$E'T      |            id $  |
$E'T'F    |            id $  |
$E'T'id   |            id $  |
$E'T'     |               $  |
$E'       |               $  |
$         |               $  |
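The pop/skip loop can be sketched on top of the ordinary predictive parser: blank entries skip the input token, "synch" entries pop the nonterminal. The table and synch sets below are written out by hand from the slide; running the loop on id * + id hits the M[F, +] = synch recovery seen in the trace above:

```python
TABLE = {
    ("E", "id"): ("T", "E'"), ("E", "("): ("T", "E'"),
    ("E'", "+"): ("+", "T", "E'"), ("E'", ")"): (), ("E'", "$"): (),
    ("T", "id"): ("F", "T'"), ("T", "("): ("F", "T'"),
    ("T'", "+"): (), ("T'", "*"): ("*", "F", "T'"),
    ("T'", ")"): (), ("T'", "$"): (),
    ("F", "id"): ("id",), ("F", "("): ("(", "E", ")"),
}
SYNCH = {"E": {")", "$"}, "E'": {")", "$"}, "T": {"+", ")", "$"},
         "T'": {"+", ")", "$"}, "F": {"+", "*", ")", "$"}}

def parse_with_recovery(tokens):
    tokens = tokens + ["$"]
    stack, i, errors = ["$", "E"], 0, []
    while stack[-1] != "$":
        top, a = stack[-1], tokens[i]
        if top not in SYNCH:                   # terminal on top of the stack
            if top == a:
                i += 1                         # match
            else:
                errors.append("missing " + top)  # pop the unmatched terminal
            stack.pop()
        elif (top, a) in TABLE:                # normal expansion
            stack.pop()
            stack.extend(reversed(TABLE[(top, a)]))
        elif a in SYNCH[top]:                  # synch entry: pop the nonterminal
            errors.append(top + " has been popped")
            stack.pop()
        else:                                  # blank entry: skip the input token
            errors.append("skipped " + a)
            i += 1
    return errors

print(parse_with_recovery(["id", "*", "+", "id"]))   # ['F has been popped']
```

With no errors in the input, the function returns an empty list and behaves exactly like the plain predictive parser.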
Error Recovery

• Another error recovery procedure is phrase-level
  recovery. It is implemented by filling in the blank entries
  of the parsing table with pointers to error routines. These
  routines can pop symbols from the stack; change, insert,
  or delete symbols in the input; and issue appropriate
  error messages. Altering stack symbols, however, is
  questionable and risky.
Bottom Up Parsing
A general style of bottom up parsing, shift-reduce parsing,
  will be introduced here.
Shift-reduce parsing works as its name suggests: whenever the
  stack holds symbols that cannot yet be reduced, we shift
  another input symbol, and when the top of the stack matches
  the body of a production, we reduce.
Consider the following:


Bottom Up Parsing
Grammar:
E -> E + E
E -> E * E
E -> ( E )
E -> id

      STACK          | INPUT              | ACTION
1)    $              | id1 + id2 * id3 $  | Shift
2)    $ id1          |     + id2 * id3 $  | Reduce by E -> id
3)    $ E            |     + id2 * id3 $  | Shift
4)    $ E +          |       id2 * id3 $  | Shift
5)    $ E + id2      |           * id3 $  | Reduce by E -> id
6)    $ E + E        |           * id3 $  | Shift
7)    $ E + E *      |             id3 $  | Shift
8)    $ E + E * id3  |                 $  | Reduce by E -> id
9)    $ E + E * E    |                 $  | Reduce by E -> E * E
10)   $ E + E        |                 $  | Reduce by E -> E + E
11)   $ E            |                 $  | ACCEPT
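The trace can be reproduced by a small shift-reduce loop. Because the grammar is ambiguous, something must decide between shifting and reducing at step 6; the sketch below layers operator precedences on top of the grammar as an illustrative assumption (they are not part of the grammar), folds the shift of id and the immediate reduce E -> id into one step, and omits the parenthesis production for brevity:

```python
PREC = {"+": 1, "*": 2}    # assumed precedences, to make the choice deterministic

def shift_reduce(tokens):
    """Shift-reduce loop for  E -> E + E | E * E | id.
    Reduce E op E only when the lookahead does not bind tighter."""
    stack, actions, i = ["$"], [], 0
    tokens = tokens + ["$"]
    while True:
        a = tokens[i]
        if (len(stack) >= 4 and stack[-1] == "E" and stack[-3] == "E"
                and stack[-2] in PREC and PREC.get(a, 0) <= PREC[stack[-2]]):
            op = stack[-2]
            del stack[-3:]                       # reduce E op E to E
            stack.append("E")
            actions.append("reduce by E -> E " + op + " E")
        elif a == "id":
            stack.append("E")                    # shift id, reduce E -> id
            actions.append("reduce by E -> id")
            i += 1
        elif a in PREC:
            stack.append(a)                      # shift the operator
            actions.append("shift " + a)
            i += 1
        elif a == "$":
            break                                # no shift or reduce left
        else:
            raise ValueError("unexpected token " + a)
    return stack == ["$", "E"], actions

ok, acts = shift_reduce(["id", "+", "id", "*", "id"])
# ok is True; the last two actions reduce E * E, then E + E, as in the trace
```

At step 6 the lookahead * binds tighter than the + on the stack, so the loop shifts instead of reducing E + E, matching rows 6-10 of the trace above.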

More Related Content

What's hot (20)

Module 11
Module 11Module 11
Module 11
 
First and follow set
First and follow setFirst and follow set
First and follow set
 
Ch4a
Ch4aCh4a
Ch4a
 
Recognition-of-tokens
Recognition-of-tokensRecognition-of-tokens
Recognition-of-tokens
 
Lecture 07 08 syntax analysis-4
Lecture 07 08 syntax analysis-4Lecture 07 08 syntax analysis-4
Lecture 07 08 syntax analysis-4
 
Topdown parsing
Topdown parsingTopdown parsing
Topdown parsing
 
Topdown parsing
Topdown parsingTopdown parsing
Topdown parsing
 
Ch5b
Ch5bCh5b
Ch5b
 
Top Down Parsing, Predictive Parsing
Top Down Parsing, Predictive ParsingTop Down Parsing, Predictive Parsing
Top Down Parsing, Predictive Parsing
 
Parsing (Automata)
Parsing (Automata)Parsing (Automata)
Parsing (Automata)
 
Ch06
Ch06Ch06
Ch06
 
Parsing
ParsingParsing
Parsing
 
Ch03
Ch03Ch03
Ch03
 
Theory of automata and formal language lab manual
Theory of automata and formal language lab manualTheory of automata and formal language lab manual
Theory of automata and formal language lab manual
 
ALF 5 - Parser Top-Down
ALF 5 - Parser Top-DownALF 5 - Parser Top-Down
ALF 5 - Parser Top-Down
 
Ch5a
Ch5aCh5a
Ch5a
 
Ch04
Ch04Ch04
Ch04
 
Syntaxdirected
SyntaxdirectedSyntaxdirected
Syntaxdirected
 
Polish
PolishPolish
Polish
 
LR parsing
LR parsingLR parsing
LR parsing
 

Similar to Here are the key points about LL(1) grammars:- LL(1) grammars are a class of context-free grammars that can be parsed using a single-token lookahead, left-to-right parsing technique called LL parsing. - In an LL(1) grammar, the production to be used in any parsing step is uniquely determined by the next input token (1 token lookahead). This allows construction of an LL(1) parsing table.- LL(1) grammars have the property that all productions with the same left-hand nonterminal must have disjoint FIRST sets. This ensures the parser can always make a unique decision about which production to use based on the lookahead token

ALF 5 - Parser Top-Down (2018)
ALF 5 - Parser Top-Down (2018)ALF 5 - Parser Top-Down (2018)
ALF 5 - Parser Top-Down (2018)Alexandru Radovici
 
CS17604_TOP Parser Compiler Design Techniques
CS17604_TOP Parser Compiler Design TechniquesCS17604_TOP Parser Compiler Design Techniques
CS17604_TOP Parser Compiler Design Techniquesd72994185
 
Compiler Construction | Lecture 4 | Parsing
Compiler Construction | Lecture 4 | Parsing Compiler Construction | Lecture 4 | Parsing
Compiler Construction | Lecture 4 | Parsing Eelco Visser
 
6-Practice Problems - LL(1) parser-16-05-2023.pptx
6-Practice Problems - LL(1) parser-16-05-2023.pptx6-Practice Problems - LL(1) parser-16-05-2023.pptx
6-Practice Problems - LL(1) parser-16-05-2023.pptxvenkatapranaykumarGa
 
Chapter 5 - Syntax Directed Translation.ppt
Chapter 5 - Syntax Directed Translation.pptChapter 5 - Syntax Directed Translation.ppt
Chapter 5 - Syntax Directed Translation.pptMulugetaGebino
 
Chapter_5_Syntax_Directed_Translation.ppt
Chapter_5_Syntax_Directed_Translation.pptChapter_5_Syntax_Directed_Translation.ppt
Chapter_5_Syntax_Directed_Translation.pptJigarThummar1
 
compiler-lecture-6nn-14112022-110738am.ppt
compiler-lecture-6nn-14112022-110738am.pptcompiler-lecture-6nn-14112022-110738am.ppt
compiler-lecture-6nn-14112022-110738am.pptSheikhMuhammadSaad3
 
Chapter_5_Syntax_Directed_Translation.ppt
Chapter_5_Syntax_Directed_Translation.pptChapter_5_Syntax_Directed_Translation.ppt
Chapter_5_Syntax_Directed_Translation.pptSatyamVerma61
 
Build a compiler in 2hrs - NCrafts Paris 2015
Build a compiler in 2hrs -  NCrafts Paris 2015Build a compiler in 2hrs -  NCrafts Paris 2015
Build a compiler in 2hrs - NCrafts Paris 2015Phillip Trelford
 

Similar to Here are the key points about LL(1) grammars:- LL(1) grammars are a class of context-free grammars that can be parsed using a single-token lookahead, left-to-right parsing technique called LL parsing. - In an LL(1) grammar, the production to be used in any parsing step is uniquely determined by the next input token (1 token lookahead). This allows construction of an LL(1) parsing table.- LL(1) grammars have the property that all productions with the same left-hand nonterminal must have disjoint FIRST sets. This ensures the parser can always make a unique decision about which production to use based on the lookahead token (20)

ALF 5 - Parser Top-Down (2018)
ALF 5 - Parser Top-Down (2018)ALF 5 - Parser Top-Down (2018)
ALF 5 - Parser Top-Down (2018)
 
CS17604_TOP Parser Compiler Design Techniques
CS17604_TOP Parser Compiler Design TechniquesCS17604_TOP Parser Compiler Design Techniques
CS17604_TOP Parser Compiler Design Techniques
 
Top down parsing
Top down parsingTop down parsing
Top down parsing
 
LL(1) Parsers
LL(1) ParsersLL(1) Parsers
LL(1) Parsers
 
11CS10033.pptx
11CS10033.pptx11CS10033.pptx
11CS10033.pptx
 
LL(1) parsing
LL(1) parsingLL(1) parsing
LL(1) parsing
 
Compiler Construction | Lecture 4 | Parsing
Compiler Construction | Lecture 4 | Parsing Compiler Construction | Lecture 4 | Parsing
Compiler Construction | Lecture 4 | Parsing
 
6-Practice Problems - LL(1) parser-16-05-2023.pptx
6-Practice Problems - LL(1) parser-16-05-2023.pptx6-Practice Problems - LL(1) parser-16-05-2023.pptx
6-Practice Problems - LL(1) parser-16-05-2023.pptx
 
Ch5b.ppt
Ch5b.pptCh5b.ppt
Ch5b.ppt
 
Assignment10
Assignment10Assignment10
Assignment10
 
Cs419 lec9 constructing parsing table ll1
Cs419 lec9   constructing parsing table ll1Cs419 lec9   constructing parsing table ll1
Cs419 lec9 constructing parsing table ll1
 
Chapter 5 Syntax Directed Translation
Chapter 5   Syntax Directed TranslationChapter 5   Syntax Directed Translation
Chapter 5 Syntax Directed Translation
 
PARSING.ppt
PARSING.pptPARSING.ppt
PARSING.ppt
 
Chapter 5 - Syntax Directed Translation.ppt
Chapter 5 - Syntax Directed Translation.pptChapter 5 - Syntax Directed Translation.ppt
Chapter 5 - Syntax Directed Translation.ppt
 
Chapter_5_Syntax_Directed_Translation.ppt
Chapter_5_Syntax_Directed_Translation.pptChapter_5_Syntax_Directed_Translation.ppt
Chapter_5_Syntax_Directed_Translation.ppt
 
Ch8a
Ch8aCh8a
Ch8a
 
compiler-lecture-6nn-14112022-110738am.ppt
compiler-lecture-6nn-14112022-110738am.pptcompiler-lecture-6nn-14112022-110738am.ppt
compiler-lecture-6nn-14112022-110738am.ppt
 
Chapter_5_Syntax_Directed_Translation.ppt
Chapter_5_Syntax_Directed_Translation.pptChapter_5_Syntax_Directed_Translation.ppt
Chapter_5_Syntax_Directed_Translation.ppt
 
Build a compiler in 2hrs - NCrafts Paris 2015
Build a compiler in 2hrs -  NCrafts Paris 2015Build a compiler in 2hrs -  NCrafts Paris 2015
Build a compiler in 2hrs - NCrafts Paris 2015
 
Ch2
Ch2Ch2
Ch2
 

Recently uploaded

Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxRoyAbrique
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991RKavithamani
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 

Recently uploaded (20)

Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 

Here are the key points about LL(1) grammars:- LL(1) grammars are a class of context-free grammars that can be parsed using a single-token lookahead, left-to-right parsing technique called LL parsing. - In an LL(1) grammar, the production to be used in any parsing step is uniquely determined by the next input token (1 token lookahead). This allows construction of an LL(1) parsing table.- LL(1) grammars have the property that all productions with the same left-hand nonterminal must have disjoint FIRST sets. This ensures the parser can always make a unique decision about which production to use based on the lookahead token

• 6. Recursive-Descent Parsing
A recursive-descent parsing program consists of a set of procedures, one for each nonterminal. Execution begins with the procedure for the start symbol, which halts and announces success if its procedure body scans the entire input string. General recursive-descent may require backtracking; that is, it may require repeated scans over the input.
Consider the grammar, with input string "cad":
S -> c A d
A -> a b | a
The parser matches c, then expands A with its first alternative a b. The a matches, but b does not match the remaining input d, so the parser backtracks, resets the input pointer, and tries A -> a. Now a matches, the final d matches, and the parse succeeds.
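The backtracking behaviour described above can be sketched as a pair of mutually recursive procedures. This is a minimal illustration in Python, not code from the slides; the convention of returning the next input position (or None on failure) is an assumption. parse_A tries the longer alternative first and falls back to the shorter one, which is all the backtracking this grammar needs.

```python
def parse_S(s, i):
    """Try to match S -> c A d starting at position i; return new position or None."""
    if i < len(s) and s[i] == 'c':
        j = parse_A(s, i + 1)
        if j is not None and j < len(s) and s[j] == 'd':
            return j + 1
    return None

def parse_A(s, i):
    # Try A -> a b first; if it fails, backtrack and try A -> a.
    if i + 1 < len(s) and s[i] == 'a' and s[i + 1] == 'b':
        return i + 2
    if i < len(s) and s[i] == 'a':
        return i + 1
    return None

def accepts(s):
    return parse_S(s, 0) == len(s)

print(accepts("cad"))   # True: A -> a b fails, the parser backtracks to A -> a
print(accepts("cabd"))  # True, via A -> a b
print(accepts("cd"))    # False
```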
• 7. Predictive Parsing: a top-down parsing technique that uses a lookahead symbol to decide, without backtracking, which production to apply to the current input.
Topics: First and Follow; Construction of Predictive Parsing Tables; LL(1) Grammars; Error Recovery.
• 8. First and Follow
FIRST and FOLLOW aid the construction of a predictive parser: they allow us to fill in the entries of a predictive parsing table.
If a is any string of grammar symbols, then FIRST(a) is the set of terminals that begin the strings derived from a. If a derives the empty string ε, then ε is also in FIRST(a).
FOLLOW(A), for a nonterminal A, is the set of terminals a that can appear immediately to the right of A in some sentential form.
• 9. First and Follow
Rules for computing FIRST(A), where A -> X and X can be a terminal, a nonterminal, or ε (the empty string):
1) If X is a terminal, then FIRST(A) = { X }.
2) If X is ε, then FIRST(A) = { ε }.
3) If X is a nonterminal, chain through the FIRST sets of nonterminals until a terminal is reached. For example, given A -> X, X -> Y, Y -> Za, Z -> b, we get FIRST(A) = FIRST(X) = FIRST(Y) = FIRST(Z) = { b }.
4) If X has two productions, e.g. X -> a | b, then FIRST(A) = { a , b }.
• 10. First and Follow
Consider again grammar G, and a second grammar:
1) E -> T E'
   E' -> + T E' | ε
   T -> F T'
   T' -> * F T' | ε
   F -> ( E ) | id
2) S -> iEtSS' | a
   S' -> eS | ε
   E -> b
ANSWERS (FIRST):
1) FIRST(E) = FIRST(T) = FIRST(F) = { ( , id }
   FIRST(E') = { + , ε }
   FIRST(T') = { * , ε }
2) FIRST(S) = { i , a }
   FIRST(S') = { e , ε }
   FIRST(E) = { b }
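The FIRST sets above can be computed mechanically. Below is a sketch in Python (illustrative, not from the slides): grammar G is encoded as a dict, an ε-production is represented by an empty list, and the sets are grown iteratively until nothing changes.

```python
# Grammar G from the slide; [] encodes the ε-production.
GRAMMAR = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], []],
    "T":  [["F", "T'"]],
    "T'": [["*", "F", "T'"], []],
    "F":  [["(", "E", ")"], ["id"]],
}

def first_of_string(symbols, first):
    """FIRST of a sequence of grammar symbols."""
    result = set()
    for X in symbols:
        if X not in first:              # X is a terminal
            result.add(X)
            return result
        result |= first[X] - {"ε"}
        if "ε" not in first[X]:
            return result
    result.add("ε")                     # every symbol can derive ε
    return result

def first_sets(grammar):
    first = {A: set() for A in grammar}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for A, prods in grammar.items():
            for prod in prods:
                before = len(first[A])
                first[A] |= first_of_string(prod, first)
                changed |= len(first[A]) != before
    return first
```

Calling first_sets(GRAMMAR) reproduces the answers above, e.g. FIRST(E') = { + , ε }.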
• 11. First and Follow
Rules for computing FOLLOW(X), where X is a nonterminal:
1) If X appears in a production followed by a terminal, for example A -> Xa, then a is in FOLLOW(X): FOLLOW(X) = { a }.
2) If X is the start symbol of the grammar, for example X -> AB, A -> a, B -> b, then add the end marker $ to FOLLOW(X): FOLLOW(X) = { $ }.
3) If X is followed in a production by a nonterminal, add the FIRST of that nonterminal. For example, A -> XD, D -> aB gives FOLLOW(X) = FIRST(D) = { a }; and if FIRST(D) contains ε (e.g. D -> aB | ε), then everything in FOLLOW(A) is in FOLLOW(X).
4) If X is the last symbol of a production, e.g. S -> abX, then FOLLOW(X) = FOLLOW(S).
• 12. First and Follow
Consider again the two grammars from slide 10.
ANSWERS (FOLLOW):
1) FOLLOW(E) = FOLLOW(E') = { ) , $ }
   FOLLOW(T) = FOLLOW(T') = { + , ) , $ }
   FOLLOW(F) = { + , * , ) , $ }
2) FOLLOW(S) = FOLLOW(S') = { e , $ }
   FOLLOW(E) = { t }
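The FOLLOW sets can be computed the same way. The sketch below (again illustrative, not from the slides) repeats the grammar and the FIRST helpers so it runs standalone, then applies the FOLLOW rules from slide 11 iteratively.

```python
# Grammar G again; [] encodes the ε-production.
GRAMMAR = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], []],
    "T":  [["F", "T'"]],
    "T'": [["*", "F", "T'"], []],
    "F":  [["(", "E", ")"], ["id"]],
}

def first_of_string(symbols, first):
    """FIRST of a sequence of grammar symbols."""
    result = set()
    for X in symbols:
        if X not in first:              # terminal
            result.add(X)
            return result
        result |= first[X] - {"ε"}
        if "ε" not in first[X]:
            return result
    result.add("ε")
    return result

def first_sets(grammar):
    first = {A: set() for A in grammar}
    changed = True
    while changed:
        changed = False
        for A, prods in grammar.items():
            for prod in prods:
                before = len(first[A])
                first[A] |= first_of_string(prod, first)
                changed |= len(first[A]) != before
    return first

def follow_sets(grammar, start, first):
    follow = {A: set() for A in grammar}
    follow[start].add("$")              # rule 2: $ follows the start symbol
    changed = True
    while changed:
        changed = False
        for A, prods in grammar.items():
            for prod in prods:
                for i, X in enumerate(prod):
                    if X not in grammar:
                        continue        # terminals need no FOLLOW set
                    rest = first_of_string(prod[i + 1:], first)
                    before = len(follow[X])
                    follow[X] |= rest - {"ε"}   # rules 1 and 3
                    if "ε" in rest:             # rule 3 (ε case) and rule 4
                        follow[X] |= follow[A]
                    changed |= len(follow[X]) != before
    return follow
```

follow_sets(GRAMMAR, "E", first_sets(GRAMMAR)) reproduces the answers above, e.g. FOLLOW(F) = { + , * , ) , $ }.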
• 13. Construction of Predictive Parsing Tables
The general idea is to use FIRST and FOLLOW to construct the parsing table. For each production A -> α, the entry M[A, a] is filled with that production for every terminal a in FIRST(α). When FIRST(α) contains ε, the production is instead entered under every terminal in FOLLOW(A).
• 14. Construction of Predictive Parsing Tables
Consider again grammar G:
E -> T E'
E' -> + T E' | ε
T -> F T'
T' -> * F T' | ε
F -> ( E ) | id
and its FIRST and FOLLOW sets:
FIRST(E) = FIRST(T) = FIRST(F) = { ( , id }
FIRST(E') = { + , ε }
FIRST(T') = { * , ε }
FOLLOW(E) = FOLLOW(E') = { ) , $ }
FOLLOW(T) = FOLLOW(T') = { + , ) , $ }
FOLLOW(F) = { + , * , ) , $ }

Parsing table M:

Nonterminal | id      | +         | *         | (       | )      | $
E           | E->TE'  |           |           | E->TE'  |        |
E'          |         | E'->+TE'  |           |         | E'->ε  | E'->ε
T           | T->FT'  |           |           | T->FT'  |        |
T'          |         | T'->ε     | T'->*FT'  |         | T'->ε  | T'->ε
F           | F->id   |           |           | F->(E)  |        |
• 15. Moves made by a predictive parser on the input id + id * id, using the table from slide 14:

STACK    | INPUT           | ACTION
$E       | id + id * id $  |
$E'T     | id + id * id $  | E -> TE'
$E'T'F   | id + id * id $  | T -> FT'
$E'T'id  | id + id * id $  | F -> id
$E'T'    | + id * id $     |
$E'      | + id * id $     | T' -> ε
$E'T+    | + id * id $     | E' -> +TE'
$E'T     | id * id $       |
$E'T'F   | id * id $       | T -> FT'
$E'T'id  | id * id $       | F -> id
$E'T'    | * id $          |
$E'T'F*  | * id $          | T' -> *FT'
$E'T'F   | id $            |
$E'T'id  | id $            | F -> id
$E'T'    | $               |
$E'      | $               | T' -> ε
$        | $               | E' -> ε
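A trace like this is produced by a simple stack loop driven by the table. Below is a sketch (illustrative, not from the slides) with the parsing table for grammar G hardcoded as a dict; blank cells are simply absent, and an ε-production is an empty right-hand side.

```python
TABLE = {
    ("E", "id"): ["T", "E'"],  ("E", "("): ["T", "E'"],
    ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T", "id"): ["F", "T'"],  ("T", "("): ["F", "T'"],
    ("T'", "+"): [], ("T'", "*"): ["*", "F", "T'"],
    ("T'", ")"): [], ("T'", "$"): [],
    ("F", "id"): ["id"],       ("F", "("): ["(", "E", ")"],
}
NONTERMINALS = {"E", "E'", "T", "T'", "F"}

def predictive_parse(tokens):
    """Table-driven predictive parse; returns True iff the input is accepted."""
    stack = ["$", "E"]          # start symbol on top of the end marker
    tokens = tokens + ["$"]
    pos = 0
    while stack:
        top = stack.pop()
        if top in NONTERMINALS:
            prod = TABLE.get((top, tokens[pos]))
            if prod is None:
                return False                 # blank entry: syntax error
            stack.extend(reversed(prod))     # push RHS, leftmost symbol on top
        elif top == tokens[pos]:
            pos += 1                         # terminal matched
        else:
            return False
    return pos == len(tokens)

print(predictive_parse(["id", "+", "id", "*", "id"]))  # True
print(predictive_parse(["id", "+"]))                   # False
```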
• 16. LL(1) Grammars
What does LL(1) mean? The first "L" in LL(1) stands for scanning the input from left to right, the second "L" is for producing a leftmost derivation, and the "1" for using one input symbol of lookahead at each step to make parsing action decisions. No ambiguous or left-recursive grammar is LL(1).
Table for the grammar S -> iEtSS' | a, S' -> eS | ε, E -> b:

Nonterminal | a     | b     | e               | i           | t | $
S           | S->a  |       |                 | S->iEtSS'   |   |
S'          |       |       | S'->eS , S'->ε  |             |   | S'->ε
E           |       | E->b  |                 |             |   |

The entry M[S', e] is multiply defined, so this (dangling-else) grammar is not LL(1).
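The table-filling rule from slide 13 also detects such conflicts mechanically. The sketch below is illustrative: the FIRST set of each production and the FOLLOW sets are copied from the earlier slides rather than recomputed.

```python
# FIRST of each production (indexed by nonterminal and alternative number)
# and FOLLOW of each nonterminal, taken from the slides.
FIRST = {("S", 0): {"i"}, ("S", 1): {"a"},
         ("S'", 0): {"e"}, ("S'", 1): {"ε"},
         ("E", 0): {"b"}}
FOLLOW = {"S": {"e", "$"}, "S'": {"e", "$"}, "E": {"t"}}
PRODS = {("S", 0): "S -> iEtSS'", ("S", 1): "S -> a",
         ("S'", 0): "S' -> eS", ("S'", 1): "S' -> ε",
         ("E", 0): "E -> b"}

# Fill M[A, a]: under FIRST of the production, and under FOLLOW(A) when
# the production can derive ε.
table = {}
for (A, k), f in FIRST.items():
    for a in (f - {"ε"}) | (FOLLOW[A] if "ε" in f else set()):
        table.setdefault((A, a), []).append(PRODS[(A, k)])

# Any cell holding two productions makes the grammar non-LL(1).
conflicts = {cell: ps for cell, ps in table.items() if len(ps) > 1}
print(conflicts)
```

The only conflict found is M[S', e], which holds both S' -> eS and S' -> ε, exactly the multiply-defined entry shown in the table above.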
• 17. LL(1) Grammars
There remains the question of what should be done when a parsing table has multiply-defined entries. One solution is to transform the grammar by eliminating left recursion and then left factoring where possible, but not every grammar can be turned into an LL(1) grammar this way. The main difficulty in using predictive parsing is writing a grammar for the source language from which a predictive parser can be constructed. To alleviate some of this difficulty, one can use operator-precedence parsing, or better, an LR parser, which provides the benefits of both predictive parsing and operator precedence automatically.
• 18. Error Recovery
When can an error occur? An error is detected when the terminal on top of the stack does not match the next input symbol, or when a nonterminal A is on top of the stack, a is the next input symbol, and the parsing-table entry M[A, a] is empty.
How can we deal with errors? Panic-mode error recovery is based on the idea of skipping input symbols until a token in a selected set of synchronizing tokens appears.
• 19. Error Recovery
How does it work? Using FIRST and FOLLOW symbols as synchronizing tokens works well: the parsing table is filled with "synch" entries for the tokens in the FOLLOW set of each nonterminal. When the parser looks up entry M[A, a] and finds it blank, the input symbol a is skipped. If the entry is "synch", the nonterminal on top of the stack is popped, in an attempt to resume parsing.
• 20. Parsing table with synch entries, and the moves made on the erroneous input ) id * + id:

Nonterminal | id      | +         | *         | (       | )      | $
E           | E->TE'  |           |           | E->TE'  | synch  | synch
E'          |         | E'->+TE'  |           |         | E'->ε  | E'->ε
T           | T->FT'  | synch     |           | T->FT'  | synch  | synch
T'          |         | T'->ε     | T'->*FT'  |         | T'->ε  | T'->ε
F           | F->id   | synch     | synch     | F->(E)  | synch  | synch

STACK    | INPUT          | ACTION
$E       | ) id * + id $  | Error, skip )
$E       | id * + id $    | id is in FIRST(E)
$E'T     | id * + id $    |
$E'T'F   | id * + id $    |
$E'T'id  | id * + id $    |
$E'T'    | * + id $       |
$E'T'F*  | * + id $       |
$E'T'F   | + id $         | Error, M[F, +] = synch
$E'T'    | + id $         | F has been popped
$E'      | + id $         |
$E'T+    | + id $         |
$E'T     | id $           |
$E'T'F   | id $           |
$E'T'id  | id $           |
$E'T'    | $              |
$E'      | $              |
$        | $              |
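The synch behaviour can be bolted onto the table-driven loop from slide 15. This is a sketch under the simple policy of slide 19 (skip on blank entries, pop on synch entries); the table and synch sets are hardcoded from the slide, and the error messages are illustrative, not the slide's exact wording.

```python
TABLE = {
    ("E", "id"): ["T", "E'"],  ("E", "("): ["T", "E'"],
    ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T", "id"): ["F", "T'"],  ("T", "("): ["F", "T'"],
    ("T'", "+"): [], ("T'", "*"): ["*", "F", "T'"],
    ("T'", ")"): [], ("T'", "$"): [],
    ("F", "id"): ["id"],       ("F", "("): ["(", "E", ")"],
}
# Synch sets are the FOLLOW sets of E, T and F; E' and T' need none
# because their FOLLOW cells already hold ε-productions.
SYNCH = {"E": {")", "$"}, "E'": set(),
         "T": {"+", ")", "$"}, "T'": set(),
         "F": {"+", "*", ")", "$"}}

def parse_with_recovery(tokens):
    """Panic-mode recovery: skip input on blank entries, pop on synch entries."""
    errors = []
    stack = ["$", "E"]
    tokens = tokens + ["$"]
    pos = 0
    while stack[-1] != "$":
        top, a = stack[-1], tokens[pos]
        if top in SYNCH:                     # top is a nonterminal
            prod = TABLE.get((top, a))
            if prod is not None:
                stack.pop()
                stack.extend(reversed(prod))
            elif a in SYNCH[top]:
                errors.append(f"error, M[{top}, {a}] = synch: {top} popped")
                stack.pop()
            else:
                errors.append(f"error, blank entry: skipped '{a}'")
                pos += 1
        elif top == a:
            stack.pop()                      # terminal matched
            pos += 1
        else:
            errors.append(f"error, unmatched terminal '{top}' popped")
            stack.pop()
    return tokens[pos] == "$", errors

ok, errs = parse_with_recovery(["id", "*", "+", "id"])
print(ok, errs)  # one error at M[F, +], yet the parse still completes
```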
• 21. Error Recovery
Another error-recovery procedure is phrase-level recovery. It is implemented by filling in the blank entries of the parsing table with pointers to error routines. These routines can pop symbols from the stack; change, insert, or delete symbols on the input; and issue appropriate error messages. Altering stack symbols, however, is questionable and risky.
• 22. Bottom Up Parsing
A general style of bottom-up parsing, shift-reduce parsing, will be introduced. Shift-reduce parsing works as its name suggests: whenever the symbols on top of the stack cannot be reduced by any production, the parser shifts the next input symbol onto the stack; when the top of the stack matches the right side of a production, it reduces.
• 23. Bottom Up Parsing
Consider the following grammar:
E -> E + E
E -> E * E
E -> ( E )
E -> id

STACK           | INPUT             | ACTION
1)  $           | id1 + id2 * id3 $ | Shift
2)  $ id1       | + id2 * id3 $     | Reduce by E -> id
3)  $ E         | + id2 * id3 $     | Shift
4)  $ E +       | id2 * id3 $       | Shift
5)  $ E + id2   | * id3 $           | Reduce by E -> id
6)  $ E + E     | * id3 $           | Shift
7)  $ E + E *   | id3 $             | Shift
8)  $ E + E * id3 | $               | Reduce by E -> id
9)  $ E + E * E | $                 | Reduce by E -> E * E
10) $ E + E     | $                 | Reduce by E -> E + E
11) $ E         | $                 | ACCEPT
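The shift/reduce decisions in this trace can be mimicked with a small loop. The sketch below is illustrative only: it handles just id, + and * (no parentheses), and uses operator precedence (* above +) to decide when to reduce by E -> E op E, which is how the trace resolves this ambiguous grammar.

```python
PREC = {"+": 1, "*": 2, "$": 0}   # precedence; $ is the end marker

def shift_reduce(tokens):
    """Return (accepted, list of shift/reduce actions) for id/+/* input."""
    stack = ["$"]
    tokens = tokens + ["$"]
    actions = []
    i = 0
    while True:
        a = tokens[i]
        if stack[-1] == "id":
            stack[-1] = "E"                   # reduce by E -> id
            actions.append("reduce E -> id")
        elif (len(stack) >= 3 and stack[-3] == "E" and stack[-2] in PREC
              and stack[-1] == "E" and PREC[stack[-2]] >= PREC.get(a, 0)):
            op = stack[-2]
            del stack[-2:]                    # reduce by E -> E op E
            actions.append(f"reduce E -> E {op} E")
        elif a == "$":
            break                             # nothing left to shift or reduce
        else:
            stack.append(a)                   # shift
            i += 1
            actions.append(f"shift {a}")
    return stack == ["$", "E"], actions

ok, actions = shift_reduce(["id", "+", "id", "*", "id"])
print(ok)       # True
print(actions)  # same shift/reduce order as the trace above
```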
