
Top-Down and Bottom-Up Parsing




  1. Top-Down and Bottom-Up Parsing
  2. Top-Down Parsing; Bottom-Up Parsing
  3. Top-Down Parsing. Things to know: Top-down parsing constructs a parse tree for the input starting from the root, creating the nodes of the parse tree in preorder (depth-first). A general form of top-down parsing is recursive-descent parsing: a technique that executes a set of recursive procedures to process the input and that may involve backtracking (scanning the input repeatedly). Backtracking is time-consuming and therefore inefficient, which is why a special case of top-down parsing, called predictive parsing, was developed, in which no backtracking is required. A dilemma occurs if the grammar is left-recursive: even with backtracking, the parser can go into an infinite loop. There are two types of recursion, left recursion and right recursion; as the names suggest, a left-recursive grammar builds trees that grow down to the left, while a right-recursive grammar builds trees that grow down to the right.
  4. Top-down parse tree of grammar G (where input = id):
        G:  E -> T E'    E' -> + T E' | ε    T -> F T'    T' -> * F T' | ε    F -> ( E ) | id
     [The slide shows the tree for id grown in preorder: E; then E -> T E'; then T -> F T'; then F -> id.]
     An example of a simple production with a left-recursive grammar. Consider the grammar: expr -> expr + term. This is left-recursive: whenever we call expr, the same procedure is called again, and the parser loops forever. By carefully rewriting the grammar, one can eliminate the left recursion. For example, expr -> expr + term | term can be rewritten as expr -> term rest and rest -> + term rest | ε. After obtaining a grammar that needs no backtracking, we can use the PREDICTIVE PARSER.
  5. Top-Down Parsing Techniques: Recursive-Descent Parsing; Predictive Parsing
  6. Recursive-Descent Parsing. A recursive-descent parsing program consists of a set of procedures, one for each nonterminal. Execution begins with the procedure for the start symbol, which halts and announces success if its procedure body scans the entire input string. General recursive descent may require backtracking; that is, it may require repeated scans over the input. Consider the grammar with input string "cad": S -> c A d, A -> a b | a. [The slide shows three trees: the parser expands S, tries A -> a b, fails to match the input d against b, backtracks, and succeeds with A -> a.]
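As an illustration, the behaviour on this grammar (S -> c A d, A -> a b | a) can be sketched in Python. The function names and the position-passing style are illustrative, not from the slides:

```python
# Recursive-descent recognizer for:
#   S -> c A d
#   A -> a b | a
# Each procedure returns the input position after a successful match,
# or None on failure.

def parse_S(s, i):
    """Try to match S starting at position i."""
    if i < len(s) and s[i] == 'c':
        j = parse_A(s, i + 1)
        if j is not None and j < len(s) and s[j] == 'd':
            return j + 1
    return None

def parse_A(s, i):
    # Try the longer alternative 'a b' first; falling through to the
    # shorter 'a' is what stands in for backtracking here.
    if s[i:i + 2] == 'ab':
        return i + 2
    if s[i:i + 1] == 'a':
        return i + 1
    return None
```

The whole string is accepted when `parse_S(s, 0)` equals `len(s)`: on "cad" the parser first attempts A -> a b, fails to match b against d, and succeeds with A -> a, just as in the slide's three trees.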
  7. Predictive Parsing: a parsing technique that uses a lookahead symbol to decide which production matches the current input. Subtopics: First and Follow; Construction of Predictive Parsing Tables; LL(1) Grammars; Error Recovery.
  8. First and Follow. First and Follow aid the construction of a predictive parser: they allow us to fill in the entries of a predictive parsing table. If α is any string of grammar symbols, then FIRST(α) is the set of terminals that begin the strings derived from α; if α derives the empty string (ε), then ε is also in FIRST(α). FOLLOW(A), for a nonterminal A, is the set of terminals a that can appear immediately to the right of A in some sentential form.
  9. First and Follow. Rules for computing FIRST(X), where X can be a terminal, a nonterminal, or ε (the empty string):
     1) If X is a terminal, then FIRST(X) = { X }.
     2) If X is ε, then FIRST(X) = { ε }.
     3) If X is a nonterminal with a production X -> Y, where Y is also a nonterminal, then FIRST(X) = FIRST(Y); keep chasing FIRST through the productions until one begins with a terminal. For example, with X -> Y, Y -> Z a, Z -> b, we get FIRST(X) = FIRST(Y) = FIRST(Z) = { b }.
     4) If X has several productions, e.g. X -> a | b, then FIRST(X) = { a, b }.
  10. First and Follow. Consider again grammar G:
      1) E -> T E'    E' -> + T E' | ε    T -> F T'    T' -> * F T' | ε    F -> ( E ) | id
         ANSWERS (FIRST): FIRST(E) = FIRST(T) = FIRST(F) = { (, id }; FIRST(E') = { +, ε }; FIRST(T') = { *, ε }
      2) S -> i E t S S' | a    S' -> e S | ε    E -> b
         ANSWERS (FIRST): FIRST(S) = { i, a }; FIRST(S') = { e, ε }; FIRST(E) = { b }
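These FIRST sets can also be computed mechanically. Below is a minimal fixed-point sketch in Python for grammar G above; the 'eps' marker and function names are illustrative, not from the slides:

```python
# Fixed-point computation of FIRST sets for grammar G.
# EPS stands for the empty string ε; any symbol not in the grammar
# dictionary is treated as a terminal.
EPS = 'eps'
grammar = {
    'E':  [['T', "E'"]],
    "E'": [['+', 'T', "E'"], [EPS]],
    'T':  [['F', "T'"]],
    "T'": [['*', 'F', "T'"], [EPS]],
    'F':  [['(', 'E', ')'], ['id']],
}

def first_of(seq, first):
    """FIRST of a string of grammar symbols, given FIRST sets so far."""
    out = set()
    for sym in seq:
        if sym not in grammar:          # terminal (or the eps marker)
            out.add(sym)
            return out
        out |= first[sym] - {EPS}
        if EPS not in first[sym]:       # sym cannot vanish: stop here
            return out
    out.add(EPS)                        # every symbol could derive eps
    return out

def first_sets():
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                      # iterate until nothing changes
        changed = False
        for nt, bodies in grammar.items():
            for body in bodies:
                new = first_of(body, first)
                if not new <= first[nt]:
                    first[nt] |= new
                    changed = True
    return first
```

Running `first_sets()` reproduces the slide's answers for grammar 1: FIRST(E) = FIRST(T) = FIRST(F) = { (, id }, FIRST(E') = { +, ε }, FIRST(T') = { *, ε }.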
  11. First and Follow. Rules for computing FOLLOW(X), where X is a nonterminal:
      1) If X appears in a production body followed by a terminal, e.g. A -> X a, then a is in FOLLOW(X): FOLLOW(X) = { a }.
      2) If X is the start symbol of the grammar, e.g. X -> A B, A -> a, B -> b, then add $ to FOLLOW(X): FOLLOW(X) = { $ }.
      3) If X appears in a production body followed by another nonterminal, take the FIRST of that succeeding nonterminal: e.g. A -> X D, D -> a B gives FOLLOW(X) = FIRST(D) = { a }. If FIRST(D) contains ε (e.g. D -> a B | ε), then everything in FOLLOW(A) is also in FOLLOW(X).
      4) If X is the last symbol of a production body, e.g. S -> a b X, then everything in FOLLOW(S) is in FOLLOW(X).
  12. First and Follow. Consider again grammar G:
      1) E -> T E'    E' -> + T E' | ε    T -> F T'    T' -> * F T' | ε    F -> ( E ) | id
         ANSWERS (FIRST): FIRST(E) = FIRST(T) = FIRST(F) = { (, id }; FIRST(E') = { +, ε }; FIRST(T') = { *, ε }
         ANSWERS (FOLLOW): FOLLOW(E) = FOLLOW(E') = { ), $ }; FOLLOW(T) = FOLLOW(T') = { +, ), $ }; FOLLOW(F) = { +, *, ), $ }
      2) S -> i E t S S' | a    S' -> e S | ε    E -> b
         ANSWERS (FIRST): FIRST(S) = { i, a }; FIRST(S') = { e, ε }; FIRST(E) = { b }
         ANSWERS (FOLLOW): FOLLOW(S) = FOLLOW(S') = { e, $ }; FOLLOW(E) = { t }
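The FOLLOW rules from slide 11 can likewise be run to a fixed point. A minimal sketch for grammar G, taking the FIRST sets from the slide as given data ('$' is the end marker, E the start symbol; names are illustrative):

```python
# Fixed-point computation of FOLLOW sets for grammar G,
# using the FIRST sets stated on the slide.
EPS = 'eps'
grammar = {
    'E':  [['T', "E'"]],
    "E'": [['+', 'T', "E'"], [EPS]],
    'T':  [['F', "T'"]],
    "T'": [['*', 'F', "T'"], [EPS]],
    'F':  [['(', 'E', ')'], ['id']],
}
first = {
    'E': {'(', 'id'}, "E'": {'+', EPS},
    'T': {'(', 'id'}, "T'": {'*', EPS}, 'F': {'(', 'id'},
}

def first_of(seq):
    """FIRST of a (possibly empty) string of grammar symbols."""
    out = set()
    for sym in seq:
        if sym not in grammar:
            out.add(sym)
            return out
        out |= first[sym] - {EPS}
        if EPS not in first[sym]:
            return out
    out.add(EPS)                      # the whole string can derive eps
    return out

def follow_sets(start='E'):
    follow = {nt: set() for nt in grammar}
    follow[start].add('$')            # rule 2: $ follows the start symbol
    changed = True
    while changed:
        changed = False
        for head, bodies in grammar.items():
            for body in bodies:
                for i, sym in enumerate(body):
                    if sym not in grammar:
                        continue      # only nonterminals get FOLLOW sets
                    rest = first_of(body[i + 1:])
                    add = rest - {EPS}            # rules 1 and 3
                    if EPS in rest:               # rules 3 and 4
                        add |= follow[head]
                    if not add <= follow[sym]:
                        follow[sym] |= add
                        changed = True
    return follow
```

`follow_sets()` reproduces the slide's answers: FOLLOW(E) = FOLLOW(E') = { ), $ }, FOLLOW(T) = FOLLOW(T') = { +, ), $ }, FOLLOW(F) = { +, *, ), $ }.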
  13. Construction of Predictive Parsing Tables. The general idea is to use FIRST and FOLLOW to construct the parsing table. For each production A -> α, the entry M[A, a] is set to A -> α for every terminal a in FIRST(α). When FIRST(α) contains ε, A -> α is also entered in M[A, b] for every b in FOLLOW(A).
  14. Construction of Predictive Parsing Tables. Consider again grammar G:
         E -> T E'    E' -> + T E' | ε    T -> F T'    T' -> * F T' | ε    F -> ( E ) | id
      and its First and Follow sets:
         FIRST(E) = FIRST(T) = FIRST(F) = { (, id }     FOLLOW(E) = FOLLOW(E') = { ), $ }
         FIRST(E') = { +, ε }                           FOLLOW(T) = FOLLOW(T') = { +, ), $ }
         FIRST(T') = { *, ε }                           FOLLOW(F) = { +, *, ), $ }
      Parsing table M:
         Nonterminal | id      | +         | *         | (       | )      | $
         E           | E->TE'  |           |           | E->TE'  |        |
         E'          |         | E'->+TE'  |           |         | E'->ε  | E'->ε
         T           | T->FT'  |           |           | T->FT'  |        |
         T'          |         | T'->ε     | T'->*FT'  |         | T'->ε  | T'->ε
         F           | F->id   |           |           | F->(E)  |        |
  15. The predictive parser in action on input id + id * id, using the parsing table from the previous slide:
         STACK        | INPUT           | ACTION
         $ E          | id + id * id $  |
         $ E' T       | id + id * id $  | E -> T E'
         $ E' T' F    | id + id * id $  | T -> F T'
         $ E' T' id   | id + id * id $  | F -> id
         $ E' T'      | + id * id $     | match id
         $ E'         | + id * id $     | T' -> ε
         $ E' T +     | + id * id $     | E' -> + T E'
         $ E' T       | id * id $       | match +
         $ E' T' F    | id * id $       | T -> F T'
         $ E' T' id   | id * id $       | F -> id
         $ E' T'      | * id $          | match id
         $ E' T' F *  | * id $          | T' -> * F T'
         $ E' T' F    | id $            | match *
         $ E' T' id   | id $            | F -> id
         $ E' T'      | $               | match id
         $ E'         | $               | T' -> ε
         $            | $               | E' -> ε; accept
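The trace above can be reproduced mechanically. Below is a minimal sketch of a table-driven LL(1) parser in Python, with the slide's parsing table written out as a dictionary (token spellings and function names are illustrative):

```python
# Table-driven LL(1) recognizer for grammar G.
# table[(nonterminal, lookahead)] gives the production body to push;
# an empty list encodes an eps-production; a missing key is an error.
table = {
    ('E', 'id'): ['T', "E'"], ('E', '('): ['T', "E'"],
    ("E'", '+'): ['+', 'T', "E'"], ("E'", ')'): [], ("E'", '$'): [],
    ('T', 'id'): ['F', "T'"], ('T', '('): ['F', "T'"],
    ("T'", '+'): [], ("T'", '*'): ['*', 'F', "T'"],
    ("T'", ')'): [], ("T'", '$'): [],
    ('F', 'id'): ['id'], ('F', '('): ['(', 'E', ')'],
}
nonterminals = {'E', "E'", 'T', "T'", 'F'}

def ll1_parse(tokens):
    tokens = tokens + ['$']
    stack = ['$', 'E']                 # start symbol on top of $
    i = 0
    while stack:
        top = stack.pop()
        a = tokens[i]
        if top == a:                   # terminal (or $) matches input
            i += 1
        elif top in nonterminals:
            body = table.get((top, a))
            if body is None:
                return False           # empty table entry: error
            stack.extend(reversed(body))  # push body, leftmost on top
        else:
            return False               # terminal mismatch
    return i == len(tokens)
```

Each loop iteration performs exactly one row of the trace: either a "match" step (pop a terminal and advance the input) or a production step (replace the nonterminal on top of the stack by its body).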
  16. LL(1) Grammars. What does LL(1) mean? The first "L" in LL(1) stands for scanning the input from left to right, the second "L" for producing a leftmost derivation, and the "1" for using one input symbol of lookahead at each step to make parsing-action decisions. No ambiguous or left-recursive grammar is LL(1).
  17. LL(1) Grammars. There remains the question of what to do when a parsing table has multiply defined entries. One solution is to transform the grammar by eliminating all left recursion and then left factoring where possible, but not every grammar can be turned into an LL(1) grammar this way. The main difficulty in using predictive parsing is writing a grammar for the source language from which a predictive parser can be constructed. To alleviate some of this difficulty, one can use operator-precedence parsing, or better still an LR parser, which provides the benefits of both predictive parsing and operator precedence automatically.
  18. Error Recovery. When can an error occur? An error is detected when the terminal on top of the stack does not match the next input symbol, or when a nonterminal A is on top of the stack, a is the next input symbol, and the parsing-table entry M[A, a] is empty. How can we deal with errors? Panic-mode error recovery is based on the idea of skipping input symbols until a token in a selected set of synchronizing tokens appears.
  19. Error Recovery. How does it work? Using FIRST and FOLLOW symbols as synchronizing tokens works well: the parsing table is filled with "synch" entries obtained from the FOLLOW set of each nonterminal. When the parser looks up entry M[A, a] and finds it blank, the input symbol a is skipped. If the entry is "synch", the nonterminal is popped in an attempt to resume parsing.
  20. The table with synch entries, and the recovery trace for the erroneous input ) id * + id:
         Nonterminal | id      | +         | *         | (       | )      | $
         E           | E->TE'  |           |           | E->TE'  | synch  | synch
         E'          |         | E'->+TE'  |           |         | E'->ε  | E'->ε
         T           | T->FT'  | synch     |           | T->FT'  | synch  | synch
         T'          |         | T'->ε     | T'->*FT'  |         | T'->ε  | T'->ε
         F           | F->id   | synch     | synch     | F->(E)  | synch  | synch
         STACK        | INPUT          | ACTION
         $ E          | ) id * + id $  | error, skip )
         $ E          | id * + id $    | id is in FIRST(E)
         $ E' T       | id * + id $    |
         $ E' T' F    | id * + id $    |
         $ E' T' id   | id * + id $    |
         $ E' T'      | * + id $       |
         $ E' T' F *  | * + id $       |
         $ E' T' F    | + id $         | error, M[F, +] = synch
         $ E' T'      | + id $         | F has been popped
         $ E' T +     | + id $         |
         $ E' T       | id $           |
         $ E' T' F    | id $           |
         $ E' T' id   | id $           |
         $ E' T'      | $              |
         $ E'         | $              |
         $            | $              |
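A minimal sketch of the skip-on-blank / pop-on-synch policy in Python, with the synch entries above encoded in the table. Exact recovery behaviour varies between presentations; the names and the error-message strings here are illustrative:

```python
# Panic-mode error recovery for grammar G: blank entries skip the
# offending input token; 'synch' entries pop the nonterminal.
SYNCH = 'synch'
table = {
    ('E', 'id'): ['T', "E'"], ('E', '('): ['T', "E'"],
    ('E', ')'): SYNCH, ('E', '$'): SYNCH,
    ("E'", '+'): ['+', 'T', "E'"], ("E'", ')'): [], ("E'", '$'): [],
    ('T', 'id'): ['F', "T'"], ('T', '('): ['F', "T'"],
    ('T', '+'): SYNCH, ('T', ')'): SYNCH, ('T', '$'): SYNCH,
    ("T'", '+'): [], ("T'", '*'): ['*', 'F', "T'"],
    ("T'", ')'): [], ("T'", '$'): [],
    ('F', 'id'): ['id'], ('F', '('): ['(', 'E', ')'],
    ('F', '+'): SYNCH, ('F', '*'): SYNCH,
    ('F', ')'): SYNCH, ('F', '$'): SYNCH,
}
nonterminals = {'E', "E'", 'T', "T'", 'F'}

def parse_with_recovery(tokens):
    """Parse to the end of the input, collecting error messages."""
    tokens = tokens + ['$']
    stack = ['$', 'E']
    i, errors = 0, []
    while stack:
        top, a = stack[-1], tokens[i]
        if top == a:                      # match terminal or $
            stack.pop(); i += 1
        elif top not in nonterminals:     # unmatched terminal: pop it
            errors.append(f'error: popped unmatched {top}')
            stack.pop()
        else:
            entry = table.get((top, a))
            if entry is None:             # blank entry: skip input token
                errors.append(f'error: skipped {a}')
                i += 1
            elif entry == SYNCH:          # synch entry: pop nonterminal
                errors.append(f'error: popped {top}')
                stack.pop()
            else:                         # apply the production
                stack.pop()
                stack.extend(reversed(entry))
    return errors
```

On the input id * + id (an operand missing after *), this reproduces the M[F, +] = synch step of the trace: F is popped and parsing resumes, with exactly one error reported.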
  21. Error Recovery. Another error-recovery procedure is phrase-level recovery. It is implemented by filling the blank entries in the parsing table with pointers to error routines. These routines can pop symbols from the stack; change, insert, or delete symbols in the input; and issue appropriate error messages. The alteration of stack symbols is very questionable and risky.
  22. Bottom-Up Parsing. A general style of bottom-up parsing, shift-reduce parsing, will be introduced. Shift-reduce parsing works as its name suggests: whenever the symbols on top of the stack cannot yet be reduced, we shift another input symbol onto the stack, and whenever the top of the stack matches the body of a production, we reduce it to the production's head.
  23. Bottom-Up Parsing. Shift-reduce parse of id1 + id2 * id3:
          STACK           | INPUT              | ACTION
       1) $               | id1 + id2 * id3 $  | shift
       2) $ id1           | + id2 * id3 $      | reduce by E -> id
       3) $ E             | + id2 * id3 $      | shift
       4) $ E +           | id2 * id3 $        | shift
       5) $ E + id2       | * id3 $            | reduce by E -> id
       6) $ E + E         | * id3 $            | shift
       7) $ E + E *       | id3 $              | shift
       8) $ E + E * id3   | $                  | reduce by E -> id
       9) $ E + E * E     | $                  | reduce by E -> E * E
      10) $ E + E         | $                  | reduce by E -> E + E
      11) $ E             | $                  | ACCEPT
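The trace above can be sketched as a naive shift-reduce recognizer for the ambiguous grammar E -> E + E | E * E | id. This is a toy that hard-codes one precedence decision, not a general LR parser; the explicit check that delays reducing E + E while * is the next input mirrors the order of reductions on the slide. Tokens are plain 'id' here; names are illustrative:

```python
# Naive shift-reduce recognizer for  E -> E + E | E * E | id.
# Reduces whenever the top of the stack matches a production body,
# except that E + E is not reduced while '*' is the next input token
# (so multiplication is reduced first, as in the slide's trace).
def shift_reduce(tokens):
    stack = ['$']
    tokens = tokens + ['$']
    i = 0
    trace = []
    while True:
        if stack[-1] == 'id':
            stack[-1] = 'E'
            trace.append('reduce E -> id')
        elif stack[-3:] == ['E', '+', 'E'] and tokens[i] != '*':
            del stack[-3:]; stack.append('E')
            trace.append('reduce E -> E + E')
        elif stack[-3:] == ['E', '*', 'E']:
            del stack[-3:]; stack.append('E')
            trace.append('reduce E -> E * E')
        elif tokens[i] == '$':
            break                          # nothing to shift or reduce
        else:
            stack.append(tokens[i]); i += 1
            trace.append('shift')
    return stack == ['$', 'E'], trace
```

On id + id * id the returned trace contains three E -> id reductions and ends with E -> E * E followed by E -> E + E, matching rows 8–11 of the table above.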
