Top Down and Bottom Up Parsing

Transcript

    • 1. Top-Down and Bottom-Up Parsing
    • 2. Top Down Parsing; Bottom Up Parsing
    • 3. Top Down Parsing. Things to know: Top down parsing constructs a parse tree for the input starting from the root and creates the nodes of the parse tree in preorder (depth first). A general form of top down parsing is recursive descent parsing. Recursive descent parsing is a top down technique that executes a set of recursive procedures to process the input, and it may involve backtracking (scanning the input repeatedly). Backtracking is time consuming and therefore inefficient, which is why a special case of top down parsing, called predictive parsing, was developed, in which no backtracking is required. A dilemma occurs if the grammar is left recursive: even with backtracking, the parser can go into an infinite loop. There are two types of recursion, left recursive and right recursive; as the names suggest, a left recursive grammar builds trees that grow down to the left, while a right recursive grammar builds trees that grow down to the right.
    • 4. Top-down parse tree of grammar G (where the input is id): G = { E -> T E', E' -> +T E' | ε, T -> F T', T' -> *F T' | ε, F -> ( E ) | id }. (The slide shows the tree being built in preorder: E; then E expanded to T E'; then T expanded to F T'; then F expanded to id.) An example of a simple production with a left recursive grammar: consider the grammar expr -> expr + term. This is a left recursive grammar: whenever we call expr, the same procedure is called again before any input is consumed, and the parser will loop forever. By carefully rewriting the grammar, one can eliminate the left recursion: expr -> expr + term | term can be rewritten as expr -> term rest, rest -> + term rest | ε. After obtaining a grammar that needs no backtracking, we can use the PREDICTIVE PARSER.
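To make the left-recursion problem on slide 4 concrete, here is a minimal Python sketch (my own illustration, not code from the presentation). A procedure written directly from the left-recursive production calls itself before consuming any input and never terminates, while the rewritten grammar parses fine. The names expr_rest and term, and the global pos cursor, are illustrative assumptions.

```python
# Sketch: left-recursive vs. rewritten grammar for expr -> expr + term | term.
tokens = ["id", "+", "id"]
pos = 0

def expr_left_recursive():
    # expr -> expr + term : calls itself before consuming any input,
    # so it would recurse forever (never call this one).
    expr_left_recursive()
    match("+")
    term()

def expr():
    # expr -> term rest   (left recursion eliminated)
    term()
    expr_rest()

def expr_rest():
    # rest -> + term rest | epsilon
    global pos
    if pos < len(tokens) and tokens[pos] == "+":
        pos += 1
        term()
        expr_rest()
    # otherwise take the epsilon production

def term():
    # term -> id   (simplified for the sketch)
    match("id")

def match(expected):
    global pos
    assert pos < len(tokens) and tokens[pos] == expected, "syntax error"
    pos += 1

expr()
print("parsed", pos, "tokens")  # parsed 3 tokens
```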
    • 5. Top Down Parsing Techniques: Recursive-Descent Parsing; Predictive Parsing
    • 6. Recursive-Descent Parsing. A recursive-descent parsing program consists of a set of procedures, one for each nonterminal. Execution begins with the procedure for the start symbol, which halts and announces success if its procedure body scans the entire input string. General recursive descent may require backtracking; that is, it may require repeated scans over the input. Consider the grammar S -> c A d, A -> a b | a with input string "cad". (The slide shows three trees: S expanded to c A d; A first expanded to a b, which fails to match the remaining input; then, after backtracking, A expanded to a, which succeeds.)
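Slide 6's grammar and input lend themselves to a short backtracking recursive-descent sketch. The Python below is my own illustration, not taken from the slides: procedure A first tries the alternative a b, and if that fails it backtracks and tries a, exactly as the slide's three trees suggest.

```python
# Backtracking recursive-descent sketch for:
#   S -> c A d
#   A -> a b | a
def parse(inp):
    def S(i):
        # S -> c A d
        if i < len(inp) and inp[i] == "c":
            j = A(i + 1)
            if j is not None and j < len(inp) and inp[j] == "d":
                return j + 1
        return None

    def A(i):
        # Try A -> a b first; on failure, backtrack and try A -> a.
        if i + 1 < len(inp) and inp[i] == "a" and inp[i + 1] == "b":
            return i + 2
        if i < len(inp) and inp[i] == "a":
            return i + 1
        return None

    end = S(0)
    return end == len(inp)

print(parse("cad"))   # True  (A -> a, after backtracking from a b)
print(parse("cabd"))  # True  (A -> a b)
print(parse("cd"))    # False
```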
    • 7. Predictive Parsing: a parsing technique that uses a lookahead symbol to decide which production to apply to the current input, with no backtracking. Topics: First and Follow; Construction of Predictive Parsing Tables; LL(1) Grammars; Error Recovery.
    • 8. First and Follow. First and Follow aid the construction of a predictive parser: they allow us to fill in the entries of a predictive parsing table. If α is any string of grammar symbols, then First(α) is the set of terminals that begin the strings derived from α; if α derives the empty string (ε), then ε is also in First(α). Follow(A), for a nonterminal A, is the set of terminals a that can appear immediately to the right of A in some sentential form.
    • 9. First and Follow. Rules for computing FIRST(A), where A -> X and X can be a terminal, a nonterminal, or ε (the empty string): 1) If X is a terminal, then FIRST(A) = { X }. 2) If X is ε, then FIRST(A) = { ε }. 3) If X is a nonterminal, keep taking the FIRST of the nonterminal it derives until a production beginning with a terminal is reached; for example, with A -> X, X -> Y, Y -> Za, Z -> b, we get FIRST(A) = FIRST(X) = FIRST(Y) = FIRST(Z) = { b }. 4) If X is a nonterminal with two productions, e.g. X -> a | b, then FIRST(A) = { a , b }.
    • 10. First and Follow. Consider again grammar G: 1) E -> T E', E' -> +T E' | ε, T -> F T', T' -> *F T' | ε, F -> ( E ) | id. ANSWERS (FIRST): FIRST(E) = FIRST(T) = FIRST(F) = { ( , id }; FIRST(E') = { + , ε }; FIRST(T') = { * , ε }. 2) S -> iEtSS' | a, S' -> eS | ε, E -> b. ANSWERS (FIRST): FIRST(S) = { i , a }; FIRST(S') = { e , ε }; FIRST(E) = { b }.
    • 11. First and Follow. Rules for computing FOLLOW(X), where X is a nonterminal: 1) If X appears in a production and is followed by a terminal, for example A -> Xa, then that terminal is in FOLLOW(X): FOLLOW(X) = { a }. 2) If X is the start symbol of the grammar, for example X -> AB, A -> a, B -> b, then add $ (the end marker) to FOLLOW(X): FOLLOW(X) = { $ }. 3) If X appears in a production followed by another nonterminal, add the FIRST of that succeeding nonterminal; e.g. for A -> XD, D -> aB we get FOLLOW(X) = FIRST(D) = { a }, and if FIRST(D) contains ε (e.g. D -> aB | ε), then everything in FOLLOW(A) is also in FOLLOW(X). 4) If X is the last symbol of a production, e.g. S -> abX, then everything in FOLLOW(S) is in FOLLOW(X).
    • 12. First and Follow. Consider again grammar G. ANSWERS (FOLLOW): 1) E -> T E', E' -> +T E' | ε, T -> F T', T' -> *F T' | ε, F -> ( E ) | id: FOLLOW(E) = FOLLOW(E') = { ) , $ }; FOLLOW(T) = FOLLOW(T') = { + , ) , $ }; FOLLOW(F) = { + , * , ) , $ }. 2) S -> iEtSS' | a, S' -> eS | ε, E -> b: FOLLOW(S) = FOLLOW(S') = { e , $ }; FOLLOW(E) = { t }. (For reference, the FIRST sets from slide 10: FIRST(E) = FIRST(T) = FIRST(F) = { ( , id }, FIRST(E') = { + , ε }, FIRST(T') = { * , ε }; FIRST(S) = { i , a }, FIRST(S') = { e , ε }, FIRST(E) = { b }.)
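The FIRST and FOLLOW sets above can be computed mechanically by iterating the rules from slides 9 and 11 to a fixed point. The sketch below is an assumed Python implementation (the slides give no code); the names E2 and T2 stand in for E' and T', and the empty string represents ε.

```python
# FIRST/FOLLOW computation for the expression grammar G (sketch).
EPS = ""
GRAMMAR = {
    "E":  [["T", "E2"]],
    "E2": [["+", "T", "E2"], [EPS]],
    "T":  [["F", "T2"]],
    "T2": [["*", "F", "T2"], [EPS]],
    "F":  [["(", "E", ")"], ["id"]],
}
NONTERMS = set(GRAMMAR)

def first_of_seq(seq, first):
    """FIRST of a string of grammar symbols (rules from slide 9)."""
    out = set()
    for sym in seq:
        if sym == EPS:
            out.add(EPS)
            return out
        if sym not in NONTERMS:        # terminal: it begins the string
            out.add(sym)
            return out
        out |= first[sym] - {EPS}
        if EPS not in first[sym]:      # this nonterminal cannot vanish
            return out
    out.add(EPS)                       # every symbol can derive epsilon
    return out

def compute_first_follow(start="E"):
    first = {A: set() for A in NONTERMS}
    follow = {A: set() for A in NONTERMS}
    follow[start].add("$")             # rule 2 of slide 11
    changed = True
    while changed:                     # iterate to a fixed point
        changed = False
        for A, prods in GRAMMAR.items():
            for prod in prods:
                before = len(first[A])
                first[A] |= first_of_seq(prod, first)
                changed |= len(first[A]) != before
                for i, sym in enumerate(prod):
                    if sym in NONTERMS:
                        rest = first_of_seq(prod[i + 1:], first)
                        before = len(follow[sym])
                        follow[sym] |= rest - {EPS}
                        if EPS in rest:            # rules 3 and 4 of slide 11
                            follow[sym] |= follow[A]
                        changed |= len(follow[sym]) != before
    return first, follow

first, follow = compute_first_follow()
print(first["E"])    # {'(', 'id'}
print(follow["F"])   # {'+', '*', ')', '$'}
```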
    • 13. Construction of Predictive Parsing Tables. The general idea is to use FIRST and FOLLOW to construct the parsing table. Each production A -> α is entered in the table under every terminal in FIRST(α); when FIRST(α) contains ε, the production is also entered under every terminal in FOLLOW(A).
    • 14. Construction of Predictive Parsing Tables. Consider again grammar G:
          E -> T E'    E' -> +T E' | ε    T -> F T'    T' -> *F T' | ε    F -> ( E ) | id
      and its FIRST and FOLLOW sets:
          FIRST(E) = FIRST(T) = FIRST(F) = { ( , id }    FOLLOW(E) = FOLLOW(E') = { ) , $ }
          FIRST(E') = { + , ε }                          FOLLOW(T) = FOLLOW(T') = { + , ) , $ }
          FIRST(T') = { * , ε }                          FOLLOW(F) = { + , * , ) , $ }
      The resulting predictive parsing table:
          Nonterminal | id      | +         | *          | (       | )      | $
          E           | E->TE'  |           |            | E->TE'  |        |
          E'          |         | E'->+TE'  |            |         | E'->ε  | E'->ε
          T           | T->FT'  |           |            | T->FT'  |        |
          T'          |         | T'->ε     | T'->*FT'   |         | T'->ε  | T'->ε
          F           | F->id   |           |            | F->(E)  |        |
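Continuing the sketch above (still my own code, not the presentation's), the table on slide 14 can be built from FIRST and FOLLOW with the rule from slide 13: enter A -> α under every terminal in FIRST(α), and, when α can derive ε, under every terminal in FOLLOW(A) as well.

```python
def build_table(grammar, first, follow):
    """Predictive parsing table M[(A, a)] -> production (slide 13 rule)."""
    table = {}
    for A, prods in grammar.items():
        for prod in prods:
            heads = first_of_seq(prod, first)
            for a in heads - {EPS}:
                table[(A, a)] = prod
            if EPS in heads:                 # A -> prod can derive epsilon
                for b in follow[A]:
                    table[(A, b)] = prod
    return table

table = build_table(GRAMMAR, first, follow)
print(table[("E", "id")])   # ['T', 'E2']  i.e. E -> T E'
print(table[("T2", "+")])   # ['']         i.e. T' -> ε
```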
    • 15. Using the table from slide 14 to parse the input id + id * id (the top of the stack is on the right):
          STACK      | INPUT           | ACTION
          $E         | id + id * id $  | E -> T E'
          $E'T       | id + id * id $  | T -> F T'
          $E'T'F     | id + id * id $  | F -> id
          $E'T'id    | id + id * id $  | match id
          $E'T'      | + id * id $     | T' -> ε
          $E'        | + id * id $     | E' -> +T E'
          $E'T+      | + id * id $     | match +
          $E'T       | id * id $       | T -> F T'
          $E'T'F     | id * id $       | F -> id
          $E'T'id    | id * id $       | match id
          $E'T'      | * id $          | T' -> *F T'
          $E'T'F*    | * id $          | match *
          $E'T'F     | id $            | F -> id
          $E'T'id    | id $            | match id
          $E'T'      | $               | T' -> ε
          $E'        | $               | E' -> ε
          $          | $               | accept
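The trace on slide 15 is produced by the usual table-driven driver loop. The sketch below continues the code above and is an assumed implementation: the stack grows to the right (mirroring the $E'T' notation in the trace), terminals are matched against the lookahead, and nonterminals are expanded using the table entries.

```python
def predictive_parse(tokens, table, start="E"):
    """Table-driven predictive parser (sketch of the slide 15 trace)."""
    tokens = tokens + ["$"]
    stack = ["$", start]              # top of stack is the end of the list
    i = 0
    while stack:
        top = stack.pop()
        look = tokens[i]
        if top == "$" and look == "$":
            return True               # accept
        if top not in NONTERMS:       # terminal on top: must match input
            if top != look:
                raise SyntaxError(f"expected {top!r}, got {look!r}")
            i += 1
        else:                         # nonterminal: consult the table
            prod = table.get((top, look))
            if prod is None:
                raise SyntaxError(f"no entry M[{top}, {look}]")
            for sym in reversed(prod):  # push right-to-left, skipping epsilon
                if sym != EPS:
                    stack.append(sym)
    return False

print(predictive_parse(["id", "+", "id", "*", "id"], table))  # True
```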
    • 16. LL(1) Grammars. What does LL(1) mean? The first "L" in LL(1) stands for scanning the input from left to right, the second "L" for producing a leftmost derivation, and the "1" for using one input symbol of lookahead at each step to make parsing action decisions. No ambiguous or left recursive grammar is LL(1). For the grammar S -> iEtSS' | a, S' -> eS | ε, E -> b, the table is:
          Nonterminal | a     | b     | e               | i           | t | $
          S           | S->a  |       |                 | S->iEtSS'   |   |
          S'          |       |       | S'->eS , S'->ε  |             |   | S'->ε
          E           |       | E->b  |                 |             |   |
      The entry M[S', e] is multiply defined, so this grammar is not LL(1).
    • 17. LL(1) Grammars. A question remains: what should be done when a parsing table has multiply-defined entries? One solution is to transform the grammar by eliminating left recursion and then left factoring where possible, but not every grammar can be turned into an LL(1) grammar this way. The main difficulty in using predictive parsing is writing a grammar for the source language from which a predictive parser can be constructed. To alleviate some of this difficulty, one can use operator precedence parsing, or better, an LR parser, which provides the benefits of both predictive parsing and operator precedence automatically.
    • 18. Error Recovery. When can an error occur? An error is detected when the terminal on top of the stack does not match the next input symbol, or when a nonterminal A is on top of the stack, a is the next input symbol, and the parsing table entry M[A, a] is empty. How can we deal with errors? Panic-mode error recovery is based on the idea of skipping symbols on the input until a token in a selected set of synchronizing tokens appears.
    • 19. Error Recovery. How does it work? Using FOLLOW and FIRST symbols as synchronizing tokens works well: the parsing table is filled with "synch" entries obtained from the FOLLOW set of each nonterminal. When the parser looks up entry M[A, a] and finds it blank, the input symbol a is skipped. If the entry is "synch", the nonterminal on top of the stack is popped in an attempt to resume parsing.
    • 20. The parsing table with synchronizing ("synch") entries added from the FOLLOW sets, and a trace on the erroneous input ) id * + id:
          Nonterminal | id      | +         | *          | (       | )      | $
          E           | E->TE'  |           |            | E->TE'  | synch  | synch
          E'          |         | E'->+TE'  |            |         | E'->ε  | E'->ε
          T           | T->FT'  | synch     |            | T->FT'  | synch  | synch
          T'          |         | T'->ε     | T'->*FT'   |         | T'->ε  | T'->ε
          F           | F->id   | synch     | synch      | F->(E)  | synch  | synch

          STACK      | INPUT           | ACTION
          $E         | ) id * + id $   | error, skip )
          $E         | id * + id $     | id is in FIRST(E)
          $E'T       | id * + id $     |
          $E'T'F     | id * + id $     |
          $E'T'id    | id * + id $     |
          $E'T'      | * + id $        |
          $E'T'F*    | * + id $        |
          $E'T'F     | + id $          | error, M[F, +] = synch
          $E'T'      | + id $          | F has been popped
          $E'        | + id $          |
          $E'T+      | + id $          |
          $E'T       | id $            |
          $E'T'F     | id $            |
          $E'T'id    | id $            |
          $E'T'      | $               |
          $E'        | $               |
          $          | $               |
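Panic-mode recovery as described on slides 18 to 20 can be bolted onto the same driver. The sketch below is again my own (it uses an input chosen to exercise both kinds of error rather than the slide's exact example): a blank entry skips the input token, a "synch" entry pops the nonterminal, and a mismatched terminal on the stack is popped.

```python
def build_synch_table(table, follow):
    """Add 'synch' entries at FOLLOW(A) positions that are still blank."""
    synched = dict(table)
    for A, toks in follow.items():
        for b in toks:
            synched.setdefault((A, b), "synch")
    return synched

def parse_with_recovery(tokens, table):
    tokens = tokens + ["$"]
    stack = ["$", "E"]
    i = 0
    while stack:
        top, look = stack[-1], tokens[i]
        if top == look == "$":
            return True
        if top not in NONTERMS:
            if top == look:
                stack.pop(); i += 1
            else:                                   # mismatched terminal: pop it
                print(f"error: popped {top!r}"); stack.pop()
        else:
            entry = table.get((top, look))
            if entry is None:                       # blank entry: skip input
                print(f"error: skipped {look!r}"); i += 1
            elif entry == "synch":                  # synch entry: pop nonterminal
                print(f"error: M[{top},{look}] = synch, popped {top}"); stack.pop()
            else:
                stack.pop()
                for sym in reversed(entry):
                    if sym != EPS:
                        stack.append(sym)
    return False

recovery_table = build_synch_table(table, follow)
# '+ id * + id': the leading '+' hits a blank entry (skipped), and the
# stray '+' after '*' hits M[F, +] = synch (F is popped); parsing then resumes.
print(parse_with_recovery(["+", "id", "*", "+", "id"], recovery_table))  # True
```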
    • 21. Error Recovery. Another error recovery procedure is phrase-level recovery. It is implemented by filling the blank entries of the parsing table with pointers to error routines. These routines can pop symbols from the stack and change, insert, or delete symbols on the input, issuing appropriate error messages. Altering stack symbols, however, is questionable and risky.
    • 22. Bottom Up Parsing. A general style of bottom up parsing, shift-reduce parsing, will be introduced. Shift-reduce parsing works as its name suggests: whenever the symbols on top of the stack cannot yet be reduced, we shift another input symbol onto the stack, and when they match the right side of a production, we reduce.
    • 23. Bottom Up Parsing. Consider the following grammar: E -> E + E, E -> E * E, E -> ( E ), E -> id, and the input id1 + id2 * id3:
          STACK             | INPUT              | ACTION
          1)  $             | id1 + id2 * id3 $  | shift
          2)  $ id1         | + id2 * id3 $      | reduce by E -> id
          3)  $ E           | + id2 * id3 $      | shift
          4)  $ E +         | id2 * id3 $        | shift
          5)  $ E + id2     | * id3 $            | reduce by E -> id
          6)  $ E + E       | * id3 $            | shift
          7)  $ E + E *     | id3 $              | shift
          8)  $ E + E * id3 | $                  | reduce by E -> id
          9)  $ E + E * E   | $                  | reduce by E -> E * E
          10) $ E + E       | $                  | reduce by E -> E + E
          11) $ E           | $                  | ACCEPT
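Finally, the shift-reduce trace on slide 23 can be reproduced with a small sketch. The slides do not say how the parser decides between shifting and reducing for this ambiguous grammar, so the code below uses a simple operator-precedence heuristic ('*' binds tighter than '+') as an assumption of my own; a real parser would consult an LR or operator-precedence table.

```python
# Shift-reduce sketch for E -> E + E | E * E | ( E ) | id.
PREC = {"+": 1, "*": 2}

def shift_reduce(tokens):
    stack = ["$"]
    tokens = tokens + ["$"]
    i = 0
    while True:
        reduced = True
        while reduced:                       # reduce while a handle is on top
            reduced = False
            if stack[-1] == "id":
                stack[-1] = "E"              # E -> id
                reduced = True
            elif stack[-3:] == ["(", "E", ")"]:
                stack[-3:] = ["E"]           # E -> ( E )
                reduced = True
            elif (len(stack) >= 4 and stack[-1] == "E" and stack[-2] in PREC
                  and stack[-3] == "E"
                  and PREC.get(tokens[i], 0) <= PREC[stack[-2]]):
                stack[-3:] = ["E"]           # E -> E + E  or  E -> E * E
                reduced = True
        if tokens[i] == "$":
            return stack == ["$", "E"]       # accept iff only $ E remains
        stack.append(tokens[i])              # shift the next input symbol
        i += 1

print(shift_reduce(["id", "+", "id", "*", "id"]))  # True, same reductions as slide 23
```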
