- 1. Comp 303 - Theory of Computation: Context-Free Languages and Pushdown Automata
- 2. Context-Free Languages• Context-Free Languages (CFL) are described using Context-Free Grammars (CFG).• A CFG is a simple recursive method of specifying grammar rules which can generate strings in a language – these languages are the CFL’s.
- 3. Context-Free Grammars• The following is an example of a CFG, call it G1: – A → 1A0 (A and B = variables) – A → B – B → # (0, 1 and # = terminals)• A grammar consists of a collection of substitution rules (productions).• A is the start variable in this case – it usually occurs on the left-hand side of the topmost rule.
- 4. Context-Free Grammars• Use grammar to describe a language by generating each string of language: – Write down start variable. – Find a variable and a rule which starts with that variable. Replace written variable with the right hand side of this rule. – Repeat the second step until no variables remain.
- 5. Context-Free Grammars• G1 generates the string 1111#0000 using the following sequence:• A → 1A0 → 11A00 → 111A000 → 1111A0000 → 1111B0000 → 1111#0000• The language generated is {1k#0k | k ≥ 0}.
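The substitution sequence above can be sketched in a few lines of Python (the rule table and the `derive` helper are illustrative names, not part of the slides):

```python
# Minimal sketch of the substitution process for G1.
RULES = {"A": ["1A0", "B"], "B": ["#"]}

def derive(steps):
    """Apply a fixed sequence of (variable, replacement) choices, from 'A'."""
    s = "A"
    trace = [s]
    for var, rhs in steps:
        s = s.replace(var, rhs, 1)  # replace the leftmost occurrence
        trace.append(s)
    return trace

# A -> 1A0 four times, then A -> B, then B -> # yields 1111#0000
trace = derive([("A", "1A0")] * 4 + [("A", "B"), ("B", "#")])
```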
- 6. Context-Free Grammars• All strings generated in this manner constitute the language of the grammar.• L(G1) = language of grammar G1.• Can show that L(G1) is {1n#0n | n ≥ 0}.• Any language that can be generated by some context-free grammar is called a context-free language.
- 7. Context-Free Grammars• Note: – For convenience if a variable has several rules they are often abbreviated: – A → 1A0 and A → B may be represented as: A → 1A0 | B, where “|” represents “or”.
- 8. Defining CFG• Informally a CFG consists of: – A set of replacement rules, each having a Left- Hand Side (LHS) and a Right-Hand Side (RHS). – Two types of symbols: variables and terminals. – The LHS of each rule is a single variable (no terminals). – The RHS of each rule is a string of zero or more variables and terminals. – A string consists of only terminals.
- 9. Definition of a CFG• Formally, a context-free grammar is a 4-tuple (V, Σ, R, S), where – V is the set of variables (finite set) – Σ is the set of terminals (finite set) – R is the set of rules – S is the start variable, S ∈ V
- 10. Context-Free Grammars• In grammar G1, V = {A, B}, Σ = {0, 1, #}, S = A, and R is the collection of rules: – A → 1A0 – A → B – B → #
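As a sketch, G1's 4-tuple can be encoded directly as Python data, with a small bounded enumerator for L(G1) (the `generate` helper and its depth bound are assumptions, not from the slides):

```python
# Hypothetical encoding of G1 = (V, Sigma, R, S) as plain Python data.
V = {"A", "B"}
Sigma = {"0", "1", "#"}
R = {"A": ["1A0", "B"], "B": ["#"]}
S = "A"

def generate(limit=4):
    """Breadth-first expansion of terminal strings derivable from S,
    bounded so the search terminates (illustrative helper)."""
    results, frontier = set(), [S]
    for _ in range(limit * 2):
        nxt = []
        for s in frontier:
            var = next((c for c in s if c in V), None)
            if var is None:
                results.add(s)  # no variables left: a string of L(G1)
            else:
                nxt.extend(s.replace(var, rhs, 1) for rhs in R[var])
        frontier = nxt
    return results
```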
- 11. Context-Free Grammars• Consider G3 = ({S}, {a,b}, R, S). The set of rules R, is • S → aSb | SS | ε• This grammar generates strings such as ab, abab, aababb and aaabbb.
- 12. Context-Free Grammars• Consider the language of palindromes: – L = {w ∈ {a,b}* | w = wR}• The language can be generated by the following rules: – S → aSa S → bSb S → a S → b S → ε• V = {S}, S = S, Σ = {a,b} and R is the set of rules above.• More examples?
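The palindrome rules can be exercised with a small bounded expander (a sketch; `expand` and its depth bound are assumptions):

```python
def expand(depth):
    """All terminal strings over {a,b} derivable from S within `depth` steps."""
    frontier, done = {"S"}, set()
    for _ in range(depth):
        nxt = set()
        for s in frontier:
            if "S" not in s:
                done.add(s)  # fully expanded: terminals only
                continue
            for rhs in ("aSa", "bSb", "a", "b", ""):
                nxt.add(s.replace("S", rhs, 1))
        frontier = nxt
    done |= {s for s in frontier if "S" not in s}
    return done

generated = expand(5)
```

Every string produced is its own reverse, since each rule preserves that invariant.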
- 13. Derivation Tree• Definition: Let G = (V, T, P, S) be a CFG. A tree is a derivation (or parse) tree if: – Every vertex has a label from V ∪ T ∪ {ε} – The label of the root is S – If a vertex with label A has children with labels X1, X2, …, Xn, from left to right, then A → X1X2…Xn must be a production in P – If a vertex has label ε, then that vertex is a leaf and the only child of its parent• More generally, a derivation tree can be defined with any non-terminal as the root.
- 14. Designing CFG’s• Many CFL’s are the union of simpler ones.• Construct the smaller, simpler CFG’s and then combine them to give the larger CFG for the CFL.
- 15. Designing CFG’s• Construct a grammar for the language: {0n1n | n ≥ 0} ∪ {1n0n | n ≥ 0}.• First construct the grammar • S1 → 0S11 | ε for the language {0n1n | n ≥ 0}, and the grammar • S2 → 1S20 | ε for the language {1n0n | n ≥ 0}, and then add the rule S → S1 | S2 to give the grammar: – S → S1 | S2 – S1 → 0S11 | ε – S2 → 1S20 | ε
- 16. Designing CFG’s• We can construct a CFG for a regular language by first constructing a DFA for the language.• A DFA may be converted into an equivalent CFG as follows: • Make a variable Ri for each state qi of the DFA. • Add the rule Ri → aRj to the CFG if δ(qi, a) = qj is a transition in the DFA. • Add the rule Ri → ε if qi is an accept state of the DFA. • Make R0 the start variable of the grammar, where q0 is the start state of the machine.
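The four conversion steps above can be sketched directly; the DFA used here (strings over {0,1} ending in 1) and the `dfa_to_cfg` helper are illustrative choices, not from the slides:

```python
def dfa_to_cfg(states, alphabet, delta, start, accepting):
    """Return CFG rules {variable: [right-hand sides]} mirroring the DFA."""
    rules = {f"R{q}": [] for q in states}
    for (q, a), p in delta.items():
        rules[f"R{q}"].append(a + f"R{p}")  # Ri -> a Rj when delta(qi,a)=qj
    for q in accepting:
        rules[f"R{q}"].append("")           # Ri -> epsilon for accept states
    return rules, f"R{start}"               # R0 is the start variable

# Tiny example DFA: state 1 iff the input read so far ends in 1.
delta = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 0, (1, "1"): 1}
rules, start = dfa_to_cfg({0, 1}, {"0", "1"}, delta, 0, {1})
```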
- 17. Union, concatenation and closure of CFG’s• Theorem: If L1 and L2 are CFL’s then L1 ∪ L2, L1L2 and L1* are also CFL’s.• That is, the context-free languages are closed under union, concatenation and Kleene-closure.• Begin with two grammars: G1 = (V1, Σ, R1, S1) and G2 = (V2, Σ, R2, S2), generating CFL’s L1 and L2 respectively.
- 18. Union of CFG’s• The new CFG Gx is made as: – Σ remains the same – Sx is the new start variable – Vx = V1 ∪ V2 ∪ {Sx} – Rx = R1 ∪ R2 ∪ {Sx → S1 | S2}• Explanation: All we have done is augment the variable set with a new start variable and then allowed the new start variable to map to either of the two grammars. So, we’ll generate strings from either L1 or L2, i.e. L1 ∪ L2.
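The union construction can be sketched in a couple of lines; the dict-of-rules encoding and the single-letter grammars for {0n1n} and {1n0n} are illustrative assumptions:

```python
def union_cfg(r1, s1, r2, s2, new_start="S"):
    """Rules R1 ∪ R2 plus a fresh start variable mapping to S1 | S2.
    Assumes the two grammars' variable sets are disjoint."""
    rules = {**r1, **r2, new_start: [s1, s2]}
    return rules, new_start

# {0^n 1^n} ∪ {1^n 0^n}, with A and B as the two original start variables
rules, start = union_cfg({"A": ["0A1", ""]}, "A", {"B": ["1B0", ""]}, "B")
```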
- 19. Concatenation of CFG’s• The new CFG Gy is made as: – Σ remains the same – Sy is the new start variable – Vy = V1 ∪ V2 ∪ {Sy} – Ry = R1 ∪ R2 ∪ {Sy → S1S2}• Explanation: Again, all we have done is to augment the variable set with a new start variable, and then allowed the new start variable to map to the concatenation of the two original start symbols. So, we will generate strings that begin with strings from L1 and end with strings from L2, i.e. L1L2.
- 20. Kleene-Closure of CFG’s• The new CFG Gz is made as: – Σ remains the same – Sz is the new start variable – Vz = V1 ∪ {Sz} – Rz = R1 ∪ {Sz → S1Sz | ε}• Explanation: Again we have augmented the variable set with a new start variable, and then allowed the new start variable to map to either S1Sz or ε. This means we can generate strings with zero or more strings made from expanding the variable S1, i.e. L1*.
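The closure construction follows the same pattern as union; the encoding and the grammar for {0n1n} are again illustrative assumptions:

```python
def star_cfg(r1, s1, new_start="Z"):
    """R1 plus a fresh start variable Z with Z -> S1 Z | epsilon."""
    return {**r1, new_start: [s1 + new_start, ""]}, new_start

# ({0^n 1^n})* from the grammar A -> 0A1 | epsilon
rules, start = star_cfg({"A": ["0A1", ""]}, "A")
```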
- 21. Pushdown Automata (PDA)• Pushdown automata are similar to nondeterministic finite automata but have an extra element – a stack.• This stack provides extra memory space.• It also allows pushdown automata to recognise some nonregular languages.
- 22. PDA [Diagram: a finite automaton consists of a state control reading an input tape; a pushdown automaton adds a stack to the state control.]
- 23. PDA• Why is a stack useful? – It can hold an unlimited amount of information.• Remember that a FA was unable to recognise the language {0n1n | n ≥ 0} because it can’t store large numbers.• However, a PDA does not have this problem, due to the presence of a stack: it can use the stack to store how many 0’s it has seen.
- 24. PDA• A PDA can write symbols on the stack and read them back later.• Writing a symbol “pushes down” all the other symbols on the stack.• Only the top symbol in the stack can ever be read – once read it is removed.• Writing a symbol is known as “pushing” and reading a symbol known as “popping”.
- 25. PDA• The PDA has no way of checking for an empty stack.• It gets around this by initially placing a special character, $, on the stack.• Then if it ever sees the $ again it knows that the stack is effectively empty.
- 26. PDA• Important:• Deterministic and nondeterministic PDA’s are not equivalent in power.• Nondeterministic PDA’s recognise certain languages which no deterministic pushdown automaton can recognise.
- 27. Formal Definition of PDA’s• The formal definition of a PDA is similar to that of a FA, except for the stack.• The stack contains symbols drawn from some alphabet.• The machine may use different alphabets for its input and the stack – We need to specify an input alphabet Σ and a stack alphabet Γ
- 28. Formal Definition of PDA’s• In order to formally define a PDA we need to determine the transition function. Recall: – Σε = Σ ∪ {ε} and Γε = Γ ∪ {ε} – The domain of the transition function is Q × Σε × Γε• Therefore, the current state, next input symbol read and top symbol of the stack determine the next move of the PDA.• Note that either symbol may be ε, meaning that the machine may move without reading a symbol from the input or the stack.
- 29. Formal Definition of PDA’s• What is the range of the transition function?• The machine may enter some new state and possibly write to the top of the stack.• The function δ can represent this by returning a member of Q along with a member of Γε, i.e. a member of Q × Γε• A number of legal next moves may be allowed – The transition function incorporates this nondeterminism in the usual way – i.e. returning a set of members of Q × Γε, that is, a member of P(Q × Γε).
- 30. Formal Definition of PDA’s• A pushdown automaton is a 6-tuple (Q, Σ, Γ, δ, q0, F), where Q, Σ, Γ, and F are all finite sets, and – Q is the set of states, – Σ is the input alphabet, – Γ is the stack alphabet, – δ: Q × Σε × Γε → P(Q × Γε) is the transition function, – q0 ∈ Q is the start state, and – F ⊆ Q is the set of accept states.
- 31. PDA• The computation of a PDA, M, is as follows: – It accepts input w if w can be written as w = w1w2…wm where each wi ∈ Σε, and sequences of states r0, r1, …, rm ∈ Q and strings s0, s1, …, sm ∈ Γ* exist that satisfy the following: • r0 = q0 and s0 = ε: M starts out properly, in the start state with an empty stack. • For i = 0, 1, …, m−1, we have (ri+1, b) ∈ δ(ri, wi+1, a), where si = at and si+1 = bt for some a, b ∈ Γε and t ∈ Γ*: M moves properly according to the state, stack and next input symbol. • rm ∈ F: an accept state occurs when the end of the input is reached.
- 32. PDA• In a state transition we write “a, b → c” to signify that when the machine reads input a it may replace the symbol b on top of the stack with c.• Any of a, b, c can be ε.• If a is ε, the machine may make this transition without reading any input symbol.• If b is ε, the machine performs the transition without reading and popping any stack symbol.• If c is ε, the machine does not write any symbol to the stack.• Can we design a PDA to recognise the language: {0n1n | n ≥ 0}?
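One answer to the closing question can be sketched as a direct stack simulation: push a symbol for each 0, pop one for each 1, and accept when the $ marker from slide 25 is back on top. This deterministic `accepts` helper is an illustration, not the slides' formal construction:

```python
def accepts(w):
    """Stack-based recogniser for {0^n 1^n | n >= 0}."""
    stack = ["$"]                 # bottom-of-stack marker
    i = 0
    while i < len(w) and w[i] == "0":
        stack.append("0")         # push one symbol per 0 read
        i += 1
    while i < len(w) and w[i] == "1":
        if stack[-1] != "0":
            return False          # more 1s than 0s
        stack.pop()               # pop one symbol per 1 read
        i += 1
    # accept iff all input consumed and only $ remains on the stack
    return i == len(w) and stack == ["$"]
```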
- 33. Context Free• A language is context free if and only if some pushdown automaton recognises it.• Since every regular language is recognised by a finite automaton, and every finite automaton is automatically a pushdown automaton that ignores the stack, every regular language is also a context-free language.
- 34. Regular and Context-Free Languages
- 35. Pumping Lemma• The pumping lemma states that every context- free language has a special value called the pumping length such that all longer strings in the language can be “pumped”.• Here, “pumped” means that the string can be divided into five parts such that the 2nd and 4th parts of the string may be repeated together any number of times and the resulting string is still part of the language.
- 36. Pumping Lemma• If A is a context-free language, then there is a number p (the pumping length) where, if s is a string in A of length at least p, then s may be divided into 5 parts, s = uvxyz, such that: – for each i ≥ 0, uvixyiz ∈ A – |vy| > 0, and – |vxy| ≤ p.
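The lemma's conditions can be checked concretely on {1k#0k} from slide 5: with s = 1p#0p and the split v = 1, x = #, y = 0, both |vy| > 0 and |vxy| ≤ p hold, and pumping stays in the language. The helpers and the choice p = 4 are illustrative assumptions:

```python
def pump(u, v, x, y, z, i):
    """Repeat the 2nd and 4th parts i times: u v^i x y^i z."""
    return u + v * i + x + y * i + z

def in_language(s):
    """Membership test for {1^k # 0^k | k >= 0}."""
    if s.count("#") != 1:
        return False
    ones, zeros = s.split("#")
    return set(ones) <= {"1"} and set(zeros) <= {"0"} and len(ones) == len(zeros)

p = 4
u, v, x, y, z = "1" * (p - 1), "1", "#", "0", "0" * (p - 1)
pumped_ok = all(in_language(pump(u, v, x, y, z, i)) for i in range(5))
```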