LEX and YACC are software development tools used for lexical analysis and parsing. LEX is a lexical analyzer generator that accepts an input specification defining lexical units and associated semantic actions. It generates a translator containing tables of lexical units and tokens. YACC is a parser generator that accepts a grammar specification and actions for the language being compiled. It produces a bottom-up parser that uses shift-reduce parsing. These tools allow programmers to specify the syntax of a language and generate code to analyze programs in that language.
4. • LEX accepts an input specification which consists of two components:
• Specification of strings representing the lexical units
• Specification of semantic actions aimed at building the TR (Translation Rules)
• The TR consists of a set of tables of lexical units and a sequence of tokens for the lexical units occurring in the source statement.
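To make the two components concrete, here is a minimal Lex specification, given only as a sketch: the token names and patterns are illustrative, not taken from the slides. Each rule pairs a pattern describing a lexical unit with a C semantic action:

%{
#include <stdio.h>
%}
%%
[0-9]+                { printf("NUMBER %s\n", yytext); }
[A-Za-z][A-Za-z0-9]*  { printf("ID %s\n", yytext); }
[ \t\n]+              { /* skip white space */ }
.                     { printf("OP %s\n", yytext); }
%%
int yywrap(void) { return 1; }
int main(void)   { yylex(); return 0; }

Running lex on this file produces a C scanner routine yylex() driven by tables built from the patterns; here the actions simply print each recognized lexeme.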
6. • YACC is available on the Unix system.
• YACC can be used for the production of compilers for PASCAL, FORTRAN, C, and C++.
• A lexical scanner must be supplied for use with YACC; this scanner is called by the parser whenever a new input token is needed.
• The YACC parser generator accepts an input grammar for the language being compiled and a set of actions corresponding to the rules of the grammar.
• The parser generated by YACC uses the bottom-up parse method.
• The parser produced by YACC has very good error detection properties.
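As a hedged sketch of such an input, the following Yacc specification encodes the kind of ambiguous expression grammar that appears later in the deck, with the ambiguity resolved by precedence declarations. The token name ID, the precedence choices, and the tiny hand-written yylex (standing in for a Lex-supplied scanner) are all assumptions for illustration:

%{
#include <stdio.h>
#include <ctype.h>
int yylex(void);
void yyerror(const char *s);
%}
%token ID
%left '+'            /* lower precedence  */
%left '*'            /* higher precedence */
%%
expr : expr '+' expr
     | expr '*' expr
     | ID
     ;
%%
/* Minimal scanner standing in for the Lex-generated one. */
int yylex(void) {
    int c = getchar();
    while (c == ' ' || c == '\t' || c == '\n') c = getchar();
    if (isalpha(c)) {                  /* any word counts as an id here */
        while (isalnum(c = getchar()))
            ;
        ungetc(c, stdin);
        return ID;
    }
    return c == EOF ? 0 : c;           /* single-character tokens: + and * */
}
void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
int main(void) { return yyparse(); }

yacc turns this into a shift-reduce parser in C; yyparse() repeatedly calls yylex() for the next token, exactly as described above.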
9. • The scanner recognizes words.
• The parser recognizes syntactic units.
• Parser functions:
– Check the validity of the source string based on the specified syntax rules
– Determine the syntactic structure of the source string
10. • For an invalid string, the parser issues a diagnostic message reporting the cause and nature of the errors in the string.
• For a valid string, it builds a parse tree to reflect the sequence of derivations or reductions performed during parsing.
• Each step in parsing can identify an elementary subtree by deriving a string from an NT or reducing a string to an NT.
11. • Check and verify syntax based on specified syntax rules
– Are regular expressions sufficient for describing syntax?
• Example 1: Infix expressions
• Example 2: Nested parentheses
– We use Context-Free Grammars (CFGs) to specify context-free syntax.
• A CFG describes how a sentence of a language may be generated.
12. CFG
• A CFG is a quadruple (N, T, R, S) where
– N is the set of non-terminal symbols
– T is the set of terminal symbols
– S ∈ N is the starting symbol
– R is a set of rules
• Example: The grammar of nested parentheses
G = (N, T, R, S) where
– N = {S}
– T = { (, ) }
– R = { S → (S), S → SS, S → ε }
13. Derivations
• The language described by a CFG is the set of strings that can be derived from the start symbol using the rules of the grammar.
• At each step, we choose a non-terminal to replace.
S ⇒ (S) ⇒ (SS) ⇒ ((S)S) ⇒ (( )S) ⇒ (( )(S)) ⇒ (( )((S))) ⇒ (( )(( )))
Each intermediate string here is a sentential form, and the whole sequence is a derivation.
• This example demonstrates a leftmost derivation: one where we always expand the leftmost non-terminal in the sentential form.
14. Derivations and parse trees
• We can describe a derivation using a graphical representation called a parse tree:
– the root is labeled with the start symbol, S
– each internal node is labeled with a non-terminal
– the children of an internal node A are the right-hand side of a production A → α
– each leaf is labeled with a terminal
• A parse tree has a unique leftmost and a unique rightmost derivation (however, we cannot tell which one was used by looking at the tree).
15. Derivations and parse trees
• So, how can we use the grammar described earlier to verify the syntax of "(( )((( ))))"?
– We must try to find a derivation for that string.
– We can work top-down (starting at the root/start symbol) or bottom-up (starting at the leaves).
• Careful!
– There may be more than one grammar describing the same language.
– Not all grammars are suitable.
17. Top-down Parsing
• Starts with the sentence symbol and builds down towards the terminals.
• It derives a string identical to a given input string by applying the rules of the grammar to the distinguished symbol.
• The output is a syntax tree for the input string.
• At every stage of the derivation, an NT is chosen and the derivation is effected according to a grammar rule.
18. e.g. consider the grammar
E → T+E / T
T → V*T / V
V → id
• Source string: id + id * id

Prediction     Predicted Sentential Form
E → T+E        T + E
T → V          V + E
V → id         id + E
E → T          id + T
T → V*T        id + V * T
V → id         id + id * T
T → V          id + id * V
V → id         id + id * id
19. Limitations of Top-down parsing
1. Backtracking is a must. Therefore semantic analysis cannot be implemented along with syntax analysis.
2. Backtracking slows down the parsing even if no semantic actions are performed during parsing.
3. Precise error indication is not possible in top-down analysis. Whenever a mismatch is encountered, the parser performs the standard action of backtracking. Only when no more predictions are possible is the input string declared erroneous.
20. 4. Certain grammar specifications are not amenable (suitable) to top-down analysis. With a left-recursive grammar, the leftmost-first nature of the parser would push it into an infinite loop of prediction making. To make top-down parsing feasible, it is necessary to rewrite the grammar so as to eliminate left recursion.
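For reference, the standard rewrite that eliminates immediate left recursion (α and β stand for arbitrary strings of grammar symbols, with β not beginning with A):

A → Aα / β    becomes    A → β A'
                         A' → α A' / ε

A related rewrite, left factoring, removes the common prefix that forces backtracking in rules such as E → T+E / T:

A → αβ / αγ   becomes    A → α A'
                         A' → β / γ

The modified grammars on the later slides are obtained by this left-factoring rewrite.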
21. e.g. consider the grammar
E → E+E / E*E / (E) / id
• Source string: id + id * id
• Backtracking

First attempt (fails, forcing backtracking):
Applied Rule   Predicted Sentential Form
E → E*E        E * E
E → id         id * E
E → E+E        id * E + E
E → id         id * id + E
E → id         id * id + id

Second attempt (succeeds):
Applied Rule   Predicted Sentential Form
E → E+E        E + E
E → id         id + E
E → E*E        id + E * E
E → id         id + id * E
E → id         id + id * id
22. e.g. consider the grammar
E → E+E / E*E / (E) / id
• Source string: id + id * id
• Left recursion
Applied Rule   Predicted Sentential Form
E → E*E        E * E
E → E*E        E * E * E
E → E*E        E * E * E * E
E → E*E        E * E * E * E * E
E → E*E        E * E * E * E * E * E
23. Top-Down parsing without backtracking
• Whenever a prediction has to be made for the leftmost NT of the sentential form, a decision is made as to which RHS alternative for the NT can lead to a sentence resembling the input string.
• We must select the RHS alternative which can produce the next input symbol.
• The grammar may have to be modified to fulfil this condition.
• Due to the deterministic nature of parsing, such parsers are known as predictive parsers. A popular form of predictive parser used in practice is called the recursive descent parser.
24. • e.g.
E → T+E / T
T → V*T / V
V → id
• The modified grammar is:
E → T E'
E' → +E / ε
T → V T'
T' → *T / ε
V → id
25.
Prediction     Predicted sentential form
E → T E'       T E'
T → V T'       V T' E'
V → id         id T' E'
T' → ε         id E'
E' → +E        id + E
E → T E'       id + T E'
T → V T'       id + V T' E'
V → id         id + id T' E'
T' → *T        id + id * T E'
T → V T'       id + id * V T' E'
V → id         id + id * id T' E'
T' → ε         id + id * id E'
E' → ε         id + id * id
26. Recursive Descent Parser
• If recursive rules exist in the grammar, then all these procedures will be recursive, and such a parser is known as an RDP.
• It is constructed by writing routines to recognize each non-terminal symbol.
• It is well suited for many types of attributed grammars.
• Synthesized attributes can be used because parsing performs a depth-first construction of the parse tree.
• It uses a simple predictive parsing strategy.
27. • Error detection is localized to the individual routines, each of which knows the set of symbols that may appear first in the constructs it recognizes.
• Parsing proceeds by recursive calls to the parsing procedures until the required terminal string is obtained.
• RDPs are easy to construct if the programming language permits recursion, as the sketch below illustrates.
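A minimal recursive descent parser sketch in C for the modified grammar of slide 24. The setting is an assumed simplification: tokens are single characters, with 'i' standing for id.

  #include <stdio.h>
  #include <stdlib.h>

  static const char *input;                       /* remaining input */

  static void fail(void)    { printf("error\n"); exit(1); }
  static void match(char c) { if (*input == c) input++; else fail(); }

  static void E(void), Eprime(void), T(void), Tprime(void), V(void);

  static void E(void)      { T(); Eprime(); }                          /* E  -> T E'     */
  static void Eprime(void) { if (*input == '+') { match('+'); E(); } } /* E' -> +E / eps */
  static void T(void)      { V(); Tprime(); }                          /* T  -> V T'     */
  static void Tprime(void) { if (*input == '*') { match('*'); T(); } } /* T' -> *T / eps */
  static void V(void)      { match('i'); }                             /* V  -> id       */

  int main(void) {
      input = "i+i*i";                            /* id + id * id */
      E();
      printf(*input ? "error\n" : "valid\n");
      return 0;
  }

Each non-terminal gets one procedure; the ε-alternatives of E' and T' correspond to simply returning when the lookahead is not + or *.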
28. Predictive Parser (Table-Driven Parser)
• When recursion is not permitted by the programming language, these parsers are used.
• These are table-driven parsers; they use a prediction technique to eliminate backtracking.
• For a given NT, the prediction is chosen according to the first terminal symbol each RHS alternative can produce.
29. • A parse table indicates which RHS alternative is used to make the prediction.
• The parser uses its own stack to store the NTs for which a prediction has not yet been made.
30. • e.g.
E → T+E / T
T → V*T / V
V → id
• The modified grammar is:
E → T E'
E' → +T E' / ε
T → V T'
T' → *V T' / ε
V → id
31. Parse Table

NT    Source Symbol
      id           +            *            -|
E     E → T E'
E'                 E' → +T E'                E' → ε
T     T → V T'
T'                 T' → ε       T' → *V T'   T' → ε
V     V → id

(-| marks the end of the input; blank entries signal errors.)
32.
Prediction     Symbol    Predicted sentential form
E → T E'       id        T E'
T → V T'       id        V T' E'
V → id         id        id T' E'
T' → ε         +         id E'
E' → +T E'     +         id + T E'
T → V T'       id        id + V T' E'
V → id         id        id + id T' E'
T' → *V T'     *         id + id * V T' E'
V → id         id        id + id * id T' E'
T' → ε         -|        id + id * id E'
E' → ε         -|        id + id * id
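A sketch of the table-driven scheme in C, under assumed simplifications: the parse table is folded into a chain of tests, single characters E, P, T, Q, V stand for the nonterminals E, E', T, T', V, 'i' stands for id and '$' for the end marker -|.

  #include <stdio.h>

  int main(void) {
      const char *input = "i+i*i$";   /* id + id * id, '$' = end marker */
      char stack[64] = "$E";          /* bottom marker, then start symbol */
      int top = 1;                    /* index of the stack top */

      while (top >= 0) {
          char X = stack[top], a = *input;
          if (X == a) { top--; input++; }                      /* match terminal / end marker */
          else if (X == 'E') { stack[top] = 'P'; stack[++top] = 'T'; }        /* E  -> T E'    */
          else if (X == 'P') {                                                /* E'            */
              if (a == '+') { input++; stack[top] = 'P'; stack[++top] = 'T'; }/* E' -> +T E'   */
              else top--;                                                     /* E' -> eps     */
          }
          else if (X == 'T') { stack[top] = 'Q'; stack[++top] = 'V'; }        /* T  -> V T'    */
          else if (X == 'Q') {                                                /* T'            */
              if (a == '*') { input++; stack[top] = 'Q'; stack[++top] = 'V'; }/* T' -> *V T'   */
              else top--;                                                     /* T' -> eps     */
          }
          else if (X == 'V') { stack[top] = 'i'; }                            /* V  -> id      */
          else { printf("error\n"); return 1; }
      }
      printf("valid\n");
      return 0;
  }

One shortcut taken here: the leading '+' or '*' of a predicted RHS is consumed immediately rather than pushed and re-matched; this shortens the sketch without changing the outcome.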
33. Bottom-up Parsing [Shift-Reduce Parser]
• A bottom-up parser attempts to develop the syntax tree for an input string through a sequence of reductions.
• If the input string can be reduced to the distinguished symbol, the string is valid. If not, an error would have been detected and indicated during the process of reduction itself.
• Attempts at reduction start with the first symbol in the string and proceed to the right.
34. Reduction should be processed as follows:
• For the current sentential form, the n symbols to the left of the current position are matched against all RHS alternatives of the grammar.
• If a match is found, these n symbols are replaced with the NT on the LHS of the rule.
• If the symbols do not find a match, then n-1 symbols are matched, followed by n-2 symbols, etc.
35. • When it is determined that no reduction is possible at the current stage of parsing, one new symbol of the input string is admitted for parsing. This is known as a shift action. Due to this nature of parsing, these parsers are known as shift-reduce parsers.
36. Handles
• Handle of a string: a substring that matches the RHS of some production AND whose reduction to the non-terminal on the LHS is a step along the reverse of some rightmost derivation.
• A given sentential form may have many different handles.
• Right sentential forms of a non-ambiguous grammar have one unique handle.
37. • Rules of Production:
• E → E+E
• E → E*E
• E → (E)
• E → id
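A worked example with this grammar, assuming the string id + id * id: one rightmost derivation is

E ⇒ E+E ⇒ E+E*E ⇒ E+E*id ⇒ E+id*id ⇒ id+id*id

Reading the derivation in reverse, the handle at each step is the leftmost id (reduced by E → id), then the id after +, then the last id, then E*E, and finally E+E.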
50. Operator-Precedence Parser
• Operator grammar
– a small, but important class of grammars
– we may have an efficient operator-precedence parser (a shift-reduce parser) for an operator grammar.
• In an operator grammar, no production rule can have:
– ε at the right side
– two adjacent non-terminals at the right side.
• Ex:
– E → AB, A → a, B → b : not an operator grammar (two adjacent non-terminals)
– E → EOE, E → id, O → +|*|/ : not an operator grammar (the non-terminal O is adjacent to the two Es)
– E → E+E | E*E | E/E | id : an operator grammar
51. Precedence Relations
• In operator-precedence parsing, we define three disjoint precedence relations between certain pairs of terminals:
a <· b    b has higher precedence than a
a =· b    b has the same precedence as a
a ·> b    b has lower precedence than a
• The determination of the correct precedence relations between terminals is based on the traditional notions of associativity and precedence of operators. (Unary minus causes a problem.)
52. Using Operator-Precedence Relations
• The intention of the precedence relations is to find the handle of a right-sentential form, with
<· marking the left end,
=· appearing in the interior of the handle, and
·> marking the right end.
• In the input string $ a1 a2 ... an $, we insert between each pair of terminals the precedence relation that holds for that pair.
53. Using Operator-Precedence Relations
E → E+E | E-E | E*E | E/E | E^E | (E) | -E | id
• The partial operator-precedence table for this grammar:

      id    +     *     $
id          ·>    ·>    ·>
+     <·    ·>    <·    ·>
*     <·    ·>    ·>    ·>
$     <·    <·    <·

• Then the input string id+id*id with the precedence relations inserted will be:
$ <· id ·> + <· id ·> * <· id ·> $
54. To Find The Handles
1. Scan the string from the left end until the first ·> is encountered.
2. Then scan backwards (to the left) over any =· until a <· is encountered.
3. The handle contains everything to the left of the first ·> and to the right of the <· found in step 2, including any intervening or surrounding non-terminals.

$ <· id ·> + <· id ·> * <· id ·> $     E → id     $ id + id * id $
$ <· + <· id ·> * <· id ·> $           E → id     $ E + id * id $
$ <· + <· * <· id ·> $                 E → id     $ E + E * id $
$ <· + <· * ·> $                       E → E*E    $ E + E * E $
$ <· + ·> $                            E → E+E    $ E + E $
$ $                                               $ E $
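To make the mechanics concrete, a small C sketch of this shift-reduce loop for id+id*id, under assumed simplifications: only terminals are kept on the stack (non-terminals are invisible to the relations anyway), 'i' stands for id, '$' for the end markers, and the table above is folded into the prec() function.

  #include <stdio.h>

  /* prec(a, b): '<' for a <. b, '>' for a .> b, per the partial table above */
  static char prec(char a, char b) {
      if (a == 'i') return '>';             /* row id: .> everything that follows */
      if (a == '$') return '<';             /* row $: <. everything               */
      if (b == 'i') return '<';             /* + <. id and * <. id                */
      if (b == '$') return '>';             /* + .> $ and * .> $                  */
      if (a == '+' && b == '*') return '<'; /* * has higher precedence than +     */
      return '>';                           /* left associativity otherwise      */
  }

  int main(void) {
      const char *in = "i+i*i$";
      char stack[64]; int top = 0;
      stack[0] = '$';
      while (!(stack[top] == '$' && *in == '$')) {
          if (prec(stack[top], *in) == '<') {
              stack[++top] = *in++;                          /* shift */
          } else {
              printf("reduce handle ending in '%c'\n", stack[top]);
              top--;                                         /* pop the handle's terminal */
          }
      }
      printf("accepted\n");
      return 0;
  }

Run on i+i*i, the sketch reduces the three ids first, then the * handle, then the + handle, matching the trace above.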