The symbol table is used throughout the compiler to store information about program entities like classes, instances, methods and variables. It has two main components - a name table to uniquely identify names, and an entity table with an entry for each program entity. The main symbol table operations are insert to add a new name, and lookup to find a name. Other functions initialize and finalize scopes when entering or exiting blocks. The symbol table incrementally collects information and transforms the entire program into a table that is used by various compiler phases.
The document discusses symbol tables, which are data structures used by compilers to track semantic information about identifiers, variables, functions, classes, etc. It provides details on:
- How various compiler phases like lexical analysis, syntax analysis, semantic analysis, code generation utilize and update the symbol table.
- Common data structures used to implement symbol tables like linear lists, hash tables and how they work.
- The information typically stored for different symbols like name, type, scope, memory location etc.
- Organization of symbol tables for block-structured vs non-block structured languages, including using multiple nested tables vs a single global table.
Turing machines are a simple mathematical model of computation introduced by Alan Turing in 1936. A Turing machine consists of a finite set of states, an infinite tape divided into cells, and a head that can read and write symbols on the tape. It operates based on a transition function that changes the state and head position based on the current state and symbol. Turing machines can be used as language acceptors, accepting inputs that cause them to halt in an accepting state, or as transducers, treating the initial tape as input and the final tape as output. Variations include multi-tape, non-deterministic, multi-head, and multi-dimensional Turing machines. Turing machines are useful for determining decidability.
This document discusses semantic analysis in compilers. It begins by defining semantics and semantic analysis, and provides an example of a syntactically valid but semantically invalid statement. It then discusses how semantic rules are associated with a context-free grammar to perform semantic analysis. It describes the annotated parse tree output of semantic analysis and how semantic rules are associated with grammar productions. The document discusses different ways to represent semantic rules like syntax-directed definitions and attribute grammars. It also covers different types of attributes like synthesized and inherited attributes. Finally, it discusses applications of semantic analysis like type checking and generating intermediate code.
This document discusses parsing and context-free grammars. It defines parsing as verifying that tokens generated by a lexical analyzer follow syntactic rules of a language using a parser. Context-free grammars are defined using terminals, non-terminals, productions and a start symbol. Top-down and bottom-up parsing are introduced. Techniques for grammar analysis and improvement like left factoring, eliminating left recursion, calculating first and follow sets are explained with examples.
A non-recursive predictive parser uses an explicit stack instead of recursion to mimic leftmost derivations in a grammar. The parser has an input buffer, stack, parsing table, and output stream. It uses the parsing table to shift and reduce based on the top of the stack and next input symbol to produce a leftmost derivation if the input is in the language, otherwise reporting an error. The moves of the parser on a sample input correspond to the sentential forms in the leftmost derivation of that input string.
This document discusses the complexity of algorithms and the tradeoff between algorithm cost and time. It defines algorithm complexity as a function of input size that measures the time and space used by an algorithm. Different complexity classes are described such as polynomial, sub-linear, and exponential time. Examples are given to find the complexity of bubble sort and linear search algorithms. The concept of space-time tradeoffs is introduced, where using more space can reduce computation time. Genetic algorithms are proposed to efficiently solve large-scale construction time-cost tradeoff problems.
This document summarizes graph coloring using backtracking. It defines graph coloring as minimizing the number of colors used to color a graph. The chromatic number is the fewest colors needed. Graph coloring is NP-complete. The document outlines a backtracking algorithm that tries assigning colors to vertices, checks if the assignment is valid (no adjacent vertices have the same color), and backtracks if not. It provides pseudocode for the algorithm and lists applications like scheduling, Sudoku, and map coloring.
Intermediate code generation in Compiler Design, by Kuppusamy P
The document discusses intermediate code generation in compilers. It begins by explaining that intermediate code generation is the final phase of the compiler front-end and its goal is to translate the program into a format expected by the back-end. Common intermediate representations include three address code and static single assignment form. The document then discusses why intermediate representations are used, how to choose an appropriate representation, and common types of representations like graphical IRs and linear IRs.
This document summarizes and compares paging and segmentation, two common memory management techniques. Paging divides physical memory into fixed-size frames and logical memory into same-sized pages. It maps pages to frames using a page table. Segmentation divides logical memory into variable-sized segments and uses a segment table to map segment numbers to physical addresses. Paging avoids external fragmentation but can cause internal fragmentation, while segmentation avoids internal fragmentation but can cause external fragmentation. Both approaches separate logical and physical address spaces but represent different models of how a process views memory.
Syntax analysis is the second phase of compiler design after lexical analysis. The parser checks if the input string follows the rules and structure of the formal grammar. It builds a parse tree to represent the syntactic structure. If the input string can be derived from the parse tree using the grammar, it is syntactically correct. Otherwise, an error is reported. Parsers use various techniques like panic-mode, phrase-level, and global correction to handle syntax errors and attempt to continue parsing. Context-free grammars are commonly used with productions defining the syntax rules. Derivations show the step-by-step application of productions to generate the input string from the start symbol.
This document discusses various strategies for register allocation and assignment in compiler design. It notes that assigning values to specific registers simplifies compiler design but can result in inefficient register usage. Global register allocation aims to assign frequently used values to registers for the duration of a single block. Usage counts provide an estimate of how many loads/stores could be saved by assigning a value to a register. Graph coloring is presented as a technique where an interference graph is constructed and coloring aims to assign registers efficiently despite interference between values.
This document discusses various techniques for optimizing computer code, including:
1. Local optimizations that improve performance within basic blocks, such as constant folding, propagation, and elimination of redundant computations.
2. Global optimizations that analyze control flow across basic blocks, such as common subexpression elimination.
3. Loop optimizations that improve performance of loops by removing invariant data and induction variables.
4. Machine-dependent optimizations like peephole optimizations that replace instructions with more efficient alternatives.
The goal of optimizations is to improve speed and efficiency while preserving program meaning and correctness. Optimizations can occur at multiple stages of development and compilation.
The document discusses symbol tables, which are data structures used by compilers to store information about identifiers and other constructs from source code. A symbol table contains entries for each identifier with attributes like name, type, scope, and other properties. It allows efficient access to this information during analysis and synthesis phases of compilation. Symbol tables can be implemented using arrays, binary search trees, or hash tables, with hash tables being commonly used due to fast search times.
The document provides an overview of compilers by discussing:
1. Compilers translate source code into executable target code by going through several phases including lexical analysis, syntax analysis, semantic analysis, code optimization, and code generation.
2. An interpreter directly executes source code statement by statement while a compiler produces target code as translation. Compiled code generally runs faster than interpreted code.
3. The phases of a compiler include a front end that analyzes the source code and produces intermediate code, and a back end that optimizes and generates the target code.
This slide was prepared by the following students of the Dept. of CSE, JnU, Dhaka. Thanks to: Nusrat Jahan, Arifatun Nesa, Fatema Akter, Maleka Khatun, Tamanna Tabassum.
This document provides information about the CS416 Compiler Design course, including the instructor details, prerequisites, textbook, grading breakdown, course outline, and an overview of the major parts and phases of a compiler. The course will cover topics such as lexical analysis, syntax analysis using top-down and bottom-up parsing, semantic analysis using attribute grammars, intermediate code generation, code optimization, and code generation.
Syntax-Directed Translation: Syntax-Directed Definitions, Evaluation Orders for SDD's, Applications of Syntax-Directed Translation, Syntax-Directed Translation Schemes, and Implementing L-Attributed SDD's. Intermediate-Code Generation: Variants of Syntax Trees, Three-Address Code, Types and Declarations, Type Checking, Control Flow, Back patching, Switch-Statements
This document summarizes key topics in intermediate code generation discussed in Chapter 6, including:
1) Variants of syntax trees like DAGs are introduced to share common subexpressions. Three-address code is also discussed where each instruction has at most three operands.
2) Type checking and type expressions are covered, along with translating expressions and statements to three-address code. Control flow statements like if/else are also translated using techniques like backpatching.
3) Backpatching allows symbolic labels in conditional jumps to be resolved by a later pass that inserts actual addresses, avoiding an extra pass. This and other control flow translation topics are covered.
The document discusses finite automata including nondeterministic finite automata (NFAs) and deterministic finite automata (DFAs). It provides examples of NFAs and DFAs that recognize particular strings, including strings containing certain substrings. It also gives examples of DFA state machines and discusses using finite automata to recognize regular languages.
Lex is a program generator designed for lexical processing of character input streams. It works by translating a table of regular expressions and corresponding program fragments provided by the user into a program. This program then reads an input stream, partitions it into strings matching the given expressions, and executes the associated program fragments in order. Flex is a fast lexical analyzer generator that is an alternative to Lex. It generates scanners that recognize lexical patterns in text based on pairs of regular expressions and C code provided by the user.
Linked lists are linear data structures where each node contains a data field and a pointer to the next node. There are two types: singly linked lists where each node has a single next pointer, and doubly linked lists where each node has next and previous pointers. Common operations on linked lists include insertion and deletion which have O(1) time complexity for singly linked lists but require changing multiple pointers for doubly linked lists. Linked lists are useful when the number of elements is dynamic as they allow efficient insertions and deletions without shifting elements unlike arrays.
Multiversion Concurrency Control Techniques, by Raj Vardhan
Multiversion Concurrency Control Techniques
Q. What is multiversion concurrency control technique? Explain how multiversion concurrency control can be achieved by using Time Stamp Ordering.
The document discusses minimum edit distance and how it can be used to quantify the similarity between two strings. Minimum edit distance is defined as the minimum number of editing operations like insertion, deletion, substitution needed to transform one string into another. Levenshtein distance assigns a cost of 1 to each insertion, deletion, or substitution, and calculates the minimum edits between two strings using dynamic programming to build up solutions from sub-problems. The algorithm can also be modified to produce an alignment between the strings by storing back pointers and doing a backtrace.
The document discusses various indexing techniques used to improve data access performance in databases, including ordered indices like B-trees and B+-trees, as well as hashing techniques. It covers the basic concepts, data structures, operations, advantages and disadvantages of each approach. B-trees and B+-trees store index entries in sorted order to support range queries efficiently, while hashing distributes entries uniformly across buckets using a hash function but does not support ranges.
The document discusses syntax analysis and parsing. It defines a syntax analyzer as creating the syntactic structure of a source program in the form of a parse tree. A syntax analyzer, also called a parser, checks if a program satisfies the rules of a context-free grammar and produces the parse tree if it does, or error messages otherwise. It describes top-down and bottom-up parsing methods and how parsers use grammars to analyze syntax.
This document provides an overview of syntax analysis in compiler design. It discusses context free grammars, derivations, parse trees, ambiguity, and various parsing techniques. Top-down parsing approaches like recursive descent parsing and LL(1) parsing are described. Bottom-up techniques including shift-reduce parsing and operator precedence parsing are also introduced. The document provides examples and practice problems related to grammar rules, derivations, parse trees, and eliminating ambiguity.
Overview of Language Processor: Fundamentals of LP, Symbol Table, Data Str..., by Bhavin Darji
Fundamentals of Language Processor
Analysis Phase
Synthesis Phase
Lexical Analysis
Syntax Analysis
Semantic Analysis
Intermediate Code Generation
Symbol Table
Criteria of Classification of Data Structure of Language Processing
Linear Data Structure
Non-linear Data Structure
Symbol Table Organization
Sequential Search Organization
Binary Search Organization
Hash Table Organization
Allocation Data Structure : Stacks and Heaps
CS6660 Compiler Design May/June 2016 Answer Key, by appasami
The document describes the various phases of a compiler:
1. Lexical analysis breaks the source code into tokens.
2. Syntax analysis generates a parse tree from the tokens.
3. Semantic analysis checks for semantic correctness using the parse tree and symbol table.
4. Intermediate code generation produces machine-independent code.
5. Code optimization improves the intermediate code.
6. Code generation translates the optimized code into target machine code.
This document provides the instructions and sample questions for a Computer Science exam. It details the structure and format of the exam, which consists of two parts (A and B) with multiple choice, short answer, long answer, and descriptive questions. Part A contains two sections - short answer questions to be answered in one word/line, and case study questions with subparts. Part B contains three sections - short answer questions worth 2 marks each, long answer questions worth 3 marks each, and a very long answer question worth 5 marks. All programming questions must be answered in Python. The document provides examples of different types of questions on topics like SQL, cybercrime, networks, Python programming, and more.
This document describes the design of a compiler for a simple C-like language. It outlines the main phases of the compiler including lexical analysis, syntax analysis, code generation, and optional semantic analysis. It provides details on token specification, grammar rules, and the management of registers and memory during code generation. The lexical analyzer breaks source code into tokens which are stored in tables. The syntax analyzer checks grammar rules and detects errors. Code generation transforms the parsed code into SAYEH assembly code. Register and memory addresses are tracked in tables to efficiently allocate resources during code generation.
The document discusses various applications of stacks including reversing data, parsing, postponing operations, and backtracking. It provides examples of converting infix expressions to postfix notation using a stack and evaluating postfix expressions. The key stack operations of push, pop, and accessing the stack top are defined. Implementation of a stack using an array is also mentioned.
The document discusses the fundamentals of algorithms and data structures. It covers key topics like algorithm analysis, asymptotic notation, different data structures like arrays, stacks, queues, linked lists, trees, graphs and their applications. It also discusses storage management techniques like lists and garbage collection. The course aims to teach students important algorithm design techniques and how to evaluate algorithms based on parameters like time complexity, space complexity and efficiency.
The document discusses the instruction set of the 8085 microprocessor. It states that the 8085 has 246 instructions that are each represented by an 8-bit binary value called the op-code or instruction byte. It also mentions that an instruction is a binary pattern inside a microprocessor that performs a specific function, and the complete set of instructions a microprocessor supports is called its instruction set.
This document provides an overview of the CS-2251 DESIGN AND ANALYSIS OF ALGORITHMS course. It defines algorithms and discusses algorithm design and analysis processes. It covers different algorithm efficiency measures, specification methods, important problem types, classification techniques, and examples like the Euclid algorithm. Key aspects of sorting, searching, graph, combinatorial, and numerical problems are outlined. The features of efficient algorithms and orders of algorithms are defined.
The document summarizes the history and key features of the C programming language. C was developed in the late 1960s and early 1970s at Bell Labs to be a high-level language that produced efficient machine code. It was designed to allow systems programming for operating systems and utilities. C borrowed elements from its predecessors BCPL and B but added data types, structures, unions, and functions to support structured programming.
The document discusses data structures and provides examples of common data structures like stacks, queues, trees, graphs. It defines key concepts like abstract data types and how they are implemented. It provides examples of applications of stacks, like for recursion, converting infix to postfix notation, validating expressions. It also discusses common problems like towers of Hanoi and N-Queens that can be solved using concepts like backtracking and data structures like stacks.
The CPU is made up of 3 major parts: the register set, control unit, and arithmetic logic unit. The register set stores intermediate values during program execution. The control unit generates control signals and selects operations for the ALU. The ALU performs arithmetic and logic operations. Computer architecture includes instruction formats, addressing modes, instruction sets, and CPU register organization. Registers are organized in a bus structure to efficiently transfer data and perform microoperations under control of the control unit. Common instruction fields are the operation code, address fields, and addressing mode fields. Instructions can be classified by the number of address fields as zero-address, one-address, two-address, or three-address instructions. Common addressing modes specify how operands
- C is a commonly used language for embedded systems due to its portability, efficiency, and conciseness. It was developed in the late 1960s and early 1970s alongside Unix.
- C was designed for systems programming tasks like operating systems and compilers. It was influenced by its predecessors BCPL and B and is well-suited for direct hardware manipulation.
- C uses expressions, conditional and iterative statements, functions, and other constructs to provide a structured and portable way to write low-level systems code while avoiding the complexity of assembly.
Here are the key points covered in the essay:
- Exercise 15.1 involves creating a custom backup job in Windows 7 to back up selected files and folders to a hard disk partition.
- The C: system drive does not appear as a backup destination because you cannot back up a drive to itself.
- A warning appears when selecting the X: drive for backup because although it appears as a separate drive letter, it is physically located on the same hard disk as the system drive C:. Backing up to this location would not provide the benefits of an off-site backup if the hard disk failed.
- When selecting folders and files for backup, you must ensure the selected items are not part of an operating system
The document provides an overview of common topics that confuse new C programmers, including control structures, variable types, pointers, arrays, structs, linked lists, and recursion. It discusses each concept in 1-3 paragraphs, explaining what they are and providing basic examples. It also covers debugging strategies such as printing variables, reducing complexity, and inserting early returns to isolate issues.
C was created in the early 1970s at Bell Labs by Dennis Ritchie. It is commonly used for systems programming like operating systems and compilers. C code is compiled into efficient, portable machine code. A C program structure includes preprocessor commands, functions, variables, statements/expressions, and comments. Key C language elements are keywords, identifiers, constants, comments, and symbols. C supports various data types like int, char, float, and double that determine memory usage and value representation. Variables must be declared with a specific data type before use.
- C is a commonly used language for embedded systems that is portable, produces efficient code, and uses a fairly concise syntax.
- It was developed in the late 1960s and early 1970s and was influenced by the B programming language.
- C uses basic data types, functions, expressions, statements, and other constructs to provide powerful yet flexible programming capabilities while using relatively little memory.
Symbolic Computation and Automated Reasoning in Differential Geometry, by M Reza Rahmati
The document describes symbolic computations in applied differential geometry using the REDUCE programming language. It includes three main sections:
1) Introduction to symbolic computations in REDUCE, including Taylor series expansions, integration, and matrix calculations.
2) Implementation of differential geometric objects like differential forms and vector fields, and operations on them such as exterior products and derivatives.
3) Computation of infinitesimal symmetries of exterior differential systems, which represent differential equations, through the machinery developed in section 2. This involves determining and solving overdetermined systems of partial differential equations.
This document provides a marking scheme for a Computer Science exam for Class XII. It includes instructions, 6 sections (A through C), and multiple choice questions with parts. Section A focuses on C++ programming, Section B on Python, and Section C is compulsory for all students. Questions assess topics like functions, OOPs concepts, arrays, file handling, and more. For each part, the number of marks and what is required are specified. Sample code is provided to test understanding of concepts like data abstraction, inheritance, operator overloading and more.
2. Symbol Table
A symbol table is a data structure created and maintained by the compiler to store information about the occurrence of various entities such as variable names, function names, objects, classes, interfaces, etc.
The symbol table is used by both the analysis and the synthesis parts of a compiler.
3. Symbol Table
When identifiers are found, they are entered into the symbol table, which holds all relevant information about identifiers and other symbols: variables, constants, procedures, statements, etc.
The information recorded about each name includes:
Its type
Its form
Its location
This information is used later by the semantic analyzer and the code generator.
(Diagram: the Lexical Analyzer, Syntax Analyzer, Semantic Analyzer, and Code Generator all read from and write to the shared Symbol Table.)
4. Efficient to add new entries to the S.T.
Dynamic in nature.
Issues in Symbol Table design:
Format of entries
Method of access
Place where they are stored
5. Contents of Symbol Table
Each entry holds a name and its information.
Capabilities of the S.T:
1) Checking (determine whether the given information is in the table)
2) Adding new information
3) Accessing the information for a name
4) Deletion
6. Symbol Table Entries
We store the following information about each name; each entry in the symbol table is associated with attributes that support the compiler in different phases:
The name (as a string)
Size and dimension
The data type
Its scope (global, local, or parameter)
Its offset from the base pointer (for local variables and parameters only)
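As a concrete illustration, here is a minimal sketch of such an entry in C++. The field names and types are illustrative assumptions, not taken from the slides.

#include <string>

struct SymbolEntry {
    enum class Scope { Global, Local, Parameter };
    std::string name;       // the identifier as written in the source
    std::string type;       // data type, e.g. "int" or "boolean"
    int size = 0;           // storage size in bytes
    int dimension = 0;      // 0 for scalars, > 0 for arrays
    Scope scope = Scope::Local;
    int offset = 0;         // offset from the base pointer (locals and parameters only)
};

In practice an entry also carries a link to variable-length information kept outside the table, as the implementation slide below notes.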
7. Implementation
Use a linear array of records, one record per name.
Entries of the S.T are not uniform; to make them uniform, some information is kept outside the table and a pointer to this information is stored in the S.T.
A record consists of a known number of consecutive words of memory, so names are stored directly in the record.
This is appropriate if an upper bound on the length of identifiers is given.
8. Data Structure for S.T
The structure must support making n entries and m inquiries.
1) Lists:
    | Name 1 | Info 1 | Name 2 | Info 2 | ... | available space |
Easy to implement; addition is easy; information is retrieved by a linear scan.
ADVANTAGES: minimum space is required; addition to the table is easy.
DISADVANTAGE: higher access time.
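A minimal sketch of the list organization in C++ (the Entry record and its string-valued info field are simplifying assumptions):

#include <string>
#include <vector>

struct Entry { std::string name; std::string info; };

// Linear-list symbol table: append to insert, scan to look up.
struct ListTable {
    std::vector<Entry> entries;
    void insert(const std::string& name, const std::string& info) {
        entries.push_back({name, info});    // addition is easy: O(1)
    }
    const Entry* lookup(const std::string& name) const {
        for (const auto& e : entries)       // higher access time: O(n) scan
            if (e.name == name) return &e;
        return nullptr;                     // not found
    }
};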
9. 2) Binary Search Tree:
An efficient approach to organizing the S.T, with two link fields per node: left and right.
Algorithm for searching a name in the B.S.T (P is initially a pointer to the root):
1) If Name = Name(P) then return /* success */
2) Else if Name < Name(P) then P := Left(P) /* visit left child */
3) Else /* Name(P) < Name */ P := Right(P) /* visit right child */
Repeat from step 1; if P becomes null, the name is not in the table.
Addition:
First search; if the name does not exist, create a new node at the proper position.
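A sketch of this search in C++, following the algorithm above (the node layout is an assumption):

#include <string>

struct Node {
    std::string name;
    std::string info;
    Node* left = nullptr;
    Node* right = nullptr;
};

// Returns the node holding `name`, or nullptr if the name is absent.
Node* lookup(Node* p, const std::string& name) {
    while (p != nullptr) {
        if (name == p->name) return p;     // success
        p = (name < p->name) ? p->left     // visit left child
                             : p->right;   // visit right child
    }
    return nullptr;                        // reached a null link: not in the table
}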
10. 3) Hash Table
Consists of K header words, indexed [0 ... K-1], which are pointers into a storage table of linked entries.
Searching a name in the S.T:
Apply a hash function to the name, h(Name) -> an integer in {0 ... K-1}, then search the list at that index.
Adding a new name:
Create a record in the available space of the storage table and link that record into the h(Name)-th list.
In lookup speed: Hashing > BST > Linear list.
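A minimal sketch of this scheme with separate chaining in C++ (the table size and the use of std::hash are illustrative choices, not from the slides):

#include <functional>
#include <list>
#include <string>
#include <vector>

struct HashTable {
    static constexpr std::size_t K = 211;            // number of header words
    struct Entry { std::string name; std::string info; };
    std::vector<std::list<Entry>> buckets{K};

    std::size_t h(const std::string& name) const {
        return std::hash<std::string>{}(name) % K;   // h(Name) -> {0 ... K-1}
    }
    void insert(const std::string& name, const std::string& info) {
        buckets[h(name)].push_front({name, info});   // link into the h(Name)-th list
    }
    const Entry* lookup(const std::string& name) const {
        for (const auto& e : buckets[h(name)])       // scan a single chain only
            if (e.name == name) return &e;
        return nullptr;
    }
};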
11. Representing Scope Information in the S.T
Scope: the region of a program where a binding is active.
The same name in a different scope can have a different binding.
Rules governing scope:
1) If a name is declared within a block B, then it is valid only within B.
2) If block B2 is nested inside block B1, then a name valid in B1 is also valid in B2, unless B2 redeclares it:
    B1()
    { ...
        B2()
        { ... }
    }
12. This requires a complicated S.T organization, so use multiple symbol tables, one for each block.
Each table has: name and information.
When a new block is entered, push an empty table onto a stack for storing its names and information.
Ex:
Program main
    Var x, y : integer ;
    Procedure P:
        Var x, a : boolean ;
        Procedure Q:
            Var x, y, z : real ;
        Begin ... end
    Begin ... end
Begin ... end
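A sketch of this one-table-per-block scheme in C++, assuming a stack of maps from names to their information; lookup searches from the innermost block outward, so the x declared in Q hides the x of P and of main:

#include <map>
#include <string>
#include <vector>

struct ScopedTable {
    // One map per open block; back() is the innermost block.
    std::vector<std::map<std::string, std::string>> blocks;

    void enterBlock() { blocks.emplace_back(); }    // push an empty table
    void exitBlock()  { blocks.pop_back(); }        // discard the block's names
    void declare(const std::string& name, const std::string& info) {
        blocks.back()[name] = info;
    }
    const std::string* lookup(const std::string& name) const {
        for (auto it = blocks.rbegin(); it != blocks.rend(); ++it) {
            auto found = it->find(name);
            if (found != it->end()) return &found->second;  // innermost wins
        }
        return nullptr;                             // undeclared name
    }
};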
13. Symbol Table organization that complies with the static scope rules
Another technique to represent scope information in the S.T:
1) Record the nesting depth of each procedure block.
2) Use the pair (procedure name, nesting depth) to access the information in the table.
14. Error Detection & Recovery
Programmers make mistakes. Errors fall into two classes:
Compile-time errors:
Lexical phase errors
Syntactic phase errors
Semantic errors
Run-time errors, for example:
• Overflow (the magnitude of a computational result is too large to represent)
• Underflow (the magnitude of a computational result is too close to zero to represent)
• Invalid subscript
• Integer division by zero
15. Sources of Error
Algorithmic errors
Coding errors
A program may exceed a compiler or machine limit
Ex: an array declaration with too many dimensions to fit into the S.T
Errors in the phases of the compiler itself (while translating the program into object code)
Transcription errors:
Insertion of an extra character
Deletion of a required character
Replacement of a correct character by an incorrect character
16. 1) Lexical Phase Errors
If, after some processing, the lexical analyzer discovers that no prefix of the remaining input fits any token class, it invokes an error-recovery routine.
The simplest way to recover: skip erroneous characters until the L.A finds the next token.
Disadvantage: this can create problems for the next phases of the compiler, e.g. with comments, numbers, and strings:
/* there is a program    (unterminated comment)
3.1 4                    (lexical error in a number)
“cse department          (unterminated string)
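A sketch of this skip-one-character recovery in C++; the toy matchToken below (identifiers and integers only) stands in for a real scanner and is purely illustrative:

#include <cctype>
#include <cstdio>
#include <string>

// Toy matcher: length of the identifier or integer starting at i, else 0.
std::size_t matchToken(const std::string& s, std::size_t i) {
    std::size_t j = i;
    if (std::isalpha((unsigned char)s[j]))
        while (j < s.size() && std::isalnum((unsigned char)s[j])) ++j;
    else if (std::isdigit((unsigned char)s[j]))
        while (j < s.size() && std::isdigit((unsigned char)s[j])) ++j;
    return j - i;
}

void scan(const std::string& src) {
    std::size_t pos = 0;
    while (pos < src.size()) {
        if (std::isspace((unsigned char)src[pos])) { ++pos; continue; }  // not an error
        std::size_t len = matchToken(src, pos);
        if (len == 0) {          // no prefix fits any token class
            std::printf("lexical error at %zu: skipping '%c'\n", pos, src[pos]);
            ++pos;               // simplest recovery: skip the character
        } else {
            pos += len;          // token consumed
        }
    }
}

int main() { scan("x1 $$ 42"); }   // reports and skips the two '$' characters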
17. Error Recovery
Panic Mode Recovery:
1) The parser discovers an error.
2) If an unwanted character occurs, delete that character to recover from the error.
3) Reject input symbols until a "synchronizing" token, usually a statement delimiter such as a semicolon ; or an end }, is encountered.
4) The parser deletes stack entries until it finds an entry with which it can continue parsing.
18. 2) Syntactic Errors
Examples of syntactic errors:
1) Missing right parenthesis:
max(A, 2*(3+B)    { deletion error }
2) Extra comma:
for(i=0;, i<100; i++)    { insertion error }
3) Colon in place of semicolon:
I = 1:    { replacement error }
4) Misspelled keyword:
Void mian()    { transposition error }
5) Extra blank:
/* comment * /    { insertion error }
19. Minimum Distance Correction of Syntactic Errors
A theoretical way of defining errors and their locations; it is called the minimum-distance method.
Let a program P contain k errors. Find the shortest sequence of error transformations that maps P to a valid program.
Ex:
IFA =B THEN
SUM =SUM + A;
ELSE
SUM =SUM - A;
The minimum distance is 1 (the transformation is the insertion of a single character, the blank between IF and A):
IF A =B THEN
SUM =SUM + A;
ELSE
SUM =SUM - A;
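A sketch of computing such a distance with the standard dynamic-programming edit distance (unit cost for each insertion, deletion, or replacement); this illustrates the metric itself, not an error-repair algorithm from the slides:

#include <algorithm>
#include <string>
#include <vector>

// Minimum number of single-character insertions, deletions, and
// replacements needed to transform string a into string b.
int minDistance(const std::string& a, const std::string& b) {
    std::vector<std::vector<int>> d(a.size() + 1, std::vector<int>(b.size() + 1));
    for (std::size_t i = 0; i <= a.size(); ++i) d[i][0] = (int)i;   // delete all of a
    for (std::size_t j = 0; j <= b.size(); ++j) d[0][j] = (int)j;   // insert all of b
    for (std::size_t i = 1; i <= a.size(); ++i)
        for (std::size_t j = 1; j <= b.size(); ++j)
            d[i][j] = std::min({d[i-1][j] + 1,                      // deletion
                                d[i][j-1] + 1,                      // insertion
                                d[i-1][j-1] + (a[i-1] != b[j-1])}); // replacement
    return d[a.size()][b.size()];
}

// minDistance("IFA =B THEN", "IF A =B THEN") == 1: one inserted blank.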
20. Recovery from Syntactic Errors
I) Panic Mode Recovery:
The parser discovers an error. It then discards input symbols until a token from a designated set of synchronizing tokens is found.
● Synchronizing tokens are selected so that their role in the program is unambiguous, e.g. delimiters such as ; and }.
● Advantage: simple, and it never goes into an infinite loop.
21. Panic Mode Recovery in LL(1) Parser
Grammar:
E  -> T E'
E' -> + T E' | ɛ
T  -> F T'
T' -> * F T' | ɛ
F  -> ( E ) | id

LL(1) parsing table for the given grammar:

          id           +            *             (           )          $
E     E -> TE'                               E -> TE'
E'                E' -> +TE'                             E' -> ɛ    E' -> ɛ
T     T -> FT'                               T -> FT'
T'                T' -> ɛ      T' -> *FT'                T' -> ɛ    T' -> ɛ
F     F -> id                                F -> (E)
22. Algorithm of Panic Mode Recovery in LL(1) Parsing
With nonterminal A on top of the stack and current input symbol a:
1) The parser looks up the entry M[A, a] in the parsing table.
2) If M[A, a] is blank, the input symbol a is skipped.
3) Else if M[A, a] = synch, the nonterminal is popped off the top of the stack.
4) Else, if the token on top of the stack does not match the input symbol, the token is popped off the stack.
23. Processing:
Fill the synch entries under the FOLLOW sets of the nonterminals:
FOLLOW(E) = { ), $ }, FOLLOW(T) = { +, ), $ }, FOLLOW(F) = { +, *, ), $ }

          id           +            *             (           )          $
E     E -> TE'                               E -> TE'    synch      synch
E'                E' -> +TE'                             E' -> ɛ    E' -> ɛ
T     T -> FT'    synch                      T -> FT'    synch      synch
T'                T' -> ɛ      T' -> *FT'                T' -> ɛ    T' -> ɛ
F     F -> id     synch        synch         F -> (E)    synch      synch

Then perform the algorithm's moves for the input string w = * id * + id $.
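A sketch of the whole recovery loop in C++ for this grammar and table; keys absent from both M and synch are the blank entries, and ɛ right-hand sides are empty vectors. The error messages are illustrative:

#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

int main() {
    using Key = std::pair<std::string, std::string>;
    // M[A, a] -> right-hand side to push ({} encodes ɛ); absent key = blank.
    std::map<Key, std::vector<std::string>> M = {
        {{"E", "id"}, {"T", "E'"}},      {{"E", "("}, {"T", "E'"}},
        {{"E'", "+"}, {"+", "T", "E'"}}, {{"E'", ")"}, {}}, {{"E'", "$"}, {}},
        {{"T", "id"}, {"F", "T'"}},      {{"T", "("}, {"F", "T'"}},
        {{"T'", "+"}, {}}, {{"T'", "*"}, {"*", "F", "T'"}},
        {{"T'", ")"}, {}}, {{"T'", "$"}, {}},
        {{"F", "id"}, {"id"}},           {{"F", "("}, {"(", "E", ")"}},
    };
    std::set<Key> synch = {{"E", ")"}, {"E", "$"}, {"T", "+"}, {"T", ")"},
                           {"T", "$"}, {"F", "+"}, {"F", "*"}, {"F", ")"}, {"F", "$"}};
    std::set<std::string> nonterminals = {"E", "E'", "T", "T'", "F"};
    std::vector<std::string> input = {"*", "id", "*", "+", "id", "$"};
    std::vector<std::string> stack = {"$", "E"};   // top of stack is back()

    std::size_t ip = 0;
    while (stack.back() != "$") {
        std::string X = stack.back(), a = input[ip];
        if (!nonterminals.count(X)) {              // terminal on top of stack
            if (X == a) { stack.pop_back(); ++ip; }                // match
            else { std::printf("error: pop %s\n", X.c_str()); stack.pop_back(); }
        } else if (M.count({X, a})) {              // normal table entry: expand
            std::vector<std::string> rhs = M[{X, a}];
            stack.pop_back();
            for (auto it = rhs.rbegin(); it != rhs.rend(); ++it) stack.push_back(*it);
        } else if (synch.count({X, a})) {          // synch entry: pop the nonterminal
            std::printf("error: synch, pop %s\n", X.c_str());
            stack.pop_back();
        } else {                                   // blank entry: skip the input symbol
            std::printf("error: skip %s\n", a.c_str());
            ++ip;
        }
    }
    return 0;
}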
25. II) Phrase-Level Recovery
● The parser performs a local correction on the remaining input, replacing it with some string that allows the parser to continue.
● E.g. replacing a comma by a semicolon, inserting a missing semicolon, etc.
● Performs local correction on the input to repair the error.
● Drawbacks: an improper replacement might lead to infinite loops, and it is hard to find where the actual error is.
● Advantage: it can correct any input string.
26. III) Global Correction
The compiler performs some changes in order to process the input string, choosing a minimal sequence of changes to obtain a least-cost correction.
Input: an incorrect input string X and a grammar G.
The algorithm finds a parse tree for a related string Y, and transforms X to Y by performing insertions, deletions, and changes of tokens in the token stream.
27. Disadvantages
Too costly to implement in terms of space and time.
Mainly of theoretical interest.
28. IV) Error Productions
A method of predicting common errors that might be encountered.
● Augment the grammar for the language at hand with productions of the form A -> Error.
● Such a parser will detect the expected errors when an error production is used.
● Ex: automatic error recovery in YACC uses error productions with semantic actions:
A : Error ɛ { semantic action to recover from the error }
● Advantage: error diagnostics are very fast.
29. 3) Recovery from Semantic Errors
Sources of error:
Undeclared names and type incompatibilities.
Recovery:
a) Type checking, where the compiler reports the nature and location of the error.
b) Declare the undeclared names and store them into the symbol table.
31. Program Address Space
Any program you run has, associated with it, some memory which is divided into:
Code segment
Data segment (holds global data)
Stack (where local variables and other temporary information are stored)
Heap
(Diagram: the memory layout, with the code segment, data segment, stack, and heap; the heap grows downwards and the stack grows upwards.)
32. Local Variables: Stack Allocation
When we have a declaration of the form "int a;", a variable with identifier "a", with some memory allocated to it, is created on the stack. The attributes of "a" are:
Name: a
Data type: int
Scope: visible only inside the function where it is defined; it disappears once we exit the function
Address: the address of the memory location reserved for it. Note: memory is allocated on the stack for a even before it is initialized.
Size: typically 2 bytes
Value: set once the variable is initialized
Since the memory allocated for the variable is fixed at the start, we cannot use the stack in cases where the amount of memory required is not known in advance. This motivates the need for the HEAP.
33. Pointers
We know what a pointer is. Let us say we have declared a pointer "int *p;". The attributes of "p" are:
Name: p
Data type: integer address
Scope: local or global
Address: an address in the data segment or stack segment
Size: 32 bits on a 32-bit architecture
We saw how a fixed memory allocation is done on the stack; now we want to allocate dynamically. Consider the declaration "int *p;": the compiler knows that we have a pointer p that may store the starting address of a variable of type int.
To point "p" to a dynamic variable, we use a statement of the form "p = new int;".
34. Pointers: Heap Allocation
Dynamic variables are never initialized by the compiler, so it is good practice to initialize them:
int *p;
p = new int;
*p = 0;
In more compact notation:
int *p = new int(0);
35. Static Data Storage Allocation
The compiler allocates space for all variables (local and global) of all procedures at compile time.
No stack/heap allocation; no overheads.
Ex: Fortran IV and Fortran 77.
Variable access is fast since addresses are known at compile time.
No recursion.
(Diagram: main memory laid out as the main program's variables followed by the variables of procedures P1, P2, and P4.)
36. Dynamic Data Storage Allocation
The compiler allocates space only for global variables at compile time.
Space for the variables of procedures is allocated at run time (stack/heap allocation).
Ex: C, C++, Java, Fortran 8/9.
Variable access is slower (compared to static allocation) since addresses are computed through the stack/heap pointer.
Recursion can be implemented.
37. Variable Storage Offset Computation
The compiler should compute the offsets at which variables and constants will be stored in the activation record (AR).
These offsets are with respect to the pointer marking the beginning of the AR.
Variables are usually stored in the AR in declaration order.
Offsets can easily be computed while performing semantic analysis of declarations.
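A sketch of this offset assignment in C++; the type sizes and the running-offset scheme are illustrative assumptions:

#include <cstdio>
#include <string>
#include <vector>

struct Decl { std::string name; std::string type; };

int sizeOf(const std::string& type) {     // assumed sizes, for illustration
    if (type == "int") return 4;
    if (type == "real") return 8;
    return 1;                             // e.g. boolean
}

int main() {
    // Declarations in source order, as the semantic analyzer sees them.
    std::vector<Decl> decls = {{"x", "int"}, {"y", "real"}, {"b", "boolean"}};
    int offset = 0;                       // relative to the start of the AR
    for (const auto& d : decls) {
        std::printf("%s: offset %d\n", d.name.c_str(), offset);
        offset += sizeOf(d.type);         // next variable follows in declaration order
    }
}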
38. Static Scope and Dynamic Scope
Static scope: a global identifier refers to the identifier with that name that is declared in the closest enclosing scope of the program text. This uses the static (unchanging) relationship between blocks in the program text.
Dynamic scope: a global identifier refers to the identifier associated with the most recent activation record. This uses the actual sequence of calls executed in the dynamic (changing) execution of the program.
Both are identical as far as local variables are concerned.
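A small C++ program making the distinction concrete; C++ is statically scoped, and the comments describe what a dynamically scoped language would do instead:

#include <cstdio>

int x = 1;                 // declared in the closest enclosing scope of show()

void show() {
    // Static scope: x resolves to the global x above, so this prints 1.
    // Under dynamic scope it would resolve to the x in the most recent
    // activation record, i.e. caller()'s local x, and would print 2.
    std::printf("x = %d\n", x);
}

void caller() {
    int x = 2;             // local x; show() does not see it under static scope
    (void)x;               // suppress unused-variable warning
    show();
}

int main() { caller(); }   // prints "x = 1"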