COMPILER DESIGN
III B.Tech – I Sem
Department of Computer Science & Engineering
Prepared By:
B Nandan Kumar
Asst. Professor
DNRCET
DNR College of Engineering & Technology
Syllabus
UNIT – I
Introduction: Language Processing, Structure of a Compiler, the Evolution of
Programming Languages, the Science of Building a Compiler, Applications of
Compiler Technology, Programming Language Basics. Lexical Analysis:
The Role of the Lexical Analyzer, Input Buffering, Specification of Tokens,
Recognition of Tokens, the Lexical-Analyzer Generator Lex.
UNIT –II
Syntax Analysis: The Role of the Parser, Context-Free Grammars, Writing
a Grammar, Top-Down Parsing, Bottom-Up Parsing, Introduction to LR Parsing.
UNIT –III
More Powerful LR Parsers (LR(1), LALR), Using Ambiguous Grammars, Error
Recovery in LR Parsing. Syntax-Directed Translation: Definitions, Evaluation
Orders for SDTs, Applications of SDTs, Syntax-Directed Translation Schemes.
UNIT – IV
Intermediate-Code Generation: Variants of Syntax Trees, Three-Address Code,
Types and Declarations, Translation of Expressions, Type Checking,
Control Flow, Backpatching.
UNIT – V
Runtime Environments: Stack Allocation of Space, Access to Nonlocal
Data on the Stack, Heap Management. Code Generation: Issues in the Design
of a Code Generator, the Target Language, Addresses in the Target Code, Basic
Blocks and Flow Graphs, A Simple Code Generator.
UNIT –VI
Machine-Independent Optimization: The Principal Sources of
Optimization, Peephole Optimization, Introduction to Data-Flow
Analysis.
TEXT BOOKS:
1. Compilers: Principles, Techniques and Tools, Alfred V. Aho, Monica S. Lam,
Ravi Sethi, Jeffrey D. Ullman, 2nd edition.
2. Compiler Design, K. Muneeswaran, OXFORD.
REFERENCE BOOKS:
1. Principles of Compiler Design, 2nd edition, Nandhini Prasad.
2. Compiler Construction: Principles and Practice, Kenneth C. Louden, CENGAGE.
3. Implementations of Compiler: A New Approach to Compilers Including the
Algebraic Method, Yunlin Su, SPRINGER.
COURSE OUTCOMES:
1. Acquire knowledge of the different phases and passes of a compiler,
specify the different types of tokens recognized by the lexical analyzer,
and use compiler tools such as LEX and YACC.
2. Understand parsers and their types, i.e. top-down and bottom-up parsers.
3. Construct LL, SLR, CLR and LALR parse tables.
4. Apply syntax-directed translation using synthesized and inherited attributes.
5. Apply techniques for code optimization.
UNIT-V
Syllabus:
Runtime Environments: Stack Allocation of Space, Access to Nonlocal
Data on the Stack, Heap Management. Code Generation: Issues in the
Design of a Code Generator, the Target Language, Addresses in the
Target Code, Basic Blocks and Flow Graphs, A Simple Code Generator.
Static Data Storage Allocation
 Compiler allocates space for all variables
(local and global) of all procedures at
compile time
 No stack/heap allocation; no
overheads
 Ex: Fortran IV and Fortran 77
 Variable access is fast since addresses
are known at compile time
 No recursion
[Diagram: main memory laid out at compile time as fixed blocks holding the main program's variables and the variables of procedures P1, P2, and P4.]
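
The same idea in a small C sketch (C is used here only for illustration; in Fortran IV/77 every variable behaves this way): storage for globals and static locals is reserved once, at a fixed address, before execution begins.

#include <stdio.h>

int counter = 0;              /* global variable: address fixed before execution */

void bump(void) {
    static int calls = 0;     /* static local: one fixed cell shared by every call */
    calls++;
    counter += calls;
}

int main(void) {
    bump();
    bump();
    printf("%d\n", counter);  /* prints 3 */
    return 0;
}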
Dynamic Data Storage Allocation
 Compiler allocates space only for global variables at compile time
 Space for variables of procedures will be allocated at run time
 Stack/heap allocation
 Ex: C, C++, Java, Fortran 90/95
 Variable access is slow (compared to static allocation) since addresses are accessed through the stack/heap pointer
 Recursion can be implemented (see the C sketch below)
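
A hedged C illustration of why recursion needs run-time (stack) allocation: each call must get its own copy of n, which purely static allocation cannot provide.

#include <stdio.h>

/* Each call to fact pushes a fresh activation record with its own copy of n,
 * so fact(5), fact(4), ... coexist on the stack; a single fixed cell for n
 * (static allocation) could not support this. */
int fact(int n) {
    if (n <= 1)
        return 1;
    return n * fact(n - 1);
}

int main(void) {
    printf("%d\n", fact(5));  /* prints 120 */
    return 0;
}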
Activation Record Structure
Return address
Static and Dynamic links
(also called Access and Control link resp.)
(Address of) function result
Actual parameters
Local variables
Temporaries
Saved machine status
Space for local arrays
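
As a purely conceptual sketch of the fields listed above, the C struct below names one slot per field. The names, types, sizes, and ordering are illustrative assumptions; real activation-record layouts are dictated by the target machine's calling convention, not by a source-level struct.

#include <stdio.h>

struct activation_record {
    void *return_address;      /* where to resume in the caller                */
    void *static_link;         /* access link: AR of the enclosing scope       */
    void *dynamic_link;        /* control link: the caller's AR                */
    void *result_address;      /* where the (address of the) result is placed  */
    int   actual_params[2];    /* actual parameters (count varies per call)    */
    int   locals[4];           /* local variables                              */
    int   temporaries[2];      /* compiler-generated temporaries               */
    long  saved_registers[8];  /* saved machine status                         */
    /* space for local arrays whose size is known only at call time follows */
};

int main(void) {
    printf("sketch AR size: %zu bytes\n", sizeof(struct activation_record));
    return 0;
}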
Variable Storage Offset Computation
 The compiler should compute
 the offsets at which variables and constants will
be stored in the activation record (AR)
 These offsets will be with respect to the pointer
pointing to the beginning of the AR
 Variables are usually stored in the AR in the
declaration order
 Offsets can be easily computed while performing
semantic analysis of declarations
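
A toy C sketch of the offset computation described above: declarations are processed in order and each one receives the next free offset from the start of the AR. Alignment is ignored for brevity; a real compiler rounds each offset up to the alignment of the variable's type.

#include <stdio.h>

struct decl { const char *name; int size; };

int main(void) {
    struct decl decls[] = { { "i", 4 }, { "c", 1 }, { "x", 8 } };
    int offset = 0;
    for (int k = 0; k < 3; k++) {
        printf("%s -> offset %d\n", decls[k].name, offset);
        offset += decls[k].size;      /* the next declaration starts here */
    }
    printf("frame size = %d bytes\n", offset);
    return 0;
}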
Static Scope and Dynamic Scope
 Static Scope
 A global identifier refers to the identifier with that
name that is declared in the closest enclosing
scope of the program text
 Uses the static (unchanging) relationship between
blocks in the program text
 Dynamic Scope
 A global identifier refers to the identifier associated
with the most recent activation record
 Uses the actual sequence of calls that are executed
in the
dynamic (changing) execution of the program
 Both are identical as far as local variables are
concerned
Static Scope and Dynamic Scope : An Example
int x = 1;
function g(z) = x+z;
function f(y) = {
int x = y+1;
return g(y*x)
};
f(3);
After the call to g,
Static scope: x = 1
Dynamic scope: x = 4
[Stack of activation records after the call to g: the outer block (x = 1), then f(3) (y = 3, x = 4), then g(12) (z = 12) on top.]
Static Scope and Dynamic Scope: Another Example
float r = 0.25;
void show() { printf("%f ", r); }
void small() {
    float r = 0.125;
    show();
}
int main() {
    show(); small(); printf("\n");
    show(); small(); printf("\n");
}
 Under static scoping, the output is
    0.25 0.25
    0.25 0.25
 Under dynamic scoping, the output is
    0.25 0.125
    0.25 0.125
Implementing Dynamic Scope – Deep Access
Method
 Use dynamic link as static link
 Search activation records on the stack to find the
first AR containing the non-local name
 The depth of search depends on the input to the
program and cannot be determined at compile
time
 Needs some information on the identifiers to be
maintained at runtime within the ARs
 Takes longer time to access globals, but no
overhead when activations begin and end
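
A hedged C sketch of deep access: each activation record lists the names it declares, and a non-local name is resolved by walking the dynamic links until some AR declares it. The AR layout and field names here are invented purely for illustration.

#include <stdio.h>
#include <string.h>
#include <stddef.h>

struct ar {
    struct ar  *dynamic_link;   /* the caller's activation record    */
    const char *names[4];       /* names declared in this activation */
    int         values[4];
    int         count;
};

int *deep_lookup(struct ar *top, const char *name) {
    for (struct ar *a = top; a != NULL; a = a->dynamic_link)
        for (int i = 0; i < a->count; i++)
            if (strcmp(a->names[i], name) == 0)
                return &a->values[i];   /* most recent binding wins */
    return NULL;                        /* name not bound anywhere  */
}

int main(void) {
    struct ar outer = { NULL,   { "x" }, { 1 }, 1 };
    struct ar inner = { &outer, { "y" }, { 3 }, 1 };
    int *px = deep_lookup(&inner, "x"); /* found one link down the stack */
    printf("x = %d\n", px ? *px : -1);  /* prints x = 1 */
    return 0;
}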
Implementing Dynamic Scope – Shallow Access
Method
 Allocate some static storage for each name
 When a new AR is created for a procedure
p, a local name n in p takes over the static
storage allocated to name n
 The previous value of n held in static
storage is saved in the AR of p and is
restored when the activation of p ends
 Direct and quick access to globals, but
some overhead is incurred when
activations begin and end
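
A hedged C sketch of shallow access: one static cell per name, with the old binding saved in the AR of the procedure that redeclares the name and restored on exit. Names and layout are illustrative only.

#include <stdio.h>

static int cell_x = 1;          /* the single static cell for the name "x" */

void use_x(void) {
    printf("x = %d\n", cell_x); /* direct, constant-time access            */
}

void p(void) {
    int saved_x = cell_x;       /* p declares its own x: save old binding  */
    cell_x = 7;
    use_x();                    /* procedures called from p now see x = 7  */
    cell_x = saved_x;           /* restore when p's activation ends        */
}

int main(void) {
    use_x();                    /* prints x = 1 */
    p();                        /* prints x = 7 */
    use_x();                    /* prints x = 1 again */
    return 0;
}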
Passing Functions as Parameters – Implementation
with Static Scope
[Diagram: activation records for main (x = 4), g(f) (x = 7), and the call h(3), i.e. f(3) (y = 3), linked by a chain of static links (SL); the closure built for parameter h holds a pointer to the code for f together with a static link to main's AR, so the call h(3) creates an AR for f(3) whose static link points to main.]
An example:
main() {
    int x = 4;
    int f (int y) {
        return x * y;
    }
    int g (int → int h) {
        int x = 7;
        return h(3) + x;   // h(3) = f(3) returns 12, using main's x = 4
    }
    g(f);                  // so g(f) returns 12 + 7 = 19
}
Heap Memory Management
 Heap is used for allocating space for objects created at
run time
 For example: nodes of dynamic data structures
such as linked lists and trees
 Dynamic memory allocation and deallocation based
on the requirements of the program
 malloc() and free() in C programs
 new() and delete() in C++ programs
 new() and garbage collection in Java programs
 Allocation and deallocation may be completely
manual (C/C++), semi-automatic (Java), or fully
automatic (Lisp)
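
A minimal C sketch of heap allocation and deallocation with malloc/free, as mentioned above: the node's lifetime is controlled by the program, not by procedure entry and exit.

#include <stdio.h>
#include <stdlib.h>

struct node { int value; struct node *next; };

int main(void) {
    struct node *head = malloc(sizeof *head);   /* heap allocation        */
    if (head == NULL)
        return 1;
    head->value = 42;
    head->next  = NULL;
    printf("%d\n", head->value);
    free(head);                                  /* explicit deallocation */
    return 0;
}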
Program Address Space
 Any program you run has, associated with it, some
memory which is divided into:
 Code Segment
 Data Segment (Holds Global Data)
 Stack (where the local variables and other temporary
information is stored)
 Heap
[Diagram: the program address space divided into Code Segment, Data Segment, Stack, and Heap; the stack and the heap grow toward each other through the free space between them.]
Local Variables:Stack Allocation
 When we have a declaration of the form “int a;”:
 a variable with identifier “a” and some memory allocated to it is
created in the stack. The attributes of “a” are:
 Name: a
 Data type: int
 Scope: visible only inside the function in which it is defined; it disappears
once we exit the function
 Address: address of the memory location reserved for it. Note:
Memory is allocated in the stack for a even before it is
initialized.
 Size: typically 4 bytes
 Value: Will be set once the variable is initialized
Pointers
 We know what a pointer is. Let us say we have declared a
pointer "int *p;". The attributes of "p" are:
 Name: p
 Data type: Integer address
 Scope: Local or Global
 Address: Address in the data segment or stack segment
 Size: 32 bits in a 32-bit architecture
 We saw how a fixed memory allocation is done in the stack,
now we want to allocate dynamically. Consider the declaration:
 “int *p;”. So the compiler knows that we have a pointer p that
may store the starting address of a variable of type int.
 To point "p" to a dynamic variable we need to use an allocation,
as shown on the next slide
Pointers : Heap Allocation
 Dynamic variables are never initialized by the compiler, so it is good
practice to initialize them:
int *p;
p = new int;
*p = 0;
 In more compact notation:
int *p = new int(0);
Issues in the design of a code
generator
 Input to the code generator
 Memory management
 Target programs
 Instruction selection
 Register allocation
 Evaluation order
 Approaches to code
generation
Input to the code generator
 The intermediate representation of the source program produced by the front end
 Several choices for the intermediate language:
  - Linear: postfix notation
  - Three-address code: quadruples
  - Virtual machine: stack-machine code
  - Graphical: syntax trees and DAGs
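
To make the linear choices concrete, here is a hedged C sketch of a quadruple IR for x = a + b * c. The struct encoding below is an illustrative assumption, not a standard data structure.

#include <stdio.h>

struct quad { const char *op, *arg1, *arg2, *result; };

int main(void) {
    struct quad code[] = {
        { "*", "b",  "c",  "t1" },   /* t1 = b * c  */
        { "+", "a",  "t1", "t2" },   /* t2 = a + t1 */
        { "=", "t2", "",   "x"  },   /* x  = t2     */
    };
    for (int i = 0; i < 3; i++)
        printf("(%s, %s, %s, %s)\n",
               code[i].op, code[i].arg1, code[i].arg2, code[i].result);
    return 0;
}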
Memory management
 Mapping names in the source program to addresses of data
objects in run-time memory
 Done by the front end and the code generator.
 A name in a three- address statement refers to a symbol-
table entry for the name.
 A relative address for a name can be determined from its symbol-table entry
Target Programs
Absolute machine language
Relocatable machine language
Assembly language
Register Allocation
• Instructions with register operands are usually
shorter and faster
• Efficient utilization of registers is important in
generating good code.
Evaluation order
 The order in which computations are performed can affect the
efficiency of the target code.
 Some computation orders require fewer registers
to hold intermediate results.
Approaches to code generation
 The most important criterion for a code generator is that it
produces correct code; correctness takes on special significance.
 A straightforward code-generation algorithm is presented here.
 The output of such a code generator can be improved by
peephole optimization techniques.
Basic Blocks and Control Flow Graphs
Introduction:
A basic block is a sequence of consecutive instructions which are always
executed in sequence, without halting or any possibility of branching
except at the end.
A basic block does not have any jump statements within it.
Flow Graphs
Introduction:
A flow graph is a graphical representation of a sequence of instructions
with control-flow edges.
A flow graph can be defined at the intermediate-code level or the
target-code level.
The nodes of a flow graph are the basic blocks; a directed arrow connects
each block to the block whose execution can immediately follow it.
Flow Graphs
Points to remember:
The basic blocks are the nodes of the flow graph.
The block whose leader is the first statement is called the initial block.
There is a directed edge from block B1 to block B2 if B2 immediately
follows B1 in the given sequence; we then say that B1 is a predecessor
of B2. (A sketch of the leader-finding rules follows below.)
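
As a complement to the points above, here is a small, hedged C sketch of the classic leader-finding rules used to partition code into basic blocks. The instruction encoding is invented purely for illustration; real compilers work over their own intermediate representation.

#include <stdio.h>
#include <stdbool.h>

/* Rules: 1. the first instruction is a leader;
 *        2. the target of any jump is a leader;
 *        3. the instruction immediately following a jump is a leader.
 * jump_target < 0 means "this instruction does not jump". */
struct instr { int jump_target; };

void find_leaders(const struct instr *code, int n, bool *leader) {
    for (int i = 0; i < n; i++)
        leader[i] = false;
    if (n > 0)
        leader[0] = true;                          /* rule 1 */
    for (int i = 0; i < n; i++) {
        if (code[i].jump_target >= 0) {
            leader[code[i].jump_target] = true;    /* rule 2 */
            if (i + 1 < n)
                leader[i + 1] = true;              /* rule 3 */
        }
    }
}

int main(void) {
    /* 0: i = 1   1: if i > 10 goto 5   2: a[i] = i*5   3: i = i + 1
       4: goto 1  5: ...   (leaders: 0, 1, 2, 5) */
    struct instr code[] = { { -1 }, { 5 }, { -1 }, { -1 }, { 1 }, { -1 } };
    bool leader[6];
    find_leaders(code, 6, leader);
    for (int i = 0; i < 6; i++)
        if (leader[i])
            printf("instruction %d is a leader\n", i);
    return 0;
}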
ONLINE BITS
1. ------------ polish places the operator at the right end.
a) Postfix b) Prefix c) Both d) Polish    Ans: a
2. To evaluate the -------------- expression, a stack is used.
a) postfix b) prefix c) both d) polish    Ans: a
3. The general strategy is to scan the postfix code -------------.
a) left-right b) right-left c) middle d) end    Ans: a
4. If the attributes of the parent depend on the attributes of the children, then they are called as ------------- attributes.
a) made b) discovered c) new d) inherited    Ans: d
5. ------------- is a tree in which each leaf represents an operand and each interior node an operator.
a) Parse Tree b) Semantic Tree c) Syntax Tree d) Structured Tree    Ans: c
6. The properties of an entity are called as --------------.
a) values b) attributes c) numbers d) digits    Ans: b
7. Usually the "Three address code" contains two addresses for the ---------- and one for the result.
a) operand b) operator c) result d) statement    Ans: a
8. The --------------- statement is an abstract form of intermediate code.
a) 2-address b) 3-address c) Intermediate code d) address    Ans: b
9. Which is not a way of implementing the 3-address statement?
a) Quadruples b) Triples c) Indirect Triples d) Parse Tree    Ans: d
10. The -------------- record structure has 4 fields.
a) Quadruples b) Triples c) Indirect Triples d) Parse Tree    Ans: a
11. Numbers are used to represent ----------- into the triple structure.
a) pointer b) stack c) queue d) value    Ans: a
12. ---------------- Triples are listing pointers to triples, rather than listing the triples themselves.
a) Direct b) Indirect c) Multiple d) New    Ans: b
13. ---------------- refers to the location to store the value for a symbol.
a) value b) place c) code d) number    Ans: b
14. ----------- refers to the expression or expressions in the form of three-address codes.
a) value b) place c) code d) number    Ans: c
15. ----------- is associating the attributes with the grammar symbols.
a) rotation b) translation c) transformation d) evolving    Ans: b
16. In 3-address code for array references we assume static allocation of arrays, where subscripts range from 1 to some limit known at ------------------ time.
a) compile b) run c) execution d) process    Ans: a
17. Triples use only 3 ----------------.
a) fields b) operator c) operand d) instruction    Ans: a
18. ______ is used in the several stages of the compiler.
a) Table b) Symbol Table c) Records d) Program    Ans: b
19. Information about the name is entered into the symbol table during ______ and ______.
a) lexical and syntactic analysis b) lexical and code generation c) lexical and error handler d) lexical and code optimization    Ans: a
20. Each entry in the symbol table is a pair of the form ______ and ______.
a) Name and information b) Name and function c) Name and Data d) Name and procedures    Ans: a
21. A compiler needs to collect and use information about the names appearing in the source program. This information is entered into a data structure called a _____.
a) Symbol Table b) Lexical analysis c) Syntactic analysis d) Records    Ans: a
22. Undeclared names and type incompatibilities are ______.
a) Syntactic errors b) Semantic errors c) Lexical Phase errors d) Reporting errors    Ans: b
23. Minimum distance matching is used in ______.
a) Syntactic errors b) Semantic errors c) Lexical Phase errors d) Reporting errors    Ans: a
24. Minimum distance correction is ______ errors.
a) Syntactic Phase errors b) Semantic errors c) Lexical Phase errors d) Reporting errors    Ans: a
25. The parser discards input symbols until a ______ token is encountered.
a) synchronizing b) Synchronizing c) Group d) none    Ans: b
26. The message should not be redundant in ______.
a) Syntactic Phase errors b) Semantic errors c) Lexical Phase errors d) Reporting errors    Ans: d
27. When an error is detected, the reaction of the compiler may be ______.
a) A system crash b) To emit invalid output c) To merely quit on the first detected error d) All of the above    Ans: d
28. Two types of data areas are ______.
a) Common and stack b) Common and equivalence c) Register and stack d) Code and equivalence    Ans: b
29. The accurate term for "Code Optimization" is ______.
a) Intermediate Code b) Code Improvement c) Latter Optimization d) Local Optimization    Ans: b
30. The quality of the object program is generally measured by its ______.
a) Cost b) Time c) Size or its running time d) Code Optimization    Ans: c
31. The code optimization techniques consist of detecting ______ in the program and ______ these patterns.
a) Errors and replacing b) Patterns and replacing c) Errors and editing d) Patterns and editing    Ans: b
32. ______ may be local or global.
a) Code Optimization b) Variable c) Sub expression d) Patterns    Ans: a
33. The term constant folding is used for ______.
a) Local optimization b) Code optimization c) Latter optimization d) Loop optimization    Ans: c
34. ______ is performed within a straight line of code with no jumps.
a) Local optimization b) Code optimization c) Latter optimization d) Loop optimization    Ans: a
35. If from any one node in the loop to any other there is a path of length one or more, the loop is ______.
a) Weakly Connected b) Unique Entity c) Multi Connected d) Strongly Connected    Ans: d
36. If some sequences of statements form arithmetic progressions, we call such identifiers ______.
a) Reduction b) Induction Variables c) Code motion d) Inner Loops    Ans: b
IMPORTANT QUESTIONS
1. a) Differentiate between Static and Dynamic Storage allocation Strategies. [7M] Oct-2019
   b) What is a dangling reference in storage allocation? Explain with an example. [8M]
2. What is a Flow Graph? Explain how a given program can be converted into a flow graph. [8M] Oct-2019
3. a) Define Symbol table? Explain about the data structures used for Symbol table. [8M]
   b) Explain in brief about the Stack Storage allocation strategy. [8M] May-2019
4. a) Consider the C program and generate the code and write different object code forms:
   main() { int i, a[10]; while (i<=10) a[i]=i*5; } [8M] Oct-2018
   b) What is an Activation Record? Explain its usage in the stack allocation strategy. How is it different from heap allocation? [7M]
5. a) What is a leader of a basic block? Write and explain the algorithm used to find leaders. Draw the flow graph for matrix multiplication. [8M] May-2018
   b) Draw and explain the runtime memory organization of the static storage allocation strategy with pros and cons. [8M]
6. a) What is reference counting? Explain how it is used in garbage collection. [8M]
   b) Efficient register allocation and assignment improves the performance of object code. Justify this statement with suitable examples. Nov-2017
7. a) Define Symbol table. Explain about the data structures for Symbol table. [8M]
   b) Explain reducible and non-reducible flow graphs with examples. [8M] May-2017
8. a) What is meant by activation of a procedure? How can it be represented with an activation tree and record? Explain with the quicksort example. [8M] Nov-2016
   b) Explain the functional issues to be considered while generating the object code. [8M]
9. a) What are the contents of a symbol table? Explain in detail the symbol table organization for block-structured languages. [8M] May-2016
   b) Explain in detail about the Stack allocation scheme. [8M]
ANSWERS
1. What Is Dynamic Scoping?
Answer:
In dynamic scoping, a use of a non-local variable refers to the non-local
data declared in the most recently called and still active procedure.
Therefore, each time a procedure is called, new bindings are set up for its
local names. With dynamic scoping, symbol tables may be required at run
time.
2. Define Symbol Table.
Answer :
Symbol table is a data structure used by the compiler to keep track of
semantics of the variables. It stores information about scope and
binding information about names.
3. What Is A Symbol Table?
Answer :
A symbol table is a data structure containing a record for each
identifier, with fields for the attributes of the identifier. The data
structure allows us to find the record for each identifier quickly and to
store or retrieve data from that record quickly.
Whenever an identifier is detected by a lexical analyzer, it is entered
into the symbol table. The attributes of an identifier cannot be
determined by the lexical analyzer.
1. a) Define Symbol table? Explain about the data structures used for Symbol table. 8M
b) Explain in brief about Stack Storage allocation strategy. [8M] May-2019
A symbol table may serve the following purposes depending upon the language in
hand:
To store the names of all entities in a structured form at one place.
To verify if a variable has been declared.
To implement type checking, by verifying assignments and expressions in the
source
code are semantically correct.
To determine the scope of a name (scope resolution).
If a compiler is to handle a small amount of data, then the symbol table can be
implemented as an unordered list, which is easy to code but suitable only for
small tables.
Data Structure for Symbol Tables
 In designing a symbol-table mechanism, there should be a scheme that allows
adding new entries and finding existing entries in a table efficiently.
 A symbol table can be implemented in one of the following ways:
1. Linear (sorted or unsorted) list
2. Binary Search Tree
3. Hash table
Each scheme is evaluated on the basis of the time required to add n entries and
make m enquiries. Among these, symbol tables are mostly implemented as
hash tables, where the source-code symbol itself is treated as the key for the
hash function and the return value is the information about the symbol.
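
A minimal, hedged C sketch of such a hash-table symbol table. The per-symbol fields (type, offset), table size, and hash function are illustrative assumptions, not a fixed design.

#include <stdio.h>
#include <string.h>

#define BUCKETS 64

struct symbol {
    char name[32];
    char type[16];
    int  offset;
    struct symbol *next;       /* chaining within a bucket */
};

static struct symbol *table[BUCKETS];
static struct symbol pool[256];
static int used;

static unsigned hash(const char *s) {
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h % BUCKETS;
}

struct symbol *lookup(const char *name) {
    for (struct symbol *s = table[hash(name)]; s; s = s->next)
        if (strcmp(s->name, name) == 0)
            return s;
    return NULL;
}

struct symbol *insert(const char *name, const char *type, int offset) {
    unsigned h = hash(name);
    struct symbol *s = &pool[used++];
    snprintf(s->name, sizeof s->name, "%s", name);
    snprintf(s->type, sizeof s->type, "%s", type);
    s->offset = offset;
    s->next = table[h];        /* chain onto the bucket */
    table[h] = s;
    return s;
}

int main(void) {
    insert("count", "int", 0);
    insert("rate", "float", 4);
    struct symbol *s = lookup("rate");
    if (s)
        printf("%s : %s at offset %d\n", s->name, s->type, s->offset);
    return 0;
}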
Storage Allocation Information
The symbol table records the locations in storage belonging to objects at run time.
 Static storage: if the object code is assembly language, the compiler can scan
the symbol table after generating assembly code and generate assembly-language
data definitions, which are appended to the executable portion of the program.
 Machine code: if machine code is generated directly by the compiler, data are
stored with a fixed origin; the same remark applies to blocks of data loaded as a
module separate from the executable program.
 In the case of names whose storage is allocated on a stack, the compiler need
not allocate storage at all; instead, the compiler must plan out the activation
record for each procedure.
2. Differentiate between Static and Dynamic Storage allocation Strategies. [7M] Oct-2019
Static vs Dynamic Memory Allocation
Definition: Static memory allocation is a method of allocating memory in which, once the memory is allocated, it is fixed. Dynamic memory allocation is a method in which, once the memory is allocated, it can be changed.
Modification: In static memory allocation it is not possible to resize after the initial allocation. In dynamic memory allocation the memory can be shrunk or grown as required.
Implementation: Static memory allocation is easy to implement. Dynamic memory allocation is complex to implement.
Speed: Execution with static memory allocation is faster than with dynamic memory allocation. Execution with dynamic memory allocation is slower than with static memory allocation.
Memory Utilization: In static memory allocation, unused memory cannot be reused. Dynamic memory allocation allows reusing the memory: the programmer can allocate more memory when required and release it when it is no longer necessary.
3. What is a Flow Graph? Explain how a given program can be converted into a flow graph. [8M]
Control Flow Graph: Once a program is partitioned into basic blocks, we represent
the flow of control between them by a flow graph. The nodes of the flow graph are the
basic blocks. There is an edge from block B to block C if and only if it is possible for
the first instruction in block C to immediately follow the last instruction in block B.
There are two ways that such an edge could be justified:
•There is a conditional or unconditional jump from the end of B to the beginning of C.
•C immediately follows B in the original order of the three-address
instructions, and B does not end in an unconditional jump.
In any of the above cases B is a predecessor of C, and C is a successor of B. Two
nodes
are added to the flow graph, called the entry and exit that do not correspond to
executable intermediate instructions. There is an edge from the entry to the first
executable node of the flow graph, that is, to the basic block that comes from the first
instruction of the intermediate code.
There is an edge to the exit from any basic block
that contains an instruction that could be the last executed instruction of the program.
If the final instruction of the program is not an unconditional jump, then the block
containing the final instruction of the program is one predecessor of the exit, but so is any
basic block that has a jump to code that is not part of the program. Example: Construct the
flow graph for the source code of a factorial function (a sketch follows below).
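
The factorial diagram itself is not reproduced here; as a hedged illustration, the C function below is annotated with one plausible partition of its three-address code into basic blocks and the resulting flow-graph edges. The exact code and block boundaries depend on the front end.

#include <stdio.h>

/*   B1 (entry):     f = 1; i = 1
 *   B2 (loop test): if i > n goto B4
 *   B3 (loop body): f = f * i; i = i + 1; goto B2
 *   B4 (exit):      return f
 *
 * Flow-graph edges: entry->B1, B1->B2, B2->B3, B2->B4, B3->B2, B4->exit. */
int factorial(int n) {
    int f = 1;                       /* B1 */
    for (int i = 1; i <= n; i++)     /* B2: test; B3: body and increment */
        f = f * i;
    return f;                        /* B4 */
}

int main(void) {
    printf("%d\n", factorial(5));    /* prints 120 */
    return 0;
}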