CSC 204 COMPILER CONSTRUCTION 1
PASSES IN COMPILER CONSTRUCTION
What is a Compiler Pass?
A compiler pass refers to one traversal of the compiler over the entire program. Compiler passes are of
two types: the single pass compiler, and the two pass (or multi-pass) compiler. These are explained as
follows.
Types of Compiler Pass
1. Single Pass Compiler
If we combine or group all the phases of compiler design into a single module, the result is known as a
single pass compiler.
A single pass compiler runs over the source code only one time.
In a single pass compiler, the source code is transformed directly into machine code.
It is also called a “narrow compiler”; it has limited scope.
A single pass compiler is one that processes the input exactly once, going directly from lexical analysis to
code generation, and then going back for the next read.
A single pass compiler is faster and smaller than a multi-pass compiler.
In the above diagram, all 6 phases are grouped in a single module.
Some points about the single pass compiler are:
•A one-pass/single-pass compiler is a type of compiler that passes through the
part of each compilation unit exactly once.
Problems with Single Pass Compiler
•A disadvantage of a single-pass compiler is that it is less efficient in comparison
with a multi-pass compiler.
•We cannot optimize very well because the context of each expression is limited.
•Since we cannot back up and process the input again, the grammar must be limited or
simplified.
•Command interpreters such as bash/sh/tcsh can be considered Single pass
compilers, but they also execute entries as soon as they are processed.
2. Two-Pass compiler or Multi-Pass compiler
• A Two pass/multi-pass Compiler is a type of compiler that processes
the source code or abstract syntax tree of a program two times. A processor that
runs through the program to be translated twice is considered a two-pass compiler.
• First Pass: referred to as the
 Front End
 Analytic Part
 Platform Independent
• Second Pass: referred to as the
 Back End
 Synthesis Part
 Platform Dependent
Multi Pass Compiler
• The multi-pass compiler processes the source code several
times.
• It divides a large program into multiple small programs and
processes them, developing multiple intermediate codes.
• Each pass takes the output of the previous pass as its input.
• It is also known as a ‘Wide Compiler’, as it can scan every
portion of the program.
Problems that can be Solved With Multi-Pass Compiler
First: if we want to design compilers for different programming languages targeting the same
machine. In this case, we build a separate front end/first pass for each programming language
and share a single back end/second pass.
Second: if we want to design compilers for the same programming language
on different machines/systems. In this case, we build a different back end for
each machine/system and share a single front end for the
programming language.
Difference between One Pass and Two Pass Compiler
Single pass:
• Performs translation in one pass.
• Scans the entire source file only once.
• Does not generate intermediate code.
• Is faster than a two-pass compiler.
• No object program is written, so a loader is not required.
• Performs some processing of assembler directives.
Two-pass:
• Performs translation in two passes.
• Requires two passes to scan the source file.
• Generates intermediate code.
• Is slower than a single-pass compiler.
• A loader is required, as an object program is generated.
• Performs the processing of assembler directives not done in pass one.
Intermediate Code Generation in Compiler Design
• In the analysis-synthesis model of a compiler, the front end of a compiler
translates a source program into a machine-independent intermediate code,
then the back end of the compiler uses this intermediate code to
generate the target code (which can be understood by the machine). The
benefits of using machine-independent intermediate code are:
• Portability is enhanced. For example, if a compiler translates the source
language directly into its target machine language without the option of
generating intermediate code, then a full native
compiler is required for each new machine, because the compiler itself
must be modified according to each machine's specifications.
• Retargeting is facilitated.
• It is easier to improve the performance of the generated code by
optimizing the intermediate code.
If we generate machine code directly from source code, then for n target machines
we will need n optimizers and n code generators; but if we have a machine-
independent intermediate code, we need only one optimizer. Intermediate code
can be either language-specific (e.g., bytecode for Java) or language-independent
(e.g., three-address code). The following are commonly used intermediate code
representations:
1. Postfix Notation: Also known as reverse Polish notation or suffix notation.
• In the infix notation, the operator is placed between operands, e.g., a + b. Postfix
notation positions the operator at the right end, as in ab +.
• For any two postfix expressions e1 and e2 combined by a binary operator such as +,
applying the operator yields the postfix expression e1 e2 +.
• Postfix notation eliminates the need for parentheses, as the operator’s position
and arity allow unambiguous expression decoding.
• In postfix notation, the operator always follows its operands.
Example 1: The postfix representation of the expression (a + b) * c is : ab + c *
Example 2: The postfix representation of the expression (a – b) * (c + d) + (a – b) is:
ab- cd+ * ab- +
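The conversion shown in the two examples can be sketched with a small shunting-yard-style routine. The function name, precedence table, and single-character token handling below are illustrative choices, not part of the original notes.

```python
# Minimal infix-to-postfix converter (shunting-yard style) for
# single-letter operands and the operators + - * / with parentheses.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def to_postfix(tokens):
    out, ops = [], []                 # output list and operator stack
    for tok in tokens:
        if tok.isalnum():             # operand goes straight to output
            out.append(tok)
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':     # pop until the matching '('
                out.append(ops.pop())
            ops.pop()                 # discard the '('
        else:                         # binary operator
            while ops and ops[-1] != '(' and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
    while ops:                        # flush remaining operators
        out.append(ops.pop())
    return ''.join(out)

print(to_postfix(list('(a+b)*c')))            # ab+c*  (Example 1)
print(to_postfix(list('(a-b)*(c+d)+(a-b)')))  # ab-cd+*ab-+  (Example 2)
```

Running it reproduces both examples above, confirming that no parentheses are needed in the postfix form.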
2. Three-Address Code: A three address statement involves a maximum of three
references, consisting of two for operands and one for the result.
• A sequence of three address statements collectively forms a three address code.
• The typical form of a three address statement is expressed as x = y op z, where x, y,
and z represent memory addresses.
• Each variable (x, y, z) in a three address statement is associated with a specific
memory location.
• While a standard three address statement includes three references, there are
instances where a statement may contain fewer than three references, yet it is still
categorized as a three address statement.
Example: The three address code for the expression a + b * c + d is:
T1 = b * c
T2 = a + T1
T3 = T2 + d
where T1, T2, and T3 are temporary variables. There are 3 ways to represent a
Three-Address Code in compiler design:
i) Quadruples
ii) Triples
iii) Indirect Triples
3. Syntax Tree: A syntax tree serves as a condensed representation
of a parse tree.
• The operator and keyword nodes present in the parse tree
are moved into their respective parent nodes in the syntax
tree; the internal nodes are operators
and the child nodes are operands.
• Creating a syntax tree involves strategically placing parentheses
within the expression. This technique contributes to a more
intuitive representation, making it easier to discern the sequence
in which operands should be processed.
• The syntax tree not only condenses the parse tree but also offers
an improved visual representation of the program’s syntactic
structure.
Example: x = (a + b * c) / (a – b * c)
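One possible way to model the syntax tree for this example is a small node class with operator internal nodes and operand leaves. The Node class and the postorder traversal below are illustrative; note that a postorder walk of the tree yields the postfix form of the expression.

```python
# Illustrative syntax tree for x = (a + b * c) / (a - b * c).
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def postorder(n):
    """Left subtree, right subtree, then the node itself."""
    if n is None:
        return []
    return postorder(n.left) + postorder(n.right) + [n.val]

# internal nodes are operators; leaves are operands
bc1 = Node('*', Node('b'), Node('c'))
bc2 = Node('*', Node('b'), Node('c'))
rhs = Node('/', Node('+', Node('a'), bc1),
                Node('-', Node('a'), bc2))
tree = Node('=', Node('x'), rhs)

print(' '.join(postorder(tree)))  # x a b c * + a b c * - / =
```

The postorder output is exactly the postfix notation of the assignment, showing how the two intermediate representations are related.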
Advantages of Intermediate Code Generation:
• Easier to implement: Intermediate code generation can simplify
the code generation process by reducing the complexity of the
input code, making it easier to implement.
• Facilitates code optimization: Intermediate code generation can
enable the use of various code optimization techniques, leading to
improved performance and efficiency of the generated code.
• Platform independence: Intermediate code is platform-
independent, meaning that it can be translated into machine code
or bytecode for any platform.
• Code reuse: Intermediate code can be reused in the future to
generate code for other platforms or languages.
• Easier debugging: Intermediate code can be easier to debug than
machine code or bytecode, as it is closer to the original source
code.
Disadvantages of Intermediate Code Generation:
• Increased compilation time: Intermediate code generation can
significantly increase the compilation time, making it less suitable
for real-time or time-critical applications.
• Additional memory usage: Intermediate code generation
requires additional memory to store the intermediate
representation, which can be a concern for memory-limited
systems.
• Increased complexity: Intermediate code generation can increase
the complexity of the compiler design, making it harder to
implement and maintain.
• Reduced performance: The process of generating intermediate
code can result in code that executes slower than code generated
directly from the source code.
Symbol Table in Compiler
Definition
The symbol table is defined as the set of Name and Value pairs.
• Symbol Table is an important data structure created and
maintained by the compiler in order to keep track of semantics of
variables i.e. it stores information about the scope and binding
information about names, information about instances of various
entities such as variable and function names, classes, objects, etc.
• It is built in the lexical and syntax analysis phases.
• The information is collected by the analysis phases of the compiler
and is used by the synthesis phases of the compiler to generate
code.
• It is used by the compiler to achieve compile-time efficiency.
• It is used by various phases of the compiler as follows:-
1.Lexical Analysis: Creates new entries in the table, for
example entries for tokens.
2.Syntax Analysis: Adds information regarding attribute type,
scope, dimension, line of reference, use, etc in the table.
3.Semantic Analysis: Uses available information in the table to
check for semantics i.e. to verify that expressions and assignments
are semantically correct(type checking) and update it accordingly.
4.Intermediate Code generation: Refers to the symbol table to know
how much run-time storage is allocated and of what type; the table also
helps in adding temporary variable information.
5.Code Optimization: Uses information present in the symbol table
for machine-dependent optimization.
6.Target Code generation: Generates code by using address
information of identifier present in the table.
Symbol Table entries – Each entry in the symbol table is associated
with attributes that support the compiler in different phases.
Use of Symbol Table-
• Symbol tables are typically used in compilers. Basically, a
compiler is a program which scans an application program (for
instance, your C program) and produces machine code.
• During this scan, the compiler stores the identifiers of that application
program in the symbol table. These identifiers are stored in the
form of name, value, address, and type.
• Here the name represents the name of identifier, value represents
the value stored in an identifier, the address represents memory
location of that identifier and type represents the data type of
identifier.
• Thus compiler can keep track of all the identifiers with all the
necessary information.
Items stored in Symbol table:
• Variable names and constants
• Procedure and function names
• Literal constants and strings
• Compiler generated temporaries
• Labels in source languages
Information used by the compiler from Symbol table:
• Data type and name
• Declaring procedures
• Offset in storage
• If a structure or record, a pointer to the structure table.
• For parameters, whether parameter passing by value or by reference
• Number and type of arguments passed to function
• Base Address
Operations on Symbol Table – The basic operations defined on a
symbol table include:
• 1. Insertion of an item in the symbol table.
• 2. Deletion of any item from the symbol table.
• 3. Searching of desired item from symbol table.
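These three operations can be sketched with a minimal dictionary-backed table; the class and field names below are illustrative, not a prescribed implementation.

```python
# Minimal symbol table supporting the three basic operations.
class SymbolTable:
    def __init__(self):
        self.entries = {}

    def insert(self, name, info):
        if name in self.entries:                      # duplicate declaration
            raise KeyError(f"multiply defined name: {name}")
        self.entries[name] = info

    def delete(self, name):
        self.entries.pop(name, None)                  # remove if present

    def lookup(self, name):
        if name not in self.entries:                  # undeclared identifier
            raise KeyError(f"use of undeclared name: {name}")
        return self.entries[name]

st = SymbolTable()
st.insert('count', {'type': 'int', 'address': 0x1000})
print(st.lookup('count')['type'])   # int
st.delete('count')
```

The `info` value stands in for the attributes discussed earlier (type, address, scope, and so on).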
Implementation of Symbol Table –
The following data structures are commonly used for implementing a symbol table:
1. List –
• We use a single array (or equivalently several arrays) to store names and their
associated information. New names are added to the list in the order in which
they are encountered. The end of the array is marked by the pointer available,
which points to where the next symbol-table entry will go. The search for a name
proceeds backwards from the end of the array to the beginning; when the name
is located, the associated information is found in the words that follow it.
• In this method, an array is used to store names and associated
information.
• A pointer “available” is maintained at the end of the stored records,
and new names are added in the order in which they arrive.
• To search for a name we scan the list up to the available pointer;
if the name is not found, we report the error “use of
undeclared name”.
• While inserting a new name we must ensure that it is not already
present; otherwise we report the error “multiply defined name”.
• Insertion is fast O(1), but lookup is slow for large tables – O(n) on
average
• The advantage is that it takes a minimum amount of space.
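A rough sketch of this array-based scheme, with O(1) insertion at the available end and backward O(n) lookup; the variable and function names are illustrative.

```python
# List (array) implementation: names are appended in arrival order
# and lookup scans backwards, so the most recent entry for a name
# is found first.
table = []                      # list of (name, info) records

def insert(name, info):
    table.append((name, info))  # O(1) insertion at the 'available' end

def lookup(name):
    for n, info in reversed(table):   # backward linear search, O(n)
        if n == name:
            return info
    raise KeyError(f"use of undeclared name: {name}")

insert('x', 'int')
insert('y', 'float')
insert('x', 'char')             # a later redeclaration of x
print(lookup('x'))              # char (most recent entry wins)
```

Searching backwards means a redeclared name shadows earlier entries, which is why the classic description has the search start at the end of the array.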
2. Linked List –
1. This implementation uses a linked list; a link field is added to each
record.
2. Searching for names is done in the order given by the link fields.
3. A pointer “First” is maintained to point to the first record of the symbol
table.
4. Insertion is fast, O(1), but lookup is slow for large tables – O(n) on average.
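A minimal linked-list sketch under the same assumptions; the record fields and names are illustrative.

```python
# Linked-list implementation: each record carries a link field and
# 'first' points to the head of the chain.
class Record:
    def __init__(self, name, info, link=None):
        self.name, self.info, self.link = name, info, link

first = None

def insert(name, info):
    global first
    first = Record(name, info, first)   # O(1): new record becomes the head

def lookup(name):
    rec = first
    while rec is not None:              # O(n): follow the link fields in order
        if rec.name == name:
            return rec.info
        rec = rec.link
    raise KeyError(f"use of undeclared name: {name}")

insert('a', 'int')
insert('b', 'float')
print(lookup('a'))   # int
```

Inserting at the head gives the same shadowing behaviour as the backward array search: the newest declaration of a name is found first.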
3. Hash Table –
1. In the hashing scheme, two tables are maintained – a hash table and a symbol
table; this is the most commonly used method of implementing symbol
tables.
2.A hash table is an array with an index range: 0 to table size – 1. These
entries are pointers pointing to the names of the symbol table.
3.To search for a name we use a hash function that will result in an integer
between 0 to table size – 1.
4.Insertion and lookup can be made very fast – O(1).
5.The advantage is that quick search is possible; the disadvantage is the extra
space required for the hash table and the need to handle collisions.
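A hedged sketch of the hashing scheme using separate chaining; the table size and hash function below are arbitrary illustrative choices.

```python
# Hash table implementation: a fixed-size array of buckets whose
# entries point to symbol-table records.
TABLE_SIZE = 211                      # a prime; index range 0 .. 210

buckets = [[] for _ in range(TABLE_SIZE)]

def h(name):
    """Toy hash function: sum of character codes, mod table size."""
    return sum(ord(c) for c in name) % TABLE_SIZE

def insert(name, info):
    buckets[h(name)].append((name, info))   # expected O(1)

def lookup(name):
    for n, info in buckets[h(name)]:        # expected O(1) with few collisions
        if n == name:
            return info
    raise KeyError(f"use of undeclared name: {name}")

insert('total', 'int')
print(lookup('total'))   # int
```

Real compilers use stronger hash functions, but the structure — hash to a bucket, then scan the short chain — is the same.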
4. Binary Search Tree – Another approach to implementing a symbol table is to use a binary search tree,
i.e. we add two link fields, left and right child, to each record.
• Names are inserted so that the tree always satisfies the binary search tree
property.
• Insertion and lookup are O(log2 n) on average.
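A small BST sketch, assuming names are ordered alphabetically; the class and function names are illustrative.

```python
# Binary search tree implementation: each record has left/right links
# and names are kept in alphabetical order.
class BSTNode:
    def __init__(self, name, info):
        self.name, self.info = name, info
        self.left = self.right = None

def insert(root, name, info):
    """Insert preserving the BST property; returns the (new) root."""
    if root is None:
        return BSTNode(name, info)
    if name < root.name:
        root.left = insert(root.left, name, info)
    elif name > root.name:
        root.right = insert(root.right, name, info)
    return root

def lookup(root, name):
    while root is not None:               # O(log n) on average
        if name == root.name:
            return root.info
        root = root.left if name < root.name else root.right
    raise KeyError(f"use of undeclared name: {name}")

root = None
for nm, ty in [('m', 'int'), ('c', 'float'), ('t', 'char')]:
    root = insert(root, nm, ty)
print(lookup(root, 'c'))   # float
```

The average O(log n) cost assumes the tree stays reasonably balanced; a degenerate insertion order degrades it to O(n).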
Advantages of Symbol Table
1.Improved efficiency: The efficiency of a program can be increased by using symbol tables, which give quick and simple
access to crucial data such as variable and function names, data types, and memory locations.
2.Better code structure: Symbol tables can be used to organize and simplify code, making it simpler
to comprehend and making problems easier to discover and correct.
3.Faster code execution: By offering quick access to information like memory addresses, symbol tables
can be utilized to optimize code execution by lowering the number of memory accesses required
during execution.
4.Improved portability: Symbol tables can be used to increase the portability of code by offering a standardized method of
storing and retrieving data, which can make it simpler to migrate code between different systems or
programming languages.
5.Improved code reuse: By offering a standardized method of storing and accessing information,
symbol tables can be utilized to increase the reuse of code across multiple projects.
6.Improved debugging: Symbol tables facilitate easy access to and examination of a program’s state during
execution, enhancing debugging by making it simpler to identify and correct mistakes.
Disadvantages of Symbol Table
1.Increased memory consumption: Systems with low memory resources
may suffer from symbol tables’ high memory requirements.
2.Increased processing time: The creation and processing of symbol
tables can take a long time, which can be problematic in systems with
constrained processing power.
3.Complexity: Developers who are not familiar with compiler design may
find symbol tables difficult to construct and maintain.
4.Limited scalability: Symbol tables may not be appropriate for large-
scale projects or applications that require the management of
enormous amounts of data.
5.Upkeep: Maintaining and updating symbol tables on a regular basis can
be time- and resource-consuming.
6.Limited functionality: It’s possible that symbol tables don’t offer all the
features a developer needs, and therefore more tools or libraries will be
needed to round out their capabilities.
Applications of Symbol Table
1.Resolution of variable and function names: Symbol tables are used to
identify the data types and memory locations of variables and functions
as well as to resolve their names.
2.Resolution of scope issues: To resolve naming conflicts and ascertain the
range of variables and functions, symbol tables are utilized.
3.Code optimization: Symbol tables, which offer quick access to information such as memory
locations, are used to optimize code execution.
4.Code generation: By giving details such as memory locations and data types,
symbol tables are utilized to create machine code from source code.
5.Error checking and code debugging: By supplying details about the
status of a program during execution, symbol tables are used to check for
faults and debug code.
6.Code organization and documentation: By supplying details about a
program’s structure, symbol tables can be used to organize code and
make it simpler to understand.
