This document discusses different approaches to implementing scope rules in programming languages. It begins by defining lexical/static scope and dynamic scope. It then discusses how block structure and nested procedures can be implemented using stacks and access links. Specifically, it describes how storage is allocated for local and non-local variables under lexical and dynamic scope models. The key implementation techniques discussed are stacks, access links, displays, deep access, and shallow access.
2. INTRODUCTION
• The scope rules of a language determine the treatment of references to non-local names.
• Common rule: lexical (static) scope determines the declaration that applies to a name by examining the program text alone, e.g., Pascal, Ada, C.
• Alternative rule: dynamic scope determines the declaration applicable to a name at run time by considering the current activations, e.g., Lisp, APL, Snobol.
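C itself is lexically scoped. A minimal sketch (hypothetical names x, f, g) contrasting what the two rules would bind:

    #include <stdio.h>

    int x = 10;                 /* the declaration in scope at f's definition */

    void f(void) { printf("%d\n", x); }

    void g(void)
    {
        int x = 20;             /* the caller's local x */
        (void)x;                /* silence the unused-variable warning */
        f();
    }

    int main(void)
    {
        g();                    /* lexical scope (what C does): prints 10.
                                   Under dynamic scope, f would see the most
                                   recent activation's x and print 20. */
        return 0;
    }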
3. BLOCKS
• A block is a statement containing its own local data declarations.
• In C, a block has the syntax { declarations statements }.
• In Algol, begin and end are used as the block delimiters.
• Nesting property of block structure: it is not possible for two blocks B1 and B2 to overlap in such a way that B1 begins first, then B2 begins, but B1 ends before B2.
4. • The scope of a declaration in a block-structured language is given by the most closely nested rule:
1. The scope of a declaration in a block B includes B.
2. If a name x is not declared in a block B, then an occurrence of x in B is in the scope of a declaration of x in an enclosing block B' such that
   i) B' has a declaration of x, and
   ii) B' is more closely nested around B than any other block with a declaration of x.
• The scope of the declaration of b in B0 does not include B1, because b is redeclared in B1; this gap, written B0 − B1, is called a hole in the scope of the declaration.
5. STACK IMPLEMENTATION
• Block structure can be implemented using stack allocation.
• Since the scope of a declaration does not extend outside the block in which it appears, the space for a declared name can be allocated when the block is entered and deallocated when control leaves the block.
• This view treats a block as a parameterless procedure, called only from the point just before the block and returning only to the point just after the block.
6. #include <stdio.h>

int main()
{
    int a = 0;                       /* block B0 begins */
    int b = 0;
    {                                /* block B1 begins */
        int b = 1;
        {                            /* block B2 begins */
            int a = 2;
            printf("%d %d\n", a, b); /* prints 2 1 */
        }                            /* B2 ends */
        {                            /* block B3 begins */
            int b = 3;
            printf("%d %d\n", a, b); /* prints 0 3 */
        }                            /* B3 ends */
        printf("%d %d\n", a, b);     /* prints 0 1 */
    }                                /* B1 ends */
    printf("%d %d\n", a, b);         /* prints 0 0 */
    return 0;
}
7. ALTERNATIVE IMPLEMENTATION
• An alternative is to allocate storage for a complete procedure body at one time.
• If there are blocks within the procedure, then allowance is made for the storage needed for the declarations within those blocks.
• For block B0 above, we can allocate storage as in the figure. Subscripts on the locals a and b identify the blocks in which they are declared. Note that a2 and b3 may be assigned the same storage because they are in blocks that are not alive at the same time.
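The figure is not reproduced in this text; a possible layout, with hypothetical 4-byte slots, is:

    a0       offset 0
    b0       offset 4
    b1       offset 8
    a2, b3   offset 12   (B2 and B3 are never active at the same
                          time, so a2 and b3 can share one slot)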
8. • In the absence of variable-length data, the maximum amount of storage needed during any execution of a block can be determined at compile time; see the sketch below.
• In making this determination, we conservatively assume that all control paths in the program can indeed be taken.
• That is, we assume that both the then- and else-parts of a conditional statement can be executed, and that all statements within a while loop can be reached.
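This compile-time computation can be phrased recursively: a block needs space for its own locals plus the worst case among its immediately nested blocks, since sibling blocks are never live together and may overlay each other (as a2 and b3 do above). A sketch with a hypothetical Block type:

    #include <stddef.h>

    typedef struct Block {
        size_t own_locals;          /* bytes for this block's declarations */
        struct Block **children;    /* immediately nested blocks */
        size_t n_children;
    } Block;

    /* Siblings can overlay each other, so take the max over the
       children, not the sum. */
    size_t block_storage(const Block *b)
    {
        size_t worst_child = 0;
        for (size_t i = 0; i < b->n_children; i++) {
            size_t need = block_storage(b->children[i]);
            if (need > worst_child)
                worst_child = need;
        }
        return b->own_locals + worst_child;
    }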
9. Lexical Scope Without Nested Procedures
• In such languages (e.g., C), a procedure definition cannot appear within another.
• If there is a non-local reference to a name a in some function, then it must be declared outside any function. The scope of a declaration outside a function consists of the function bodies that follow the declaration, with holes if the name is redeclared within a function.
• In the absence of nested procedures, the stack-allocation strategy for local names can be used as described earlier.
10. • Storage for all names declared outside any procedure can be allocated statically.
• The position of this storage is known at compile time, so if a name is non-local in some procedure body, we simply use the statically determined address.
• Any other name must be a local of the activation at the top of the stack, accessible through the top pointer.
• An important benefit of static allocation for non-locals is that declared procedures can freely be passed as parameters and returned as results.
11. Lexical Scope with Nested Procedures
• A non-local occurrence of a name a in a Pascal procedure is in the scope of the most closely nested declaration of a in the static program text.
• The nesting of the procedure definitions in the Pascal quicksort program is indicated by the following indentation:
  sort
    readarray
    exchange
    quicksort
      partition
12. Nesting Depth
• The notion of the nesting depth of a procedure is used below to implement lexical scope.
• Let the nesting depth of the main program be 1; we add 1 to the nesting depth as we go from an enclosing to an enclosed procedure.
• For the quicksort nesting above, sort has depth 1; readarray, exchange, and quicksort have depth 2; and partition has depth 3.
13. Access Links
• A direct implementation of lexical scope for nested procedures is obtained by adding a pointer called an access link to each activation record.
• If procedure p is nested immediately within q in the source text, then the access link in an activation record for p points to the access link in the record for the most recent activation of q.
14. (Figure: activation records on the stack connected by access links; not reproduced in this text.)
15. Suppose procedure p at nesting depth np refers to a non-local a with nesting depth na <= np. The storage for a can be found as follows.
1. When control is in p, an activation record for p is at the top of the stack. Follow np − na access links from the record at the top of the stack; the value of np − na can be precomputed at compile time. Since the access link in one record points to the access link in another, a link can be followed with a single indirection operation.
16. 2. After following np − na links, we reach an activation record for the procedure that a is local to. As discussed in the last section, its storage is at a fixed offset relative to a position in the record; in particular, the offset can be relative to the access link.
Hence, the address of non-local a in procedure p is given by the following pair, computed at compile time and stored in the symbol table:
   (np − na, offset within the activation record containing a)
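A minimal sketch of this address computation, assuming a hypothetical activation-record type AR whose access_link field points at the enclosing procedure's record and whose locals sit at fixed offsets:

    #include <stddef.h>

    typedef struct AR {
        struct AR *access_link;  /* record of the enclosing procedure */
        char locals[];           /* locals stored at fixed offsets */
    } AR;

    /* Address of non-local a from the compile-time pair (np - na, offset):
       follow np - na access links from the top record, then add the offset. */
    void *nonlocal_address(AR *top, int np_minus_na, size_t offset)
    {
        AR *ar = top;
        for (int i = 0; i < np_minus_na; i++)
            ar = ar->access_link;        /* one indirection per link */
        return ar->locals + offset;
    }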
17. Suppose procedure p at nesting depth np calls procedure x at nesting depth nx. The code for setting up the access link in the called procedure depends on whether or not the called procedure is nested within the caller.
1. Case np < nx. Since the called procedure x is nested more deeply than p, it must be declared within p, or it would not be accessible to p. In this case nx = np + 1, and the access link for x points into the caller's own activation record.
18. 2. Case np >= nx. From the scope rules, the enclosing procedures at nesting depths 1, 2, ..., nx − 1 of the called and calling procedures must be the same. Following np − nx + 1 access links from the caller, we reach the most recent activation record of the procedure that most closely statically encloses both the called and calling procedures. The access link reached is the one to which the access link in the called procedure must point. Again, np − nx + 1 can be computed at compile time.
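A sketch covering both cases, reusing the hypothetical AR type from the sketch above (links are represented simply as pointers to records):

    /* Compute the access link for a call from p (depth np) to x (depth nx). */
    AR *callee_access_link(AR *caller, int np, int nx)
    {
        if (np < nx)                 /* case 1: x is nested directly inside p */
            return caller;
        AR *ar = caller;             /* case 2: np >= nx */
        for (int i = 0; i < np - nx + 1; i++)
            ar = ar->access_link;    /* climb np - nx + 1 links */
        return ar;                   /* record enclosing both p and x */
    }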
19. Displays
• Faster access to non-locals than with access links can be obtained by using an array d of pointers to activation records, called a display.
• We maintain the display so that storage for a non-local a at nesting depth i is in the activation record pointed to by display element d[i].
20. • Suppose control is in an activation of a procedure p at nesting depth j.
• Then the first j − 1 elements of the display point to the most recent activations of the procedures that lexically enclose p, and d[j] points to the activation of p.
• Using a display is generally faster than following access links, because the activation record holding a non-local is found by accessing an element of d and then following just one pointer.
21. When a new activation record for a procedure at nesting depth i is set up, we
1. save the value of d[i] in the new activation record, and
2. set d[i] to the new activation record.
Just before the activation ends, d[i] is reset to the saved value.
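A sketch of this save/restore protocol, assuming a fixed maximum nesting depth and a hypothetical saved_display field in each record:

    #define MAX_DEPTH 32

    typedef struct DAR {             /* activation record, display version */
        struct DAR *saved_display;   /* the previous d[i], saved on entry */
    } DAR;

    static DAR *d[MAX_DEPTH];        /* the display */

    void on_entry(DAR *ar, int i)    /* entering a procedure at depth i */
    {
        ar->saved_display = d[i];    /* 1. save the old d[i] in the record */
        d[i] = ar;                   /* 2. point d[i] at the new record */
    }

    void on_exit(DAR *ar, int i)     /* just before the activation ends */
    {
        d[i] = ar->saved_display;    /* reset d[i] to the saved value */
    }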
22. (Figure: the display d alongside the stack of activation records; not reproduced in this text.)
23. DYNAMIC SCOPE
• Under dynamic scope, a new activation inherits the existing bindings of non-local names to storage.
• A non-local name a in the called activation refers to the same storage that it did in the calling activation.
• New bindings are set up for the local names of the called procedure; these names refer to storage in the new activation record.
24. There are two approaches to implementing dynamic scope:
1. Deep access. Conceptually, dynamic scope results if access links point to the same activation records that control links do. A simple implementation is to dispense with access links and use the control link to search down into the stack, looking for the first activation record containing storage for the non-local name. The term deep access comes from the fact that the search may go "deep" into the stack; the depth to which it goes depends on the input to the program and cannot be determined at compile time.
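A sketch of deep access, with a hypothetical per-record table of names standing in for the record's storage map:

    #include <stddef.h>
    #include <string.h>

    typedef struct DynAR {
        struct DynAR *control_link;  /* the caller's activation record */
        const char **names;          /* names with storage in this record */
        void **slots;                /* the corresponding storage cells */
        size_t count;
    } DynAR;

    /* Search down the control links for the first record binding the name. */
    void *deep_access(DynAR *top, const char *name)
    {
        for (DynAR *ar = top; ar != NULL; ar = ar->control_link)
            for (size_t i = 0; i < ar->count; i++)
                if (strcmp(ar->names[i], name) == 0)
                    return ar->slots[i];
        return NULL;                 /* name unbound: a run-time error */
    }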
25. 2. Shallow access. Here the idea is to hold the current value of each name in statically allocated storage. When a new activation of a procedure p occurs, a local name n in p takes over the storage statically allocated for n. The previous value of n is saved in the activation record for p and must be restored when the activation of p ends.
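A sketch of shallow access for a single name n, with hypothetical entry/exit helpers; the saved slot would live in p's activation record:

    static int n_cell;           /* the one statically allocated cell for n */

    void p_entry(int *saved_n)   /* on entering an activation of p */
    {
        *saved_n = n_cell;       /* save the previous value of n */
        n_cell = 0;              /* n_cell now holds p's local n; non-local
                                    references to n read this cell directly */
    }

    void p_exit(int saved_n)     /* when the activation of p ends */
    {
        n_cell = saved_n;        /* restore the previous binding */
    }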