1. Static allocation assigns storage locations to data objects at compile time. Stack allocation uses a stack to dynamically allocate memory for procedure activations and local variables at runtime. Heap allocation allocates memory for dynamic data structures from a heap region at runtime.
2. Access to non-local names can use lexical scoping by following access links or displays, or dynamic scoping by searching the stack.
3. Blocks can be nested and treated as parameterless procedures, with memory allocated on the stack when entered and deallocated on exit.
4. The activation record stores information for a procedure execution, including local data, saved registers, parameters, return values, and links to enclosing activations.
The document discusses run-time environments in compiler design. It provides details about storage organization and allocation strategies at run-time. Storage is allocated either statically at compile-time, dynamically from the heap, or from the stack. The stack is used to store procedure activations by pushing activation records when procedures are called and popping them on return. Activation records contain information for each procedure call like local variables, parameters, and return values.
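The push-on-call, pop-on-return discipline described above can be sketched in Python. The field names (`proc_name`, `params`, `locals`, `return_value`) and the dictionary-based record are illustrative; a real compiler lays activation records out as raw words in the machine stack:

```python
# Sketch of stack allocation for procedure activations (hypothetical
# ActivationRecord layout; real compilers use raw stack memory).

class ActivationRecord:
    def __init__(self, proc_name, params):
        self.proc_name = proc_name      # which procedure this call belongs to
        self.params = params            # actual parameters
        self.locals = {}                # local variables, filled during execution
        self.return_value = None        # slot for the return value

stack = []

def call(proc_name, params):
    """Push an activation record when a procedure is called."""
    stack.append(ActivationRecord(proc_name, params))
    return stack[-1]

def ret(value):
    """Pop the record on return and hand the value back to the caller."""
    record = stack.pop()
    record.return_value = value
    return value

# main calls square(5): push a record, compute, pop on return
rec = call("square", {"n": 5})
rec.locals["result"] = rec.params["n"] ** 2
print(ret(rec.locals["result"]))  # prints 25; the stack is empty again
```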
This document discusses different approaches to implementing scope rules in programming languages. It begins by defining lexical/static scope and dynamic scope. It then discusses how block structure and nested procedures can be implemented using stacks and access links. Specifically, it describes how storage is allocated for local and non-local variables under lexical and dynamic scope models. The key implementation techniques discussed are stacks, access links, displays, deep access, and shallow access.
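Access links, the first of the techniques listed, can be sketched as a chain of frames: each activation record carries a link to the frame of the lexically enclosing procedure, and a non-local name is resolved by walking that chain outward. The `Frame` class and the variable names are illustrative:

```python
# Sketch: resolving non-local names by following access links under
# lexical scoping (frame layout and names are made up for illustration).

class Frame:
    def __init__(self, locals_, access_link=None):
        self.locals = locals_            # names defined in this activation
        self.access_link = access_link   # frame of the enclosing procedure

def lookup(frame, name):
    """Follow access links outward until the name is found."""
    while frame is not None:
        if name in frame.locals:
            return frame.locals[name]
        frame = frame.access_link
    raise NameError(name)

outer = Frame({"x": 10})                       # enclosing procedure
inner = Frame({"y": 2}, access_link=outer)     # nested procedure's activation
print(lookup(inner, "x"))  # not local, found one link out -> 10
```

Under dynamic scoping the chain followed would instead be the chain of callers (the control links), which is why the same lookup loop can yield a different binding.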
(Ref : Computer System Architecture by Morris Mano 3rd edition) : Microprogrammed Control unit, micro instructions, micro operations, symbolic and binary microprogram.
The document discusses heap memory management. It describes the heap as the portion of memory used for indefinitely stored data. A memory manager subsystem allocates and deallocates space in the heap. It keeps track of free space and serves as the interface between programs and the operating system. When allocating memory, the manager either uses available contiguous space or increases heap size from the OS. Deallocated space is returned to the free pool but memory is not returned to the OS when heap usage drops.
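The free-space tracking described above can be sketched as a first-fit free-list allocator. This is a toy model under assumed simplifications (no coalescing of adjacent holes, no real OS interaction); production memory managers are far more elaborate:

```python
# Toy free-list heap manager: first-fit allocation, blocks returned to a
# free pool on deallocation (never back to the OS, as noted above).

class HeapManager:
    def __init__(self, size):
        self.free = [(0, size)]           # list of (offset, length) holes

    def allocate(self, n):
        """First-fit: take the first hole large enough, split off the rest."""
        for i, (off, length) in enumerate(self.free):
            if length >= n:
                if length == n:
                    self.free.pop(i)
                else:
                    self.free[i] = (off + n, length - n)
                return off
        return None                        # here a real manager asks the OS to grow the heap

    def deallocate(self, off, n):
        """Return the block to the free pool (no coalescing in this sketch)."""
        self.free.append((off, n))

h = HeapManager(100)
a = h.allocate(30)    # -> offset 0
b = h.allocate(20)    # -> offset 30
h.deallocate(a, 30)   # space rejoins the free pool, heap size unchanged
```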
The document discusses the concept of virtual memory. Virtual memory allows a program to access more memory than what is physically available in RAM by storing unused portions of the program on disk. When a program requests data that is not currently in RAM, it triggers a page fault that causes the needed page to be swapped from disk into RAM. This allows the illusion of more memory than physically available through swapping pages between RAM and disk as needed by the program during execution.
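The fault-and-swap cycle can be sketched as a small simulation: RAM holds a fixed number of frames, and touching a non-resident page triggers a "fault" that loads it from a stand-in backing store. The frame count and eviction rule are arbitrary choices for illustration:

```python
# Minimal demand-paging simulation: access a page; if it is not resident,
# service a "page fault" by loading it from backing store (a dict here).

RAM_FRAMES = 3
ram = {}                                      # resident pages, at most RAM_FRAMES
disk = {n: f"page-{n}" for n in range(10)}    # backing store on "disk"
faults = 0

def access(page):
    global faults
    if page not in ram:                # page fault
        faults += 1
        if len(ram) >= RAM_FRAMES:     # evict the oldest resident page (FIFO)
            ram.pop(next(iter(ram)))
        ram[page] = disk[page]         # swap the needed page in from disk
    return ram[page]

for p in [0, 1, 2, 0, 3]:
    access(p)
print(faults)   # pages 0,1,2 fault; 0 hits; 3 evicts 0 and faults -> 4
```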
The document discusses various indexing techniques used to improve data access performance in databases, including ordered indices like B-trees and B+-trees, as well as hashing techniques. It covers the basic concepts, data structures, operations, advantages and disadvantages of each approach. B-trees and B+-trees store index entries in sorted order to support range queries efficiently, while hashing distributes entries uniformly across buckets using a hash function but does not support range queries.
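The equality-versus-range trade-off can be shown side by side: a hash index buckets entries by a hash function and answers only equality lookups, while a sorted key list (standing in for a B+-tree leaf chain) answers range queries by binary search. Bucket count and keys are illustrative:

```python
# Contrast sketch: hash index (equality only) vs ordered index (ranges).
import bisect

# hash index: key -> bucket chosen by a hash function
buckets = [[] for _ in range(4)]
def hash_insert(key, rid):
    buckets[hash(key) % 4].append((key, rid))
def hash_lookup(key):
    return [rid for k, rid in buckets[hash(key) % 4] if k == key]

# ordered index: keys kept sorted, as in B+-tree leaves
sorted_keys = []
def ordered_insert(key):
    bisect.insort(sorted_keys, key)
def range_query(lo, hi):
    i = bisect.bisect_left(sorted_keys, lo)
    j = bisect.bisect_right(sorted_keys, hi)
    return sorted_keys[i:j]          # contiguous run of keys in [lo, hi]

for k in [15, 3, 9, 27]:
    hash_insert(k, f"rid{k}")
    ordered_insert(k)
print(hash_lookup(9))      # ['rid9']  -- O(1) expected
print(range_query(5, 20))  # [9, 15]  -- impossible to answer from the buckets
```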
Virtual Memory
• Copy-on-Write
• Page Replacement
• Allocation of Frames
• Thrashing
• Operating-System Examples
Background
Page Table When Some Pages Are Not in Main Memory
Steps in Handling a Page Fault
The document discusses how compilers support program execution through run-time environments. It covers:
1) The compiler cooperates with the OS and system software through a run-time environment to implement language abstractions during execution.
2) The run-time environment handles storage layout/allocation, variable access, procedure linkage, parameter passing and interfacing with the OS.
3) Memory is typically divided into code, static storage, heap and stack areas, with the stack and heap growing towards opposite ends of memory dynamically during execution.
This document provides an overview of mass storage structures and operating system services for mass storage. It discusses disk structure, disk scheduling algorithms, swap space management, RAID structures, and stable storage implementation. The document also describes the physical structure of secondary and tertiary storage devices and their performance characteristics.
Control Units: Microprogrammed and Hardwired
The document discusses control units in CPUs. There are two main methods for implementing control units: hardwired and microprogrammed. A hardwired control unit generates control signals through circuitry using logic gates, while a microprogrammed control unit generates control signals by executing a stored microprogram. Overall, hardwired control units are faster but less flexible, while microprogrammed control units are slower but more flexible and modifiable.
The document discusses memory management techniques used in operating systems. It describes logical vs physical addresses and how relocation registers map logical addresses to physical addresses. It covers contiguous and non-contiguous storage allocation, including paging and segmentation. Paging divides memory into fixed-size frames and pages, using a page table and translation lookaside buffer (TLB) for address translation. Segmentation divides memory into variable-sized segments based on a program's logical structure. Virtual memory and demand paging are also covered, along with page replacement algorithms like FIFO, LRU and optimal replacement.
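The paged address translation described above can be sketched in a few lines: a logical address is split into a page number and an offset, the page table maps the page to a frame, and the physical address is rebuilt from the frame. Page size and table contents here are made up for illustration:

```python
# Sketch of logical-to-physical address translation with a page table.

PAGE_SIZE = 1024                  # bytes per page (and per frame)
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (illustrative)

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)   # split the address
    frame = page_table[page]      # in hardware, a TLB caches this lookup
    return frame * PAGE_SIZE + offset

print(translate(1500))  # page 1, offset 476 -> frame 2 -> 2524
```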
RAM is the main memory that allows bidirectional transfer of data via its data bus. It has a capacity of 128 bytes addressed by a 7-bit address. ROM can only read and can store more data than RAM in the same size chip. A memory address map assigns addresses to RAM and ROM chips. RAM uses address lines 1-7 and is selected by address lines 8-9 through a decoder. ROM uses address lines 1-9 and is selected by address line 10.
This Presentation is for Memory Management in Operating System (OS). This Presentation describes the basic need for the Memory Management in our OS and its various Techniques like Swapping, Fragmentation, Paging and Segmentation.
The document discusses issues in code generation by a compiler. It defines code generation as converting an intermediate representation into executable machine code. The code generator accesses symbol tables and performs multiple passes over intermediate forms. Key issues addressed include the input to the code generator, generating code for the target machine, memory management, instruction selection, register allocation, and optimization techniques like reordering independent instructions to improve efficiency.
Memory is encoded, stored, and retrieved through distinct processes; encoding is how external information reaches our senses. Memory allocation involves setting aside space, such as allocating hard drive space for an application, and places blocks of information in memory systems. To allocate memory, the memory management system tracks available memory and allocates only what is needed, keeping the rest available. If insufficient memory exists, blocks may be swapped out. Both static and dynamic allocation methods exist; dynamic allocation may be nonpreemptive or preemptive. Nonpreemptive allocation searches memory for available space for a transferring block, while preemptive allocation uses memory more efficiently through compaction. Different memory types store executable code, variables, and dynamically sized structures, with heap memory holding the dynamically sized structures.
Associative memory, also known as content-addressable memory (CAM), allows data to be searched based on its content rather than its location. It consists of a memory array, argument register (containing the search word), key register (specifying which bits to compare), and match register (indicating matching locations). All comparisons are done in parallel. Associative memory provides faster searching than conventional memory but is more expensive due to the additional comparison circuitry in each cell. It is well-suited for applications requiring very fast searching such as databases and virtual memory address translation.
This slide describes various techniques related to parallel processing (vector processing and array processors), the arithmetic pipeline, the instruction pipeline, SIMD processors, and the attached array processor.
This document summarizes and compares paging and segmentation, two common memory management techniques. Paging divides physical memory into fixed-size frames and logical memory into same-sized pages. It maps pages to frames using a page table. Segmentation divides logical memory into variable-sized segments and uses a segment table to map segment numbers to physical addresses. Paging avoids external fragmentation but can cause internal fragmentation, while segmentation avoids internal fragmentation but can cause external fragmentation. Both approaches separate logical and physical address spaces but represent different models of how a process views memory.
This document discusses stack organization and operations. A stack is a last-in, first-out data structure where items added last are retrieved first. It uses a stack pointer to track the top of the stack. Common operations are push, which adds an item to the top of the stack, and pop, which removes an item from the top. Stacks can be implemented with registers, using a stack pointer and data register. Reverse Polish notation places operators after operands, making it suitable for stack-based expression evaluation.
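The suitability of reverse Polish notation for stack evaluation can be shown directly: operands are pushed, and each operator pops its two operands and pushes the result. This is a minimal sketch with four binary operators:

```python
# Stack-based evaluation of reverse Polish notation.

def eval_rpn(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # top of stack is the right operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok)) # operands are pushed as they appear
    return stack.pop()

# (3 + 4) * 2 written in RPN:
print(eval_rpn("3 4 + 2 *".split()))  # 14.0
```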
Memory management is the act of managing computer memory. Its essential requirement is to provide ways to dynamically allocate portions of memory to programs at their request, and to free that memory for reuse when it is no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.
The document discusses memory management in operating systems. It covers key concepts like logical versus physical addresses, binding logical addresses to physical addresses, and different approaches to allocating memory like contiguous allocation. It also discusses dynamic storage allocation using a buddy system to merge adjacent free spaces, as well as compaction techniques to reduce external fragmentation by moving free memory blocks together. Memory management aims to efficiently share physical memory between processes using mechanisms like partitioning memory and enforcing protection boundaries.
The document discusses interrupts in a computer system. It defines an interrupt as a signal that breaks the normal sequence of program execution to handle an event that requires immediate attention, like input from a device. There are two main types of interrupts: hardware interrupts caused by external devices, and software interrupts caused by exceptional conditions in a program like division by zero. The document outlines how interrupts work, including how the processor saves the state of the interrupted program, services the interrupt, and then restores the original program context. It also discusses interrupt priorities and how interrupts can be disabled or deferred based on priority.
Independent processes operate concurrently without affecting each other, while cooperating processes can impact one another. Inter-process communication (IPC) allows processes to share information, improve computation speed, and share resources. The two main types of IPC are shared memory and message passing. Shared memory uses a common memory region for fast communication, while message passing involves establishing communication links and exchanging messages without shared variables. Key considerations for message passing include direct vs indirect communication and synchronous vs asynchronous messaging.
The document discusses the role and process of a lexical analyzer in compiler design. A lexical analyzer groups input characters into lexemes and produces a sequence of tokens as output for the syntactic analyzer. It strips out comments and whitespace, correlates line numbers with errors, and interacts with the symbol table. Lexical analysis improves compiler efficiency, portability, and allows for simpler parser design by separating lexical and syntactic analysis.
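The grouping of characters into lexemes, with whitespace and comments stripped, can be sketched with a regular-expression-driven scanner. The token set below is illustrative, not any particular language's:

```python
# Minimal lexical analyzer sketch: group characters into lexemes and emit
# (token-type, lexeme) pairs; whitespace and comments are skipped.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"[ \t]+|#[^\n]*"),   # whitespace and '#' comments, discarded
]
LEXER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(src):
    tokens = []
    for m in LEXER.finditer(src):
        if m.lastgroup != "SKIP":    # comments/whitespace never reach the parser
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("count = count + 1  # bump"))
# [('ID', 'count'), ('OP', '='), ('ID', 'count'), ('OP', '+'), ('NUMBER', '1')]
```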
The document discusses directed acyclic graphs (DAGs) and how they can be used to represent basic blocks of code. It describes how a DAG is constructed from three-address statements, with nodes labeled by variables, operators, or unique identifiers. Interior nodes represent computed values and leaves represent variables or constants. The DAG construction process creates nodes and links them based on the statements. DAGs are useful for detecting common subexpressions, determining which variables are used in a block, and which statements compute values used outside the block. Array accesses, pointers, and procedure calls require additional rules when constructing DAGs to properly capture dependencies.
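The common-subexpression detection described above falls out of the construction rule "reuse a node if an identical (operator, left, right) triple already exists." A minimal sketch, with node identity reduced to integer ids:

```python
# Sketch of DAG construction for a basic block: identical
# (op, left, right) triples map to one node, exposing common subexpressions.

nodes = {}    # (op, left, right) -> node id
labels = {}   # variable name -> node id currently holding its value

def node(op, left=None, right=None):
    key = (op, left, right)
    if key not in nodes:              # reuse an existing node if one matches
        nodes[key] = len(nodes)
    return nodes[key]

def assign(var, op, left, right):
    # leaves are created for names with no current node label
    labels[var] = node(op, labels.get(left, node(left)),
                           labels.get(right, node(right)))

# a = b + c ; d = b + c  -- the second statement reuses the same DAG node
assign("a", "+", "b", "c")
assign("d", "+", "b", "c")
print(labels["a"] == labels["d"])   # True: common subexpression detected
```

The extra rules the document mentions for arrays, pointers, and calls amount to deliberately *not* reusing nodes whose values those constructs may have invalidated.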
This document summarizes key topics in intermediate code generation discussed in Chapter 6, including:
1) Variants of syntax trees like DAGs are introduced to share common subexpressions. Three-address code is also discussed where each instruction has at most three operands.
2) Type checking and type expressions are covered, along with translating expressions and statements to three-address code. Control flow statements like if/else are also translated using techniques like backpatching.
3) Backpatching allows symbolic labels in conditional jumps to be resolved by a later pass that inserts actual addresses, avoiding an extra pass. This and other control flow translation topics are covered.
The document discusses query processing and optimization. It describes the basic concepts including query processing, query optimization, and the phases of query processing. It also explains relational algebra operations like selection, projection, joins, and additional operations. The document then covers topics like query decomposition, analysis, normalization, simplification, and restructuring during query optimization. It discusses cost estimation and algorithms for implementing relational algebra operations and file organization.
Peephole optimization techniques in compiler design
This document discusses various compiler optimization techniques, focusing on peephole optimization. It defines optimization as transforming code to run faster or use less memory without changing functionality. Optimization can be machine-independent, transforming code regardless of hardware, or machine-dependent, tailored to a specific architecture. Peephole optimization examines small blocks of code and replaces them with faster or smaller equivalents using techniques like constant folding, strength reduction, null sequence elimination, and algebraic laws. Common replacement rules aim to improve performance, reduce memory usage, and decrease code size.
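Two of the replacement rules named above, constant folding and null-sequence elimination, can be sketched as a single pass over a list of instructions. The (dst, op, a, b) instruction format is made up for illustration:

```python
# Toy peephole pass over three-address-style instructions, applying
# constant folding and null-sequence elimination (x = a + 0 -> x = a).

def peephole(code):
    out = []
    for dst, op, a, b in code:
        if op == "+" and isinstance(a, int) and isinstance(b, int):
            out.append((dst, "=", a + b, None))   # constant folding
        elif op == "+" and b == 0:
            out.append((dst, "=", a, None))       # null sequence eliminated
        else:
            out.append((dst, op, a, b))           # no rule matched
    return out

code = [("t1", "+", 2, 3),       # folds to t1 = 5 at compile time
        ("t2", "+", "x", 0),     # null sequence: becomes t2 = x
        ("t3", "*", "x", "y")]   # left unchanged
print(peephole(code))
```

A real peephole optimizer iterates such rules over a sliding window until no rule fires, since one replacement often enables another.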
This document discusses backpatching and syntax-directed translation for boolean expressions and flow-of-control statements. It describes using three functions - Makelist, Marklist, and Backpatch - to generate code with backpatching during a single pass. Boolean expressions are translated by constructing syntax trees and associating semantic actions to record quadruple indices for later backpatching. Flow-of-control statements like IF and WHILE are handled similarly, using marker nonterminals to record quadruple numbers for backpatching statement lists.
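The list-manipulating functions described above can be sketched directly: jumps are emitted with an empty target, their quadruple indices are collected on lists, and `backpatch` fills in the real target once it is known, all within one pass. The quadruple format is illustrative:

```python
# Sketch of backpatching: emit jumps with unknown targets, remember their
# quadruple indices on lists, and fill targets in once labels are resolved.

quads = []   # each quad: [op, arg, target]; target stays None until patched

def emit(op, arg=None):
    quads.append([op, arg, None])
    return len(quads) - 1            # index of the emitted quadruple

def makelist(i):
    return [i]                       # fresh list holding one jump's index

def merge(l1, l2):
    return l1 + l2                   # combine lists awaiting the same target

def backpatch(lst, target):
    for i in lst:
        quads[i][2] = target         # fill in the resolved jump target

# 'if x goto ?' and 'goto ?' are emitted before their targets exist:
truelist  = makelist(emit("if-goto", "x"))
falselist = makelist(emit("goto"))
backpatch(truelist, 100)             # e.g. index where the 'then' code starts
backpatch(falselist, 200)            # e.g. index where the 'else' code starts
print(quads)  # [['if-goto', 'x', 100], ['goto', None, 200]]
```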
Compiler optimization transforms programs to equivalent programs that use fewer resources by applying techniques like:
1) Combining multiple simple operations like increments into a single optimized operation
2) Evaluating constant expressions at compile-time rather than run-time
3) Eliminating redundant computations and storing values in registers rather than memory when possible
4) Optimizing loops, conditionals, and expressions to minimize computations
Compiler optimization aims to minimize program execution time, memory usage, and power consumption by transforming programs in various ways before producing executable code. Some key techniques include instruction combining, constant folding, common subexpression elimination, strength reduction, dead code elimination, and loop optimizations. This improves program efficiency and performance.
The document discusses code generation which involves mapping intermediate code to machine code. It describes three key issues in code generator design: instruction selection which determines the best machine instructions to use, register allocation which assigns variables to registers, and evaluation order which determines the order of instructions. The document outlines three algorithms for code generation that involve partitioning code into basic blocks, performing intra-block optimizations, and code selection and assignment.
The document discusses basic blocks and flow graphs in program representation. It defines basic blocks as straight-line code segments with a single entry and exit point. To construct the representation:
1. The program is partitioned into basic blocks
2. A flow graph is created where basic blocks are nodes and edges show control flow between blocks
The flow graph explicitly represents all execution paths between basic blocks. Loops in the flow graph are identified by having a single loop entry node with all paths from the start going through it, and all nodes inside the loop reaching the entry.
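The partitioning step can be sketched with the usual leader rules: the first instruction is a leader, every jump target is a leader, and every instruction following a jump is a leader; each block then runs from one leader up to the next. The (label, op, target) instruction format is made up for illustration:

```python
# Leader-based partitioning of a list of instructions into basic blocks.

def partition(code):
    # code: list of (label_or_None, op, target_label_or_None)
    leaders = {0}                                   # first instruction
    label_at = {lab: i for i, (lab, _, _) in enumerate(code) if lab}
    for i, (_, op, target) in enumerate(code):
        if op.startswith("goto") or op.startswith("if"):
            leaders.add(label_at[target])           # jump target is a leader
            if i + 1 < len(code):
                leaders.add(i + 1)                  # instruction after a jump
    cuts = sorted(leaders) + [len(code)]
    return [code[cuts[k]:cuts[k + 1]] for k in range(len(cuts) - 1)]

code = [
    (None, "t = 0",     None),
    ("L1", "t = t + 1", None),
    (None, "if t < 10", "L1"),   # conditional jump back to L1 -> a loop
    (None, "return",    None),
]
blocks = partition(code)
print(len(blocks))   # 3 blocks: {0}, {1,2}, {3}
```

The flow-graph edges then connect each block to the blocks its last instruction can transfer control to, which is how the loop on L1 becomes visible.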
The document discusses code generation in compilers. It describes the main tasks of the code generator as instruction selection, register allocation and assignment, and instruction ordering. It then discusses various issues in designing a code generator such as the input and output formats, memory management, different instruction selection and register allocation approaches, and choice of evaluation order. The target machine used is a hypothetical machine with general purpose registers, different addressing modes, and fixed instruction costs. Examples of instruction selection and utilization of addressing modes are provided.
The document discusses runtime environments and memory management techniques for programming languages. It covers stack-based vs dynamic environments, parameter passing mechanisms like pass by value and reference, and garbage collection algorithms. Dynamic memory allocation uses a heap structure with malloc and free functions. Object-oriented languages require special runtime support for objects, inheritance etc. Fully dynamic environments are needed for functional languages that allow nested functions.
This document introduces Edgar Barbosa, a senior security researcher who has worked on hardware-based virtualization rootkits and detecting such rootkits. It then provides an overview of control flow analysis (CFA), a static analysis technique used to analyze program execution paths. CFA involves constructing a control flow graph (CFG) from a disassembled binary. The document discusses basic block identification, CFG properties, and challenges like self-modifying code. It also introduces other CFA concepts like dominator trees, natural loops, strongly connected components, and interval analysis.
The document discusses various types of physical storage media used in databases, including their characteristics and performance measures. It covers volatile storage like cache and main memory, and non-volatile storage like magnetic disks, flash memory, optical disks, and tape. It describes how magnetic disks work and factors that influence disk performance like seek time, rotational latency, and transfer rate. Optimization techniques for disk block access like file organization and write buffering are also summarized.
The document discusses intermediate code generation in compilers. It describes how compilers generate an intermediate representation from the abstract syntax tree that is machine independent and allows for optimizations. One popular intermediate representation is three-address code, where each statement contains at most three operands. This code is then represented using structures like quadruples and triples to store the operator and operands for code generation and rearranging during optimizations. Static single assignment form is also covered, which assigns unique names to variables to facilitate optimizations.
Symbol table design (Compiler Construction)Tech_MX
The document discusses the design of symbol tables used in compilers. It describes symbol tables as data structures that store information about identifiers from the source code, such as their names, attributes, and scopes. The analysis phase of a compiler constructs a symbol table by entering identifiers and attributes. The synthesis phase then uses the symbol table to check semantics and generate code. Symbol tables support nested scopes through a stack structure with a separate table for each scope.
This document discusses code generation in compilers. It covers:
- The code generator takes an intermediate representation and produces target code that is correct and efficient for the target machine.
- Symbol tables are used to track variable semantics, data types, scopes, and storage addresses. Common implementations are unordered lists and ordered linear lists.
- The target machine format can be absolute machine language, relocatable machine language, or assembly language. Memory management involves mapping names to runtime memory addresses.
- Basic blocks, control flow graphs, and structure-preserving transformations like common subexpression elimination are discussed for code optimization.
This document discusses inherited and synthesized attributes in semantic analysis using syntax-directed translation (SDT). It covers:
- Synthesized attributes are defined by semantic rules associated with productions and rely only on child nodes, while inherited attributes rely on parent/sibling nodes.
- Terminals can have synthesized attributes from lexing but not inherited attributes. Nonterminals can have both.
- Annotated parse trees show attribute values, while dependency graphs determine evaluation order.
- S-attributed definitions rely only on synthesized attributes and evaluate bottom-up. L-attributed definitions restrict inherited attributes to avoid cycles.
- SDTs can construct syntax trees during parsing to decouple parsing from translation
The document discusses the role of the parser in compiler design. It explains that the parser takes a stream of tokens from the lexical analyzer and checks if the source program satisfies the rules of the context-free grammar. If so, it creates a parse tree representing the syntactic structure. Parsers are categorized as top-down or bottom-up based on the direction they build the parse tree. The document also covers context-free grammars, derivations, parse trees, ambiguity, and techniques for eliminating left-recursion from grammars.
Memory Allocation & Direct Memory Allocation in C & C++ Language PPTAkhilMishra50
This document provides an overview of memory allocation in C and C++, including static and dynamic allocation. Static allocation assigns memory at compile-time using the stack, while dynamic allocation assigns memory at run-time using the heap. In C++, new and delete operators are used to allocate and free dynamic memory, while in C functions like malloc(), calloc(), realloc(), and free() perform these tasks. The document explains each function and operator and provides examples of their usage.
This document discusses C++ memory management. It describes the different memory segments used in a C++ program including the code segment, BSS segment, data segment, heap, and stack. The stack and heap are discussed in more detail. The stack stores function parameters and local variables and uses a last-in, first-out approach. The heap stores dynamically allocated memory using new until explicitly freed. The document also provides details on how the call stack works, including pushing and popping stack frames when functions are called and returned from.
The document discusses intermediate code generation in compilers. It describes various intermediate representations like syntax trees, DAGs, postfix notation, and 3-address code. Syntax trees represent the hierarchical structure of a program and are constructed using functions like mknode() and mkleaf(). DAGs provide a more compact representation by identifying common subexpressions. Postfix notation linearizes the syntax tree. The document also discusses run-time environments, storage allocation strategies like static, stack and heap allocation, and activation records.
This document discusses abstract data types and encapsulation. It explains that abstract data types define a set of objects, operations on those objects, and encapsulate them so the user cannot directly access the hidden data. Encapsulation through subprograms and type definitions is described. Different approaches to static and dynamic storage management like stacks, heaps, and garbage collection are also summarized.
The runtime environment manages memory allocation and procedure activations at runtime. It uses a runtime support system and handles procedure calls through activation records stored on a call stack. Activation records contain local variables, parameters, and return addresses. Procedures can be allocated memory statically at compile time or dynamically on the call stack or heap as needed. The runtime environment maps names to memory locations and handles memory management through allocation and deallocation.
The document discusses run-time environments and how they manage memory allocation and procedure calls at runtime. A run-time system handles mapping names to storage locations, allocating and deallocating memory, and managing procedure activations through techniques like activation records, control stacks, and dynamic storage allocation using stacks and heaps. The key responsibilities of a run-time system include storage management and keeping track of the dynamic execution state as programs execute.
Prepare for your interview with these top 20 SAP HANA interview questions. For more IT Profiles, Sample Resumes, Practice exams, Interview Questions, Live Training and more…visit ITLearnMore – Most Trusted Website for all Learning Needs by Students, Graduates and Working Professionals.
Looking to add weight to your resume? Check out for ITLearnmore for varied online IT courses at affordable prices intended for career boost. There is so much in store for both fresh graduates and professionals here. Hurry up..! Get updated with the current IT job market requirements and related courses.For more information visit http://www.ITLearnMore.com.
Caching is used to optimize database applications by reducing traffic between applications and databases. Hibernate uses two caches - a first-level cache associated with sessions and a second-level cache associated with the session factory. Configuring caching involves choosing a cache implementation like EHCache, setting caching strategies like read-only, and configuring cache rules for classes.
This document discusses memory management techniques in operating systems. It covers topics such as binding instructions and data to memory at different stages, logical vs physical address spaces, memory management units that map virtual to physical addresses, dynamic loading and linking of code, using overlays to only hold needed instructions and data in memory, swapping processes temporarily out of memory to secondary storage, and contiguous allocation of memory to processes.
Memory management handles allocation of memory to processes and tracks used and free memory. It uses techniques like paging, segmentation, and dynamic allocation from a heap. Paging maps logical addresses to physical pages, avoiding external fragmentation. Segmentation divides memory into logical segments of varying sizes. Dynamic allocation fulfills requests from the heap, managing free blocks and avoiding fragmentation and memory leaks.
This document discusses run-time addressing and storage of variables in programming. It covers how variables are accessed using offsets from frames or stacks. It also discusses variable-length local data and how it can be allocated dynamically on the stack or heap. The document then covers scope, static and dynamic scoping rules, and how static links are used to access non-local variables at run-time.
The document discusses different memory management techniques used in operating systems:
1. Programs go through several steps before execution - compilation, loading, and execution where address binding can occur.
2. Memory management schemes separate logical and physical addresses using techniques like paging and segmentation to map virtual to physical addresses.
3. Swapping allows processes to be temporarily moved out of memory to disk to improve memory utilization at the cost of performance.
2015 01-17 Lambda Architecture with Apache Spark, NextML ConferenceDB Tsai
Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch- and stream-processing methods. In Lambda architecture, the system involves three layers: batch processing, speed (or real-time) processing, and a serving layer for responding to queries, and each comes with its own set of requirements.
In batch layer, it aims at perfect accuracy by being able to process the all available big dataset which is an immutable, append-only set of raw data using distributed processing system. Output will be typically stored in a read-only database with result completely replacing existing precomputed views. Apache Hadoop, Pig, and HIVE are
the de facto batch-processing system.
In speed layer, the data is processed in streaming fashion, and the real-time views are provided by the most recent data. As a result, the speed layer is responsible for filling the "gap" caused by the batch layer's lag in providing views based on the most recent data. This layer's views may not be as accurate as the views provided by batch layer's views created with full dataset, so they will be eventually replaced by the batch layer's views. Traditionally, Apache Storm is
used in this layer.
In serving layer, the result from batch layer and speed layer will be stored here, and it responds to queries in a low-latency and ad-hoc way.
One of the lambda architecture examples in machine learning context is building the fraud detection system. In speed layer, the incoming streaming data can be used for online learning to update the model learnt in batch layer to incorporate the recent events. After a while, the model can be rebuilt using the full dataset.
Why Spark for lambda architecture? Traditionally, different
technologies are used in batch layer and speed layer. If your batch system is implemented with Apache Pig, and your speed layer is implemented with Apache Storm, you have to write and maintain the same logics in SQL and in Java/Scala. This will very quickly becomes a maintenance nightmare. With Spark, we have an unified development framework for batch and speed layer at scale. In this talk, an end-to-end example implemented in Spark will be shown, and we will
discuss about the development, testing, maintenance, and deployment of lambda architecture system with Apache Spark.
Stacks are last-in, first-out data structures that can be implemented using arrays or linked lists. Key operations on stacks include push to add an element, pop to remove an element, and peek to access the top element without removing it. Stacks have many applications, including expression evaluation, backtracking, reversing strings, and solving puzzles like the Towers of Hanoi. Hardware stacks are also commonly used to allocate memory and access registers in computer architectures.
The document discusses stacks and queues as linear data structures. It defines a stack as a first-in last-out (LIFO) structure where elements are inserted and deleted from one end. Stacks are commonly used to handle function calls and parameters. The document also defines queues as first-in first-out (FIFO) structures where elements are inserted at the rear and deleted from the front. Examples of stack and queue applications are given. Implementation of stacks using arrays and pointers is described along with push and pop algorithms.
The document discusses CPU caching concepts including the need for caching due to the gap between CPU and memory speeds and the principle of locality of reference. It describes cache hierarchy with different levels (L1, L2, L3 caches) and cache organization involving mapping of memory blocks to cache blocks. The document also covers cache operations of hits and misses as well as handling cache misses through replacement policies and ensuring cache coherence through protocols.
Megastore combines the scalability of NoSQL with the ACID properties of relational databases. It uses Paxos replication across data centers to provide high availability with low latency. The data is partitioned into entity groups which are replicated independently to allow for scale. Transactions within a group use multi-version concurrency control and across groups use two-phase commit. Coordinators track write ordering to prevent conflicts during reads and writes. Metrics from Google showed Megastore provided low latency access even with widespread data distribution.
This document discusses memory management in operating systems. It covers topics like how memory management keeps track of allocated and free memory, provides protection using base and limit registers, and different address binding schemes. It also discusses dynamic loading, dynamic linking, logical versus physical addresses, swapping, memory allocation techniques like single allocation and multiple partitions, and issues like fragmentation. Paging and segmentation techniques for managing memory are also summarized.
Policy Driven Dynamic LUN Space Optimization based on the Utilizationidescitation
In a typical SAN solution consisting of intelligent
storage system there will be many LUNs with pre-allocated
space (thick LUNs) based on the business requirements. In
case of thin provisioned LUNs physical space is not allocated
upfront; space will be allocated on demand depending on
incoming write workload. In thin provisioned LUN
implementation some space will be anchored or pre-allocated
and in other implementations no space will be anchored
(everything is allocated on demand). Most often administrator
oversubscribes the space for thick LUN’s. In case of thin LUN’s
with anchored space some space will be reserved to
accommodate data or incoming writes. There will be many
oversubscribed thick and anchored thin LUNs in a storage
systems resulting in non optimal usage of storage space.
For some of the business needs oversubscribing space for
thick and anchored thin LUN’s without considering the
utilization doesn’t augur well. There should be a way to
optimize storage space in an intelligent storage system based
on LUN utilization over a period of time. By determining
utilization of a LUN from time to time it’s possible to have
dynamic provisioning mechanism for thick and anchored thin
LUNs based on the usage over a time.
The proposed policy driven thick and anchored thin LUN
optimization will help storage admin to optimize space in a
storage system.
This document provides instructions on how to create and use shared memory objects in ABAP. It discusses defining a root class with attributes and methods to store and retrieve data, as well as creating a memory area in transaction SHMA. The root class serves as a template for the shared memory area, allowing data to be stored and accessed more quickly than reading from database tables. Methods are demonstrated for initializing the stored data, retrieving all data, and retrieving a single record by material number. Using shared memory objects can improve performance for applications that require frequent, heavy access to largely static reference data.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
Run time administration
1. RUN-TIME ADMINISTRATION
• STORAGE ALLOCATION STRATEGIES
• A BRIEF COMPARISON BETWEEN STORAGE ALLOCATION STRATEGIES
• ACCESS TO NON-LOCAL NAMES
• BLOCK STRUCTURE STORAGE ALLOCATION
• ACTIVATION RECORD
2. Storage allocation strategies
There are three strategies: Static, Stack, and Heap.
Static Allocation: storage is allocated for all data objects at compile time. Ex: Fortran.
Stack-based Allocation: storage is organized as a stack; activation records are pushed and popped. Ex: Pascal, C.
Heap Allocation: storage is allocated and deallocated at runtime from a data area known as the heap. Ex: Pascal.
3. Static Allocation
In this allocation scheme, the compilation data is bound to a fixed location in memory and does not change while the program executes. As the memory requirements and storage locations are known in advance, no runtime support package for memory allocation and deallocation is required.
4. Static Allocation
Statically allocated names are bound to relocatable storage at compile time, and these storage bindings never change. The compiler uses the type of a name (retrieved from the symbol table) to determine the storage size required, and the required number of bytes (possibly aligned) is set aside for the name. The relocatable address of the storage is fixed at compile time.
5. Static Allocation
In a static environment (Fortran 77) there are a number of restrictions:
• Sizes of data objects are known at compile time.
• No recursive procedures.
• No dynamic memory allocation.
• Only one copy of each procedure's activation record exists at any time.
We can therefore allocate all storage at compile time:
• Bindings do not change at runtime.
• Every time a procedure is called, the same bindings occur.
6. Static Allocation
Limitations:
• The size required must be known at compile time.
• Recursive procedures cannot be implemented statically.
• No data structure can be created dynamically, as all data is static.
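The one-copy-of-storage model can be seen in C's `static` locals, which are bound to a fixed location for the entire run. A minimal sketch (the function name is illustrative):

```c
#include <assert.h>

/* `count` is statically allocated: one fixed storage location for the
 * whole run, bound before execution begins -- the Fortran 77 model.
 * Every call sees the same binding, so the value persists across calls. */
int next_id(void) {
    static int count = 0;   /* static allocation: fixed location */
    return ++count;
}
```

Because only one copy of `count` exists, the function is not safely recursive or reentrant, which is exactly the restriction listed above.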
7. Stack Allocation
Procedure calls and their activations are managed by means of stack memory allocation. It works in last-in-first-out (LIFO) fashion, and this allocation strategy is very useful for recursive procedure calls.
8. Stack Allocation
In stack-based allocation, the previous restrictions are lifted (Pascal, C, etc.):
• Procedures are allowed to be called recursively, so multiple activation records for the same procedure must be held; records are created as required and placed on the stack.
• Each record maintains a pointer to the record that activated it. On completion, the current record is deleted from the stack and control passes to the calling record.
• Dynamic memory allocation is allowed.
• Pointers to data locations are allowed.
9. Stack Allocation
Storage is organized as a stack, and activation records are pushed and popped. Locals and parameters are contained in the activation record for the call, which means locals are bound to fresh storage on every call. We just need a stack_top pointer: to allocate a new activation record, we increase stack_top; to deallocate an existing activation record, we decrease stack_top.
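The stack_top mechanics can be sketched in a few lines of C, assuming a fixed-size memory area (all names here are illustrative, not from any real compiler runtime):

```c
#include <stddef.h>

/* Storage organized as a stack: allocation bumps stack_top up,
 * deallocation moves it back down (LIFO). */
static unsigned char stack_area[4096];
static size_t stack_top = 0;

/* Allocate a new activation record: just increase stack_top. */
void *push_frame(size_t size) {
    void *frame = &stack_area[stack_top];
    stack_top += size;          /* no per-record bookkeeping needed */
    return frame;
}

/* Deallocate the most recent record: just decrease stack_top. */
void pop_frame(size_t size) {
    stack_top -= size;
}
```

Note that deallocation must happen in LIFO order; this is exactly why stack allocation fails when an activation outlives its caller.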
10. Stack Allocation
Advantages:
• It supports recursion, as memory is allocated on every block entry.
• It allows data structures to be created dynamically.
• It allows an array declaration like A(I, J), since the actual allocation is made only at execution time; the dimension bounds need not be known at compile time.
Disadvantages:
• Memory addressing has to be done through pointers and index registers, which makes it slower than static allocation, especially in the case of array references.
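Both advantages can be illustrated in C, where locals live in the activation record and (since C99) array bounds may be runtime values. A sketch:

```c
/* Recursion works because each call binds `n` to fresh storage in a
 * new activation record on the stack. */
int fact(int n) {
    if (n <= 1) return 1;
    return n * fact(n - 1);     /* a new record per recursive call */
}

/* Like A(I, J): the array's dimensions are known only at execution
 * time; the C99 variable-length array is allocated on the stack when
 * its declaration is reached. */
int cells(int rows, int cols) {
    int a[rows][cols];          /* size fixed only at run time */
    int total = 0;
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++) {
            a[i][j] = 1;
            total += a[i][j];
        }
    return total;
}
```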
11. Heap Allocation
Variables local to a procedure are allocated and deallocated only at runtime. Heap allocation is used to dynamically allocate memory to variables and claim it back when the variables are no longer required. Unlike the statically allocated memory area, both stack and heap memory can grow and shrink dynamically and unpredictably; therefore, they cannot be given a fixed amount of memory in the system.
12. Heap Allocation
Stack allocation cannot be used if:
• the values of local variables must be retained when an activation ends, or
• a called activation outlives the caller.
In such cases, deallocation of activation records cannot occur in last-in first-out fashion. Heap allocation instead gives out pieces of contiguous storage for activation records.
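The outliving case looks like this in C: a sketch where the allocated object must survive the activation that created it, so it comes from the heap via malloc (the function name is illustrative):

```c
#include <stdlib.h>

/* The counter must be retained after make_counter's activation ends,
 * so stack allocation cannot be used; heap storage survives the return. */
int *make_counter(void) {
    int *p = malloc(sizeof *p);  /* heap: outlives this activation */
    if (p) *p = 0;               /* a stack local here would be
                                    deallocated with the record */
    return p;
}
```

The caller is now responsible for releasing the storage with free, which is the runtime-management burden heap allocation introduces.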
13. Heap Allocation
There are two aspects of dynamic allocation: runtime allocation and deallocation of data structures. Languages like Algol have dynamic data structures and reserve part of memory for them. If a procedure wants to store a value that will be used after its activation is over, the stack cannot be used for that purpose; languages like Pascal allow such data to be allocated under program control. Also, in certain languages a called activation may outlive the caller procedure. In that case a last-in-first-out discipline will not work, and we require a data structure like a heap to store the activations. This last case does not arise in languages whose activation trees correctly depict the flow of control between procedures.
14. Heap Allocation
Some languages do not have tree-structured activations; in these cases, activations have to be allocated on the heap. This allows unusual situations, like called activations that live longer than their callers' activations, though this is not common. Pieces may be deallocated in any order, so over time the heap will consist of alternating areas that are free and in use.
15. Heap Allocation
The heap manager is supposed to make use of the free space. For efficiency reasons it may be helpful to handle small activations as a special case:
• For each size of interest, keep a linked list of free blocks of that size.
• Fill a request of size s with a block of size s', where s' is the smallest size greater than or equal to s.
• For large blocks of storage, use the general heap manager.
16. Heap Allocation
For large amounts of storage, the computation may take some time to use up the memory, so the time taken by the manager may be negligible compared to the computation time. For efficiency reasons we can handle small activations and activations of predictable size as a special case as follows:
1. For each size of interest, keep a linked list of free blocks of that size.
2. If possible, fill a request for size s with a block of size s', where s' is the smallest size greater than or equal to s. When the block is eventually deallocated, it is returned to the linked list it came from.
17. Heap Allocation
3. For large blocks of storage, use the heap manager.
The heap manager dynamically allocates memory, which comes with a runtime overhead, since it must take care of fragmentation and garbage collection. But since the heap manager saves space (the alternative is fixing the size of every activation at compile time), the runtime overhead is a price worth paying.
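Steps 1–3 above can be sketched as segregated free lists, with plain malloc standing in for the general heap manager (the size classes and names are illustrative assumptions):

```c
#include <stdlib.h>
#include <stddef.h>

#define NSIZES 4
static const size_t class_size[NSIZES] = {16, 32, 64, 128};

/* Step 1: for each size of interest, a linked list of free blocks. */
typedef struct Block { struct Block *next; } Block;
static Block *free_list[NSIZES];

/* Smallest class with size s' >= s, or -1 if the request is large. */
static int class_for(size_t s) {
    for (int i = 0; i < NSIZES; i++)
        if (class_size[i] >= s) return i;
    return -1;
}

/* Step 2: fill a request of size s from the s' list if possible. */
void *alloc_block(size_t s) {
    int c = class_for(s);
    if (c < 0) return malloc(s);      /* step 3: large -> heap manager */
    if (free_list[c]) {
        Block *b = free_list[c];      /* reuse a previously freed block */
        free_list[c] = b->next;
        return b;
    }
    return malloc(class_size[c]);     /* no free block: get a new one */
}

/* A freed block is returned to the linked list it came from. */
void free_block(void *p, size_t s) {
    int c = class_for(s);
    if (c < 0) { free(p); return; }
    Block *b = p;
    b->next = free_list[c];
    free_list[c] = b;
}
```

Rounding a 20-byte request up to the 32-byte class wastes a little space (internal fragmentation) but makes reuse constant-time, which is the trade-off the slides describe.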
18. Comparison (Sr no. | Static allocation | Stack allocation | Heap allocation)
1. | Static allocation is done for all data objects at compile time. | In stack allocation, a stack is used to manage runtime storage. | In heap allocation, a heap is used to manage dynamic memory allocation.
2. | Data structures cannot be created dynamically, as the amount of storage required by each data object is determined by the compiler. | Data structures and data objects can be created dynamically. | Data structures and data objects can be created dynamically.
19. Comparison (Sr no. | Static allocation | Stack allocation | Heap allocation)
3. Memory allocation: | The names of data objects are bound to storage at compile time. | Activation records and data objects are pushed onto the stack (LIFO); memory addressing can be done using index registers. | A contiguous block of memory from the heap is allocated for the activation record or data object; a linked list is maintained for free blocks.
20. Access to non-local names
Scope rules determine the treatment of non-local names. A common rule is lexical or static scoping, which most languages use. The scope rules of a language decide how to reference non-local variables.
21. Access to non-local names
There are two methods that are commonly used:
1. Static or lexical scoping: determines the declaration that applies to a name by examining the program text alone. Ex: Pascal, C, and Ada.
2. Dynamic scoping: determines the declaration applicable to a name at run time by considering the current activations. Ex: Lisp.
22. Access to non-local names — Static or lexical scoping: Access Links
A direct implementation of lexical scope for nested procedures is obtained by adding a pointer, called an access link, to each activation record. If procedure p is nested immediately within q in the source text, then the access link in an activation record for p points to the access link in the record for the most recent activation of q.
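The access-link chain can be sketched with explicit activation records: a non-local declared d nesting levels out is found by following d links (all names here are illustrative):

```c
#include <stddef.h>

/* Each activation record carries an access link to the record of the
 * most recent activation of the textually enclosing procedure. */
typedef struct AR {
    struct AR *access_link;   /* enclosing procedure's record */
    int local;                /* one illustrative local variable */
} AR;

/* Find a name declared `hops` nesting levels out: follow that many
 * access links, then read the local from the record reached. */
int nonlocal(const AR *cur, int hops) {
    while (hops-- > 0)
        cur = cur->access_link;
    return cur->local;
}
```

The cost of an access is proportional to the nesting-depth difference, which is what the display (next slide) improves on.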
23. Access to non-local names
Static or lexical scoping
Displays
Faster access to non-locals than with access links can be obtained using an array d of pointers to activation records, called a display.
We maintain the display so that storage for a non-local a at nesting depth i is in the activation record pointed to by display element d[i].
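A minimal sketch of the display idea, with made-up procedure names and depths: the point is that a non-local at depth i costs one indexed load through d[i] instead of a walk along a chain of access links.

```python
# Sketch: a display is an array d of pointers indexed by nesting depth;
# d[i] points to the most recent activation record at depth i.

class ActivationRecord:
    def __init__(self, proc):
        self.proc = proc
        self.locals = {}

display = [None] * 10      # d[i] -> activation record at nesting depth i

# main (depth 1) encloses q (depth 2) encloses p (depth 3).
main_rec = ActivationRecord("main")
main_rec.locals["g"] = 100
q_rec = ActivationRecord("q")
q_rec.locals["a"] = 7
display[1], display[2] = main_rec, q_rec

# From inside p, any non-local at depth i is a single indexed access,
# rather than following access links one by one.
print(display[2].locals["a"])   # a, declared at depth 2
print(display[1].locals["g"])   # g, declared at depth 1
```

On a call or return, the display entry for the callee's depth is saved and restored, which is the maintenance cost traded for the faster lookups.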
24. Access to non-local names
Dynamic scoping
Deep Access
Conceptually, dynamic scope results if access links point to the same activation records that control links do.
A simple implementation is to dispense with access links and use the control link to search down the stack, looking for the first activation record containing storage for the non-local name.
The term deep access comes from the fact that the search may go "deep" into the stack.
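The search can be sketched directly: follow control links from the current record toward the bottom of the stack until some record holds the name. The call chain and names below are illustrative.

```python
# Sketch of deep access: no access links; walk control links down the
# stack until an activation record containing the name is found.

class ActivationRecord:
    def __init__(self, proc, control_link=None):
        self.proc = proc
        self.control_link = control_link  # the caller's activation record
        self.locals = {}

def deep_access(record, name):
    while record is not None:
        if name in record.locals:
            return record.locals[name]
        record = record.control_link      # go "deep"er into the stack
    raise NameError(name)

# main calls q, q calls p; only main declares x.
main_rec = ActivationRecord("main")
main_rec.locals["x"] = 1
q_rec = ActivationRecord("q", control_link=main_rec)
p_rec = ActivationRecord("p", control_link=q_rec)

print(deep_access(p_rec, "x"))   # found two control links down, in main
```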
25. Access to non-local names
Dynamic scoping
Shallow Access
Here the idea is to hold the current value of each name in statically allocated storage.
When a new activation of a procedure p occurs, a local name n in p takes over the storage statically allocated for n.
The previous value of n is saved in the activation record for p and must be restored when the activation of p ends.
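The save-on-entry / restore-on-exit protocol can be sketched with a context manager standing in for an activation of p; the static slot `current` and the names are illustrative.

```python
# Sketch of shallow access: the current value of each name lives in one
# statically allocated slot, so every lookup is a direct access; each new
# activation saves the old value in its record and restores it on exit.

current = {"n": "outer"}          # statically allocated slot per name

class ActivationOfP:
    """Stands in for an activation of a procedure p that declares n."""
    def __enter__(self):
        self.saved_n = current["n"]   # save previous value in the record
        current["n"] = "local-to-p"   # local n takes over the static slot
        return self

    def __exit__(self, *exc):
        current["n"] = self.saved_n   # restore when the activation ends

with ActivationOfP():
    inside = current["n"]         # one direct access, no stack search
after = current["n"]

print(inside)   # value bound by the activation of p
print(after)    # previous value, restored on exit
```

Compared with deep access, lookups are constant-time, but every call and return pays the save/restore cost for the names the procedure declares.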
27. Block
Blocks can be nested; this property is referred to as block structure.
Blocks are simpler to handle than procedures.
Blocks can be treated as parameterless procedures.
Memory can be allocated on the stack as each block is entered, or space for the complete procedure body can be allocated at one time.
28. Block
The scope of a declaration is given by the most closely nested rule:
• The scope of a declaration in block B includes B.
• If a name X is not declared in B, then an occurrence of X in B is in the scope of a declaration of X in a block B' such that
o B' has a declaration of X, and
o B' is most closely nested around B.
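The most closely nested rule can be illustrated with Python's nested functions standing in for blocks; the block names B and B' below mirror the rule above and are otherwise arbitrary.

```python
# Sketch of the most closely nested rule, with nested functions as blocks:
# a use of x resolves to the declaration in the most closely nested
# enclosing block that declares x.

x = "outermost"            # declaration in the outermost scope

def block_b_prime():
    x = "B'"               # B' declares x
    def block_b():
        # B does not declare x, so this occurrence of x is in the scope
        # of the declaration in B', the block most closely nested around B.
        return x
    return block_b()

print(block_b_prime())     # resolves to the x declared in B'
```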
29. Block
There are two methods of implementing block structure:
1. Stack allocation: Based on the observation that the scope of a declaration does not extend outside the block in which it appears, the space for a declared name can be allocated when the block is entered and deallocated when control leaves the block. This view treats a block as a "parameterless procedure" called only from the point just before the block and returning only to the point just after the block.
30. Block
There are two methods of implementing block structure:
2. Complete allocation: Here the complete memory is allocated at one time. If there are blocks within the procedure, then allowance is made for the storage needed for declarations within those blocks. If two variables are never alive at the same time and are at the same depth, they can be assigned the same storage.
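The storage-sharing point can be made concrete with a tiny offset-assignment sketch. The block representation, one-word variable sizes, and the `layout` helper are all simplifying assumptions for illustration.

```python
# Sketch of complete allocation: offsets for every block in a procedure
# are computed up front; sibling blocks (same depth, never alive at the
# same time) are laid out starting at the same offset, so they overlay.

def layout(block, offset=0, table=None):
    """Assign each declared name a frame offset (one word per name).
    block = (declared_names, child_blocks)."""
    if table is None:
        table = {}
    names, children = block
    for var in names:
        table[var] = offset
        offset += 1
    for child in children:
        # Every sibling block starts at the same offset: shared storage.
        layout(child, offset, table)
    return table

# procedure { a; { b; } { c; } } -- the b-block and c-block are siblings,
# so b and c can share the same word of storage.
proc = (["a"], [(["b"], []), (["c"], [])])
offsets = layout(proc)
print(offsets)   # b and c receive the same offset
```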
31. Activation Record
The activation record is a block of memory used for managing the information needed by a single execution of a procedure.
FORTRAN uses the static data area to store the activation record, whereas in Pascal and C the activation record is situated in the stack area.
32. Activation record
The information needed by a single execution of a procedure is managed using an activation record or frame:
• Not all compilers use all of the fields.
• Pascal and C push an activation record onto the run-time stack when a procedure is called and pop it off the stack when control returns to the caller.
33. Activation record
1. Temporary values:
arising in the evaluation of expressions.
2. Local data:
data that is local to the execution of the procedure.
3. Saved machine status:
the state of the machine just before the procedure is called; values of the program counter and machine registers that have to be restored when control returns from the procedure.
34. Activation record
4. Access link:
refers to non-local data held in other activation records.
5. Control link:
points to the activation record of the caller.
6. Actual parameters:
used by the calling procedure to supply parameters to the called procedure.
[in practice these are often passed in registers]
7. Returned value:
used by the called procedure to return a value to the calling procedure.
[in practice it is often returned in a register]
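The seven fields can be gathered into one structure as a summary. This is an illustrative model only; a real compiler lays these fields out at fixed offsets within a stack frame (or in registers), not as a language-level object.

```python
# Sketch: the activation-record fields from slides 33-34 as one structure.
from dataclasses import dataclass, field

@dataclass
class ActivationRecord:
    actual_parameters: list                    # supplied by the caller
    returned_value: object = None              # set by the callee
    control_link: "ActivationRecord" = None    # caller's record
    access_link: "ActivationRecord" = None     # enclosing procedure's record
    saved_machine_status: dict = field(default_factory=dict)  # PC, registers
    local_data: dict = field(default_factory=dict)
    temporaries: list = field(default_factory=list)  # expression evaluation

# A call to p(2, 3) from main pushes a record whose control link points
# back to main's record; p computes into local data and sets the result.
main_rec = ActivationRecord(actual_parameters=[])
p_rec = ActivationRecord(actual_parameters=[2, 3], control_link=main_rec)
p_rec.local_data["sum"] = sum(p_rec.actual_parameters)
p_rec.returned_value = p_rec.local_data["sum"]
print(p_rec.returned_value)
```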