The document discusses various techniques for implementing subprograms in programming languages. It covers:
1) The general semantics of calls and returns between subprograms and the actions involved.
2) Implementing simple subprograms with static local variables and activation records.
3) Implementing subprograms with stack-dynamic local variables using run-time stacks and activation records.
4) Techniques for implementing nested subprograms using static linking chains to access nonlocal variables.
Performance Analysis (Time & Space Complexity)
The document discusses algorithm analysis and design. It covers time and space complexity analysis using approaches such as counting the number of basic operations (assignments, comparisons, etc.) and analyzing how they vary with the size of the input. Common complexities such as constant, linear, quadratic, and cubic are explained with examples. The frequency count method is presented to determine tight bounds on the time and space complexity of algorithms.
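The frequency count idea described above can be sketched in a few lines. This is an illustrative instrumentation (the function names and operation-counting choices are this sketch's own, not from the source): count the basic operations an algorithm performs and watch how the count grows with input size n.

```python
def linear_sum(arr):
    """Sum an array while counting basic operations (assignments/additions)."""
    ops = 0
    total = 0          # 1 assignment
    ops += 1
    for x in arr:      # n iterations
        total += x     # 1 addition + 1 assignment per iteration
        ops += 2
    return total, ops

def quadratic_pairs(arr):
    """Count ordered pairs with a nested loop: roughly n^2 basic operations."""
    ops = 0
    count = 0          # 1 assignment
    ops += 1
    for a in arr:
        for b in arr:  # n * n iterations
            count += 1
            ops += 2
    return count, ops

for n in (10, 100):
    _, lin = linear_sum(list(range(n)))
    _, quad = quadratic_pairs(list(range(n)))
    print(n, lin, quad)   # the linear count grows with n, the quadratic with n^2
```

Doubling n roughly doubles the first count but quadruples the second, which is exactly the distinction the frequency count method makes precise.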
The document discusses compilers and their role in translating high-level programming languages into machine-readable code. It notes that compilers perform several key functions: lexical analysis, syntax analysis, generation of an intermediate representation, optimization of the intermediate code, and finally generation of assembly or machine code. The compiler allows programmers to write code in a high-level language that is easier for humans while still producing efficient low-level code that computers can execute.
Decision Properties of Regular Languages
This document discusses decision properties of regular languages. It defines regular languages as those that can be described by regular expressions and accepted by finite automata. It explains that decision properties are algorithms that take a formal language description and determine properties like emptiness, finiteness, membership in the language, and equivalence to another language. The key decision properties - emptiness, finiteness, membership, and equivalence - are then defined along with the algorithms to determine each. Examples are provided to illustrate the algorithms. Applications of decision properties in areas like data validation and parsing are also mentioned.
Shells are programs that interpret commands from the user and translate them into a language computers understand. The main types of shells in Linux are the Bourne shell, C shell, Korn shell, and Bourne Again shell (bash). Bash has become the default shell in most Linux distributions as it incorporates features from other shells while maintaining compatibility with the Bourne shell syntax used for scripts.
The document discusses minimum edit distance and how it can be used to quantify the similarity between two strings. Minimum edit distance is defined as the minimum number of editing operations like insertion, deletion, substitution needed to transform one string into another. Levenshtein distance assigns a cost of 1 to each insertion, deletion, or substitution, and calculates the minimum edits between two strings using dynamic programming to build up solutions from sub-problems. The algorithm can also be modified to produce an alignment between the strings by storing back pointers and doing a backtrace.
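The dynamic-programming formulation described above is compact enough to show directly. This is a minimal sketch of Levenshtein distance with unit costs; the table-building order and variable names are this example's own choices.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum edit distance (insert/delete/substitute, each cost 1)."""
    m, n = len(a), len(b)
    # dp[i][j] = minimum edits to transform a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                # i deletions to reach the empty string
    for j in range(n + 1):
        dp[0][j] = j                # j insertions from the empty string
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1   # match is free
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[m][n]

print(levenshtein("kitten", "sitting"))   # 3
```

Storing, for each cell, which of the three cases won (a back pointer) and walking from `dp[m][n]` back to `dp[0][0]` yields the alignment mentioned in the summary.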
This document discusses free space management techniques in operating systems. It explains the need to track free disk space and reuse it from deleted files. Various free space list implementations are described, including bit vector, linked list, grouping, and counting. Bit vector uses a bitmap to track free blocks, linked list links free blocks, grouping stores addresses of free blocks in blocks, and counting tracks free block runs with an address and count.
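The bit vector scheme from the summary can be sketched as a small class. The class and method names here are illustrative, not from any particular operating system: bit i set to 1 means block i is free.

```python
class BitVectorFreeList:
    """Track free disk blocks with a bitmap (hypothetical sketch)."""

    def __init__(self, n_blocks: int):
        self.bits = [1] * n_blocks    # all blocks start out free

    def allocate(self) -> int:
        """Return the first free block and mark it used; -1 if none left."""
        for i, free in enumerate(self.bits):
            if free:
                self.bits[i] = 0
                return i
        return -1

    def release(self, i: int) -> None:
        """Mark block i free again, e.g. after its file is deleted."""
        self.bits[i] = 1

fs = BitVectorFreeList(8)
a = fs.allocate()    # block 0
b = fs.allocate()    # block 1
fs.release(a)        # file deleted: block 0 returns to the free list
c = fs.allocate()    # block 0 is reused
print(a, b, c)       # 0 1 0
```

Real file systems pack the bitmap into machine words so a whole word of zeros (no free blocks) can be skipped with one comparison; the linear scan here is just for clarity.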
The document provides an overview of compilers by discussing:
1. Compilers translate source code into executable target code by going through several phases including lexical analysis, syntax analysis, semantic analysis, code optimization, and code generation.
2. An interpreter directly executes source code statement by statement while a compiler produces target code as translation. Compiled code generally runs faster than interpreted code.
3. The phases of a compiler include a front end that analyzes the source code and produces intermediate code, and a back end that optimizes and generates the target code.
The document provides an introduction to compiler construction including:
1. The objectives of understanding how to build a compiler, use compiler construction tools, understand assembly code and virtual machines, and define grammars.
2. An overview of compilers and interpreters including the analysis-synthesis model of compilation where analysis determines operations from the source program and synthesis translates those operations into the target program.
3. An outline of the phases of compilation including preprocessing, compiling, assembling, and linking source code into absolute machine code using tools like scanners, parsers, syntax-directed translation, and code generators.
OS Process Synchronization, Semaphores, and Monitors
The document summarizes key concepts in process synchronization and concurrency control, including:
1) Process synchronization techniques like semaphores, monitors, and atomic transactions that ensure orderly access to shared resources. Semaphores use wait() and signal() operations while monitors provide mutual exclusion through condition variables.
2) Concurrency control algorithms like locking and two-phase locking that ensure serializability of concurrent transactions accessing a database. Locking associates locks with data items to control concurrent access.
3) Challenges in concurrency control like deadlocks, priority inversion, and starvation that synchronization mechanisms aim to prevent. Log-based recovery with write-ahead logging and checkpoints is used to ensure atomicity of transactions in the event of system failures.
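The wait()/signal() pairing described in point 1 maps directly onto `acquire()`/`release()` in Python's `threading` module. A minimal sketch: a binary semaphore serializing increments of a shared counter so that no updates are lost.

```python
import threading

counter = 0
mutex = threading.Semaphore(1)     # binary semaphore used as a mutex

def worker():
    global counter
    for _ in range(10_000):
        mutex.acquire()            # wait(): block until the semaphore > 0
        counter += 1               # critical section: one thread at a time
        mutex.release()            # signal(): wake one waiter, if any

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000 — no lost updates
```

Without the semaphore, the read-modify-write on `counter` can interleave across threads and the final value would typically fall short of 40000; that race is exactly what wait()/signal() rules out.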
The purpose of types:
To define what the program should do.
e.g. read an array of integers and return a double
To guarantee that the program is meaningful.
that it does not add a string to an integer
that variables are declared before they are used
To document the programmer's intentions.
better than comments, which are not checked by the compiler
To optimize the use of hardware.
reserve the minimal amount of memory, but not more
use the most appropriate machine instructions.
NFA or Non-deterministic Finite Automata
An NFA (non-deterministic finite automaton) can have multiple transitions from a single state on a given input symbol, whereas a DFA (deterministic finite automaton) has exactly one transition from each state on each symbol. The document discusses NFAs and how they differ from DFAs, provides examples of NFA diagrams, and describes how to convert an NFA to an equivalent DFA.
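The NFA-to-DFA conversion mentioned above is the subset construction: each DFA state is the set of NFA states reachable so far. A hedged sketch, using a small hypothetical NFA (states `q0`–`q2`) that accepts binary strings ending in "01":

```python
# NFA as: state -> symbol -> set of next states (multiple targets allowed)
nfa = {
    "q0": {"0": {"q0", "q1"}, "1": {"q0"}},   # q0 loops; guesses "01" on a 0
    "q1": {"1": {"q2"}},
    "q2": {},
}
start, accepting = "q0", {"q2"}

def nfa_to_dfa(nfa, start, alphabet=("0", "1")):
    """Subset construction: DFA states are frozensets of NFA states."""
    dfa, todo = {}, [frozenset({start})]
    while todo:
        state = todo.pop()
        if state in dfa:
            continue
        dfa[state] = {}
        for sym in alphabet:
            nxt = frozenset(s for q in state
                              for s in nfa.get(q, {}).get(sym, ()))
            dfa[state][sym] = nxt      # exactly one successor per symbol
            todo.append(nxt)
    return dfa

dfa = nfa_to_dfa(nfa, start)

def accepts(dfa, s):
    state = frozenset({start})
    for ch in s:
        state = dfa[state][ch]
    return bool(state & accepting)     # accept if any NFA state accepts

print(accepts(dfa, "1101"))   # True  (ends in "01")
print(accepts(dfa, "110"))    # False
```

The construction can produce up to 2^n DFA states for an n-state NFA, though in practice (as here) only the reachable subsets are built.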
This document provides an overview of compilers, including their history, components, and construction. It discusses the need for compilers to translate high-level programming languages into machine-readable code. The key phases of a compiler are described as scanning, parsing, semantic analysis, intermediate code generation, optimization, and code generation. Compiler construction relies on tools like scanner and parser generators.
UML (Unified Modeling Language) is a diagramming language used for object-oriented programming. It can be used to describe the organization, execution, use, and deployment of a program. Design patterns describe common solutions to programming problems and always use UML diagrams. This document focuses on class diagrams, which show classes, interfaces, and their relationships. It provides examples of how to depict classes with variables and methods, and relationships between classes like inheritance.
This document discusses parsing and context-free grammars. It defines parsing as verifying that tokens generated by a lexical analyzer follow syntactic rules of a language using a parser. Context-free grammars are defined using terminals, non-terminals, productions and a start symbol. Top-down and bottom-up parsing are introduced. Techniques for grammar analysis and improvement like left factoring, eliminating left recursion, calculating first and follow sets are explained with examples.
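The FIRST-set calculation mentioned above is a fixed-point iteration over the productions. A sketch on a tiny hypothetical grammar (`E -> T E'`, `E' -> + T E' | ε`, `T -> id`); the representation and iteration strategy are this example's own choices:

```python
EPS = "ε"
grammar = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], [EPS]],
    "T":  [["id"]],
}
nonterms = set(grammar)
first = {nt: set() for nt in grammar}

changed = True
while changed:                  # iterate until no FIRST set grows
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            for sym in prod:
                if sym not in nonterms:          # terminal (or ε): add and stop
                    if sym not in first[nt]:
                        first[nt].add(sym)
                        changed = True
                    break
                before = len(first[nt])
                first[nt] |= first[sym] - {EPS}  # FIRST of the nonterminal
                if len(first[nt]) != before:
                    changed = True
                if EPS not in first[sym]:        # only continue past ε-deriving syms
                    break
            else:                                # every symbol can derive ε
                if EPS not in first[nt]:
                    first[nt].add(EPS)
                    changed = True

print(first["E"])    # {'id'}
print(first["E'"])   # contains '+' and 'ε'
```

FOLLOW sets are computed by a similar fixed-point pass that propagates FIRST sets of trailing contexts; predictive parsers then use FIRST/FOLLOW to pick productions without backtracking.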
The document defines key concepts in automata theory, including alphabets, strings, length of strings, powers and closures of alphabets, languages, and the membership problem. An alphabet is a set of symbols, strings are finite sequences of symbols from an alphabet, and the length of a string is the number of symbols in it. Powers of an alphabet refer to sets of strings of a given length, while closures include strings of all lengths. A language is a subset of the Kleene closure of an alphabet. The membership problem asks whether a given string is part of a given language.
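The definitions above (powers of an alphabet, closure, membership) are concrete enough to compute for small cases. A short illustration, with the alphabet and language chosen purely as examples:

```python
from itertools import product

sigma = {"a", "b"}   # an alphabet: a finite set of symbols

def power(alphabet, k):
    """Sigma^k: the set of all strings of length exactly k."""
    return {"".join(p) for p in product(sorted(alphabet), repeat=k)}

print(power(sigma, 0))   # {''} — only the empty string
print(power(sigma, 2))   # {'aa', 'ab', 'ba', 'bb'}

# Sigma* (the Kleene closure) is infinite; enumerate a bounded slice:
short_strings = set().union(*(power(sigma, k) for k in range(3)))
print(len(short_strings))   # 1 + 2 + 4 = 7 strings of length <= 2

# A language is a subset of Sigma*; the membership problem asks
# whether a given string belongs to it:
L = {w for w in power(sigma, 2) if w[0] == "a"}   # length-2 strings starting with 'a'
print("ab" in L, "ba" in L)   # True False
```

Note how |Sigma^k| = |Sigma|^k, which is why the closure Sigma* can only be enumerated, never materialized.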
This document summarizes key concepts from Chapter 5 of the textbook "Operating System Concepts - 8th Edition" regarding CPU scheduling. It introduces CPU scheduling as the basis for multiprogrammed operating systems. Various scheduling algorithms are described such as first-come first-served, shortest job first, priority scheduling, and round robin. Criteria for evaluating scheduling algorithms include CPU utilization, throughput, turnaround time, waiting time, and response time. Ready queues can be partitioned into multiple levels with different scheduling policies to implement multilevel queue and feedback queue scheduling.
The document discusses different approaches for concept learning from examples, including viewing it as a search problem to find the hypothesis that best fits the training examples. It also describes the general-to-specific learning approach, where the goal is to find the maximally specific hypothesis consistent with the positive training examples by starting with the most general hypothesis and replacing constraints to better fit the examples. The document also discusses the version space and candidate elimination algorithms for obtaining the version space of all hypotheses consistent with the training data.
Process scheduling involves assigning system resources like CPU time to processes. There are three levels of scheduling - long, medium, and short term. The goals of scheduling are to minimize turnaround time, waiting time, and response time for users while maximizing throughput, CPU utilization, and fairness for the system. Common scheduling algorithms include first come first served, priority scheduling, shortest job first, round robin, and multilevel queue scheduling. Newer algorithms like fair share scheduling and lottery scheduling aim to prevent starvation.
Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.
The document discusses run-time environments and how compilers support program execution through run-time environments. It covers:
1) The compiler cooperates with the OS and system software through a run-time environment to implement language abstractions during execution.
2) The run-time environment handles storage layout/allocation, variable access, procedure linkage, parameter passing and interfacing with the OS.
3) Memory is typically divided into code, static storage, heap and stack areas, with the stack and heap growing towards opposite ends of memory dynamically during execution.
This document discusses different types of scheduling algorithms used by operating systems to determine which process or processes will run on the CPU. It describes preemptive and non-preemptive scheduling, and provides examples of common scheduling algorithms like first-come, first-served (FCFS), shortest job first (SJF), round robin, and priority-based scheduling. Formulas for calculating turnaround time and waiting time are also presented.
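The turnaround/waiting formulas mentioned above are short enough to work through. For FCFS with all processes arriving at time 0: completion time is the running sum of burst times, turnaround = completion − arrival, and waiting = turnaround − burst. The burst times below are the classic hypothetical example, not from the source:

```python
bursts = {"P1": 24, "P2": 3, "P3": 3}   # hypothetical CPU bursts (ms)

clock = 0
waiting_times = []
for name, burst in bursts.items():      # FCFS: run in arrival order
    completion = clock + burst
    turnaround = completion - 0         # all arrival times are 0 here
    waiting = turnaround - burst
    waiting_times.append(waiting)
    print(name, "turnaround:", turnaround, "waiting:", waiting)
    clock = completion

avg_wait = sum(waiting_times) / len(waiting_times)
print("average waiting time:", avg_wait)   # (0 + 24 + 27) / 3 = 17.0
```

Running the same workload shortest-job-first (P2, P3, P1) drops the average wait to (0 + 3 + 6) / 3 = 3 ms, which is why SJF is provably optimal for average waiting time when burst lengths are known.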
The document provides an overview of principles of programming languages, including:
- Reasons for studying programming language concepts such as improved ability to learn new languages.
- Categories of programming languages including imperative, functional, logic, and object-oriented languages.
- Factors that influence language design such as computer architecture and programming methodologies.
- Methods of describing syntax including Backus-Naur Form and context-free grammars. Attribute grammars add semantic information to parse trees.
- Implementation methods for languages including compilation, interpretation, and hybrid systems.
This document discusses constraint satisfaction problems (CSPs) and techniques for solving them. It begins by defining CSPs as problems with variables, domains of possible values, and constraints limiting assignments. Backtracking search and heuristics like minimum remaining values are described as standard approaches. Constraint propagation techniques like forward checking and arc consistency are explained, which aim to detect inconsistencies earlier. The 4-queens problem is provided as an example CSP.
This document discusses semaphores, which are integer variables that coordinate access to shared resources. It describes counting semaphores, which allow multiple processes to access a critical section simultaneously up to a set limit, and binary semaphores, which only permit one process at a time. Key differences are that counting semaphores can have any integer value while binary semaphores are limited to 0 or 1, and counting semaphores allow multiple slots while binary semaphores provide strict mutual exclusion. Limitations of semaphores include potential priority inversion issues and deadlocks if not used properly.
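The counting-versus-binary distinction above can be demonstrated with Python's `threading.Semaphore`. A sketch, assuming a limit of two concurrent holders (the worker logic and sleep duration are illustrative):

```python
import threading
import time

slots = threading.Semaphore(2)   # counting semaphore: up to 2 threads inside
                                 # (an initial value of 1 would make it binary)
in_region = 0
peak = 0
guard = threading.Lock()         # protects the two bookkeeping counters

def worker():
    global in_region, peak
    slots.acquire()              # wait(): decrement; block when the count hits 0
    with guard:
        in_region += 1
        peak = max(peak, in_region)
    time.sleep(0.01)             # simulate work inside the limited region
    with guard:
        in_region -= 1
    slots.release()              # signal(): increment; wake a blocked waiter

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)   # never exceeds 2, the semaphore's initial value
```

Six threads contend, but `peak` stays at or below 2: the counting semaphore enforces the slot limit rather than strict mutual exclusion.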
This document describes the steps to construct a LALR parser from a context-free grammar:
1. Create an augmented grammar by adding a new start symbol and productions.
2. Generate kernel items by introducing dots in productions and adding second components. Then take closures of the items.
3. Compute the goto function to fill the parsing table.
4. Construct the CLR parsing table from the items and gotos.
5. Shrink the CLR parser by merging equivalent states to produce the more efficient LALR parsing table with fewer states.
Types of Languages in the Theory of Computation
This document discusses different types of formal languages in the Chomsky hierarchy:
1. Recursively enumerable languages are generated by Turing machines and include type-0 languages. They are closed under union, concatenation, and Kleene star but not difference or complement.
2. Context-sensitive languages are type-1 languages generated by linear-bounded automata. They are closed under union, intersection, concatenation, and Kleene star.
3. Context-free languages are type-2 languages generated by pushdown automata, including programming language grammars. They are closed under union, concatenation, Kleene star, and reversal.
4. Regular languages are type-3 languages generated by finite automata and described by regular expressions.
This lecture provides a review of the requirements engineering process. The slides were prepared after reading the work of Ian Sommerville and Roger Pressman. The lecture is helpful for understanding users and user requirements.
The document discusses concepts related to sequence control and subprograms in programming languages. It covers conditional statements, loops, exception handling, subprogram definition and activation, and subprogram environments. Key points include implicit and explicit sequence control using statements, precedence and associativity rules for expressions, stack-based implementation of subprogram calls, and static versus dynamic scoping of identifiers through referencing environments.
This document discusses subprograms in programming languages. It covers the fundamentals of subprograms including definitions, parameters, and parameter passing methods. Key points include:
- A subprogram has a single entry point and control returns to the caller when execution terminates. Parameters can be passed by value, reference, result, or name.
- Issues around subprograms include parameter type checking, local variable scope, and parameter passing semantics and implementation. Languages support different parameter passing methods like pass-by-value or pass-by-reference.
- Parameter passing methods have tradeoffs between efficiency and aliasing. Multidimensional arrays as parameters require type information to be passed correctly in some languages.
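The value-versus-reference distinction in the points above can be illustrated even in Python, which passes object references by value: rebinding a parameter is invisible to the caller (as in pass-by-value), while mutating a shared object is visible (the aliasing characteristic of pass-by-reference). The function names are this sketch's own:

```python
def rebind(x):
    x = [99]          # binds a new list to the local name; caller unaffected

def mutate(x):
    x.append(99)      # mutates the caller's list through the shared alias

nums = [1, 2, 3]
rebind(nums)
print(nums)   # [1, 2, 3] — the rebinding never escaped the callee
mutate(nums)
print(nums)   # [1, 2, 3, 99] — the mutation is visible to the caller
```

The second call shows exactly the aliasing tradeoff the summary mentions: sharing the object avoids a copy, but lets the callee alter state the caller also sees.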
3V0-622 Objective 3.1: Logical to Physical Design, with Joe Clarke (@elgwhoppo)
The document provides an overview of the objectives for transitioning from a logical design to a physical design for a vSphere 6.x environment. It begins with an introduction by Joe Clarke and then outlines the following key points:
1. It reviews the conceptual, logical, and physical design phases to refresh understanding of the differences between each.
2. For objective 3.1, it discusses analyzing design decisions and options from the logical design to determine their impact on various factors like availability, performance, security, and cost in the physical design.
3. For objective 3.1, it also covers determining the impact of applying VMware best practices to identified risks, constraints, and assumptions in a given design.
3. For objective 3.1, it also covers determining the impact of applying VMware best practices to identified risks, constraints, and assumptions in a given design.
Este documento resume conceptos clave sobre la percepción desde la perspectiva de la psicología de la Gestalt. Explica que la percepción implica la selección y organización de la información sensorial según la experiencia previa de una persona. También describe que la percepción depende del contexto y puede variar entre individuos. Finalmente, resume que la teoría de la Gestalt estudia las estructuras psicológicas como totalidades organizadas y significativas, dando importancia a cómo se forman las percepciones.
El documento describe los beneficios de los huertos escolares agroecológicos para la sostenibilidad y soberanía alimentaria. Propone educar a los niños sobre la producción de alimentos, el reciclaje y el respeto por la naturaleza para formar ciudadanos saludables y autosuficientes. También aboga por la soberanía alimentaria, el derecho a producir y decidir qué se consume, y promueve la agricultura ecológica y los huertos comunitarios como modelos educativos.
Managers control hospital costs through various budgeting processes including fixed, flexible, operating, and strategic budgets. Fixed budgets do not change with volume while flexible budgets change based on actual activity levels. Operating budgets project annual expenses and strategic budgets focus on long-term trends. Organizations deal with changes in the short, intermediate, and long-run through measures like adjusting staffing levels. Hospital costs vary due to differences in services, cost-shifting, patient illness, and production efficiency. Regulatory approaches to controlling costs include certificate of need laws, utilization review, professional standards review organizations, and administered pricing systems like DRGs and PPS.
Este documento resume la evidencia a favor del uso de la polipíldora en la prevención secundaria de enfermedades cardiovasculares. La polipíldora combina múltiples fármacos en una sola pastilla para mejorar la adherencia al tratamiento. Estudios muestran que la polipíldora incrementa la adherencia y reduce eventos cardiovasculares. Actualmente, la polipíldora Trinomia es la única aprobada en Europa para prevención secundaria cardiovascular.
Cómo encontrar el elemento en la educación renovadoIvonne Rodriguez
Este documento discute cómo mejorar la educación en México alejándose del enfoque actual. Argumenta que no todos los estudiantes se desempeñan bien bajo los estándares actuales y que las habilidades de cada persona deben valorarse. También sugiere que la educación básica es más importante que la superior y que los maestros deben ser guías para los estudiantes en lugar de simplemente enseñar el plan de estudios.
This presentation discusses the key differences between marketing and selling. Marketing is defined as a process of transferring a product or service to a buyer at a competitive price in order to satisfy their needs. It focuses on understanding customer needs and converting them into products. Selling, on the other hand, is a process of transferring a product regardless of customer needs and focuses on earning profits. The main differences are that marketing starts with understanding customers while selling starts with the product, marketing aims for customer satisfaction and selling aims for sales, and marketing involves additional activities like research while selling is just one part of marketing.
This document summarizes the key topics covered in an entrepreneurship training program. Session 3 focuses on turning passions into business opportunities. It provides examples of how Mark Zuckerberg turned his passion for connecting people into Facebook. It then lists 7 tips for transforming a passion into a business reality, such as defining the sellable element and understanding customers. The presentation emphasizes creating a business plan that addresses problems, proposed solutions, business models, marketing strategies, competition, team members, and financial projections.
Este documento contiene 20 preguntas de matemáticas para una prueba SIMCE de 8o año básico. Las preguntas cubren una variedad de temas matemáticos como porcentajes, proporciones, álgebra, geometría y más. Algunas preguntas requieren que los estudiantes calculen valores numéricos, mientras que otras piden explicaciones o fundamentaciones de las respuestas.
El documento habla sobre los accesos vasculares para hemodiálisis. Explica que existen dos tipos principales: las fistulas arteriovenosas y los catéteres venosos centrales. Las fistulas arteriovenosas son el acceso ideal porque permiten un flujo suficiente para la hemodiálisis y tienen pocas complicaciones. Sin embargo, a veces se requieren catéteres venosos centrales temporales o permanentes. Tanto los accesos como los catéteres pueden presentar complicaciones como trombosis, infección, síndrome del
Los árabes comparten una identidad étnica definida por su lengua, el árabe, más que por su religión. Actualmente la mayoría son musulmanes pero existen minorías cristianas y judías. Dentro del islam árabe existen varias ramas como los suníes y chiíes. Los árabes cristianos suelen seguir iglesias orientales como la coptas o maronitas.
The document discusses the impact of digital platforms on the sharing economy, using Airbnb as a case study. It makes three key points:
1) Airbnb has grown rapidly due to technological innovations that allow individuals to share unused resources through an online platform. This platform model reduces transaction costs and builds trust between strangers.
2) As a digital platform, Airbnb utilizes a modular system that facilitates product innovation and achieves economies of scale. It also executes control over hosts and customers through mechanisms like reviews, commissions, and branding.
3) The emergence of sharing platforms like Airbnb is driven by consumers' desire for new economic and experiential options beyond traditional hotels. This competition has led
Que es un accidente de trabajo?
Cuál es la diferencia entre causas básicas y causas inmediatas?
Cuáles son los factores de riesgo o peligros laborales generadores de accidentes de trabajo?
Cuál es la diferencia entre acto inseguro y condición insegura?
Cuál es la diferencia entre factor personal y factor del trabajo?
Opening credits at the beginning of films list the director, producers, editor, soundtrack composer, and main actors and actresses involved in making the film. While there is no single main reason for opening credits, they serve to recognize the status of those involved by displaying their names, and get their names recognized as most viewers have stopped paying attention by the end of the closing credits.
El documento presenta un proyecto sobre el ciclo del agua dividido en tres temas, actividades de aprendizaje y una evaluación. Incluye preguntas sobre la longevidad de perros y osos como ejemplos de actividades. El objetivo es educar sobre el ciclo del agua a través de diferentes secciones de contenido e interacción con ejercicios.
El documento resume algunos hitos clave del siglo XXI en Colombia. Habla sobre los avances tecnológicos como el reconocimiento a un científico colombiano por NASA. También cubre eventos sociopolíticos como los gobiernos de Álvaro Uribe y Juan Manuel Santos, el proceso de paz con las FARC y el ELN, y el rescate de rehenes de las FARC incluyendo a Ingrid Betancourt. Finalmente, aborda temas culturales como la beatificación y canonización de figuras religiosas colombianas.
2. Last week we covered…
• Fundamentals of Subprograms
• Design Issues for Subprograms
• Local Referencing Environments
• Parameter-Passing Methods
• Parameters That Are Subprograms
• Calling Subprograms Indirectly
– Shallow and deep binding
• Overloaded Subprograms
• Generic Subprograms
3. Now we look at subprograms from an
implementation point of view…
• The General Semantics of Calls and Returns
• Implementing “Simple” Subprograms
• Implementing Subprograms with Stack-Dynamic Local Variables
• Nested Subprograms
• Blocks
• Implementing Dynamic Scoping
4. The General Semantics of Calls and
Returns
CALL and RETURN operations are together called subprogram linkage.
Actions associated with CALL
• Implementation of the parameter-passing method.
• (If local variables are not static): allocation of storage to the locals.
• Saving execution status of the calling program unit (registers, CPU bits, EP).
• Ensuring the transfer of control to the subprogram.
• Ensuring the subprogram can return properly.
• (If the language supports nested subprograms): providing access to nonlocal variables
that are visible to the called subprogram.
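The CALL actions above can be mimicked with a toy simulation (not part of the slides) that keeps explicit activation records on a Python list used as a run-time stack; the names `call_stack`, `square`, and the record fields are all illustrative:

```python
# Toy simulation of the CALL/RETURN actions listed above; all names
# here (call_stack, square, the record fields) are illustrative.

call_stack = []  # the run-time stack of activation record instances

def call(func, arg):
    """Simulate subprogram linkage: build an activation record,
    transfer control to the callee, reclaim the record on return."""
    ar = {
        "saved_status": len(call_stack),  # stand-in for caller status
        "param": arg,                     # parameter passed at CALL
        "locals": {},                     # storage for the locals
    }
    call_stack.append(ar)   # allocate the record
    result = func(ar)       # transfer of control to the subprogram
    call_stack.pop()        # deallocate at RETURN
    return result

def square(ar):
    ar["locals"]["result"] = ar["param"] ** 2
    return ar["locals"]["result"]

print(call(square, 5))  # -> 25
```

Note that popping the record in `call` mirrors the RETURN-time deallocation described on the next slide: the record exists only for the duration of one activation.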
5. Actions associated with RETURN
• (If the subprogram has out-mode or in-out mode
parameters implemented by copy): move the local values
of the associated formal parameters to the actual
parameters.
• Deallocate the storage used for local variables.
• Restore the execution status of the calling program unit.
• Return control to the calling program unit.
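The first RETURN action, copy-back for in-out parameters implemented by copy (value-result), can be sketched as follows; this is only an illustration, with the dict `env` standing in for the caller's memory and `add_ten` a made-up subprogram:

```python
# Sketch of in-out parameters implemented by copy (value-result):
# copy the actual in at CALL, copy the formal's final value back at
# RETURN. The dict `env` stands in for the caller's memory; add_ten
# is a made-up subprogram.

def add_ten(env, name):
    formal = env[name]   # copy in: local cell bound to the formal
    formal += 10         # the body works only on the local copy
    env[name] = formal   # at RETURN: move the local value back to the actual

env = {"x": 5}
add_ten(env, "x")
print(env["x"])  # -> 15
```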
6. Implementing “simple” subprograms
• A “simple” subprogram is one in which no subprograms can
be nested, and all local variables are static.
CALL in a simple subprogram
– Save the execution status of the current program unit.
– Compute and pass the parameters.
– Pass the return address to the callee.
– Transfer control to the callee.
red: done by the caller
blue: done by the callee
7. RETURN in a simple subprogram
• If there are pass-by-value-result or out-mode parameters, the current
values of those parameters are moved to or made available to the
corresponding actual parameters.
• If the subprogram is a function (operator), the functional value is moved
to a place accessible to the caller.
• The execution status of the caller is restored.
• Control is transferred back to the caller.
red: done by the caller
blue: done by the callee
8. • The actions the callee performs at the beginning of its
execution are called the prologue; those it performs at the end of its
execution are called the epilogue.
• A simple subprogram needs only an epilogue.
• CALL and RETURN require storage for the following:
– Status information about the caller
– Parameters
– Return address
– Return value for functions
– Temporaries used by the code of the subprograms
9. • The format, or layout, of the noncode part of a subprogram is
called an activation record, because the data it describes is
relevant only during the activation, or execution, of the subprogram.
• An activation record instance is a concrete example of an
activation record: a collection of data in the form of an
activation record.
• Languages with simple subprograms do not support recursion,
so each subprogram needs only a single instance of its activation record.
• A possible layout of the activation record for a simple subprogram
(the caller's status information is omitted):
10. A program consists of…
• A main program and three
subprograms: A, B, and C.
• At the time of compilation the
machine code for all units
along with a list of references
to external subprograms, is
written to a file.
• The executable program is put
together by the linker, which is
part of the operating system.
The code and activation records of a
program with simple subprograms
11. Tasks of a linker…
• When the linker is called for a main
program, its first task is to find the files
that contain the translated
subprograms referenced in that
program and load them into memory .
• Then, the linker must set the target
addresses of all calls to those
subprograms in the main program to
the entry addresses of those
subprograms.
• The same must be done for all calls to
subprograms in the loaded
subprograms and all calls to library
subprograms.
The code and activation records of a
program with simple subprograms
12. Implementing Subprograms with
Stack-Dynamic Local Variables
We need to look at the following:
(I) More complex activation records
(II) An example without recursion
(III) Recursion
13. (I) More complex activation records
Reasons for complication
– The compiler must generate code to cause the implicit allocation and
deallocation of local variables.
– Recursion adds the possibility of multiple simultaneous activations of a
subprogram, each of which requires its own instance of the activation
record.
– In languages with stack-dynamic local variables, activation
record instances must be created dynamically.
Because the return address, dynamic
link, and parameters are placed in the
activation record instance by the caller,
these entries must appear first.
The dynamic link is a pointer to the base of the activation record instance of the caller.
14. Components of the activation record
• In static-scoped languages, the dynamic link is used
to provide traceback information when a run-time
error occurs.
• In dynamic-scoped languages, the dynamic link is
used to access nonlocal variables.
• The actual parameters in the activation record are
the values or addresses provided by the caller.
• Local scalar variables are bound to storage within
an activation record instance.
• Local variables that are structures are sometimes
allocated elsewhere, and only their descriptors and
a pointer to that storage are part of the activation
record.
• Local variables are allocated and possibly initialized
in the called subprogram, so they appear last.
16. The mechanism
• Instances of activation record are created on a stack provided by the run-
time system, called the run-time stack.
• Every subprogram activation, recursive or nonrecursive, creates an instance
of the activation record on the stack.
• One more thing required to control the execution of a subprogram is the
EP: the environment pointer.
– Initially, the EP points at the base, or first address of the activation record
instance of the main program.
– The run-time system must ensure that it always points to the base of the
activation record instance of the currently executing program unit.
– When a subprogram is called, the current EP is saved in the new activation
record instance as the dynamic link.
– The EP is then set to point at the base of the new activation record
instance.
– Upon return from the subprogram, the stack top is set to the value of the
current EP minus one and the EP is set to the dynamic link from the activation
record instance of the subprogram that has completed its execution.
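The mechanism above can be sketched as a toy model. The sketch below is Python (the deck's own examples use C and Ada; Python is used here only for brevity), and the names Frame, call, and ret are invented for illustration:

```python
class Frame:
    """One activation record instance: dynamic link plus local storage."""
    def __init__(self, name, dynamic_link, n_locals):
        self.name = name
        self.dynamic_link = dynamic_link   # the saved EP of the caller
        self.locals = [None] * n_locals

stack = []   # the run-time stack of activation record instances
EP = -1      # environment pointer: index of the current frame's base

def call(name, n_locals):
    """CALL: save the current EP in the new instance as the dynamic link,
    then point the EP at the base of that instance."""
    global EP
    stack.append(Frame(name, EP, n_locals))
    EP = len(stack) - 1

def ret():
    """RETURN: cut the stack back past the completing instance and
    restore the EP from its dynamic link."""
    global EP
    EP = stack[EP].dynamic_link   # read the dynamic link first
    del stack[-1]                 # then pop the completed instance

call("main", 2)
call("A", 1)
call("B", 3)
assert stack[EP].name == "B"
ret()
assert stack[EP].name == "A"      # control is back in A
ret()
assert stack[EP].name == "main"
```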
17. Summary of the linkage actions
ACTIONS BY THE CALLER
1. Create an activation record
instance.
2. Save the execution status of
the current program unit.
3. Compute and pass the
parameters.
4. Pass the return address to
the callee.
5. Transfer control to the callee.
ACTIONS BY THE CALLEE
Prologue
1. Save the old EP in the stack as the dynamic link and set the EP to
the base of the new activation record instance.
2. Allocate local variables.
Epilogue
1. If there are pass-by-value-result or out-mode
parameters, the current values of those
parameters are moved to the corresponding
actual parameters.
2. If the subprogram is a function, the functional
value is moved to a place accessible to the caller.
3. Restore the stack pointer by setting it to the
value of the current EP minus one and set the EP
to the old dynamic link.
4. Restore the execution status of the caller.
5. Transfer control back to the caller.
18. (II) An example without recursion
Skeletal C program Run-time stack at points 1, 2, 3
19. • The collection of dynamic links present in the stack at a
given time is called the dynamic chain, or call chain.
• References to local variables can be represented in the
code as offsets from the beginning of the activation
record of the local scope, whose address is stored in
the EP. Such an offset is called a local offset.
– The local_offset of a variable in an activation record can be
determined at compile time, using the order, types, and
sizes of variables declared in the subprogram associated
with the activation record.
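The compile-time computation of local_offsets can be sketched as follows; the type sizes and the 12-byte fixed part (return address, dynamic link, parameters) are assumptions for illustration, not a real compiler's layout:

```python
# Illustrative type sizes; a real compiler would also handle alignment.
SIZE = {"int": 4, "float": 4, "double": 8, "char": 1}

def local_offsets(declarations, fixed_part=12):
    """Return {name: offset from the EP}, assigned in declaration order.
    fixed_part reserves assumed space for the return address, dynamic
    link, and parameters that precede the locals."""
    offsets, next_free = {}, fixed_part
    for name, typ in declarations:
        offsets[name] = next_free
        next_free += SIZE[typ]
    return offsets

offs = local_offsets([("sum", "int"), ("x", "double"), ("c", "char")])
assert offs == {"sum": 12, "x": 16, "c": 24}
```

Because only the order, types, and sizes of the declarations are needed, every offset is known before the program runs.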
20. (III) Recursion
Skeletal C program, and the
activation record format
Run-time stack at point 1 (for the three
times it occurs during factorial(3))
Notice the extra entry for function return value.
21. Skeletal C program, and the
activation record format
Run-time stack at points 2, 3 (during
factorial(3))
Notice the extra entry for function return value.
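A minimal way to see the multiple simultaneous instances: instrument a Python factorial to count how many activations are alive at once (the counter names here are invented for illustration):

```python
active, max_active = 0, 0   # instrumentation, not part of factorial itself

def factorial(n):
    global active, max_active
    active += 1                           # an instance is pushed at the call
    max_active = max(max_active, active)
    result = 1 if n <= 1 else n * factorial(n - 1)
    active -= 1                           # the instance is popped on return
    return result

assert factorial(3) == 6
assert max_active == 3   # three instances of factorial's record coexisted
```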
22. Implementing Subprograms
• The General Semantics of Calls and Returns
• Implementing “Simple” Subprograms
• Implementing Subprograms with Stack-
Dynamic Local Variables
• Nested Subprograms
• Blocks
• Implementing Dynamic Scoping
23. Nested Subprograms
Some of the non–C-based static-scoped programming
languages use stack-dynamic local variables and
allow subprograms to be nested. Among these are:
Fortran 95+, Ada, Python, JavaScript, Ruby.
The major challenge in implementing nested
subprograms is to provide access to non-local
variables.
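Since Python is among the languages listed, a small example of a nested subprogram reading and updating a nonlocal variable declared in its static ancestor:

```python
def outer():
    x = 10                 # local to outer, nonlocal to inner
    def inner():
        nonlocal x         # resolve x in the enclosing (static) scope
        x += 1
        return x
    inner()                # x becomes 11
    return inner()         # x becomes 12

assert outer() == 12
```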
25. Access to non-local variables
Two steps are involved:
1. Find the instance of the activation record in the stack in
which the variable was allocated.
– In a given subprogram, only variables that are declared in static
ancestor scopes are visible and can be accessed.
– A subprogram is callable only when all of its static ancestor
subprograms are active.
– The correct declaration for a referenced variable is the first one found
when looking through the enclosing scopes, most closely nested first.
– A technique is required to implement these rules of static scoping.
2. Use the local offset of the variable (within the activation
record instance) to access it.
26. Static Links
The most common way to implement static scoping in languages that
allow nested subprograms is static chaining.
– A new pointer, called a static link, is added to the activation record.
– The static link points to the bottom of the activation record instance of
an activation of the static parent.
– Typically, the static link appears in the activation record below the
parameters.
– Instead of having two activation record elements before the
parameters, there are now three: the return address, the static link,
and the dynamic link.
27. Static Chains
A static chain is a chain of static links that connect certain
activation record instances in the stack.
During the execution of a subprogram P, the static link of
its activation record instance points to an activation
record instance of P’s static parent program unit. That
instance’s static link points in turn to P’s static
grandparent program unit’s activation record instance,
if there is one. This chain of activation records can be
explored to access the entire referencing environment
of P – under static scoping.
28. Finding the correct activation record
Method I
– When a reference is made to a nonlocal variable, the
activation record instance containing the variable can be
found by searching the static chain until a static ancestor
activation record instance is found that contains the
variable.
– The drawback in this method is the need to search each
activation record along the chain for the referenced
variable.
29. Method II
– Observation: Because the nesting of scopes is known at compile time, the compiler can
determine not only that a reference is nonlocal but also the length of the static chain that
must be followed to reach the activation record instance that contains the nonlocal object.
– Each subprogram has an integer associated with it, representing the depth of
its nesting below the top-level program unit. This integer is called its
static_depth.
– The length of the static chain needed to reach the correct activation record instance for a
nonlocal reference to a variable X is exactly the difference between the static_depth of the
subprogram containing the reference to X and the static_depth of the subprogram containing
the declaration for X. This difference is called the nesting_depth, or chain_offset, of the
reference.
– The actual reference can be represented by an ordered pair of integers:
(chain_offset, local_offset), where chain_offset is the number of links to the
correct activation record instance, and local_offset is the offset of the
variable from the EP of the activation record instance declaring it.
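The chain_offset computation follows directly from the definition. The nesting table below (main containing f1, f1 containing f2, f2 containing f3) is an assumed example for illustration:

```python
# Assumed nesting: f3 inside f2, inside f1, inside the global scope.
static_depth = {"main": 0, "f1": 1, "f2": 2, "f3": 3}

def chain_offset(referencing, declaring):
    """Number of static links to follow, known at compile time."""
    return static_depth[referencing] - static_depth[declaring]

# f3 referencing a variable declared in f1: follow two static links.
assert chain_offset("f3", "f1") == 2
# A local reference has chain_offset 0: no link is followed.
assert chain_offset("f2", "f2") == 0
```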
30. Example from Python
What are the static_depth values for global
scope, f1, f2 and f3?
If procedure f3 references a variable declared in
f1, the chain_offset of that reference would be?
31. Working example from Ada
The sequence of procedure calls is:
1) Main_2 calls BigSub
2) BigSub calls Sub2
3) Sub2 calls Sub3
4) Sub3 calls Sub1
What are the
(chain_offset, local_offset)
values at positions 1, 2 and 3?
Stack at position 1
32. How is the static chain maintained?
• The static chain must be updated for each
program call and return.
• The return part is simple:
– When the subprogram terminates, its activation
record instance is removed from the stack.
– After this removal, the new top activation record
instance is that of the unit that called the subprogram
whose execution just terminated.
– Because the static chain from this activation record
instance was never changed, it works correctly just as
it did before the call to the other subprogram.
– We are done.
33. • Observation: The most recent activation record instance of
the parent scope must be found at the time of the call.
• Method I: Look at activation record instances on the
dynamic chain, until the first one of the parent scope is
found. Problems?
– Search time can be too much.
• Method II: This search can be avoided by treating
subprogram declarations and references exactly like
variable declarations and references.
– When the compiler encounters a subprogram call, among other
things, it determines the subprogram that declared the called
subprogram, which must also be a static ancestor of the calling
routine.
34. Method II
• Consider the situation where Q is called by P.
– In this situation, if S is the subprogram that declared Q, then S must
also be a static ancestor of P. Why?
• What does the compiler do when it encounters the call to Q inside
P?
– It computes the nesting_depth, n, of P w.r.t. S.
– This information is stored and can be accessed by Q, during execution.
– The static link of Q is determined by moving down the static chain of P,
n times.
– This ensures that the most recent activation record instance of S is
found at the time of the call to Q.
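This static-link computation can be sketched as follows, using the call pattern of the Ada example (Frame and static_link_for_callee are invented names; the nesting assumed is Sub3 inside Sub2 inside BigSub):

```python
class Frame:
    """A minimal activation record instance: unit name and static link."""
    def __init__(self, unit, static_link):
        self.unit = unit
        self.static_link = static_link

def static_link_for_callee(caller_frame, n):
    """Follow the caller's static chain n times (n = nesting_depth of the
    caller w.r.t. the scope that declared the callee). The frame reached
    is the most recent instance of that declaring scope."""
    frame = caller_frame
    for _ in range(n):
        frame = frame.static_link
    return frame

# Sub3 (depth 3) calls Sub1, declared in BigSub (depth 1): n = 3 - 1 = 2,
# so two links down Sub3's static chain find Sub1's static link.
big = Frame("BigSub", None)
sub2 = Frame("Sub2", big)
sub3 = Frame("Sub3", sub2)
assert static_link_for_callee(sub3, 2).unit == "BigSub"
```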
35. The Ada Example, again
Q = Sub1, P = Sub3, S = BigSub
Going 2 links down the static chain from Sub3's activation record
instance finds the static link for Sub1 – this is the link being
computed.
36. Criticisms of Static Chaining
• What happens when functions as parameters are allowed?
• A criticism of using the static chain to access nonlocal variables is that
references to variables in scopes beyond the static parent cost more than
references to locals: the static chain must be followed, one link per
enclosing scope, from the reference to the declaration.
– Fortunately, in practice, references to distant nonlocal variables are rare
• It is difficult for a programmer working on a time-critical program to
estimate the costs of nonlocal references, because the cost of each
reference depends on the depth of nesting between the reference and the
scope of declaration.
– Code modifications may change nesting depths, thereby changing the timing
of some references, both in the changed code and possibly in code far from
the changes.
• However, none of the alternatives suggested for static chaining has been
found to be superior to the static-chain method, which is still the most
widely used approach.
37. Blocks
A block is specified in the C-based languages as
a compound statement that begins with one
or more data definitions. The lifetime of the
variable temp in the following block begins
when control enters the block and ends when
control exits the block.
38. How are blocks implemented?
• Blocks can be implemented by using the static-chain process described
previously.
• Blocks are treated as parameter-less subprograms that are always called
from the same place in the program.
• Method I (for implementing blocks)
– Every block has an activation record. An instance of its activation record is
created every time the block is executed.
• Method II (for implementing blocks)
– The maximum amount of storage required for block variables at any time
during the execution of a program can be statically determined.
– This amount of space can be allocated after the local variables in the
activation record of the function in which the block resides, so no new
activation record is created for a block.
– Offsets for all block variables can be statically computed, so block variables
can be addressed exactly as if they were local variables of the function.
39. Example from C – while blocks in main
f and g occupy
the same space
on the stack as
a and b.
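Method II's storage computation can be sketched as a recursive maximum over the block nesting tree; disjoint blocks are never live together, so they reuse space. The sizes below assume 4-byte ints and are illustrative:

```python
def block_storage(block):
    """block = (own_size_in_bytes, [nested blocks]); returns the space
    needed for this block plus the deepest nesting path beneath it.
    Siblings take the max, not the sum, because they overlay."""
    own, children = block
    return own + max((block_storage(c) for c in children), default=0)

# Two sibling blocks in main, {int a, b;} and {int f, g;}: they are
# never active at the same time, so 8 bytes suffice, and f and g occupy
# the same stack space as a and b.
main_blocks = (0, [(8, []), (8, [])])
assert block_storage(main_blocks) == 8
```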
40. Implementing Dynamic Scoping
• There are two distinct ways in which local
variables and nonlocal references to them can
be implemented in a dynamic-scoped
language:
– Deep access
– Shallow access.
41. Deep Access
• References to nonlocal variables are resolved by
searching through the activation record instances of
the other subprograms that are currently active,
beginning with the one most recently activated.
• In the case of dynamic scoping, the dynamic links –
rather than the static links – are followed.
• The dynamic chain links all subprogram activation
record instances in the reverse of the order in which
they were activated.
• This method is called deep access, because access may
require searches deep into the stack.
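A sketch of deep access, with activation record instances modeled as name-to-value maps (note the records must store variable names for the search to work):

```python
def deep_access(dynamic_chain, name):
    """Resolve a name by searching the dynamic chain, most recently
    activated record first."""
    for record in reversed(dynamic_chain):
        if name in record:
            return record[name]
    raise NameError(name)

chain = [{"x": 1, "y": 2},   # main
         {"y": 20},          # sub1, called by main
         {"z": 300}]         # sub2, called by sub1 (currently executing)

assert deep_access(chain, "z") == 300   # found in the newest record
assert deep_access(chain, "y") == 20    # sub1's y hides main's y
assert deep_access(chain, "x") == 1     # the search goes deep, to main
```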
43. Differences between Deep Access and
Static Chaining
• In a dynamic-scoped language, there is no way to
determine at compile time the length of the chain that
must be searched.
– Every activation record instance in the chain must be searched
until the first instance of the variable is found.
– This is one reason why dynamic-scoped languages typically have
slower execution speeds than static-scoped languages.
• In dynamic scoped languages, activation records must store
the names of variables for the search process, whereas in
static-scoped language implementations only the values are
required.
– Names are not required for static scoping, because all variables
are represented by the (chain_offset, local_offset) pairs.
44. Shallow Access
• In the shallow-access method, variables declared in subprograms are not stored in the activation
records of those subprograms.
• Because with dynamic scoping there is at most one visible version of a variable of any specific name
at a given time, a very different approach can be taken.
Method I
• One variation of shallow access is to have a separate stack for each variable name in a complete
program.
– Every time a new variable with a particular name is created by a declaration at the beginning of a
subprogram that has been called, the variable is given a cell at the top of the stack for its name.
– Every reference to the name is to the variable on top of the stack associated with that name, because the
top one is the most recently created.
– When a subprogram terminates, the lifetimes of its local variables end, and the stacks for those variable
names are popped.
– This method allows fast references to variables, but maintaining the stacks at the entrances and exits of
subprograms is costly.
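Method I can be sketched with a dictionary of per-name stacks; enter, leave, and lookup are invented names standing for the subprogram-entry, subprogram-exit, and reference actions:

```python
from collections import defaultdict

stacks = defaultdict(list)   # one stack per variable name

def enter(subprogram_locals):
    """On entry, push a cell for each local onto its name's stack."""
    for name, value in subprogram_locals.items():
        stacks[name].append(value)

def leave(subprogram_locals):
    """On exit, the locals' lifetimes end: pop their cells."""
    for name in subprogram_locals:
        stacks[name].pop()

def lookup(name):
    """A reference always means the top cell: the most recent binding."""
    return stacks[name][-1]

enter({"x": 1})            # main declares x
enter({"x": 10, "y": 2})   # sub declares its own x and y
assert lookup("x") == 10   # sub's x hides main's x
leave({"x": 10, "y": 2})
assert lookup("x") == 1    # main's x is visible again
```

References are one indexing operation, but every call and return must walk all the callee's locals, which is the linkage cost noted above.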
46. Method II
• Another option for implementing shallow access is to use a
central table that has a location for each different variable
name in a program.
• Along with each entry, a bit called active is maintained that
indicates whether the name has a current binding or
variable association.
• Any access to any variable can then be to an offset into the
central table.
• The offset is static, so the access can be fast.
Reading Assignment: Figure out how the SNOBOL language
uses the central table approach.
47. Maintenance of the central table
– A subprogram call requires that all of its local variables be logically placed in
the central table.
– If the position of the new variable in the central table is already active, that
value must be saved somewhere during the lifetime of the new variable.
• One variation is to have a “hidden” stack on which all saved objects are stored. Because
subprogram calls and returns, and thus the lifetimes of local variables, are nested, this
works well.
• In another variation, replaced variables are stored in the activation record of the
subprogram that created the replacement variable.
– Whenever a variable begins its lifetime, the active bit in its central table
position must be set.
How does SNOBOL store values when they are temporarily replaced?
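The central-table approach, with an active bit per name and the "hidden" stack variation for displaced values, can be sketched as follows (all names are invented; this is not SNOBOL's actual implementation):

```python
# One slot per distinct name in the program; offsets into this table
# would be static in a real implementation.
table = {"x": {"active": False, "value": None},
         "y": {"active": False, "value": None}}
hidden = []   # saved (name, value, active) entries; nests like calls do

def enter(subprogram_locals):
    """On a call, place each local in the table, saving any displaced
    binding on the hidden stack and setting the active bit."""
    for name, value in subprogram_locals.items():
        slot = table[name]
        hidden.append((name, slot["value"], slot["active"]))
        slot["active"], slot["value"] = True, value

def leave(subprogram_locals):
    """On return, restore the displaced bindings in reverse order."""
    for _ in subprogram_locals:
        name, value, active = hidden.pop()
        table[name]["value"], table[name]["active"] = value, active

enter({"x": 1})
enter({"x": 99})            # a new x displaces the old one
assert table["x"]["value"] == 99
leave({"x": 99})
assert table["x"]["value"] == 1 and table["x"]["active"]
```

Because calls and returns nest, the hidden stack always restores the right binding.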
48. Deep, or Shallow access?
• The choice between shallow and deep access to
nonlocal variables depends on the relative
frequencies of subprogram calls and nonlocal
references.
– The deep-access method provides fast subprogram
linkage, but references to nonlocals are costly.
– The shallow-access method provides much faster
references to nonlocals, especially distant nonlocals,
but is more costly in terms of subprogram linkage.
49. We examined the following issues in
Implementing Subprograms
• The General Semantics of Calls and Returns
• Implementing “Simple” Subprograms
• Implementing Subprograms with Stack-
Dynamic Local Variables
• Nested Subprograms
• Blocks
• Implementing Dynamic Scoping
50. Starting next class:
Support for Object Oriented
Programming
• Introduction
• Object-Oriented Programming
• Design Issues for Object-Oriented Languages
• Support for Object-Oriented Programming in Smalltalk
• Support for Object-Oriented Programming in C++
• Support for Object-Oriented Programming in Objective-C
• Support for Object-Oriented Programming in Java
• Support for Object-Oriented Programming in C#
• Support for Object-Oriented Programming in Ada 95
• Support for Object-Oriented Programming in Ruby
• Implementation of Object-Oriented Constructs