This document discusses type-dependent name resolution in programming languages. It notes that sometimes type information is needed before name resolution can be performed, such as when resolving names in records where the record fields depend on the types. It gives an example where a program defines two records A and B, with B containing a field of type A, and names must be resolved through the record types. The document suggests that name resolution and type checking/inference can often be done in either order for languages, but type information is sometimes necessary for resolving some names.
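To make this concrete, here is a minimal Python sketch of the situation described: resolving a field access like `b.a.x` requires knowing the record type of `b` before the name `a` can be looked up. The record and variable names are invented for illustration.

```python
# Invented example: record A { x: int }, record B { a: A }, var b: B.
record_types = {
    "A": {"x": "int"},
    "B": {"a": "A"},
}
variables = {"b": "B"}

def resolve_field_chain(var, fields):
    """Resolve a chain like b.a.x by alternating type lookup and
    field lookup: each field name is resolved in the record type
    computed for the expression to its left."""
    ty = variables[var]
    for field in fields:
        ty = record_types[ty][field]   # the type is needed before the name resolves
    return ty
```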
The document describes static name resolution in programming languages. It discusses how names are bound to declarations through lexical scoping and how references are resolved to declarations by following paths through a scope graph representation. It presents the concepts of scopes, declarations, references, resolution paths, imports, and parent scopes. It also discusses how name resolution can be formalized using a calculus based on scope graphs, separation reachability and visibility, and how this supports name resolution, disambiguation, and program transformations.
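A minimal Python sketch of this resolution process (only declarations, references, and parent edges; imports and visibility policies are omitted, and the scope contents are invented):

```python
class Scope:
    def __init__(self, parent=None):
        self.parent = parent   # lexical parent edge, written P below
        self.decls = {}        # name -> declaration

def resolve(scope, name):
    """Resolve a reference: search the local scope, then follow parent
    edges outward. Returns the declaration and the resolution path."""
    path = []
    while scope is not None:
        if name in scope.decls:
            return scope.decls[name], path
        path.append("P")       # record one parent step in the path
        scope = scope.parent
    raise NameError(name)

outer = Scope()
outer.decls["x"] = "x@outer"
outer.decls["z"] = "z@outer"
inner = Scope(parent=outer)
inner.decls["x"] = "x@inner"   # shadows x@outer: the shorter path wins
```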
This document provides an outline and overview of dynamic semantics and operational semantics. It discusses defining the meaning of programs through execution and transition systems. It introduces DynSem, a domain-specific language for specifying dynamic semantics in a modular way. DynSem specifications generate interpreters from language definitions. The document uses examples from arithmetic expressions and a language with boxes to illustrate DynSem specifications.
Slides for invited talk at Dynamic Languages Symposium (DLS'15) at SPLASH 2015 in Pittsburgh
http://2015.splashcon.org/event/dls2015-papers-declare-your-language
In the Language Designer’s Workbench project we are extending the Spoofax Language Workbench with meta-languages to declaratively specify the syntax, name binding rules, type rules, and operational semantics of a programming language design such that a variety of artifacts including parsers, static analyzers, interpreters, and IDE editor services can be derived and properties can be verified automatically. In this presentation I will talk about declarative specification for two aspects of language design: syntax and name binding.
First, I discuss the idea of declarative syntax definition as supported by grammar formalisms based on generalized parsing, using the SDF3 syntax definition formalism as an example. With SDF3, the language designer defines syntax in terms of productions and declarative disambiguation rules. This requires understanding a language in terms of its (tree) structure instead of the operational implementation of parsers. As a result, syntax definitions can be used for a range of language processors, including parsers, formatters, syntax coloring, outline views, and syntactic completion.
Second, I discuss our recent work on the declarative specification of name binding rules, which takes inspiration from declarative syntax definition. The NaBL name binding language supports the definition of name binding rules in terms of its fundamental concepts: declarations, references, scopes, and imports. I will present the theory of name resolution that we have recently developed to provide a semantics for name binding languages such as NaBL.
Compiler Components and their Generators - Lexical Analysis (Guido Wachsmuth)
The document discusses lexical analysis in compiler construction, including an overview of the topics covered such as regular languages represented as regular grammars, regular expressions, and finite state automata. It also discusses the equivalence between these formalisms and techniques for constructing tools for lexical analysis.
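A common way to make these formalisms concrete is a small regular-expression tokenizer, sketched here in Python with invented token classes. Note that Python's alternation picks the first matching alternative rather than the globally longest match, so rule order matters.

```python
import re

# Token classes as (name, pattern) pairs; earlier entries win on ties.
TOKENS = [
    ("NUM",  r"\d+"),
    ("ID",   r"[A-Za-z_]\w*"),
    ("OP",   r"[+\-*/=]"),
    ("SKIP", r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKENS))

def tokenize(text):
    """Group characters into tokens; whitespace is skipped, and
    characters matched by no rule are silently passed over."""
    out = []
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            out.append((m.lastgroup, m.group()))
    return out
```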
Introduction - Imperative and Object-Oriented Languages (Guido Wachsmuth)
This document provides an overview of imperative and object-oriented languages. It discusses the properties of imperative languages like state, statements, control flow, procedures and types. It then covers object-oriented concepts like objects, messages, classes, inheritance and polymorphism. Examples are given in various languages like C, Java bytecode, x86 assembly to illustrate concepts like variables, expressions, functions and object-oriented features. Finally, it provides an outlook on upcoming lectures covering declarative language definition.
This document discusses term rewriting and provides examples of how rewrite rules can be used to transform terms. Key points include:
- Rewrite rules define pattern matching and substitution to transform terms from a left-hand side to a right-hand side.
- Examples show desugaring language constructs like if-then statements, constant folding arithmetic expressions, and mapping/zipping lists with strategies as parameters to rules.
- Terms can represent programming language syntax and semantics domains. Signatures define the structure of terms.
- Rewriting systems provide a declarative way to define program transformations and semantic definitions through rewrite rules and strategies.
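The constant-folding example mentioned above can be sketched in Python, representing terms as nested tuples and applying the rules with an innermost strategy (the constructor names are invented):

```python
def fold(term):
    """Innermost rewriting: fold children first, then try the rule
    Add(Int(i), Int(j)) -> Int(i + j), and similarly for Mul."""
    if isinstance(term, tuple) and term[0] in ("Add", "Mul"):
        left, right = fold(term[1]), fold(term[2])   # rewrite subterms first
        if left[0] == "Int" and right[0] == "Int":
            op = {"Add": int.__add__, "Mul": int.__mul__}[term[0]]
            return ("Int", op(left[1], right[1]))
        return (term[0], left, right)   # rule does not apply; keep structure
    return term                         # leaves (Int, Var, ...) are normal forms
```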
Compiler Components and their Generators - Traditional Parsing Algorithms (Guido Wachsmuth)
This document discusses parsing algorithms for compilers. It begins with an overview of topics to be covered, including lexical analysis, parsing algorithms like predictive and LR parsing, grammar classes, and an assignment on implementing a MiniJava compiler. It then covers predictive parsing in more detail, including how to generate parsing tables from grammars and how to use these tables in a predictive parsing automaton. Finally, it discusses LR parsing and how it can handle issues like left recursion that predictive parsing cannot. It provides an example of an LR parsing step involving expression evaluation.
The document discusses static analysis and error checking in compiler construction. It introduces key concepts like parsing source code, performing static semantic checks, and generating machine code. Specific techniques covered include name analysis, type systems, formal semantics, and testing static analysis. Examples are provided using Tiger, a simple imperative language, to illustrate type rules and name binding. The document also discusses theoretical foundations in formal language theory and decidability/complexity.
The document discusses lexical analysis and regular languages. It begins with an overview of lexical analysis and its components, including regular languages defined via regular grammars, regular expressions, and finite state automata. It then covers the equivalence between these formalisms for describing regular languages and how to construct a nondeterministic finite automaton from a regular expression.
Declarative Semantics Definition - Term Rewriting (Guido Wachsmuth)
This document discusses term rewriting and its applications in compiler construction. It covers term rewriting systems, rewrite rules that transform terms, and rewrite strategies that control rule application. Examples are provided for desugaring code using rewrite rules and constant folding arithmetic expressions using rewrite rules and strategies. Stratego is presented as a domain-specific language for program transformation based on term rewriting.
Dynamic Semantics Specification and Interpreter Generation (Eelco Visser)
(1) The document describes a domain-specific language called DynSem for specifying dynamic semantics of programming languages. DynSem allows defining semantics in a modular way using semantic rules.
(2) DynSem specifications can be used to generate high-performance interpreters. The document outlines various language features that can be modeled in DynSem, including arithmetic, booleans, control flow, functions, and mutable state.
(3) DynSem specifications are composed of modules that import language signatures and define semantic rules over them. Rules are used to reduce expressions to values in an environment and store. This allows modeling features like variables, functions, and mutable boxes.
Declarative Semantics Definition - Static Analysis and Error Checking (Guido Wachsmuth)
The document discusses static analysis and error checking in compiler construction. It covers several key topics:
- The static analysis process of parsing source code, checking for errors, and generating machine code.
- Name analysis, binding, and scoping during static checking and for editor services like refactoring and code generation.
- Testing static semantics including name binding, type systems, and constraints.
- Restricting context-free languages using static semantics and judgements of well-formedness and well-typedness.
- Formal type systems including those for Tiger language examples involving types, expressions, and scoping.
Declarative Syntax Definition - Grammars and Trees (Guido Wachsmuth)
This lecture lays the theoretic foundations for declarative syntax formalisms and syntax-based language processors, which we will discuss later in the course. We introduce the notions of formal languages, formal grammars, and syntax trees, starting from Chomsky's work on formal grammars as generative devices.
We start with a formal model of languages and investigate formal grammars and their derivation relations as finite models of infinite productivity. We further discuss several classes of formal grammars and their corresponding classes of formal languages. In a second step, we introduce the word problem, analyse its decidability and complexity for different classes of formal languages, and discuss consequences of this analysis on language processing. We conclude the lecture with a discussion about parse tree construction, abstract syntax trees, and ambiguities.
Declare Your Language: Syntactic (Editor) Services (Eelco Visser)
Lecture 3 on compiler construction course on definition of lexical syntax and syntactic services that can be derived from syntax definitions such as formatting and syntactic completion
Top-down parsers are more restricted than bottom-up parsers. However, ANTLR uses a top-down parser. This chapter describes parse tables and recursive descent parsers.
This document summarizes and discusses type checking algorithms for programming languages. It introduces constraint-based type checking, which separates type checking into constraint generation and constraint solving. This provides a more declarative way to specify type checkers. The document discusses using variables and constraints to represent types during type checking. It introduces NaBL2, a domain-specific language for writing constraint generators to specify name and type constraints for programming language static semantics. NaBL2 uses scope graphs to represent name binding structures and supports features like type equality, subtyping, and type-dependent name resolution through constraint rules. An example scope graph and constraint rule for let-bindings is provided.
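The split between constraint generation and constraint solving can be illustrated with a small unification-based solver in Python. This is a sketch, not NaBL2's actual algorithm: type variables are strings starting with `?`, constructed types are tuples, and equal constructors are assumed to have equal arity.

```python
def unify(constraints):
    """Solve type-equality constraints; returns a substitution mapping
    type variables ("?a", "?b", ...) to types. The input list is
    extended in place as constructed types are decomposed."""
    subst = {}

    def walk(t):
        # Chase a variable through the substitution to its binding.
        while isinstance(t, str) and t.startswith("?") and t in subst:
            t = subst[t]
        return t

    for a, b in constraints:
        a, b = walk(a), walk(b)
        if a == b:
            continue
        if isinstance(a, str) and a.startswith("?"):
            subst[a] = b
        elif isinstance(b, str) and b.startswith("?"):
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0]:
            constraints.extend(zip(a[1:], b[1:]))   # decompose structurally
        else:
            raise TypeError(f"cannot unify {a} with {b}")
    return subst
```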
This document discusses syntax definition and provides examples using various syntax definition formalisms including Backus-Naur Form (BNF), Extended Backus-Naur Form (EBNF), and SDF3. It introduces concepts of lexical syntax, context-free syntax, abstract syntax, disambiguation, and testing syntax definitions. Specific examples are provided for defining the syntax of an expression language using BNF, EBNF, and SDF3. Testing syntax definitions using Spoofax is also discussed with examples of test cases for lexical and context-free syntax.
The document discusses formal grammars and their applications in language specification and parsing. It introduces key concepts such as formal grammars, derivation, terminal and non-terminal symbols, and different types of grammars including context-sensitive, context-free and regular grammars. It also discusses applications of formal grammars in parsing and how they relate to theoretical computer science concepts like decidability and complexity of the word problem for different grammar types.
What is "logic programming" and "constraint programming"
Prolog in a nutshell
How Prolog "makes pointers safe"
Why Prolog was the ultimate scripting language for AI (backtracking search, interpreters, and DSLs for free)
What is "functional-logic programming" (a taste of the programming language Mercury)
Video recording of the talk: http://youtu.be/Fhc7fPQF1iY
This document provides an overview of LL parsing algorithms. It begins with a recap of formal language theory concepts like regular grammars, regular expressions, finite state automata, context-free grammars, derivation, and language generation. It then discusses predictive (recursive descent) parsing and LL parsing in particular. Key concepts covered include LL(k) grammars, filling the LL parsing table based on FIRST and FOLLOW sets, and using the table to perform LL parsing. An example grammar and its parsing table are provided to illustrate the process.
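The fixed-point computation of FIRST sets can be sketched in Python for a small invented grammar, where nonterminals are the dictionary keys and the empty string stands for epsilon:

```python
GRAMMAR = {
    "E":  [("T", "E'")],
    "E'": [("+", "T", "E'"), ()],   # () is the epsilon production
    "T":  [("id",)],
}

def first_sets(grammar):
    """Iterate to a fixed point: FIRST(X) grows until no rule adds anything."""
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                nullable = True          # does this production derive epsilon?
                for sym in prod:
                    f = first[sym] if sym in grammar else {sym}
                    add = f - {""}
                    if not add <= first[nt]:
                        first[nt] |= add
                        changed = True
                    if "" not in f:      # sym cannot vanish; stop scanning
                        nullable = False
                        break
                if nullable and "" not in first[nt]:
                    first[nt].add("")
                    changed = True
    return first
```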
In this part of the course, meta-languages for describing grammars are introduced. Bottom-up and top-down parsers and their derivation steps are described. Finally, ambiguous grammars are defined.
This document discusses minimizing deterministic finite automata (DFA) and provides references on the topic. It begins with an introduction to minimizing DFA and then provides several examples of DFAs with different languages over various alphabets. It concludes by listing references for additional information on minimizing DFA and automata theory.
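The standard partition-refinement approach to DFA minimization can be sketched in Python: start from the accepting/non-accepting split and keep splitting blocks whose states disagree on the target block of some transition. The example DFA is invented.

```python
def minimize(states, alphabet, delta, accepting):
    """Partition refinement (Moore's algorithm); returns the blocks of
    equivalent states, assuming delta is total over states x alphabet."""
    partition = [b for b in (set(accepting), set(states) - set(accepting)) if b]
    while True:
        def block_of(s):
            return next(i for i, blk in enumerate(partition) if s in blk)
        refined = []
        for blk in partition:
            groups = {}
            for s in blk:
                # States with the same transition signature stay together.
                sig = tuple(block_of(delta[s][a]) for a in alphabet)
                groups.setdefault(sig, set()).add(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):   # no block split: done
            return refined
        partition = refined

# B and C are equivalent: both accepting, both loop into C on "a".
DELTA = {"A": {"a": "B"}, "B": {"a": "C"}, "C": {"a": "C"}}
blocks = minimize(["A", "B", "C"], ["a"], DELTA, {"B", "C"})
```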
The document discusses the role of parsers in compilers. It explains that parsers check syntax and report errors, perform semantic checks like type checking, and produce an intermediate representation of the source code. Parsers use syntax-directed translation with methods like abstract syntax trees. The document also covers topics like error handling strategies, the viable prefix property, left recursion elimination, and constructing LL(1) parsing tables.
The document discusses the basic language of functions. A function assigns each input exactly one output. Functions can be defined through written instructions, tables, or mathematical formulas. The domain is the set of all inputs, and the range is the set of all outputs. Functions are widely used in mathematics to model real-world relationships.
This document discusses parsing and context-free grammars. It defines parsing as verifying that tokens generated by a lexical analyzer follow syntactic rules of a language using a parser. Context-free grammars are defined using terminals, non-terminals, productions and a start symbol. Top-down and bottom-up parsing are introduced. Techniques for grammar analysis and improvement like left factoring, eliminating left recursion, calculating first and follow sets are explained with examples.
The document discusses lexical analysis in compilers. It describes how a lexical analyzer groups characters into tokens by recognizing patterns in the input based on regular expressions. It provides examples of token classes and structures. It also explains how lexical analysis is implemented using a lexical analyzer generator called LEX, which translates a LEX source file into a C program that performs lexical analysis.
This document discusses garbage collection techniques for automatically reclaiming memory from unused objects. It describes several garbage collection algorithms including reference counting, mark-and-sweep, and copying collection. It also covers optimizations like generational collection which focuses collection on younger object generations. The goal of garbage collection is to promote memory safety and management while allowing for automatic reclamation of memory from objects that are no longer reachable.
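A mark-and-sweep collector, one of the algorithms mentioned, can be sketched in a few lines of Python over an explicit object heap (the object graph here is invented):

```python
class Obj:
    def __init__(self, name, refs=()):
        self.name = name
        self.refs = list(refs)   # outgoing pointers to other objects
        self.marked = False

def mark(roots):
    """Mark phase: everything reachable from the roots stays alive."""
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)

def sweep(heap):
    """Sweep phase: keep marked objects, reset marks for the next cycle."""
    live = [obj for obj in heap if obj.marked]
    for obj in live:
        obj.marked = False
    return live

b = Obj("b")
a = Obj("a", [b])      # a points to b
c = Obj("c")           # unreachable garbage
heap, roots = [a, b, c], [a]
mark(roots)
live = sweep(heap)
```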
LR parsing allows parsers to see the entire right-hand side of a rule and perform lookahead, allowing it to handle a wider range of grammars than LL parsing. The document provides an example of LR parsing a simple expression grammar. It demonstrates the steps of building the LR parsing items, closure, and goto functions to generate the LR parsing table from the grammar. LR parsing tables contain the states, symbols, and parsing actions (reduce, shift, accept).
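The closure function mentioned above can be sketched in Python for LR(0) items, represented as (lhs, rhs, dot-position) triples over a small invented grammar:

```python
GRAMMAR = {
    "S": [("E",)],
    "E": [("E", "+", "T"), ("T",)],
    "T": [("id",)],
}

def closure(items, grammar):
    """LR(0) closure: whenever the dot stands before a nonterminal,
    add a fresh item (dot at 0) for each of its productions."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for lhs, rhs, dot in list(items):
            if dot < len(rhs) and rhs[dot] in grammar:
                for prod in grammar[rhs[dot]]:
                    item = (rhs[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return items

# Closure of the start item S -> . E
start_closure = closure({("S", ("E",), 0)}, GRAMMAR)
```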
This document discusses imperative and object-oriented programming languages. It covers basic concepts like state, variables, expressions, assignments, and control flow in imperative languages. It also discusses procedures and functions, including passing parameters, stack frames, and recursion. Finally, it briefly mentions the differences between call by value and call by reference.
We start with a linguistic discussion of language, its properties, and the study of language in philosophy and linguistics. We then investigate natural languages, controlled languages, and artificial languages to emphasise the human ability to control and construct languages. At the end, we arrive at the notion of software languages as means to communicate software between people.
This document discusses register allocation in compiler construction. It begins with an example of constructing an interference graph from a code snippet during liveness analysis. It then covers the main steps of register allocation: constructing interference graphs from liveness analysis, graph coloring to assign registers while minimizing spills, and handling move instructions through coalescing. It provides examples demonstrating graph coloring on interference graphs with different numbers of available registers.
The document provides an overview of software languages and language processors. It discusses natural languages, controlled languages, and artificial languages. It then covers software languages which are used to engineer software. Different types of language processors like compilers, interpreters, scanners and parsers are described. Traditional compiler architecture and compilation by transformation are summarized. Modern compilers used in integrated development environments are also discussed. The document concludes by covering compiler construction and language workbenches.
A Systematic Approach To Probabilistic Pointer AnalysisMonica Franklin
This document presents a formal framework for probabilistic pointer analysis of probabilistic programs. It describes constructing a discrete-time Markov chain representing the concrete semantics of a probabilistic While program with static pointers by composing the probabilistic control flow and data updates using a tensor product. It then applies probabilistic abstract interpretation to obtain an abstract semantics with drastically reduced size. The analysis systematically derives probabilistic pointer information like points-to matrices and tensors, providing an alternative to experimental profiling approaches.
This document provides an overview of function notation and how to work with functions. It defines what a function is as a relation that assigns a single output value to each input value. It shows how functions can be represented using standard notation like f(x) and discusses evaluating functions by inputting values. Examples are provided of determining if a relationship represents a function, evaluating functions from tables and graphs, and solving functional equations.
The document provides an overview of the Chase Algorithm, which is used to determine if a dependency D follows from a given set of dependencies S. It begins with a recap of relevant concepts from first-order logic for database theory, including formulas, models/instances, semantics, and dependencies. It then introduces the Chase Algorithm, which rewrites D as much as possible using rules in S, checking if the result D' is a tautology. If so, D follows from S. The document also discusses embeddings of formulas into instances and the satisfiability and termination of the Chase.
Date: March 9, 2016
Course: UiS DAT911 - Foundations of Computer Science (fall 2016)
Please cite, link to or credit this presentation when using it or part of it in your work.
Random variable, distributive function lect3a.pptsadafshahbaz7777
1. A random variable (X) is a function that maps outcomes of a probability experiment to real numbers. The probability distribution function (FX(x)) gives the probability that X takes on a value less than or equal to x.
2. FX(x) satisfies properties of a distribution function - it is nondecreasing, right-continuous, and its limit as x approaches positive/negative infinity is 0/1.
3. A random variable can be either continuous or discrete. FX(x) is continuous if it is continuous for all x, and discrete if it has jump discontinuities at countable points.
The document summarizes key concepts from a lecture on discrete structures including:
1) Predicates are statements with variables that become true or false when values are substituted. The truth set of a predicate contains values that make the statement true.
2) Universal statements are true if the predicate is true for all values, while existential statements are true if the predicate is true for at least one value.
3) Statements can be translated between formal logic notation using quantifiers and informal English. Negations of universal statements are existential, and vice versa.
The document summarizes key concepts from a lecture on discrete structures, including:
1) It defines predicates as sentences containing variables that become statements when values are substituted, and introduces truth sets as the set of elements making a predicate true.
2) It discusses universal and existential statements, where a universal statement is true if a predicate is true for all variables, and an existential is true if true for at least one variable.
3) It explains translating between formal quantified statements and informal English statements, and shows several examples of translating in both directions.
This document provides an overview of propositional logic, including:
- Propositions are statements that can be true or false. Compound propositions combine simpler statements with logical connectives like "and" and "or".
- Truth tables show the truth values of compound propositions based on the truth values of their variables.
- Common logical connectives include conjunction, disjunction, negation, implication, and equivalence.
- Tautologies and contradictions are types of statements that are always true or false regardless of variable values.
- Quantifiers like "for all" and "there exists" can be used to define propositional functions on a domain.
- Valid arguments are those where the conclusion is necessarily true
1. The document discusses the semantics and soundness of the assignment axiom {ψ[x ← E]} x := E {ψ} in assertional methods. It considers E and ψ as functions of x, and how their values change before and after the assignment.
2. It poses the challenge of finding a forward axiom for assignment and proving it is sound. The first solution earns a chocolate bar.
3. A better approach is to formally define the semantics of the programming language, including the execution step and transition relation between states, and then prove the soundness of the assignment rule.
Rough sets and fuzzy rough sets in Decision MakingDrATAMILARASIMCA
Rough sets, Fuzzy rough sets, lower approximation, upper approximation, positive region and reduct, Equivalence relation, dependency coefficient, Information system for road accident system
This document contains lecture notes on finite automata from the Theory of Computation unit 1 course. Some key points include:
1. Finite automata are defined as 5-tuples (Q, Σ, δ, q0, F) where Q is a set of states, Σ is an input alphabet, δ is a transition function, q0 is the initial state, and F is a set of final states.
2. Deterministic finite automata (DFAs) have a single transition between states for each input symbol, while non-deterministic finite automata (NFAs) can have multiple transitions for a single input.
3. Regular expressions are used to describe the languages
This document summarizes information about ER diagrams, schema refinement, and database normalization. It provides examples of ER diagrams and how they can be converted to tables. It discusses different normal forms including Boyce-Codd normal form (BCNF) and third normal form (3NF), and provides algorithms for decomposing a schema into BCNF and 3NF. The goal of normalization is to reduce data redundancy and avoid data anomalies.
Functions are treated as objects in Scala, with the function type A => B being an abbreviation for the class scala.Function1[A, B]. Functions are objects that have an apply method. Case classes implicitly define companion objects with apply methods, allowing constructor-like syntax. Pattern matching provides a way to generalize switch statements to class hierarchies. The for expression provides a cleaner syntax than higher-order functions like map and flatMap for working with collections, but compiles to calls to these functions. Types like Option and Try are examples of monads in Scala, making failure or missing data explicit in the type while hiding boilerplate code.
The document discusses tuple relational calculus and domain relational calculus. Tuple relational calculus describes the desired information as a set of tuples that satisfy a predicate, in the form {t | P(t)}. Domain relational calculus uses domain variables that take on attribute values rather than entire tuples. Both languages use atoms, formulae, and quantification to write queries. Expressions must be safe by only generating tuples within the domain to avoid infinite relations. Sample queries are provided to illustrate the languages.
The document provides an introduction to integral calculus. It discusses how integral calculus is motivated by the problem of defining and calculating the area under a function's graph. The key points are:
1) Integration is the inverse process of differentiation, where we find the original function given its derivative. This results in families of functions that differ by an arbitrary constant.
2) Indefinite integrals represent families of functions, while definite integrals have practical uses in science, engineering, economics and other fields.
3) Standard formulae for integrals are provided that correspond to common derivative formulae, which can be used to evaluate more complex integrals.
Integral Calculus. - Differential Calculus - Integration as an Inverse Process of Differentiation - Methods of Integration - Integration using trigonometric identities - Integrals of Some Particular Functions - rational function - partial fraction - Integration by partial fractions - standard integrals - First and second fundamental theorem of integral calculus
This presentation takes you on a functional programming journey, it starts from basic Scala programming language design concepts and leads to a concept of Monads, how some of them designed in Scala and what is the purpose of them
The document discusses the semantics of propositional logic, including:
1) Defining logical formulas using a formal language and grammar;
2) Describing the meaning of logical connectives like conjunction and negation through truth tables;
3) Explaining how interpretations assign truth values to formulas based on the truth values of their components.
This document provides a probability cheatsheet compiled by William Chen and Joe Blitzstein with contributions from others. It is licensed under CC BY-NC-SA 4.0 and contains information on topics like counting rules, probability definitions, random variables, moments, and more. The cheatsheet is regularly updated with comments and suggestions submitted through a GitHub repository.
This document provides an introduction to set theory and functions of a single variable in mathematics. It defines sets such as the integers (Z), rational numbers (Q), irrational numbers (I), and real numbers (R) using set notation. It explains how the real number line is constructed by starting with the integers and adding in rational and irrational numbers. It then introduces the concept of a function and defines a real-valued single variable function as a mapping from real numbers to real numbers such that each input has a unique output. Functions are visualized by graphing the set of ordered pairs {(x, f(x))} in the Cartesian plane R2. Recommended texts for further reading on these topics are also provided.
This document discusses syntactic editor services including formatting, syntax coloring, and syntactic completion. It describes how syntactic completion can be provided generically based on a syntax definition. The document also discusses how context-free grammars can be extended with templates to specify formatting layout when pretty-printing abstract syntax trees to text. Templates are used to insert whitespace, line breaks, and indentation to produce readable output.
This document provides an overview of parsing in compiler construction. It discusses context-free grammars and how they are used to generate sentences and parse trees through derivations. It also covers ambiguity that can arise from grammars and various grammar transformations used to eliminate ambiguity, including defining associativity and priority. The dangling else problem is presented as an example of an ambiguous grammar.
This document provides an overview of the Lecture 2 on Declarative Syntax Definition for the CS4200 Compiler Construction course. The lecture covers the specification of syntax definition from which parsers can be derived, the perspective on declarative syntax definition using SDF, and reading material on the SDF3 syntax definition formalism and papers on testing syntax definitions and declarative syntax. It also discusses what syntax is, both in linguistics and programming languages, and how programs can be described in terms of syntactic categories and language constructs. An example Tiger program for solving the n-queens problem is presented to illustrate syntactic categories in Tiger.
This document provides an overview of the CS4200 Compiler Construction course at TU Delft. It discusses the course organization, structure, and assessment. The course is split into two parts - CS4200-A which covers concepts and techniques through lectures, papers, and homework assignments, and CS4200-B which involves building a compiler for a subset of Java as a semester-long project. Students will use the Spoofax language workbench to implement their compiler and will submit assignments through a private GitLab repository.
A Direct Semantics of Declarative Disambiguation RulesEelco Visser
This document discusses research into providing a direct semantics for declarative disambiguation of expression grammars. It aims to define what disambiguation rules mean, ensure they are safe and complete, and provide an effective implementation strategy. The document outlines key research questions around the meaning, safety, completeness and coverage of disambiguation rules. It also presents contributions around using subtree exclusion patterns to define safe and complete disambiguation for classes of expression grammars, and implementing this in SDF3.
Declarative Type System Specification with StatixEelco Visser
In this talk I present the design of Statix, a new constraint-based language for the executable specification of type systems. Statix specifications consist of predicates that define the well-formedness of language constructs in terms of built-in and user-defined constraints. Statix has a declarative semantics that defines whether a model satisfies a constraint. The operational semantics of Statix is defined as a sound constraint solving algorithm that searches for a solution for a constraint. The aim of the design is that Statix users can ignore the execution order of constraint solving and think in terms of the declarative semantics.
A distinctive feature of Statix is its use of scope graphs, a language parametric framework for the representation and querying of the name binding facts in programs. Since types depend on name resolution and name resolution may depend on types, it is typically not possible to construct the entire scope graph of a program before type constraint resolution. In (algorithmic) type system specifications this leads to explicit staging of the construction and querying of the type environment (class table, symbol table). Statix automatically stages the construction of the scope graph of a program such that queries are never executed when their answers may be affected by future scope graph extension. In the talk, I will explain the design of Statix by means of examples.
https://eelcovisser.org/post/309/declarative-type-system-specification-with-statix
Compiler Construction | Lecture 17 | Beyond Compiler ConstructionEelco Visser
Compiler construction techniques are applied beyond general-purpose languages through domain-specific languages (DSLs). The document discusses several DSLs developed using Spoofax including:
- WebDSL for web programming with sub-languages for entities, queries, templates, and access control.
- IceDust for modeling information systems with derived values computed on-demand, incrementally, or eventually consistently.
- PixieDust for client-side web programming with views as derived values updated incrementally.
- PIE for defining software build pipelines as tasks with dynamic dependencies computed incrementally.
The document also outlines several research challenges in compiler construction like high-level declarative language definition, verification of
Domain Specific Languages for Parallel Graph AnalytiX (PGX)Eelco Visser
This document discusses domain-specific languages (DSLs) for parallel graph analytics using PGX. It describes how DSLs allow users to implement graph algorithms and queries using high-level languages that are then compiled and optimized to run efficiently on PGX. Examples of DSL optimizations like multi-source breadth-first search are provided. The document also outlines the extensible compiler architecture used for DSLs, which can generate code for different backends like shared memory or distributed memory.
Compiler Construction | Lecture 15 | Memory ManagementEelco Visser
The document discusses different memory management techniques:
1. Reference counting counts the number of pointers to each record and deallocates records with a count of 0.
2. Mark and sweep marks all reachable records from program roots and sweeps unmarked records, adding them to a free list.
3. Copying collection copies reachable records to a "to" space, allowing the original "from" space to be freed without fragmentation.
4. Generational collection focuses collection on younger object generations more frequently to improve efficiency.
Compiler Construction | Lecture 14 | InterpretersEelco Visser
This document summarizes a lecture on interpreters for programming languages. It discusses how operational semantics can be used to define the meaning of a program through state transitions in an interpreter. It provides examples of defining the semantics of a simple language using DynSem, a domain-specific language for specifying operational semantics. DynSem specifications can be compiled to interpreters that execute programs in the defined language.
Compiler Construction | Lecture 13 | Code GenerationEelco Visser
The document discusses code generation and optimization techniques, describing compilation schemas that define how language constructs are translated to target code patterns, and covers topics like ensuring correctness of generated code through type checking and verification of static constraints on the target format. It also provides examples of compilation schemas for Tiger language constructs like arithmetic expressions and control flow and discusses generating nested functions.
Compiler Construction | Lecture 12 | Virtual MachinesEelco Visser
The document discusses the architecture of the Java Virtual Machine (JVM). It describes how the JVM uses threads, a stack, heap, and method area. It explains JVM control flow through bytecode instructions like goto, and how the operand stack is used to perform operations and hold method arguments and return values.
Compiler Construction | Lecture 9 | Constraint ResolutionEelco Visser
This document provides an overview of constraint resolution in the context of a compiler construction lecture. It discusses unification, which is the basis for many type inference and constraint solving approaches. It also describes separating type checking into constraint generation and constraint solving, and introduces a constraint language that integrates name resolution into constraint resolution through scope graph constraints. Finally, it discusses papers on further developments with this approach, including addressing expressiveness and staging issues in type systems through the Statix DSL for defining type systems.
Compiler Construction | Lecture 8 | Type ConstraintsEelco Visser
This lecture covers type checking with constraints. It introduces the NaBL2 meta-language for writing type specifications as constraint generators that map a program to constraints. The constraints are then solved to determine if a program is well-typed. NaBL2 supports defining name binding and type structures through scope graphs and constraints over names, types, and scopes. Examples show type checking patterns in NaBL2 including variables, functions, records, and name spaces.
Compiler Construction | Lecture 7 | Type CheckingEelco Visser
This document summarizes a lecture on type checking. It discusses using constraints to separate the language-specific type checking rules from the language-independent solving algorithm. Constraint-based type checking collects constraints as it traverses the AST, then solves the constraints in any order. This allows type information to be learned gradually and avoids issues with computation order.
Compiler Construction | Lecture 6 | Introduction to Static AnalysisEelco Visser
Lecture introducing the need for static analysis in addition to parsing, the complications caused by names, and an introduction to name resolution with scope graphs
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives towards enhancing productivity. Here are a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Introducing Crescat - Event Management Software for Venues, Festivals and Eve...Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
Odoo ERP software
Odoo ERP software, a leading open-source software for Enterprise Resource Planning (ERP) and business management, has recently launched its latest version, Odoo 17 Community Edition. This update introduces a range of new features and enhancements designed to streamline business operations and support growth.
The Odoo Community serves as a cost-free edition within the Odoo suite of ERP systems. Tailored to accommodate the standard needs of business operations, it provides a robust platform suitable for organisations of different sizes and business sectors. Within the Odoo Community Edition, users can access a variety of essential features and services essential for managing day-to-day tasks efficiently.
This blog presents a detailed overview of the features available within the Odoo 17 Community edition, and the differences between Odoo 17 community and enterprise editions, aiming to equip you with the necessary information to make an informed decision about its suitability for your business.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI AppGoogle
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI App
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-fusion-buddy-review
AI Fusion Buddy Review: Key Features
✅Create Stunning AI App Suite Fully Powered By Google's Latest AI technology, Gemini
✅Use Gemini to Build high-converting Converting Sales Video Scripts, ad copies, Trending Articles, blogs, etc.100% unique!
✅Create Ultra-HD graphics with a single keyword or phrase that commands 10x eyeballs!
✅Fully automated AI articles bulk generation!
✅Auto-post or schedule stunning AI content across all your accounts at once—WordPress, Facebook, LinkedIn, Blogger, and more.
✅With one keyword or URL, generate complete websites, landing pages, and more…
✅Automatically create & sell AI content, graphics, websites, landing pages, & all that gets you paid non-stop 24*7.
✅Pre-built High-Converting 100+ website Templates and 2000+ graphic templates logos, banners, and thumbnail images in Trending Niches.
✅Say goodbye to wasting time logging into multiple Chat GPT & AI Apps once & for all!
✅Save over $5000 per year and kick out dependency on third parties completely!
✅Brand New App: Not available anywhere else!
✅ Beginner-friendly!
✅ZERO upfront cost or any extra expenses
✅Risk-Free: 30-Day Money-Back Guarantee!
✅Commercial License included!
See My Other Reviews Article:
(1) AI Genie Review: https://sumonreview.com/ai-genie-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
#AIFusionBuddyReview,
#AIFusionBuddyFeatures,
#AIFusionBuddyPricing,
#AIFusionBuddyProsandCons,
#AIFusionBuddyTutorial,
#AIFusionBuddyUserExperience
#AIFusionBuddyforBeginners,
#AIFusionBuddyBenefits,
#AIFusionBuddyComparison,
#AIFusionBuddyInstallation,
#AIFusionBuddyRefundPolicy,
#AIFusionBuddyDemo,
#AIFusionBuddyMaintenanceFees,
#AIFusionBuddyNewbieFriendly,
#WhatIsAIFusionBuddy?,
#HowDoesAIFusionBuddyWorks
Microservice Teams - How the cloud changes the way we workSven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
4. Shadowing
Path specificity: a declaration in the current scope shadows declarations reachable via a parent edge:

  D < P.p        s.p < s.p'  if  p < p'

def x3 = z2 5 7
def z1 =
  fun x1 {
    fun y1 {
      x2 + y2
    }
  }

[Scope graph: z1 and x3 declared in S0, x1 in S1, y1 in S2; references z2, x2, y2 resolve through parent edges.]

The reference x2 resolves to the inner declaration x1, since its path is more specific: R.P.D < R.P.P.D
5. Blocks in Java
class Foo {
  void foo() {
    int x = 1;
    {
      int y = 2;
    }
    x = y;
  }
}
What is the scope graph for this program?
Is the y declaration visible to the y reference?
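One way to answer these questions is to sketch the scope graph directly (a hypothetical miniature, not Spoofax's implementation): the inner block gets its own scope whose parent is the method scope, so resolution flows only outward, and y's declaration is not visible at the assignment.

```python
# Minimal scope-graph sketch for the Java example: the inner block scope
# is a child of the method scope; resolution only walks parent edges.

class Scope:
    def __init__(self, parent=None):
        self.parent = parent
        self.decls = set()

def resolve(scope, name):
    """Follow parent edges until a declaration of `name` is found."""
    while scope is not None:
        if name in scope.decls:
            return scope
        scope = scope.parent
    return None  # unresolved reference

method = Scope()              # scope of foo's body, declares x
method.decls.add("x")
block = Scope(parent=method)  # the inner { ... } block, declares y
block.decls.add("y")

assert resolve(block, "x") is method  # x is visible inside the block
assert resolve(method, "y") is None   # y is NOT visible after the block
```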
6. A Calculus for Name Resolution
[Scope graph resolution calculus: edges for references (x ──> S), declarations (S ──> x), parents (S' ─P─> S), and imports (S ─I─> R); reachability is restricted to visible declarations by the specificity ordering.]

  D < I(_).p'      I(_).p' < P.p      D < P.p
  s.p < s.p'  if  p < p'

Well-formed path: R . P* . I(_)* . D

Reachability vs. visibility
8. Language-independent 𝜶-equivalence
Program similarity

We define α-equivalence using scope graphs. Except for the leaves representing identifiers, two α-equivalent programs must have the same abstract syntax. We write P ≃ P' (pronounced "P and P' are similar") when the ASTs of P and P' are equal up to identifiers: they have the same AST ignoring identifier names. To compare two programs we first compare their AST structures; if these are equal, then we compare how identifiers resolve in the two programs. Since two potentially α-equivalent programs are similar, their identifiers occur at the same positions. In order to compare the identifiers' resolutions, we define equivalence classes of positions of identifiers in a program: positions in the same equivalence class are declarations of or references to the same entity. The abstract position x̄ identifies the equivalence class corresponding to the free variable x.

Given a program P, we write 𝒫 for the set of positions corresponding to references and declarations, and 𝒫X for 𝒫 extended with the artificial positions x̄. We define the ∼P equivalence relation between elements of 𝒫X.
9. Language-independent α-equivalence

Position equivalence

We define the ∼P equivalence relation between elements of 𝒫X as the reflexive, symmetric, and transitive closure of the resolution relation.

Definition 7 (Position equivalence).

  ⊢ p : r^i_x ↦ d^i'_x  ⟹  i ∼P i'
  i ∼P i'  ⟹  i' ∼P i
  i ∼P i'  ∧  i' ∼P i''  ⟹  i ∼P i''
  i ∼P i

With this equivalence relation, the class containing the abstract free variable declaration cannot contain any other declaration. So the references in a particular class are either all free or all bound.

Lemma 6 (Free variable class). The equivalence class of a free variable does not contain any other declaration, i.e. ∀ d^i_x s.t. i ∼P x̄ ⟹ i = x̄.
10. Language-independent 𝜶-equivalence
Proof. Detailed proof is in appendix A.5; we first prove:

  ∀ r^i_x, (⊢ ⊤ : r^i_x ↦ d^x̄_x) ⟹ ∀ p d^i'_x, p ⊢ r^i_x ↦ d^i'_x ⟹ i' = x̄

and then proceed by induction on the equivalence relation.

The equivalence classes defined by this relation contain references to and declarations of the same entity. Given this relation, we can state that two programs are α-equivalent if the identifiers at identical positions refer to the same entity, i.e. belong to the same equivalence class:

Definition 8 (α-equivalence). Two programs P1 and P2 are α-equivalent (noted P1 ≈α P2) when they are similar and have the same ∼-equivalence classes:

  P1 ≈α P2  ⇔  P1 ≃ P2  ∧  ∀ e e', e ∼P1 e' ⇔ e ∼P2 e'

≈α is an equivalence relation since ≃ and ⇔ are equivalences (with some further details about free variables).
Alpha equivalence
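The construction above can be sketched in a few lines (a hypothetical encoding of positions and resolution, not the paper's formalization): ∼P is computed as a union-find closure over resolution edges, and α-equivalence compares the resulting partitions of positions.

```python
# Sketch of position equivalence: ~P is the reflexive, symmetric,
# transitive closure of resolution (reference position -> declaration
# position). Two similar programs are alpha-equivalent when they induce
# the same partition of positions.

def classes(resolution, positions):
    """Union-find over positions, merging each reference with its declaration."""
    parent = {p: p for p in positions}
    def find(p):
        while parent[p] != p:
            p = parent[p]
        return p
    for ref, decl in resolution.items():
        parent[find(ref)] = find(decl)
    return frozenset(frozenset(p for p in positions if find(p) == r)
                     for r in {find(p) for p in positions})

# Positions 1..4: decls at 1, 2; refs at 3, 4. In P1 and P2 identifiers
# at the same positions resolve alike (only names differ), so the
# partitions agree; in P3 the ref at 3 resolves differently.
P1 = {3: 1, 4: 2}
P2 = {3: 1, 4: 2}
P3 = {3: 2, 4: 2}
assert classes(P1, {1, 2, 3, 4}) == classes(P2, {1, 2, 3, 4})
assert classes(P1, {1, 2, 3, 4}) != classes(P3, {1, 2, 3, 4})
```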
11. Preserving ambiguity
module A1 {
  def x2 := 1
}
module B3 {
  def x4 := 2
}
module C5 {
  import A6 B7 ;
  def y8 := x9
}
module D10 {
  import A11 ;
  def y12 := x13
}
module E14 {
  import B15 ;
  def y16 := x17
}

P1

module AA1 {
  def z2 := 1
}
module BB3 {
  def z4 := 2
}
module C5 {
  import AA6 BB7 ;
  def s8 := z9
}
module D10 {
  import AA11 ;
  def u12 := z13
}
module E14 {
  import BB15 ;
  def v16 := z17
}

P2

module A1 {
  def z2 := 1
}
module B3 {
  def x4 := 2
}
module C5 {
  import A6 B7 ;
  def y8 := z9
}
module D10 {
  import A11 ;
  def y12 := z13
}
module E14 {
  import B15 ;
  def y16 := x17
}

P3
Fig. 23. α-equivalence and duplicate declaration
13. Types from Declaration
def x : int = 6
def f = fun (y : int) { x + y }

Static type-checking (or inference) is one obvious client for name resolution.
In many cases, we can perform resolution before doing type analysis.
16. Type-Dependent Name Resolution
But sometimes we need types before we can do name resolution:

record A1 { x1 : int }
record B1 { a1 : A2 ; x2 : bool }
def z1 : B2 = ...
def y1 = z2.x3
def y2 = z3.a2.x4
Our approach: interleave partial name resolution with type resolution (also using constraints).
See the PEPM 2016 paper / talk.
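The need for interleaving can be sketched as follows (a hypothetical miniature, not the PEPM 2016 constraint solver): resolving `z.x` first resolves `z` lexically, then needs z's type before the field name can be looked up in the right record scope.

```python
# Sketch of type-dependent resolution: a field reference can only be
# resolved once the type of the receiver is known, so name resolution
# and type resolution must be interleaved.

records = {                      # record type -> field -> field type
    "A": {"x": "int"},
    "B": {"a": "A", "x": "bool"},
}
defs = {"z": "B"}                # top-level definitions and their types

def type_of(path):
    """Resolve a dotted path like 'z.a.x', threading types through fields."""
    base, *fields = path.split(".")
    ty = defs[base]              # ordinary lexical resolution for the base
    for f in fields:
        ty = records[ty][f]      # need ty BEFORE the field name can resolve
    return ty

assert type_of("z.x") == "bool"    # x2 in record B, not x1 in record A
assert type_of("z.a.x") == "int"   # via field a : A, reaches x1 in A
```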
30. Syntax of Constraints
C     := C^G | C^Ty | C^Res | C ∧ C | True
C^G   := R ──> S | S ──> D | S ─l─> S | D ──> S | S ─l─> R
C^Res := R ↦ D | D ⤳ S | !N | N ⊆ N
C^Ty  := T ≡ T | D : T
D     := δ | x^D_i
R     := x^R_i
S     := ς | n
T     := τ | c(T, ..., T)  with c ∈ C^T
N     := D(S) | R(S) | V(S)

Figure 7. Syntax of constraints
31. LMR: Language with Modules and Records
prog  = decl*
decl  = module id { decl* }
      | import id
      | def bind
      | record id { fdecl* }
fdecl = id : ty
ty    = Int
      | Bool
      | id
      | ty -> ty
exp   = int
      | true
      | false
      | id
      | exp ⊕ exp
      | if exp then exp else exp
      | fun ( id : ty ) { exp }
      | exp exp
      | letrec tbind in exp
      | new id { fbind* }
      | with exp do exp
      | exp . id
bind  = id = exp
      | tbind
tbind = id : ty = exp
fbind = id = exp

Figure 5. Syntax of LMR.
[[ ds ]]^prog                       :=
[[ module xi { ds } ]]^decl_s       :=
[[ import xi ]]^decl_s              :=
[[ def b ]]^decl_s                  :=
[[ record xi { fs } ]]^decl_s       :=
[[ xi = e ]]^bind_s                 :=
[[ xi : t = e ]]^bind_s             :=
[[ xi : t ]]^fdecl_{sr,sd}          :=
[[ Int ]]^ty_{s,t}                  :=
[[ Bool ]]^ty_{s,t}                 :=
[[ t1 -> t2 ]]^ty_{s,t}             :=
[[ xi ]]^ty_{s,t}                   :=
[[ fun (xi : t1) { e } ]]^exp_{s,t} :=
36. Constraints for Declarations
[[ ds ]]^prog := !D(s) ∧ [[ ds ]]^decl*_s

[[ module xi { ds } ]]^decl_s := s ──> xD_i ∧ xD_i ──> s' ∧ s' ─P─> s ∧ !D(s') ∧ [[ ds ]]^decl*_{s'}

[[ import xi ]]^decl_s := xR_i ──> s ∧ s ─I─> xR_i

[[ def b ]]^decl_s := [[ b ]]^bind_s

[[ record xi { fs } ]]^decl_s := s ──> xD_i ∧ xD_i ──> s' ∧ s' ─P─> s ∧ !D(s') ∧ [[ fs ]]^fdecl*_{s,s'}

[[ xi = e ]]^bind_s := s ──> xD_i ∧ xD_i : τ ∧ [[ e ]]^exp_{s,τ}

[[ xi : t = e ]]^bind_s := s ──> xD_i ∧ xD_i : τ ∧ [[ t ]]^ty_{s,τ} ∧ [[ e ]]^exp_{s,τ}

[[ xi : t ]]^fdecl_{sr,sd} := sd ──> xD_i ∧ xD_i : τ ∧ [[ t ]]^ty_{sr,τ}

[[ Int ]]^ty_{s,t} := t ≡ Int

[[ Bool ]]^ty_{s,t} := t ≡ Bool

[[ t1 -> t2 ]]^ty_{s,t} := t ≡ Fun[τ1, τ2] ∧ [[ t1 ]]^ty_{s,τ1} ∧ [[ t2 ]]^ty_{s,τ2}

[[ xi ]]^ty_{s,t} := t ≡ Rec(ς) ∧ xR_i ──> s ∧ xR_i ↦ δ ∧ δ ──> ς
37. Constraints for Expressions
[[ xi ]]^ty_{s,t} := t ≡ Rec(ς) ∧ xR_i ──> s ∧ xR_i ↦ δ ∧ δ ──> ς

[[ fun (xi : t1) { e } ]]^exp_{s,t} := t ≡ Fun[τ1, τ2] ∧ s' ─P─> s ∧ !D(s') ∧ s' ──> xD_i
    ∧ xD_i : τ1 ∧ [[ t1 ]]^ty_{s,τ1} ∧ [[ e ]]^exp_{s',τ2}

[[ letrec bs in e ]]^exp_{s,t} := s' ─P─> s ∧ !D(s') ∧ [[ bs ]]^bind_{s'} ∧ [[ e ]]^exp_{s',t}

[[ n ]]^exp_{s,t} := t ≡ Int

[[ true ]]^exp_{s,t} := t ≡ Bool

[[ false ]]^exp_{s,t} := t ≡ Bool

[[ e1 ⊕ e2 ]]^exp_{s,t} := t ≡ t3 ∧ τ1 ≡ t1 ∧ τ2 ≡ t2 ∧ [[ e1 ]]^exp_{s,τ1} ∧ [[ e2 ]]^exp_{s,τ2}
    (where ⊕ has type t1 × t2 -> t3)

[[ if e1 then e2 else e3 ]]^exp_{s,t} := τ1 ≡ Bool ∧ [[ e1 ]]^exp_{s,τ1} ∧ [[ e2 ]]^exp_{s,t} ∧ [[ e3 ]]^exp_{s,t}

[[ xi ]]^exp_{s,t} := xR_i ──> s ∧ xR_i ↦ δ ∧ δ : t

[[ e1 e2 ]]^exp_{s,t} := τ ≡ Fun[τ1, t] ∧ [[ e1 ]]^exp_{s,τ} ∧ [[ e2 ]]^exp_{s,τ1}

[[ e.xi ]]^exp_{s,t} := [[ e ]]^exp_{s,τ} ∧ τ ≡ Rec(ς) ∧ s' ─I─> ς ∧ [[ xi ]]^exp_{s',t}

[[ with e1 do e2 ]]^exp_{s,t} := [[ e1 ]]^exp_{s,τ} ∧ τ ≡ Rec(ς) ∧ s' ─P─> s ∧ s' ─I─> ς ∧ [[ e2 ]]^exp_{s',t}

[[ new xi { bs } ]]^exp_{s,t} := xR_i ──> s ∧ xR_i ↦ δ ∧ s' ─I─> xR_i
    ∧ [[ bs ]]^fbind*_{s,s'} ∧ V(s') ≈ R(s') ∧ t ≡ Rec(ς)

[[ xi = e ]]^fbind_{s,s'} := xR_i ──> s' ∧ xR_i ↦ δ ∧ δ : τ ∧ [[ e ]]^exp_{s,τ}
40. Generic Constraints
false // always fails
true // always succeeds
C1, C2 // conjunction of constraints
[[ e ^ (s) : ty ]] // generate constraints for sub-term
C | error $[something is wrong] // custom error message
C | warning $[does not look right] // custom warning
41. Visibility Policy and Scope Graph Constraints
new s // generate a new scope
NS{x} <- s // declaration of name x in namespace NS in scope s
NS{x} -> s // reference of name x in namespace NS in scope s
s ---> s' // unlabeled scope edge from s to s'
s -L-> s' // scope edge from s to s' labeled L
NS{x} |-> d // resolve reference x in namespace NS to declaration d
name resolution
  namespaces NS1 NS2
  labels P I
  well-formedness P* . I*
  order D < P, D < I, I < P
42. Name Set Constraints
distinct D(s)/NS, // declarations for NS in s should be distinct
D(s1) subseteq R(s2), // name set
D(s_rec)/Field subseteq R(s_use)/Field
| error $[Field [NAME] not initialized] @r
Note: incomplete
43. Type Signature and Type Constraints
signature
  types
    TC1()
    TC2(type)
    TC3(scope)
    TC4(type, scope, type)

[[ e ^ (s) : ty ]] // subterm e has type ty under scope s
o : ty             // occurrence o has type ty
o : ty !           // with priority
ty1 == ty2         // ty1 and ty2 should unify
ty1 <! ty2         // declare ty1 a subtype of ty2
ty1 <? ty2         // is ty1 a subtype of ty2?
45. Visibility Policy and Type Signature
signature
  name resolution
    namespaces Type Var Field Loop
    labels P I
    well-formedness P* . I*
    order D < P, D < I, I < P

signature
  types
    UNIT()
    INT()
    STRING()
    NIL()
    RECORD(scope)
    ARRAY(type, scope)
    FUN(List(type), type)
46. Declaration of Built-in Types and Functions
init ^ (s) : ty_init :=
  new s,                    // the root scope
  Type{"int"} <- s,         // declare primitive type int
  Type{"int"} : INT() !!,
  Type{"string"} <- s,      // declare primitive type string
  Type{"string"} : STRING() !!,
  // standard library
  Var{"print"} <- s,
  Var{"print"} : FUN([STRING()], UNIT()) !!,
  …
  Var{"exit"} <- s,
  Var{"exit"} : FUN([INT()], UNIT()) !!.
47. module nabl-lib
rules

  [[ None() ^ (s) ]] := true.
  [[ Some(e) ^ (s) ]] := [[ e ^ (s) ]].

  Map[[ [] ^ (s) ]] := true.
  Map[[ [ x | xs ] ^ (s) ]] :=
    [[ x ^ (s) ]], Map[[ xs ^ (s) ]].

  Map2[[ [] ^ (s, s') ]] := true.
  Map2[[ [ x | xs ] ^ (s, s') ]] :=
    [[ x ^ (s, s') ]], Map2[[ xs ^ (s, s') ]].

  MapT2[[ [] ^ (s, s') : [] ]] := true.
  MapT2[[ [ x | xs ] ^ (s, s') : [ty | tys] ]] :=
    [[ x ^ (s, s') : ty ]], MapT2[[ xs ^ (s, s') : tys ]].

  MapT[[ [] ^ (s) : ty ]] := true.
  MapT[[ [ x | xs ] ^ (s) : ty ]] :=
    [[ x ^ (s) : ty ]], MapT[[ xs ^ (s) : ty ]].

  MapTs[[ [] ^ (s) : [] ]] := true.
  MapTs[[ [ x | xs ] ^ (s) : [ty | tys] ]] :=
    [[ x ^ (s) : ty ]],
    MapTs[[ xs ^ (s) : tys ]].

  MapTs2[[ [] ^ (s1, s2) : [] ]] := true.
  MapTs2[[ [ x | xs ] ^ (s1, s2) : [ty | tys] ]] :=
    [[ x ^ (s1, s2) : ty ]], MapTs2[[ xs ^ (s1, s2) : tys ]].

Constraint generation for lists of terms
51. Sequences
rules

  [[ Seq(es) ^ (s) : ty ]] :=
    Seq[[ es ^ (s) : ty ]].

  Seq[[ [] ^ (s) : UNIT() ]] :=
    true.

  Seq[[ [e] ^ (s) : ty ]] :=
    [[ e ^ (s) : ty ]].

  Seq[[ [ e | es@[_|_] ] ^ (s) : ty ]] :=
    [[ e ^ (s) : ty' ]],
    Seq[[ es ^ (s) : ty ]].
52. Control-Flow
rules

  [[ If(e1, e2, e3) ^ (s) : ty2 ]] :=
    [[ e1 ^ (s) : INT() ]],
    [[ e2 ^ (s) : ty2 ]],
    [[ e3 ^ (s) : ty3 ]],
    ty2 == ty3 | error $[branches should have same type].

  [[ IfThen(e1, e2) ^ (s) : UNIT() ]] :=
    [[ e1 ^ (s) : INT() ]],
    [[ e2 ^ (s) : ty ]].

  [[ While(e1, e2) ^ (s) : UNIT() ]] :=
    new s', s' -P-> s,
    [[ e1 ^ (s) : INT() ]],
    [[ e2 ^ (s') : ty ]].
54. Lets Bind Sequentially
rules // let

  [[ Let(blocks, exps) ^ (s) : ty ]] :=
    new s_body,
    distinct D(s_body),
    Decs[[ blocks ^ (s, s_body) ]],
    Seq[[ exps ^ (s_body) : ty ]].

  Decs[[ [] ^ (s_outer, s_body) ]] :=
    s_body -P-> s_outer.

  Decs[[ [block] ^ (s_outer, s_body) ]] :=
    s_body -P-> s_outer,
    Dec[[ block ^ (s_body, s_outer) ]].

  Decs[[ [block | blocks@[_|_]] ^ (s_outer, s_body) ]] :=
    new s_dec,
    s_dec -P-> s_outer,
    Dec[[ block ^ (s_dec, s_outer) ]],
    Decs[[ blocks ^ (s_dec, s_body) ]].

let
  var x : int := 0 + z // z not in scope
  var y : int := x + 1
  var z : int := x + y + 1
in
  x + y + z
end
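The Decs rules thread a fresh scope per declaration, so each initializer sees only earlier bindings; the effect can be sketched as (hypothetical encoding, not the NaBL2 solver):

```python
# Sketch of sequential let scoping: each declaration is visible only to
# LATER declarations, so `z` is out of scope in the initializer of `x`.

def check_let(bindings):
    """bindings: list of (name, names referenced by its initializer)."""
    in_scope = set()
    errors = []
    for name, refs in bindings:
        for r in refs:
            if r not in in_scope:
                errors.append(f"{r} not in scope")
        in_scope.add(name)  # visible from the next declaration onward
    return errors

prog = [("x", ["z"]),        # var x : int := 0 + z   -- z not yet declared
        ("y", ["x"]),        # var y : int := x + 1
        ("z", ["x", "y"])]   # var z : int := x + y + 1
assert check_let(prog) == ["z not in scope"]
```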
55. Variable Declarations and References
rules // variable declarations

  Dec[[ VarDec(x, t, e) ^ (s, s_outer) ]] :=
    Var{x} <- s, Var{x} : ty1 !,
    [[ t ^ (s_outer) : ty1 ]],
    [[ e ^ (s_outer) : ty2 ]],
    ty2 <? ty1 | error $[type mismatch] @ e.

  Dec[[ VarDecNoInit(x, t) ^ (s, s_outer) ]] :=
    Var{x} <- s, Var{x} : ty !,
    [[ t ^ (s_outer) : ty ]].

rules // variable references

  [[ Var(x) ^ (s) : ty ]] :=
    Var{x} -> s,
    Var{x} |-> d,
    d : ty.

let
  var x : int := 5
  var f : int := 1
in
  for y := 1 to x do (
    f := f * y
  )
end
56. Type Declarations
let
  type foo = int
  function foo(x : foo) : foo = 3
  var foo : foo := foo(4)
in foo(56) + foo // both refer to the variable foo
end

rules

  Dec[[ TypeDecs(tdecs) ^ (s, s_outer) ]] :=
    Map[[ tdecs ^ (s) ]].

  [[ TypeDec(x, t) ^ (s) ]] :=
    Type{x} <- s, Type{x} : ty !,
    [[ t ^ (s) : ty ]].

rules // types

  [[ Tid(x) ^ (s) : ty ]] :=
    Type{x} -> s,
    Type{x} |-> d | error $[Type [x] not declared],
    d : ty.
58. Adjacent Functions are Mutually Recursive
let
  function odd(x : int) : int =
    if x > 0 then even(x - 1) else false
  function even(x : int) : int =
    if x > 0 then odd(x - 1) else true
in
  even(34)
end

let
  function odd(x : int) : int =
    if x > 0 then even(x - 1) else false
  var x : int
  function even(x : int) : int =
    if x > 0 then odd(x - 1) else true
in
  even(34)
end
59. Function Definitions
rules

  Dec[[ FunDecs(fdecs) ^ (s, s_outer) ]] :=
    Map2[[ fdecs ^ (s, s_outer) ]].

  [[ FunDec(f, args, t, e) ^ (s, s_outer) ]] :=
    Var{f} <- s,
    Var{f} : FUN(tys, ty) !,
    new s_fun,
    s_fun -P-> s,
    distinct D(s_fun) | error $[duplicate argument] @ NAMES,
    MapTs2[[ args ^ (s_fun, s_outer) : tys ]],
    [[ t ^ (s_outer) : ty ]],
    [[ e ^ (s_fun) : ty_body ]],
    ty == ty_body | error $[return type does not match body] @ t.

  [[ FArg(x, t) ^ (s_fun, s_outer) : ty ]] :=
    Var{x} <- s_fun,
    Var{x} : ty !,
    [[ t ^ (s_outer) : ty ]].

let function fact(n : int) : int =
  if n < 1 then 1 else (n * fact(n - 1))
in fact(10)
end
60. Function Calls

let function fact(n : int) : int =
  if n < 1 then 1 else (n * fact(n - 1))
in fact(10)
end

rules

  [[ Call(Var(f), exps) ^ (s) : ty ]] :=
    Var{f} -> s,
    Var{f} |-> d | error $[Function [f] not declared],
    d : FUN(tys, ty) | error $[Function expected],
    MapTs[[ exps ^ (s) : tys ]].
62. Type Dependent Name Resolution
let
  type point = {x : int, y : int}
  var origin : point := point { x = 1, y = 2 }
in origin.x
end
63. Errors in Record Declaration and Creation
let
  type point = {x : int, y : int}
  type errpoint = {x : int, x : int}
  var p : point
  var e : errpoint
in
  p := point{ x = 3, y = 3, z = "a" }
  p := point{ x = 3 }
end

Field "y" not initialized
Reference "z" not resolved
Duplicate declaration of field "x"
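These error messages correspond to the name-set constraints from slide 42 (`distinct`, `subseteq`); a sketch of the underlying checks (hypothetical helper names, not the NaBL2 solver):

```python
# Sketch of the name-set checks behind the record errors:
#   distinct D(s_rec)/Field          -> duplicate field declarations
#   D(s_rec)/Field subseteq R(s_use) -> fields not initialized
#   references not resolving in s_rec -> unknown fields

def check_record_init(declared, initialized):
    errors = []
    for f in declared:
        if f not in initialized:
            errors.append(f'Field "{f}" not initialized')
    for f in initialized:
        if f not in declared:
            errors.append(f'Reference "{f}" not resolved')
    return errors

def check_distinct(fields):
    seen, errors = set(), []
    for f in fields:
        if f in seen:
            errors.append(f'Duplicate declaration of field "{f}"')
        seen.add(f)
    return errors

assert check_record_init(["x", "y"], ["x", "y", "z"]) == ['Reference "z" not resolved']
assert check_record_init(["x", "y"], ["x"]) == ['Field "y" not initialized']
assert check_distinct(["x", "x"]) == ['Duplicate declaration of field "x"']
```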
64. Recursive Types
let
  type intlist = {hd : int, tl : intlist}
  type tree = {key : int, children : treelist}
  type treelist = {hd : tree, tl : treelist}
  var l : intlist
  var t : tree
  var tl : treelist
in
  l := intlist { hd = 3, tl = l };
  t := tree {
    key = 2,
    children = treelist {
      hd = tree{ key = 3, children = 3 },
      tl = treelist{ }
    }
  };
  t.children.hd.children := t.children
end

type mismatch
Field "tl" not initialized
Field "hd" not initialized
65. NIL is a Subtype of RECORD
let
  type intlist = {hd : int, tl : intlist}
  var l : intlist := nil
in
  l := intlist{ hd = 1, tl = l };
  l := intlist{ hd = 2, tl = l };
  l := intlist{ hd = 3, tl = l }
end
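The assignments type-check because NIL is declared a subtype of every record type (the `NIL() <! ty` constraint in the record-type rule); the `<?` test can be sketched as (hypothetical encoding):

```python
# Sketch of the subtyping check: `NIL() <! ty` declares NIL a subtype of
# each record type, so `var l : intlist := nil` passes the `ty2 <? ty1`
# test, while records are not assignable to NIL.

subtypes = set()           # declared pairs (sub, super), from `<!` constraints

def declare_subtype(sub, sup):
    subtypes.add((sub, sup))

def is_subtype(t1, t2):    # the `<?` check: reflexivity plus declared pairs
    return t1 == t2 or (t1, t2) in subtypes

declare_subtype("NIL", "RECORD(intlist)")
assert is_subtype("NIL", "RECORD(intlist)")      # nil assignable to intlist
assert not is_subtype("RECORD(intlist)", "NIL")  # not the other way around
```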
66. Record Types
rules

  [[ RecordTy(fields) ^ (s) : ty ]] :=
    new s_rec,
    ty == RECORD(s_rec),
    NIL() <! ty,
    distinct D(s_rec)/Field
      | error $[Duplicate declaration of field [NAME]] @ NAMES,
    Map2[[ fields ^ (s_rec, s) ]].

  [[ Field(x, t) ^ (s_rec, s_outer) ]] :=
    Field{x} <- s_rec,
    Field{x} : ty !,
    [[ t ^ (s_outer) : ty ]].

let
  type point = {x : int, y : int}
  var origin : point := point { x = 1, y = 2 }
in origin.x
end
67. Record Creation
rules

  [[ r@Record(t, inits) ^ (s) : ty ]] :=
    [[ t ^ (s) : ty ]],
    ty == RECORD(s_rec) | error $[record type expected],
    new s_use, s_use -I-> s_rec,
    D(s_rec)/Field subseteq R(s_use)/Field
      | error $[Field [NAME] not initialized] @r,
    Map2[[ inits ^ (s_use, s) ]].

  [[ InitField(x, e) ^ (s_use, s) ]] :=
    Field{x} -> s_use,
    Field{x} |-> d,
    d : ty1,
    [[ e ^ (s) : ty2 ]],
    ty2 <? ty1 | error $[type mismatch].

let
  type point = {x : int, y : int}
  var origin : point := point { x = 1, y = 2 }
in origin.x
end
68. Record Field Access
rules

  [[ FieldVar(e, f) ^ (s) : ty ]] :=
    [[ e ^ (s) : ty_e ]],
    ty_e == RECORD(s_rec),
    new s_use, s_use -I-> s_rec,
    Field{f} -> s_use,
    Field{f} |-> d,
    d : ty.

let
  type point = {x : int, y : int}
  var origin : point := point { x = 1, y = 2 }
in origin.x
end
71. Issues with the Reachability Calculus
[Resolution calculus recap: reachability via parent and import edges, restricted by the specificity ordering D < I(_).p', I(_).p' < P.p, D < P.p, and s.p < s.p' if p < p'.]

Well-formed path: R . P* . I(_)* . D

- Disambiguating import paths
- Fixed visibility policy
- Cyclic import paths
- Multi-import interpretation
72. Resolution Calculus with Edge Labels
The resolution relation ⊢G for a graph G defines the resolution of a reference to a declaration as a most specific, well-formed path from the reference's scope through a sequence of edges. A path records the atomic scope transitions in the graph as a sequence of steps: a direct step E(l, S2) is a transition from the current scope to scope S2, recording the label l of the edge that is used; a nominal step N(l, yR_i, S2) requires the resolution of reference yR_i to a declaration with associated scope S2 to allow a transition between the current scope and S2; a path ends with a declaration step D(xD_i) for the declaration the path is leading to. Resolution in the graph from reference xR_i is written I ⊢G p : xR_i ↦ xD_i according to the rules below; these rules all implicitly apply to a fixed graph G, which we omit to avoid clutter. The calculus defines resolution in terms of edges in the scope graph, reachable declarations, and visible declarations. Here I is the set of seen imports, a device needed to avoid "out of thin air" resolutions.

The collection JNKG is the multiset defined by JD(S)KG = π(DG(S)), JR(S)KG = π(RG(S)), and JV(S)KG = π({xD_i | ⊢G p : S ↦ xD_i}), where π(A) projects the identifiers from a set A of references or declarations.
Well-formed paths

  WF(p) ⇔ labels(p) ∈ E

Visibility ordering on paths

  label(s1) < label(s2) ⟹ s1 · p1 < s2 · p2
  p1 < p2 ⟹ s · p1 < s · p2

Edges in scope graph

  (E)  S1 ─l─> S2  ⟹  I ⊢ E(l, S2) : S1 ⟶ S2

  (N)  S1 ─l─> yR_i    yR_i ∉ I    I ⊢ p : yR_i ↦ yD_j    yD_j ──> S2
       ⟹  I ⊢ N(l, yR_i, S2) : S1 ⟶ S2

Transitive closure

  (I)  I, S ⊢ [] : A ↠ A

  (T)  B ∉ S    I ⊢ s : A ⟶ B    I, {B} ∪ S ⊢ p : B ↠ C
       ⟹  I, S ⊢ s · p : A ↠ C

Reachable declarations

  (R)  I, {S} ⊢ p : S ↠ S'    WF(p)    S' ──> xD_i
       ⟹  I ⊢ p · D(xD_i) : S ⤳ xD_i

Visible declarations

  (V)  I ⊢ p : S ⤳ xD_i    ∀ j, p' (I ⊢ p' : S ⤳ xD_j ⟹ ¬(p' < p))
       ⟹  I ⊢ p : S ↦ xD_i

Reference resolution

  (X)  xR_i ──> S    {xR_i} ∪ I ⊢ p : S ↦ xD_j
       ⟹  I ⊢ p : xR_i ↦ xD_j

Interpretation of constraints

  (C-SubName)  JN1KG ⊆ JN2KG  ⟹  G, φ ⊨ N1 ⊆ N2

  (C-Eq)  φ t1 = φ t2  ⟹  G, φ ⊨ t1 ≡ t2

Figure 8. Interpretation of resolution and typing constraints

Resolution paths

  s := D(xD_i) | E(l, S) | N(l, xR_i, S)
  p := [] | s | p · p   (inductively generated)
  [] · p = p · [] = p
  (p1 · p2) · p3 = p1 · (p2 · p3)
Resolution constraints are checked against the scope graph resolution calculus (described in Section 3.3); finally, we apply ⊨ with G set to the graph described by CG. To lift this approach to constraints with variables, we simply apply a multi-sorted substitution φ, mapping type variables τ to ground types, declaration variables δ to ground declarations, and scope variables ς to ground scopes. Thus, our overall definition of satisfaction for a program p is:

  φ(CG), φ ⊨ φ(CRes) ∧ φ(CTy)
73. Visibility Policies
Lexical scope
  L := {P}    E := P*
  D < P

Non-transitive imports
  L := {P, I}    E := P* · I?
  D < P, D < I, I < P

Transitive imports
  L := {P, TI}    E := P* · TI*
  D < P, D < TI, TI < P

Transitive includes
  L := {P, Inc}    E := P* · Inc*
  D < P, Inc < P

Transitive includes and imports, and non-transitive imports
  L := {P, Inc, TI, I}    E := P* · (Inc | TI)* · I?
  D < P, D < TI, TI < P, Inc < P, D < I, I < P

Figure 10. Example reachability and visibility policies by instantiation of path well-formedness and visibility.
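Each policy's reachability component E is a regular language over edge labels, so path well-formedness can be sketched as a regex match (hypothetical encoding; TI abbreviated as T):

```python
import re

# Sketch: a policy's reachability component E is a regular language over
# edge labels; WF(p) holds when the label string of a path matches it.

POLICIES = {
    "lexical":        r"P*",
    "non_transitive": r"P*I?",
    "transitive":     r"P*T*",   # T stands in for the TI label
}

def well_formed(policy, path):
    """path: string of single-letter edge labels, e.g. 'PPI'."""
    return re.fullmatch(POLICIES[policy], path) is not None

assert well_formed("lexical", "PP")
assert not well_formed("lexical", "PI")          # no imports in lexical scope
assert well_formed("non_transitive", "PPI")
assert not well_formed("non_transitive", "PII")  # at most one import step
assert well_formed("transitive", "PTT")
```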
74. Seen Imports
module A1 {
  module A2 {
    def a3 = ...
  }
}
import A4
def b5 = a6

Fig. 11. Self import

The complete definition of well-formed paths and specificity order on paths is given in Fig. 2. In Section 2.5 we discuss how alternative visibility policies can be defined by just changing the well-formedness predicate and specificity order.

  AD_2:S_A2 ∈ D(S_A1)
  AR_4 ∈ I(S_root)    AR_4 ∈ R(S_root)    AD_1:S_A1 ∈ D(S_root)    AR_4 ↦ AD_1:S_A1
  S_root ⟶ S_A1  (*)
  S_root ⤳ AD_2:S_A2
  AR_4 ∈ R(S_root)    S_root ↦ AD_2:S_A2
  AR_4 ↦ AD_2:S_A2

Fig. 10. Derivation for AR_4 ↦ AD_2:S_A2 in a calculus without import tracking.

Seen imports. Consider the example in Fig. 11. Is declaration a3 reachable in the scope of reference a6? This reduces to the question whether the import of A4 can resolve to module A2. Surprisingly, it can, in the calculus as discussed so far, as shown by the derivation in Fig. 10 (which takes a few shortcuts). The conclusion of the derivation is that AR_4 ↦ AD_2:S_A2. This conclusion is obtained by using the import at A4 to conclude at step (*) that S_root ⟶ S_A1, i.e. that the body of module A1 is reachable! In other words, the import of A4 is used in its own resolution. Intuitively, this is nonsensical. To rule out this kind of behavior we extend the calculus to keep track of the set of seen imports I using judgements of the form I ⊢ p : xR_i ↦ xD_j. We need to extend all rules to pass the set I, but only the rules for resolution and import are truly affected:

  (X)  xR_i ∈ R(S)    {xR_i} ∪ I ⊢ p : S ↦ xD_j
       ⟹  I ⊢ p : xR_i ↦ xD_j

  (I)  yR_i ∈ I(S1) \ I    I ⊢ p : yR_i ↦ yD_j:S2
       ⟹  I ⊢ I(yR_i, yD_j:S2) : S1 ⟶ S2

With this final ingredient, we reach the full calculus.
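The effect of the seen-imports set can be sketched on the Fig. 11 example (a hypothetical miniature, not the calculus itself): seeding the set with the reference being resolved, as rule (X) does, blocks the self-import derivation.

```python
# Miniature scope graph for Fig. 11: module A1 { module A2 { def a3 } };
# import A4; def b5 = a6. Tracking seen imports blocks the derivation in
# which the import A4 resolves through itself to the inner module A2.

scopes = {
    "root": {"decls": {"A": "S_A1"}, "imports": ["A"]},  # module A1, import A4
    "S_A1": {"decls": {"A": "S_A2"}, "imports": []},     # inner module A2
    "S_A2": {"decls": {}, "imports": []},
}

def reachable(scope, name, seen):
    """All module declarations `name` can reach in `scope` (no specificity)."""
    found = set()
    if name in scopes[scope]["decls"]:
        found.add(scopes[scope]["decls"][name])
    for imp in scopes[scope]["imports"]:
        if imp in seen:
            continue  # the seen-imports check: no import aids its own resolution
        for target in reachable(scope, imp, seen | {imp}):
            found |= reachable(target, name, seen | {imp})
    return found

# Rule (X) seeds `seen` with the reference being resolved, so the import
# A4 reaches only the outer module A1:
assert reachable("root", "A", {"A"}) == {"S_A1"}
# Without seeding, the import is used in its own resolution and the
# inner A2 is (nonsensically) also reached:
assert reachable("root", "A", set()) == {"S_A1", "S_A2"}
```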
75. Anomalous Resolution

module A1 {
  module B2 {
    def x3 = 1
  }
}
module B4 {
  module A5 {
    def y6 = 2
  }
}
module C7 {
  import A8
  import B9
  def z10 = x11 + y12
}

Fig. 12. Anomalous resolution

It is not hard to see that the resolution relation is well-founded: the only recursive invocation (via the I rule) has a strictly larger set I of seen imports (via the X rule); since the set R(G) of references is finite, the recursion terminates.
76. Resolution Algorithm
R[I](xR) := let (r, s) = Env_E[{xR} ∪ I, ∅](Sc(xR)) in
              U                 if r = P and {xD | xD ∈ s} = ∅
              {xD | xD ∈ s}     otherwise

Env_re[I, S](S) :=
              (⊤, ∅)                       if S ∈ S or re = ∅
              Env^{L ∪ {D}}_re[I, S](S)    otherwise

Env^L_re[I, S](S) :=
              ⋃_{l ∈ Max(L)} ( Env^{{l' ∈ L | l' < l}}_re[I, S](S) ◁ Env^l_re[I, S](S) )

Env^D_re[I, S](S) :=
              (⊤, ∅)            if [] ∉ re
              (⊤, D(S))         otherwise

Env^l_re[I, S](S) :=
              (P, ∅)            if the l-successors of S contain a variable or IS_l[I](S) = U
              ⋃_{S' ∈ IS_l[I](S) ∪ S_l} Env_{l⁻¹ · re}[I, {S} ∪ S](S')

IS_l[I](S) :=
              U                 if ∃ yR ∈ (S's l-imports ∖ I) s.t. R[I](yR) = U
              {S' | yR ∈ (S's l-imports ∖ I) ∧ yD ∈ R[I](yR) ∧ yD ──> S'}
80. Q1: Compiler Front-End
• Syntax Definition
- from formal grammars to syntax definition
- derivation of parsers, abstract syntax trees, syntax-aware editors, …
• Syntax Techniques (1)
- automata for lexical analysis
• Term Rewriting
- simple syntactic transformations using term rewrite rules
- desugaring, outline view
• Imperative and OO Languages
- behavior and types
• Static Semantics
- name resolution using scope graphs
- type analysis using type constraints
81. Q2: Compiler Back-End
• Dynamic Semantics
- what is the meaning of programs in a language
• Target Machine
- instruction set is API for programming (virtual) machine
• Code Generation
- from (typed) abstract syntax trees to (virtual) machine code instructions
• Garbage Collection
- techniques for safe automated memory management
• Register Allocation
- mapping unlimited set of variables to limited set of registers
• Dataflow Analysis
- basis for optimizations such as constant propagation, dead code elimination, …
• Syntax Techniques (2)
- LL and LR parsing