This document provides an overview of constraint resolution in the context of a compiler construction lecture. It discusses unification, which is the basis for many type inference and constraint solving approaches. It also describes separating type checking into constraint generation and constraint solving, and introduces a constraint language that integrates name resolution into constraint resolution through scope graph constraints. Finally, it discusses papers on further developments with this approach, including addressing expressiveness and staging issues in type systems through the Statix DSL for defining type systems.
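Since unification underlies the constraint-solving approaches the lecture describes, here is a minimal sketch of syntactic unification in Python. Terms are tuples for constructors and `"?"`-prefixed strings for variables; the occurs check is omitted for brevity, and the representation is illustrative rather than taken from the lecture.

```python
def walk(term, subst):
    """Follow variable bindings recorded in the substitution."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a most general unifier extending subst, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        subst[a] = b
        return subst
    if isinstance(b, str) and b.startswith("?"):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unify Fun(?a, Int) with Fun(Bool, ?b): binds ?a := Bool, ?b := Int
s = unify(("Fun", "?a", "Int"), ("Fun", "Bool", "?b"))
```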
This document provides an overview of the Lecture 2 on Declarative Syntax Definition for the CS4200 Compiler Construction course. The lecture covers the specification of syntax definition from which parsers can be derived, the perspective on declarative syntax definition using SDF, and reading material on the SDF3 syntax definition formalism and papers on testing syntax definitions and declarative syntax. It also discusses what syntax is, both in linguistics and programming languages, and how programs can be described in terms of syntactic categories and language constructs. An example Tiger program for solving the n-queens problem is presented to illustrate syntactic categories in Tiger.
Compiler Construction | Lecture 8 | Type Constraints | Eelco Visser

This lecture covers type checking with constraints. It introduces the NaBL2 meta-language for writing type specifications as constraint generators that map a program to constraints. The constraints are then solved to determine if a program is well-typed. NaBL2 supports defining name binding and type structures through scope graphs and constraints over names, types, and scopes. Examples show type checking patterns in NaBL2 including variables, functions, records, and name spaces.
This document discusses syntactic editor services including formatting, syntax coloring, and syntactic completion. It describes how syntactic completion can be provided generically based on a syntax definition. The document also discusses how context-free grammars can be extended with templates to specify formatting layout when pretty-printing abstract syntax trees to text. Templates are used to insert whitespace, line breaks, and indentation to produce readable output.
Compiler Construction | Lecture 2 | Declarative Syntax Definition | Eelco Visser
The document describes a lecture on declarative syntax definition. It discusses the perspective on declarative syntax definition explained in an Onward! 2010 essay. It also mentions an OOPSLA 2011 paper that introduced the Spoofax Testing (SPT) language used in the section on testing syntax definitions. Finally, it provides a link to documentation on the SDF3 syntax definition formalism.
Declarative Type System Specification with Statix | Eelco Visser
In this talk I present the design of Statix, a new constraint-based language for the executable specification of type systems. Statix specifications consist of predicates that define the well-formedness of language constructs in terms of built-in and user-defined constraints. Statix has a declarative semantics that defines whether a model satisfies a constraint. The operational semantics of Statix is defined as a sound constraint solving algorithm that searches for a solution for a constraint. The aim of the design is that Statix users can ignore the execution order of constraint solving and think in terms of the declarative semantics.
A distinctive feature of Statix is its use of scope graphs, a language parametric framework for the representation and querying of the name binding facts in programs. Since types depend on name resolution and name resolution may depend on types, it is typically not possible to construct the entire scope graph of a program before type constraint resolution. In (algorithmic) type system specifications this leads to explicit staging of the construction and querying of the type environment (class table, symbol table). Statix automatically stages the construction of the scope graph of a program such that queries are never executed when their answers may be affected by future scope graph extension. In the talk, I will explain the design of Statix by means of examples.
https://eelcovisser.org/post/309/declarative-type-system-specification-with-statix
Compiler Construction | Lecture 14 | Interpreters | Eelco Visser
This document summarizes a lecture on interpreters for programming languages. It discusses how operational semantics can be used to define the meaning of a program through state transitions in an interpreter. It provides examples of defining the semantics of a simple language using DynSem, a domain-specific language for specifying operational semantics. DynSem specifications can be compiled to interpreters that execute programs in the defined language.
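As a concrete illustration of the reduction-to-value style that operational semantics specifications describe, here is a hand-written big-step interpreter for a tiny expression language in Python. The constructor names are illustrative, not DynSem syntax.

```python
def eval_exp(exp, env):
    """Reduce an expression (a nested tuple) to a value in environment env."""
    op = exp[0]
    if op == "Num":
        return exp[1]
    if op == "Var":
        return env[exp[1]]
    if op == "Add":
        return eval_exp(exp[1], env) + eval_exp(exp[2], env)
    if op == "Let":                      # Let(name, bound, body)
        _, name, bound, body = exp
        return eval_exp(body, {**env, name: eval_exp(bound, env)})
    raise ValueError(f"unknown construct {op}")

# let x = 2 + 3 in x + x  evaluates to 10
result = eval_exp(("Let", "x", ("Add", ("Num", 2), ("Num", 3)),
                   ("Add", ("Var", "x"), ("Var", "x"))), {})
```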
Declare Your Language: Syntax Definition | Eelco Visser
This document provides information about syntax definition and the lab organization for a compiler construction course. It includes links to papers on declarative syntax definition and the SDF3 syntax definition formalism. It also provides details about submitting lab assignments through GitHub, including instructions to fork a repository template and submit solutions as pull requests. Grades will be published on a web lab and early feedback may be provided on pull requests and pushed changes.
Declare Your Language: Name Resolution | Eelco Visser
Scope graphs are used to represent the binding information in programs. They provide a language-independent representation of name resolution that can be used to conduct and represent the results of name resolution. Separating the representation of resolved programs from the declarative rules that define name binding allows language-independent tooling to be developed for name resolution and other tasks.
This document summarizes and discusses type checking algorithms for programming languages. It introduces constraint-based type checking, which separates type checking into constraint generation and constraint solving. This provides a more declarative way to specify type checkers. The document discusses using variables and constraints to represent types during type checking. It introduces NaBL2, a domain-specific language for writing constraint generators to specify name and type constraints for programming language static semantics. NaBL2 uses scope graphs to represent name binding structures and supports features like type equality, subtyping, and type-dependent name resolution through constraint rules. An example scope graph and constraint rule for let-bindings is provided.
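The generate-then-solve split can be sketched concretely: a generator walks the AST producing type equality constraints over fresh type variables, and a separate solver resolves them. The code below is a minimal Python sketch of that idea; the names are illustrative and this is not NaBL2 syntax, and the solver only handles variable-to-ground bindings.

```python
import itertools

fresh = (f"?t{i}" for i in itertools.count())  # fresh type variables

def gen(exp, env):
    """Return (type, constraints) for exp under typing environment env."""
    op = exp[0]
    if op == "Num":
        return "Int", []
    if op == "Var":
        return env[exp[1]], []
    if op == "Add":
        t1, c1 = gen(exp[1], env)
        t2, c2 = gen(exp[2], env)
        return "Int", c1 + c2 + [(t1, "Int"), (t2, "Int")]
    if op == "Let":                      # Let(name, bound, body)
        _, name, bound, body = exp
        t_bound, c1 = gen(bound, env)
        tv = next(fresh)
        t_body, c2 = gen(body, {**env, name: tv})
        return t_body, c1 + c2 + [(tv, t_bound)]

def solve(constraints):
    """Solve equality constraints by binding variables to ground types."""
    subst = {}
    for a, b in constraints:
        a, b = subst.get(a, a), subst.get(b, b)
        if a != b:
            if a.startswith("?"):
                subst[a] = b
            elif b.startswith("?"):
                subst[b] = a
            else:
                raise TypeError(f"cannot unify {a} with {b}")
    return subst

# let x = 1 in x + 2  is well-typed at Int, with x's type variable bound to Int
t, cs = gen(("Let", "x", ("Num", 1), ("Add", ("Var", "x"), ("Num", 2))), {})
subst = solve(cs)
```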
Compiler Construction | Lecture 12 | Virtual Machines | Eelco Visser
The document discusses the architecture of the Java Virtual Machine (JVM). It describes how the JVM uses threads, a stack, heap, and method area. It explains JVM control flow through bytecode instructions like goto, and how the operand stack is used to perform operations and hold method arguments and return values.
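The operand-stack discipline can be sketched with a toy stack machine: instructions push operands and operators pop them, the way JVM bytecode such as iconst/iadd uses the operand stack. The opcode names below are made up for illustration, not real bytecode.

```python
def run(code):
    """Execute a list of (opcode, args...) tuples on an operand stack."""
    stack = []
    for op, *args in code:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack.pop()

# (2 + 3) * 4 compiles to push/push/add/push/mul and evaluates to 20
value = run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)])
```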
This document provides an overview of parsing in compiler construction. It discusses context-free grammars and how they are used to generate sentences and parse trees through derivations. It also covers ambiguity that can arise from grammars and various grammar transformations used to eliminate ambiguity, including defining associativity and priority. The dangling else problem is presented as an example of an ambiguous grammar.
Declarative Semantics Definition - Term Rewriting | Guido Wachsmuth
This document discusses term rewriting and its applications in compiler construction. It covers term rewriting systems, rewrite rules that transform terms, and rewrite strategies that control rule application. Examples are provided for desugaring code using rewrite rules and constant folding arithmetic expressions using rewrite rules and strategies. Stratego is presented as a domain-specific language for program transformation based on term rewriting.
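Constant folding as term rewriting can be sketched as follows: rules map redex patterns to reduced terms, and an innermost strategy applies them bottom-up until a normal form is reached. This is a hand-rolled Python analogue of what Stratego provides, not Stratego itself.

```python
def rules(term):
    """One rewrite step at the root, or None if no rule applies."""
    if term[0] == "Add" and term[1][0] == "Num" and term[2][0] == "Num":
        return ("Num", term[1][1] + term[2][1])
    if term[0] == "Mul" and term[1][0] == "Num" and term[2][0] == "Num":
        return ("Num", term[1][1] * term[2][1])
    return None

def innermost(term):
    """Normalize subterms first, then retry the rules at the root."""
    if term[0] in ("Add", "Mul"):
        term = (term[0], innermost(term[1]), innermost(term[2]))
    reduced = rules(term)
    return innermost(reduced) if reduced is not None else term

# (1 + 2) * 4 rewrites to 12
folded = innermost(("Mul", ("Add", ("Num", 1), ("Num", 2)), ("Num", 4)))
```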
Lex is a tool that generates lexical analyzers (scanners) that are used to break input text streams into tokens. It allows rapid development of scanners by specifying patterns and actions in a lex source file. The lex source file contains three sections - definitions, translation rules, and user subroutines. The translation rules specify patterns and corresponding actions. Lex compiles the source file to a C program that performs the tokenization. Example lex programs are provided to tokenize input based on regular expressions and generate output.
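The pattern/action structure of lex translation rules can be mimicked in a few lines of Python: an ordered list of (pattern, action) pairs is tried at each input position, first match wins. This is a sketch of the idea, not output of lex.

```python
import re

RULES = [
    (re.compile(r"\s+"),          lambda s: None),             # skip whitespace
    (re.compile(r"\d+"),          lambda s: ("NUM", int(s))),
    (re.compile(r"[A-Za-z_]\w*"), lambda s: ("ID", s)),
    (re.compile(r"[+*()=]"),      lambda s: ("OP", s)),
]

def tokenize(text):
    """Break text into tokens using the ordered pattern/action rules."""
    tokens, pos = [], 0
    while pos < len(text):
        for pattern, action in RULES:
            m = pattern.match(text, pos)
            if m:
                tok = action(m.group())
                if tok is not None:
                    tokens.append(tok)
                pos = m.end()
                break
        else:
            raise SyntaxError(f"unexpected character {text[pos]!r}")
    return tokens

# "x = 12 + y" scans to ID, OP, NUM, OP, ID
toks = tokenize("x = 12 + y")
```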
This document discusses imperative and object-oriented programming languages. It covers basic concepts like state, variables, expressions, assignments, and control flow in imperative languages. It also discusses procedures and functions, including passing parameters, stack frames, and recursion. Finally, it briefly mentions the differences between call by value and call by reference.
Compiler Construction | Lecture 4 | Parsing | Eelco Visser
This lecture covers parsing and turning syntax definitions into parsers. It discusses context-free grammars and derivations. Grammars can be ambiguous, allowing multiple parse trees for a sentence. Grammar transformations like disambiguation, eliminating left recursion, and left factoring can address issues while preserving the language. Associativity and priority can be defined through transformations. The reading material covers parsing schemata, classical compiler textbooks, and papers on disambiguation filters and parsing algorithms.
Introduction - Imperative and Object-Oriented Languages | Guido Wachsmuth
This document provides an overview of imperative and object-oriented languages. It discusses the properties of imperative languages like state, statements, control flow, procedures and types. It then covers object-oriented concepts like objects, messages, classes, inheritance and polymorphism. Examples are given in various languages like C, Java bytecode, x86 assembly to illustrate concepts like variables, expressions, functions and object-oriented features. Finally, it provides an outlook on upcoming lectures covering declarative language definition.
Dynamic Semantics Specification and Interpreter Generation | Eelco Visser
(1) The document describes a domain-specific language called DynSem for specifying the dynamic semantics of programming languages. DynSem allows defining semantics in a modular way using semantic rules.
(2) DynSem specifications can be used to generate high-performance interpreters. The document outlines various language features that can be modeled in DynSem, including arithmetic, booleans, control flow, functions, and mutable state.
(3) DynSem specifications are composed of modules that import language signatures and define semantic rules over them. Rules are used to reduce expressions to values in an environment and store. This allows modeling features like variables, functions, and mutable boxes.
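The environment-and-store split described above can be sketched in Python: the environment maps names to values, and mutable boxes are modeled as store cells that assignment updates. Constructor names are illustrative, not DynSem syntax.

```python
def eval_exp(exp, env, store):
    """Evaluate exp to a value, reading env and threading the mutable store."""
    op = exp[0]
    if op == "Num":
        return exp[1]
    if op == "Var":
        return env[exp[1]]
    if op == "Box":                      # allocate a fresh store cell
        loc = len(store)
        store[loc] = eval_exp(exp[1], env, store)
        return loc
    if op == "Unbox":                    # read a cell
        return store[eval_exp(exp[1], env, store)]
    if op == "SetBox":                   # write a cell, return the new value
        loc = eval_exp(exp[1], env, store)
        store[loc] = eval_exp(exp[2], env, store)
        return store[loc]
    if op == "Let":                      # Let(name, bound, body)
        _, name, bound, body = exp
        return eval_exp(body, {**env, name: eval_exp(bound, env, store)}, store)

# let b = box(1) in b := 41; then unbox(b) + 1 reads the updated cell
store = {}
prog = ("Let", "b", ("Box", ("Num", 1)),
        ("SetBox", ("Var", "b"), ("Num", 41)))
eval_exp(prog, {}, store)
value = store[0] + 1
```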
This is a continuation of the slide deck Advanced C Part 1. In Part 1 you learned about the fundamentals of C: how to build an algorithm, and operators. In this module, Advanced C Part 2, you will learn about functions, pointers, and standard input/output functions. This deck will help you move further ahead in advanced C and gain deeper knowledge of it.
The document describes static name resolution in programming languages. It discusses how names are bound to declarations through lexical scoping and how references are resolved to declarations by following paths through a scope graph representation. It presents the concepts of scopes, declarations, references, resolution paths, imports, and parent scopes. It also discusses how name resolution can be formalized using a calculus based on scope graphs, separation reachability and visibility, and how this supports name resolution, disambiguation, and program transformations.
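The resolution-path idea can be sketched concretely: scopes hold declarations and point to a parent scope, and resolving a reference walks parent edges outward, recording the path taken. The Python below is a minimal sketch of lexical scoping only; imports and the visibility ordering of the calculus are omitted.

```python
class Scope:
    def __init__(self, parent=None):
        self.parent = parent
        self.decls = {}                  # name -> declaration site

    def declare(self, name, site):
        self.decls[name] = site

    def resolve(self, name, path=()):
        """Return (declaration site, resolution path) or raise NameError."""
        if name in self.decls:
            return self.decls[name], path + ("D",)   # D: declaration step
        if self.parent is not None:
            return self.parent.resolve(name, path + ("P",))  # P: parent edge
        raise NameError(f"unresolved reference {name}")

# a reference to x in an inner scope resolves via one parent edge
top = Scope()
top.declare("x", "x@1:1")
inner = Scope(parent=top)
site, path = inner.resolve("x")
```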
This document discusses functional programming (FP) and its benefits compared to object-oriented programming (OOP). It defines FP as programming with pure functions that have no side effects. The document explores eliminating side effects through techniques like separating function concerns and returning descriptions of side effects rather than executing them. It also covers FP concepts like higher order functions, recursion, and data types like Option for handling errors/exceptions. The goal is to introduce FP techniques and when each paradigm (FP vs OOP) is best suited.
Declare Your Language: Syntactic (Editor) Services | Eelco Visser
Lecture 3 of the compiler construction course, on the definition of lexical syntax and the syntactic services that can be derived from syntax definitions, such as formatting and syntactic completion.
The document discusses functional programming and pattern matching. It provides examples of using pattern matching in functional programming to:
1. Match on algebraic data types like lists to traverse and operate on data in a recursive manner. Pattern matching allows adding new operations easily by adding new patterns.
2. Use pattern matching in variable declarations to destructure data like tuples and case class objects.
3. Perform pattern matching on function parameters to selectively apply different logic based on the patterns, like filtering even numbers from a list. Everything can be treated as values and expressions in functional programming.
C Recursion, Pointers, Dynamic Memory Management | Sreedhar Chowdam
The document summarizes key topics related to recursion, pointers, and dynamic memory management in C programming:
Recursion is introduced as a process where a function calls itself repeatedly to solve a problem. Examples of recursive functions like factorial, Fibonacci series, and Towers of Hanoi are provided.
Pointers are defined as variables that store the memory addresses of other variables. Pointer operations like incrementing, decrementing, and arithmetic are described. The use of pointers to pass arguments to functions and access array elements is also demonstrated.
Dynamic memory allocation functions malloc(), calloc(), and realloc() are explained along with examples. These functions allocate and manage memory during run-time in C programs.
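The recursive examples mentioned above can be sketched in a few lines; Python is used here for brevity rather than the deck's C, and each function reduces the problem to a smaller instance of itself until a base case is hit.

```python
def factorial(n):
    """n! via the recurrence n! = n * (n-1)!."""
    return 1 if n <= 1 else n * factorial(n - 1)

def fib(n):
    """n-th Fibonacci number via the two-branch recurrence."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def hanoi_moves(n):
    """Minimum moves to transfer n disks: move n-1 aside, one across, n-1 back."""
    return 0 if n == 0 else 2 * hanoi_moves(n - 1) + 1
```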
This document discusses Python programming concepts such as data types, variables, expressions, statements, comments, and modules. It provides examples and explanations of:
- Python's history and uses as an interpreted, interactive, object-oriented language.
- Core data types like integers, floats, booleans, strings, and lists.
- Variable naming rules and local vs. global variables.
- Expressions, operators, and precedence.
- Comments and multiline statements.
- Modules as files containing reusable Python code.
The document discusses call by value vs call by reference in functions, and different storage classes in C including auto, extern, register, and static. It provides examples of each storage class and how they determine the scope and lifetime of variables. It also discusses recursion and provides examples of recursive functions to calculate factorial, sum of natural numbers, Fibonacci series, and solve the Towers of Hanoi problem.
Declare Your Language: Constraint Resolution 1 | Eelco Visser
This document provides an overview of constraint resolution and type checking with constraints. It discusses unification theory and the semantics and implementation of constraint solving. Specifically, it covers:
- Unification algorithms and their properties of soundness, completeness, and principality
- The meaning of constraints and how constraint satisfaction is formally defined using constraint semantics
- How to write a constraint solver using the Constraint Handling Rules (CHR) formalism and the semantics of solving subtyping constraints
- The evaluation rules for CHR-based constraint solving, including solving built-in constraints, introducing new constraints, and simplifying/propagating constraints using rewrite rules
- How the CHR rules for subtyping constraints are evaluated
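The simplify/propagate distinction in CHR can be sketched for subtyping constraints: simplification removes trivially satisfied constraints, propagation adds consequences (transitivity), and solving iterates to a fixed point. The Python below hand-rolls this evaluation strategy as an illustration; real CHR systems manage the constraint store for you.

```python
def solve_subtyping(constraints):
    """Saturate a store of ("sub", a, b) constraints under two CHR-style rules."""
    store = set(constraints)
    changed = True
    while changed:
        changed = False
        # simplification rule: sub(t, t) <=> true
        for c in list(store):
            if c[1] == c[2]:
                store.discard(c)
                changed = True
        # propagation rule: sub(a, b), sub(b, c) ==> sub(a, c)
        for (_, a, b) in list(store):
            for (_, b2, c) in list(store):
                if b == b2 and ("sub", a, c) not in store:
                    store.add(("sub", a, c))
                    changed = True
    return store

# from Int <: Num and Num <: Any, propagation derives Int <: Any
result = solve_subtyping([("sub", "Int", "Num"), ("sub", "Num", "Any")])
```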
The document describes a Prolog program for planning flight routes between locations on a given day. It defines predicates for finding direct and indirect flight routes based on a timetable database. The timetable stores flights between locations with departure and arrival times, flight numbers, and valid days. The route predicate uses the flight predicate to recursively find valid multi-segment routes that ensure a minimum transfer time between connections. Sample queries and timetable facts are provided to demonstrate the program's operation.
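The same recursive route search can be sketched in Python: a timetable of direct flights, and a depth-first search for multi-leg routes that respects a minimum transfer time. The timetable entries and constants below are made up for illustration, and the sketch assumes the timetable has no cycles that satisfy the time constraints.

```python
# (from, to, depart, arrive), with times in minutes past midnight
TIMETABLE = [
    ("AMS", "LHR", 9 * 60, 10 * 60),
    ("LHR", "JFK", 11 * 60, 18 * 60),
    ("AMS", "JFK", 13 * 60, 20 * 60),
]
MIN_TRANSFER = 40                        # minutes required between connections

def routes(origin, dest, earliest=0):
    """Yield routes as lists of timetable entries, depth-first."""
    for leg in TIMETABLE:
        frm, to, dep, arr = leg
        if frm == origin and dep >= earliest:
            if to == dest:
                yield [leg]
            else:
                for rest in routes(to, dest, arr + MIN_TRANSFER):
                    yield [leg] + rest

# AMS to JFK: one two-leg route via LHR and one direct flight
found = list(routes("AMS", "JFK"))
```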
The document discusses time complexity analysis of loops. It defines key terminology used in loop time complexity analysis such as loop variable, loop repetitions, time complexity per iteration (TC1iter), and change of variable. It explains that the time complexity per iteration may depend on the loop variable, requiring the use of summations. It also discusses handling loops where the variable does not take consecutive values through a change of variable technique to map it to a new variable that does take consecutive values.
This document discusses a theory solver for the theory of uninterpreted functions (UF) in satisfiability modulo theories (SMT). It presents the key components of a UF solver, including union-find algorithms to handle equalities, congruence closure to handle functions, and computing theory conflicts. The solver decides satisfiability of UF formulas in incremental, backtrackable, and theory-propagating manner. It can also be used as a base layer for other theory solvers like LRA.
The document discusses the benefits of declarative programming using Scala. It provides examples of implementing algorithms and data structures declaratively in Scala. It also discusses the history and future of Scala, as well as how Scala encourages thinking about programs as transformations rather than changes to memory.
Murphy: Machine learning A probabilistic perspective: Ch.9Daisuke Yoneoka
This document summarizes key concepts about the exponential family and generalized linear models (GLMs). It defines the exponential family and provides examples like the Bernoulli, multinomial, and Gaussian distributions. The exponential family has important properties like finite sufficient statistics, existence of conjugate priors, and convexity. Maximum likelihood estimation for the exponential family involves matching sample moments to population moments. Conjugate priors allow tractable Bayesian inference for the exponential family. The document outlines maximum entropy derivation of the exponential family and how GLMs can generate classifiers.
A common random fixed point theorem for rational inequality in hilbert spaceAlexander Decker
This document presents a common random fixed point theorem for four continuous random operators defined on a non-empty closed subset of a separable Hilbert space. It begins with introducing basic concepts such as separable Hilbert spaces, random operators, and common random fixed points. It then defines a condition (A) that the four mappings must satisfy. The main result is Theorem 2.1, which proves the existence of a unique common random fixed point for the four operators under condition (A) and a rational inequality condition. The proof constructs a sequence of measurable functions and shows it converges to the common random fixed point. This establishes the common random fixed point theorem for these operators.
Introduction to Algorithms and Asymptotic NotationAmrinder Arora
Asymptotic Notation is a notation used to represent and compare the efficiency of algorithms. It is a concise notation that deliberately omits details, such as constant time improvements, etc. Asymptotic notation consists of 5 commonly used symbols: big oh, small oh, big omega, small omega, and theta.
The document discusses the convolution integral and its relation to the Laplace transform. Specifically, it shows that the Laplace transform of the convolution of two functions f and g is equal to the product of the individual Laplace transforms, unlike the normal multiplication of functions. This defines a "generalized product" called the convolution. The document provides examples calculating convolutions and inverse Laplace transforms to demonstrate the convolution integral theorem.
Adomian Decomposition Method for Certain Space-Time Fractional Partial Differ...IOSR Journals
This document presents an application of the Adomian Decomposition Method (ADM) to solve certain space-time fractional partial differential equations. It begins with an introduction to fractional calculus concepts and definitions. It then outlines the four cases of the ADM that are used to solve different types of space-time fractional PDEs. As an example, it presents the general steps of the direct ADM case to solve a linear space-time fractional PDE. The steps involve decomposing the unknown function into a series, determining the components recursively, and obtaining an approximate solution. Finally, several examples are solved to demonstrate the effectiveness of the ADM for fractional PDEs.
Optimization of probabilistic argumentation with Markov processesEmmanuel Hadoux
This document describes research on optimizing probabilistic argumentation strategies using Markov processes. It introduces argumentation problems with probabilistic strategies (APS) and formalizes them using probabilistic finite state machines. It then describes how an APS can be transformed into a mixed-observability Markov decision process (MOMDP) to allow for optimization without full knowledge of the initial state or opponent's private state. Algorithms for solving MOMDPs like MO-IP and MO-SARSOP are discussed. Different types of optimizations for the MOMDP are also presented, including removing irrelevant arguments, inferring attacks, and removing dominated arguments, both with and without dependencies on the initial state.
Laziness, trampolines, monoids and other functional amenities: this is not yo...Codemotion
by Mario Fusco - Lambdas are the main feature introduced with Java 8, but the biggest part of Java developers are still not very familliar with the most common functional idioms and patterns. The purpose of this talk is presenting with practical examples concepts like high-order functions, currying, functions composition, persistent data structures, lazy evaluation, recursion, trampolines and monoids showing how to implement them in Java and how thinking functionally can help us to design and develop more readable, reusable, performant, parallelizable and in a word better, code.
This document provides an outline and introduction to propositional logic. It discusses the history and development of logic from philosophical logic to its use in computer science. It covers propositional logic syntax using symbols and truth tables, semantics using the satisfaction relation, and the classification of formulas as valid, satisfiable, or unsatisfiable. It also introduces the decision problem of determining if a formula is satisfiable.
The document discusses intermediate code generation in compiler construction. It covers several intermediate representations including postfix notation, three-address code, and quadruples. It also discusses generating three-address code through syntax-directed translation and the use of symbol tables to handle name resolution and scoping.
/2
Ex.5 Evaluate : sin2
x dx .
d
If dx
[f(x)] =
(x) and a and b, are two values
0
/2
Sol. sin2 x dx
independent of variable x, then 0
b /2 FG1 cos 2xIJ
(x) dx = f(x) a = f(b) – f(a)
a
= zH 2 dx
1 LM sin 2x O /2
is called Definite Integral of (x) within limits
= x
2
2 PQ
a and b. Here a is called the lower limit and b is called the upper limit of the integral. The interval [a,b] is known as range of integration. It should be noted that every definite integral has
a unique value.
= 1 LM 0
0
= / 4 Ans.
1
x2
Ex.6 Evaluate : xe dx.
0
1
2 x2
Ex.1 Evaluate : x4 dx.
1
Sol. xe dx
0
2 Lx5 O2
1 ex2 1
Sol.
zx4 dx = MP= 32 – 1 = 31
=
Ans. 2 0
MN5 PQ1 5 5 5
/4
= 1 (e –1) Ans.
2
Ex.2 Evaluate : sec2 x dx.
0
1 x3
/4
Sol. sec2 x .dx = tan x /4 = tan / 4 – tan 0 = 1
0
Ex.7 Find the value of
0
1 x8
dx.
0
Ans.
2 1
Sol. Let x4 = t, then 4x3 dx = dt
Ex.3 Evaluate :
1
4 x2
dx.
I =
1 1 dt
4 =
1 [ sin–1 t] 1 =
4 8
2 1 L 2 0
Sol. z dx = Msin GJP
Ans.
1 4 x2
N H2KQ1
= sin–1 (1) – sin–1 (1/2)
z/3 cos x
= – =
Ans.
Ex.8 Evaluate :
0
3 4 sin x dx.
2 6 3
Sol. I =
/3 cos x 3 4 sin x dx.
Ex.4 Evaluate :
z2 1
2 dx
0 4 x2
0
Let 3 + 4 sin x = t 4 cos x. dx = dt
cos x dx = dt/4
Now in the given integral x lies between the
Sol.
4 x2 dx
limit x = 0 to x = / 3 . Now we will decide the limit of t.
1
= tan
2
1 x OP2
0
In 3 + 4 sin x = t, by putting lower limit of x as x = 0; and upper limit as x = / 3 . We
= 1 tan11 0 = / 8 Ans.
2
get lower and upper limit of t respectively.
Putting x = 0 3 + 4 sin 0 = t t = 3
z3 z2
z3 (x) dx
/3 cos x
dx =
zt3 2 3 1 dt
= 2 x2dx +
0
z3b3x 4gdx
0 3 4 sin x
t3 t 4
Fx3 I2 F3x2 I3
= 1 zt3 2 3 1 dt
= GJ+ G
4xJ
4 t3 t
H3 K H2
= 1 log t 32
4
= 8 +
3
27 – 12 – 6 + 8
2
= 1 [ log (3 + 2
4
) – log 3] Ans.
= 37/6 Ans.
Ex.9
sin(tan1 x)
2
dx equals-
Ex.11 Evaluate : |1 x|dx.
0
0 1 x
Sol. Put tan
x = t, then 1 dx = dt
Sol. |1 x| =
RST1 x, when
0 x 1
–1
(1 x2 )
I =
x 1, when 1 x 2
1b1 xgdx + 2 bx 1gdx
/ 2
I = sin t dt [– cos t] /2 = 1 Ans.
0 1
L x2 O1 Lx2 O2
0 Mx
P M xP
0 = MN
2 PQ+
MN2
PQ1
= b1/ 2 0 + b0 1/ 2 = 1 Ans.
z z
i.e. the value of a definite integral remains unchanged if its variable is placed by any other symbol.
[P-4] f(x) dx = f(a x) dx .
0 0
Note :
[P-2]
b
f(x) dx
a
a
= – f(x) dx
b
This property can be used only when lower limit is zero. It is generally used for those complicated
i.e. the interchange of limits of a definite integral
changes only its sign.
zb zc zb
integrals whose denominators are unchanged when x is replaced by a– x. With the hel
On Twisted Paraproducts and some other Multilinear Singular IntegralsVjekoslavKovac1
Presentation.
9th International Conference on Harmonic Analysis and Partial Differential Equations, El Escorial, June 12, 2012.
The 24th International Conference on Operator Theory, Timisoara, July 3, 2012.
The document discusses asymptotic analysis and asymptotic notation, which are used to characterize and compare the efficiency of algorithms. It introduces common asymptotic classifications like O, Ω, and Θ notation. These notations allow comparison of how fast functions grow relative to each other as their inputs increase. The chapter also covers standard functions like exponentials, logarithms, and factorials that are used in analyzing algorithms.
Lecture 4 - Growth of Functions (1).pptZohairMughal1
The document discusses asymptotic analysis and asymptotic notation, which are used to characterize and compare the efficiency of algorithms. It introduces common asymptotic classifications like O, Ω, and Θ notation. These notations allow comparison of how fast functions grow relative to each other as their inputs increase. The chapter also covers standard functions like exponentials, logarithms, and factorials that are used in analyzing algorithms.
This document provides lecture notes on complex analysis covering four units of content:
1) The index of a close curve, Cauchy's theorem, and entire functions.
2) Counting zeroes, meromorphic functions, and maximum principle.
3) Spaces of continuous and analytic functions, and behavior of functions.
4) Comparison of entire functions, analytic continuation, and harmonic functions.
It also provides definitions and theorems regarding integrals over rectifiable curves, winding numbers, and Cauchy's theorem. Exercises and proofs are included.
Similar to Compiler Construction | Lecture 9 | Constraint Resolution (20)
This document provides an overview of the CS4200 Compiler Construction course at TU Delft. It discusses the course organization, structure, and assessment. The course is split into two parts - CS4200-A which covers concepts and techniques through lectures, papers, and homework assignments, and CS4200-B which involves building a compiler for a subset of Java as a semester-long project. Students will use the Spoofax language workbench to implement their compiler and will submit assignments through a private GitLab repository.
A Direct Semantics of Declarative Disambiguation RulesEelco Visser
This document discusses research into providing a direct semantics for declarative disambiguation of expression grammars. It aims to define what disambiguation rules mean, ensure they are safe and complete, and provide an effective implementation strategy. The document outlines key research questions around the meaning, safety, completeness and coverage of disambiguation rules. It also presents contributions around using subtree exclusion patterns to define safe and complete disambiguation for classes of expression grammars, and implementing this in SDF3.
Compiler Construction | Lecture 17 | Beyond Compiler ConstructionEelco Visser
Compiler construction techniques are applied beyond general-purpose languages through domain-specific languages (DSLs). The document discusses several DSLs developed using Spoofax including:
- WebDSL for web programming with sub-languages for entities, queries, templates, and access control.
- IceDust for modeling information systems with derived values computed on-demand, incrementally, or eventually consistently.
- PixieDust for client-side web programming with views as derived values updated incrementally.
- PIE for defining software build pipelines as tasks with dynamic dependencies computed incrementally.
The document also outlines several research challenges in compiler construction like high-level declarative language definition, verification of
Domain Specific Languages for Parallel Graph AnalytiX (PGX)Eelco Visser
This document discusses domain-specific languages (DSLs) for parallel graph analytics using PGX. It describes how DSLs allow users to implement graph algorithms and queries using high-level languages that are then compiled and optimized to run efficiently on PGX. Examples of DSL optimizations like multi-source breadth-first search are provided. The document also outlines the extensible compiler architecture used for DSLs, which can generate code for different backends like shared memory or distributed memory.
Compiler Construction | Lecture 15 | Memory ManagementEelco Visser
The document discusses different memory management techniques:
1. Reference counting counts the number of pointers to each record and deallocates records with a count of 0.
2. Mark and sweep marks all reachable records from program roots and sweeps unmarked records, adding them to a free list.
3. Copying collection copies reachable records to a "to" space, allowing the original "from" space to be freed without fragmentation.
4. Generational collection focuses collection on younger object generations more frequently to improve efficiency.
Compiler Construction | Lecture 13 | Code GenerationEelco Visser
The document discusses code generation and optimization techniques, describing compilation schemas that define how language constructs are translated to target code patterns, and covers topics like ensuring correctness of generated code through type checking and verification of static constraints on the target format. It also provides examples of compilation schemas for Tiger language constructs like arithmetic expressions and control flow and discusses generating nested functions.
Compiler Construction | Lecture 7 | Type CheckingEelco Visser
This document summarizes a lecture on type checking. It discusses using constraints to separate the language-specific type checking rules from the language-independent solving algorithm. Constraint-based type checking collects constraints as it traverses the AST, then solves the constraints in any order. This allows type information to be learned gradually and avoids issues with computation order.
Compiler Construction | Lecture 6 | Introduction to Static AnalysisEelco Visser
Lecture introducing the need for static analysis in addition to parsing, the complications caused by names, and an introduction to name resolution with scope graphs
Compiler Construction | Lecture 3 | Syntactic Editor ServicesEelco Visser
The document discusses syntactic editor services for programming languages. It covers formatting specifications that define how abstract syntax trees are mapped to text using templates. It also discusses syntactic completion, which proposes valid completions in an editor by using the syntax definition. The lecture focuses on defining lexical and syntactic syntax for Tiger using SDF, and generating editor services like formatting and coloring from the syntax definitions.
Compiler Construction | Lecture 1 | What is a compiler?Eelco Visser
This document provides an overview of the CS4200 Compiler Construction course at TU Delft. It discusses the organization of the course into two parts: CS4200-A which covers compiler concepts and techniques through lectures, papers, and homework assignments; and CS4200-B which involves building a compiler for a subset of Java as a semester-long project. Key topics covered include the components of a compiler like parsing, type checking, optimization, and code generation; intermediate representations; and different types of compilers.
Declare Your Language: Virtual Machines & Code GenerationEelco Visser
The document summarizes virtual machines and code generation. It discusses how high-level programming languages are abstracted from low-level machine details through virtual machines. The Java Virtual Machine architecture and bytecode instructions are described, including its stack-based design, threads, heap, and method area. Code generation mechanics like string operations are also covered.
Declare Your Language: Dynamic SemanticsEelco Visser
This document provides an overview of DynSem, a domain-specific language for specifying the dynamic semantics of programming languages. It discusses the design goals of DynSem in being concise, modular, executable, portable, and high-performance. The document then provides an example DynSem specification for the semantics of a simple language with features like arithmetic, booleans, functions, and mutable variables. It describes how DynSem specifications are organized into modules that can be composed to define the semantics.
Declare Your Language: Constraint Resolution 2Eelco Visser
1) The document discusses unification and the union-find algorithm for efficiently solving unification problems.
2) Unification involves resolving constraints between terms to find a substitution that makes the terms equal. This can have exponential time and space complexity.
3) The union-find algorithm uses set representatives and linking to maintain disjoint sets in near-constant time complexity, improving on naive recursive algorithms.
4) An example shows how union-find can be applied to fully unify a complex expression by progressively linking variables and subterms between the two sides.
The Rising Future of CPaaS in the Middle East 2024Yara Milbes
Explore "The Rising Future of CPaaS in the Middle East in 2024" with this comprehensive PPT presentation. Discover how Communication Platforms as a Service (CPaaS) is transforming communication across various sectors in the Middle East.
A Comprehensive Guide on Implementing Real-World Mobile Testing Strategies fo...kalichargn70th171
In today's fiercely competitive mobile app market, the role of the QA team is pivotal for continuous improvement and sustained success. Effective testing strategies are essential to navigate the challenges confidently and precisely. Ensuring the perfection of mobile apps before they reach end-users requires thoughtful decisions in the testing plan.
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
Drona Infotech is a premier mobile app development company in Noida, providing cutting-edge solutions for businesses.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Project Management: The Role of Project Dashboards.pdfKarya Keeper
Project management is a crucial aspect of any organization, ensuring that projects are completed efficiently and effectively. One of the key tools used in project management is the project dashboard, which provides a comprehensive view of project progress and performance. In this article, we will explore the role of project dashboards in project management, highlighting their key features and benefits.
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
14 th Edition of International conference on computer visionShulagnaSarkar2
About the event
14th Edition of International conference on computer vision
Computer conferences organized by ScienceFather group. ScienceFather takes the privilege to invite speakers participants students delegates and exhibitors from across the globe to its International Conference on computer conferences to be held in the Various Beautiful cites of the world. computer conferences are a discussion of common Inventions-related issues and additionally trade information share proof thoughts and insight into advanced developments in the science inventions service system. New technology may create many materials and devices with a vast range of applications such as in Science medicine electronics biomaterials energy production and consumer products.
Nomination are Open!! Don't Miss it
Visit: computer.scifat.com
Award Nomination: https://x-i.me/ishnom
Conference Submission: https://x-i.me/anicon
For Enquiry: Computer@scifat.com
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
Everything You Need to Know About X-Sign: The eSign Functionality of XfilesPr...XfilesPro
Wondering how X-Sign gained popularity in a quick time span? This eSign functionality of XfilesPro DocuPrime has many advancements to offer for Salesforce users. Explore them now!
Top Benefits of Using Salesforce Healthcare CRM for Patient Management.pdfVALiNTRY360
Salesforce Healthcare CRM, implemented by VALiNTRY360, revolutionizes patient management by enhancing patient engagement, streamlining administrative processes, and improving care coordination. Its advanced analytics, robust security, and seamless integration with telehealth services ensure that healthcare providers can deliver personalized, efficient, and secure patient care. By automating routine tasks and providing actionable insights, Salesforce Healthcare CRM enables healthcare providers to focus on delivering high-quality care, leading to better patient outcomes and higher satisfaction. VALiNTRY360's expertise ensures a tailored solution that meets the unique needs of any healthcare practice, from small clinics to large hospital systems.
For more info visit us https://valintry360.com/solutions/health-life-sciences
The Key to Digital Success_ A Comprehensive Guide to Continuous Testing Integ...kalichargn70th171
In today's business landscape, digital integration is ubiquitous, demanding swift innovation as a necessity rather than a luxury. In a fiercely competitive market with heightened customer expectations, the timely launch of flawless digital products is crucial for both acquisition and retention—any delay risks ceding market share to competitors.
Unveiling the Advantages of Agile Software Development.pdfbrainerhub1
Learn about Agile Software Development's advantages. Simplify your workflow to spur quicker innovation. Jump right in! We have also discussed the advantages.
3. Reading Material
!3

The following papers add background, conceptual exposition, and examples to the material from the slides. Some notation and technical details have been changed; check the documentation.
4. !4

Good introduction to unification, which is the basis of many type inference approaches, constraint languages, and logic programming languages. Read Sections 1 and 2.

https://www.cs.bu.edu/~snyder/publications/UnifChapter.pdf

Baader et al. “Chapter 8 - Unification Theory.” In Handbook of Automated Reasoning, 445–533. Amsterdam: North-Holland, 2001.
5. !5

Separating type checking into constraint generation and constraint solving provides a more declarative definition of type checkers. This paper introduces a constraint language that integrates name resolution into constraint resolution through scope graph constraints.

This is the basis for the design of the NaBL2 static semantics specification language.

https://doi.org/10.1145/2847538.2847543
PEPM 2016
6. !6

This paper describes the next generation of the approach.

It addresses (previously) open issues in the expressiveness of scope graphs for type systems:
- Structural types
- Generic types

It also addresses an open issue with the staging of information in type systems.

It introduces the Statix DSL for the definition of type systems.

A prototype of Statix is available in Spoofax HEAD, but it is not ready for use in the project yet.

The future
OOPSLA 2018
To appear
8. Record Definitions
!8

let
  type point1 = { x2 : int, y3 : int }
  var p4 := point5{ x6 = 4, y7 = 5 }
in
  p8.x9
end

[[ RecordTy(fields) ^ (s) : ty ]] :=
  new s_rec,
  ty == RECORD(s_rec),
  NIL() <! ty,
  distinct/name D(s_rec)/Field,
  Map2[[ fields ^ (s_rec, s) ]].

[[ Field(x, t) ^ (s_rec, s_outer) ]] :=
  Field{x} <- s_rec,
  Field{x} : ty !,
  [[ t ^ (s_outer) : ty ]].

(scope graph figure: scope s0 declares Type point1 : RECORD(s2); the record scope s2 declares Field x2 : INT and Field y3 : INT; s1 and s3 are the lexical scopes of the let)
9. Record Creation
!9

let
  type point1 = { x2 : int, y3 : int }
  var p4 := point5{ x6 = 4, y7 = 5 }
in
  p8.x9
end

[[ r@Record(t, inits) ^ (s) : ty ]] :=
  [[ t ^ (s) : ty ]],
  ty == RECORD(s_rec),
  new s_use, s_use -I-> s_rec,
  D(s_rec)/Field subseteq/name R(s_use)/Field,
  distinct/name R(s_use)/Field,
  Map2[[ inits ^ (s_use, s) ]].

[[ InitField(x, e) ^ (s_use, s) ]] :=
  Field{x} -> s_use,
  Field{x} |-> d,
  d : ty1,
  [[ e ^ (s) : ty2 ]],
  ty2 <? ty1.

(scope graph figure: as before, s0 declares Type point1 : RECORD(s2) with Field x2 : INT and Field y3 : INT in s2, and Var p4 in s3; for the record creation, the reference Type point5 resolves to point1, and a fresh scope s4 with an import edge into the record scope s_rec holds the references Field x6 and Field y7)
10. Record Field Access
!10

let
  type point1 = { x2 : int, y3 : int }
  var p4 := point5{ x6 = 4, y7 = 5 }
in
  p8.x9
end

[[ FieldVar(e, f) ^ (s) : ty ]] :=
  [[ e ^ (s) : ty_e ]],
  ty_e == RECORD(s_rec),
  new s_use, s_use -I-> s_rec,
  Field{f} -> s_use,
  Field{f} |-> d,
  d : ty.

(scope graph figure: as before; the reference Var p8 resolves to p4 : RECORD(s2), and a fresh scope s5 with an import edge into the record scope s2 holds the reference Field x9, which resolves to Field x2 : INT)
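The record rules above all follow the same pattern: create a fresh use scope with an import edge into the record's scope, then resolve field names through it. As a rough illustration (not NaBL2 itself), a scope graph reduced to declarations, lexical parent edges, and import edges can be sketched in a few lines of Python; the scope and field names and the search order in `resolve` are simplifying assumptions made for this example:

```python
# Toy scope graph for the point example: declarations per scope,
# lexical parent edges (P), and import edges (I). All names are assumed.
decls = {"s2": {"x": ("INT",), "y": ("INT",)},    # record scope
         "s0": {"point": ("RECORD", "s2")}}       # outer scope
parent = {"s1": "s0"}                             # s1 is the let body
imports = {"s_use": "s2"}                         # s_use -I-> s2, created for p.x

def resolve(scope, name):
    """Search the scope itself, then imported scopes, then the lexical parent."""
    if name in decls.get(scope, {}):
        return decls[scope][name]
    if scope in imports:
        found = resolve(imports[scope], name)
        if found is not None:
            return found
    if scope in parent:
        return resolve(parent[scope], name)
    return None
```

Resolving the field reference x9 from the use scope reaches the record declaration through the import edge: `resolve("s_use", "x")` yields `("INT",)`, while `resolve("s1", "point")` climbs the lexical edge to s0.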
13. Solving by Rewriting
!13

<C; G, s> ---> <C'; G', s'>

<t == u, C; G, s> ---> <C; G, s'>                where unify(s,t,u) = s'
<s1 -L-> s2, C; G, s> ---> <C; G', s>            where G + {s1 -L-> s2} = G'
<Ns{x@i} |-> t, C; G, s> ---> <t == d, C; G, s>  where resolve(G,Ns{x@i}) = d

def solve(C):
  if <C; {}, {}> --->* <{}; G, s>:
    return <G, s>
  else:
    fail
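These rewrite steps can be mocked up in Python. The sketch below keeps a worklist of constraints and a state (graph edges plus substitution); `unify` is a naive version without an occurs check, and the term encoding (variables as strings, applications as tuples) is an assumption of this sketch, not NaBL2's actual representation. Resolution constraints are left out, since they need a real scope graph.

```python
def apply(s, t):
    """Apply substitution s to term t (variables are strings)."""
    if isinstance(t, str):
        return apply(s, s[t]) if t in s else t
    return (t[0],) + tuple(apply(s, a) for a in t[1:])

def unify(s, t, u):
    """Naive unification (no occurs check): return an extended substitution."""
    t, u = apply(s, t), apply(s, u)
    if t == u:
        return s
    if isinstance(t, str):
        return {**s, t: u}
    if isinstance(u, str):
        return {**s, u: t}
    if t[0] == u[0] and len(t) == len(u):
        for a, b in zip(t[1:], u[1:]):
            s = unify(s, a, b)
        return s
    raise ValueError("no unifier: %r vs %r" % (t, u))

def solve(constraints):
    """<C; G, s> ---> ... ---> <{}; G, s>: eliminate constraints one by one."""
    graph, s = set(), {}
    work = list(constraints)
    while work:
        c = work.pop()            # selection order is non-deterministic
        if c[0] == "eq":          # <t == u, C; G, s> ---> <C; G, s'>
            s = unify(s, c[1], c[2])
        elif c[0] == "edge":      # <s1 -L-> s2, C; G, s> ---> <C; G', s>
            graph.add((c[1], c[2]))
    return graph, s
```

Solving `ty == FUN(ty1,ty2)` together with `ty1 == INT()` leaves `ty` bound to `FUN(INT(), ty2)` once the substitution is applied.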
14. Solving by Rewriting
!14

Solver = rewrite system
- Rewrites a set of constraints + a solution
- Simplifying and eliminating constraints
  ‣ Constraint selection is non-deterministic
  ‣ A partial order is enforced by side conditions on rewrite rules
- Relies on (other) solvers and algorithms for base cases
  ‣ Unification for term equality
  ‣ Scope graph resolution
- The solution is final if all constraints are eliminated

Does the order matter for the outcome?
- Confluence: the output is the same for any solving order
- Conjecture: partly true for NaBL2
  ‣ Up to variable and scope names
  ‣ Only if all constraints are reduced
16. What gives constraints meaning?
!16

What is the meaning of constraints?
- What is a valid solution?
- Or: in which models are the constraints satisfied?
- Can we describe this independently of an algorithm?

When are constraints satisfied?
- Formally described by the semantics
- Written as G,s ⊨ C
- Satisfied in a model (substitution + scope graph)
- Describes for every type of constraint when it is satisfied

ty == FUN(ty1,ty2)
Var{x} |-> d
ty1 == INT()
17. Semantics of (a Subset of) NaBL2 Constraints
!17

Constraint syntax:

C := t == t    // equality
   | r |-> d   // resolution
   | C /\ C    // conjunction

Constraint semantics:

G,s ⊨ t == u     if s(t) = s(u)
G,s ⊨ r |-> d    if s(r) = Var{x @i} and s(d) = Var{x @j}
                 and Var{x @i} resolves to Var{x @j} in G
G,s ⊨ C1 /\ C2   if G,s ⊨ C1 and G,s ⊨ C2
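The satisfaction relation itself is straightforward to execute. In the Python sketch below, terms are strings (variables) or tuples, the scope graph G is abstracted to a precomputed resolution map from references to declarations, and all concrete names are assumptions made for the example:

```python
def subst(s, t):
    """s(t): look variables up in s (recursively), map over applications."""
    if isinstance(t, str):
        return subst(s, s[t]) if t in s else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(subst(s, a) for a in t[1:])
    return t  # atoms such as position indices

def satisfies(G, s, c):
    """G,s |= c for the three constraint forms."""
    if c[0] == "eq":       # G,s |= t == u   if s(t) = s(u)
        return subst(s, c[1]) == subst(s, c[2])
    if c[0] == "resolve":  # G,s |= r |-> d  if s(r) resolves to s(d) in G
        return G.get(subst(s, c[1])) == subst(s, c[2])
    if c[0] == "and":      # G,s |= C1 /\ C2 if both conjuncts are satisfied
        return satisfies(G, s, c[1]) and satisfies(G, s, c[2])
    raise ValueError("unknown constraint form")
```

With G mapping Var{x @3} to Var{x @2} and s binding d1 to Var{x @2}, the conjunction of ty1 == INT() and Var{x @3} |-> d1 is satisfied.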
18. Using the Semantics
!18

G,s ⊨ t == u     if s(t) = s(u)
G,s ⊨ r |-> d    if s(r) = Var{x @i} and s(d) = Var{x @j}
                 and Var{x @i} resolves to Var{x @j} in G
G,s ⊨ C1 /\ C2   if G,s ⊨ C1 and G,s ⊨ C2

let
  function f1(x2 : int) : int =
    x3 + 1
in
  f4(14)
end

Constraints:
ty1 == INT()
INT() == INT()
Var{x @3} |-> d1
ty2 == INT()
Var{f @4} |-> d2
ty3 == FUN(ty4,ty5)
ty4 == INT()
…

Model:
s = { ty1 -> INT(),
      ty2 -> INT(),
      ty3 -> FUN(INT(),ty5),
      ty4 -> INT(),
      d1 -> Var{x @2},
      d2 -> Var{f @1}
    }

(scope graph figure: s0 declares f1 : FUN(ty1,ty2); the function scope s1 declares x2 and contains the references x3 and f4)
19. Semantics vs Algorithm
!19

What is the difference?
- The algorithm computes a solution (= model)
- The semantics describes when a constraint is satisfied by a model

How are these related?
- Soundness
  ‣ If the solver returns <G, s>, then G,s ⊨ C
- Completeness
  ‣ If an s exists such that G,s ⊨ C, then the solver returns it
  ‣ If no such s exists, the solver fails
- Principality
  ‣ The solver finds the most general s
22. Variables and Substitution
!22

terms t, u; functions f, g, h; variables a, b, c; substitution s

ground term: a term without variables

Variable substitution:
s(a) = t if { a -> t } in s
s(a) = a otherwise
s(f(t0,…,tn)) = f(s(t0),…,s(tn))

Example, with s = { a -> f(g(),b), b -> h() } (the domain of s is {a, b}):
s(f(g(),a)) = f(g(),f(g(),b))
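This definition of substitution application is literal enough to run. In the Python sketch below (term encoding assumed: variables are strings, f(t0,…,tn) is a tuple), note that s is applied once, not to a fixpoint, which is why the b survives inside the result of the slide's example:

```python
def subst(s, t):
    # s(a) = t if { a -> t } in s;  s(a) = a otherwise
    if isinstance(t, str):
        return s.get(t, t)
    # s(f(t0,…,tn)) = f(s(t0),…,s(tn))
    return (t[0],) + tuple(subst(s, a) for a in t[1:])

# the slide's example: s = { a -> f(g(),b), b -> h() }
s = {"a": ("f", ("g",), "b"), "b": ("h",)}
```

Applying s to f(g(),a) gives f(g(),f(g(),b)): the a is replaced, but the b introduced by the replacement is not substituted again. A ground term such as f(g(),h()) is left unchanged.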
23. Unifiers
!23

terms t, u; functions f, g, h; variables a, b, c; substitution s

unifier: a substitution that makes terms equal

f(a,g()) == f(h(),b)       { a -> h(), b -> g() }       f(h(),g()) == f(h(),g())
g(a,f(b)) == g(f(h()),a)   { a -> f(h()), b -> h() }    g(f(h()),f(h())) == g(f(h()),f(h()))
f(a,h()) == g(h(),b)       no unifier, f != g
f(b,b) == b                { b -> f(b,b) }              not idempotent
24. Most General Unifiers
!24

terms t, u; functions f, g, h; variables a, b, c; substitution s

f(a,b) == f(b,c)

{ a -> g(), b -> g(), c -> g() }   gives f(g(),g()) == f(g(),g())
{ a -> b, c -> b }                 gives f(b,b) == f(b,b)    (most general unifier)
{ b -> a, c -> a }                 gives f(a,a) == f(a,a)    (most general unifier)
25. Most General Unifiers
!25

terms t, u; functions f, g, h; variables a, b, c; substitution s

every unifier is an instance of a most general unifier:
{ a -> g(), b -> g(), c -> g() } is { a -> b, b -> b, c -> b } followed by { b -> g() }
{ a -> a, b -> a, c -> a }       is { a -> b, b -> b, c -> b } followed by { b -> a }

most general unifiers are related by renaming substitutions:
{ a -> b, b -> b, c -> b } and { a -> a, b -> a, c -> a } are related by the renamings { b -> a } and { a -> b }

(the (implicit) identity case: bindings such as b -> b may be left implicit)
26. global s
def unify(t, u):
if t is a variable:
t := s(t)
if u is a variable:
u := s(u)
if t == u:
pass
else if t == f(t0,...,tn) and u == g(u0,...,um):
if f == g and n == m:
for i := 1 to n:
unify(ti, ui)
else:
fail "different function symbols"
else if t is not a variable:
unify(u, t)
else if t occurs in u:
fail "recursive term"
else:
s += { s -> t }
Unification
!26
terms t, u
functions f, g, h
variables a, b, c
substitution s
t == a
instantiate variable
t == k(t0,...,t5), u == k(u0,...,u5)
matching terms
t == k(t0,...,t5), u == f(u0,...,u3)
mismatching terms
t == k(t0,...,t5), u == b
mismatching terms
t == a, u == k(g(a,f()))
recursive terms
t == a, u == k(u0,...,u5)
extend unifier
u == b
instantiate variable
equal terms
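The pseudocode above can be made runnable in Python; the tuple term representation and the `occurs` helper are illustrative additions, not part of the slides.

```python
# Sketch: variables are strings, applications are tuples (f, arg, ...).

def unify(t, u, s=None):
    """Compute a most general unifier, extending the substitution s."""
    if s is None:
        s = {}
    # instantiate variables through the current substitution
    while isinstance(t, str) and t in s:
        t = s[t]
    while isinstance(u, str) and u in s:
        u = s[u]
    if t == u:
        return s                                  # equal terms
    if isinstance(t, tuple) and isinstance(u, tuple):
        if t[0] != u[0] or len(t) != len(u):
            raise ValueError("different function symbols")
        for ti, ui in zip(t[1:], u[1:]):          # matching terms
            s = unify(ti, ui, s)
        return s
    if not isinstance(t, str):
        return unify(u, t, s)                     # put the variable on the left
    if occurs(t, u, s):
        raise ValueError("recursive term")        # occurs check
    s[t] = u                                      # extend the unifier
    return s

def occurs(a, t, s):
    """Does variable a occur in term t, modulo the substitution s?"""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t == a or (isinstance(t, tuple)
                      and any(occurs(a, u, s) for u in t[1:]))

print(unify(("f", "a", ("g",)), ("f", ("h",), "b")))
# {'a': ('h',), 'b': ('g',)}
```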
27. Unification (rewriting)
!X
{ t == u } U C; s ———> C’ U C; s’
{ t == t } U C; s ———>
C; s
{ f(t0,…,tn) == f(u0,…,un) } U C; s ———>
{ t0 == u0, …, tn == un } U C; s
{ f(t0,…,tn) == g(u0,…,um) } U C; s ———>
FAIL
if f != g
{ a == t } U C; s ———>
C{a -> t}; s{a -> t}
if a ∉ vars(t)
{ a == t } U C; s ———>
FAIL
if a ∈ vars(t)
{ t == a } U C; s ———>
{ a == t } U C; s
equalities
rewrite function
{ f() == f() } ; {}  ———>  {} ; {}
{ f(g()) == f(h()) } ; {}  ———>  { g() == h() } ; {}  ———>  FAIL
{ g(b,a) == g(a,h(b)) } ; {}  ———>  { b == a, a == h(b) } ; {}
  ———>  { a == h(a) } ; {b -> a}  ———>  FAIL
{ f(g(),h(a)) == f(b,h(b)) } ; {}  ———>  { g() == b, h(a) == h(b) } ; {}
  ———>  { b == g(), h(a) == h(b) } ; {}  ———>  { h(a) == h(g()) } ; {b -> g()}
  ———>  { a == g() } ; {b -> g()}  ———>  {} ; {b -> g(), a -> g()}
terms t, u
functions f, g, h
variables a, b, c
substitution s
equalities C substitution s
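The rewrite rules can also be read directly as a worklist algorithm over a set of equalities; a Python sketch, with variables as strings and applications as tuples (an illustrative representation).

```python
# Sketch of the rewrite rules as a worklist over equalities <C; s>.

def solve(equalities):
    """Rewrite the configuration <C; s> until C is empty, or fail."""
    C = list(equalities)
    s = {}
    while C:
        t, u = C.pop()
        if t == u:
            continue                                   # { t == t } U C ——> C
        if isinstance(t, tuple) and isinstance(u, tuple):
            if t[0] != u[0] or len(t) != len(u):
                raise ValueError("FAIL: different function symbols")
            C.extend(zip(t[1:], u[1:]))                # decompose arguments
        elif not isinstance(t, str):
            C.append((u, t))                           # orient: variable left
        elif occurs_in(t, u):
            raise ValueError("FAIL: recursive term")   # a in vars(t)
        else:
            C = [(subst(t, u, l), subst(t, u, r)) for l, r in C]   # C{a -> t}
            s = {a: subst(t, u, x) for a, x in s.items()}          # s{a -> t}
            s[t] = u
    return s

def subst(a, t, x):
    """Replace variable a by term t in term x."""
    if x == a:
        return t
    if isinstance(x, tuple):
        return (x[0],) + tuple(subst(a, t, y) for y in x[1:])
    return x

def occurs_in(a, t):
    return t == a or (isinstance(t, tuple)
                      and any(occurs_in(a, u) for u in t[1:]))

print(solve([(("f", ("g",), ("h", "a")), ("f", "b", ("h", "b")))]))
# {'a': ('g',), 'b': ('g',)}
```

This reproduces the f(g(),h(a)) == f(b,h(b)) example from the slide, ending with the substitution {b -> g(), a -> g()}.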
28. Soundness
- If the algorithm returns a unifier, it makes the terms equal
Completeness
- If a unifier exists, the algorithm returns one
Principality
- If the algorithm returns a unifier, it is a most general unifier
Termination
- The algorithm always terminates: it either returns a unifier or fails
Properties of Unification
!27
30. Space complexity
- Exponential
- Representation of unifier
Time complexity
- Exponential
- Recursive calls on terms
Solution
- Union-Find algorithm
- Amortized cost per operation can be considered constant
Complexity of Unification
!28
h(a1, …, an, f(b0,b0), …, f(bn-1,bn-1), an) ==
h(f(a0,a0), …, f(an-1,an-1), b1, …, bn-1, bn)
fully applied:
a1 -> f(a0,a0)
a2 -> f(f(a0,a0), f(a0,a0))
ai -> … 2^(i+1)-1 subterms …
b1 -> f(a0,a0)
b2 -> f(f(a0,a0), f(a0,a0))
bi -> … 2^(i+1)-1 subterms …
triangular:
a1 -> f(a0,a0)
a2 -> f(a1,a1)
ai -> … 3 subterms …
b1 -> f(a0,a0)
b2 -> f(a1,a1)
bi -> … 3 subterms …
terms t, u
functions f, g, h
variables a, b, c
substitution s
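The blowup can be demonstrated by counting subterm nodes; a sketch where `size` and `fully_applied` are made-up helper names and terms are tuples.

```python
# Sketch: compare fully applied vs triangular unifier representations.
# Terms are tuples (f, arg, ...); variables are strings (illustrative).

def size(t):
    """Number of subterm nodes in a term."""
    if isinstance(t, str):
        return 1
    return 1 + sum(size(u) for u in t[1:])

def fully_applied(i):
    """a_i fully expanded: f applied to two copies of the previous term."""
    t = "a0"
    for _ in range(i):
        t = ("f", t, t)
    return t

print([size(fully_applied(i)) for i in range(1, 5)])
# [3, 7, 15, 31]  -- 2^(i+1)-1 subterms, exponential in i

# The triangular form keeps a_i -> f(a_{i-1}, a_{i-1}) symbolic: 3 subterms
print(size(("f", "a1", "a1")))   # 3
```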
31. Set Representatives
!29
FIND(a):
b := rep(a)
if b == a:
return a
else
return FIND(b)
UNION(a1,a2):
b1 := FIND(a1)
b2 := FIND(a2)
LINK(b1,b2)
LINK(a1,a2):
rep(a1) := a2
a == b
c == a
u == w
v == u
x == v
x == c
[tree diagram: the equalities build a forest in which each tree's root is the representative of its equivalence class]
32. Path Compression
!30
FIND(a):
b := rep(a)
if b == a:
return a
else
b := FIND(b)
rep(a) := b
return b
UNION(a1,a2):
b1 := FIND(a1)
b2 := FIND(a2)
LINK(b1,b2)
LINK(a1,a2):
rep(a1) := a2
…
x == b
x == c
x == w
x == v
[tree diagram: repeated FIND calls on x compress the path, pointing x directly at its representative]
33. Tree Balancing
!31
FIND(a):
b := rep(a)
if b == a:
return a
else
b := FIND(b)
rep(a) := b
return b
UNION(a1,a2):
b1 := FIND(a1)
b2 := FIND(a2)
LINK(b1,b2)
LINK(a1,a2):
if size(a2) > size(a1):
rep(a1) := a2
size(a2) += size(a1)
else:
rep(a2) := a1
size(a1) += size(a2)
…
x == c
[tree diagram: subtree sizes annotate each node; LINK attaches the smaller tree below the larger one, keeping lookup paths short]
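The FIND/UNION/LINK operations with both optimizations can be combined into one small class; a Python sketch (the class and variable names are illustrative).

```python
# Sketch: union-find with path compression and union-by-size.

class UnionFind:
    def __init__(self):
        self.rep = {}     # rep[a]: parent pointer; a root points to itself
        self.size = {}    # size of the tree rooted at each representative

    def find(self, a):
        if a not in self.rep:
            self.rep[a] = a
            self.size[a] = 1
        b = self.rep[a]
        if b == a:
            return a
        b = self.find(b)
        self.rep[a] = b          # path compression: point a at the root
        return b

    def union(self, a1, a2):
        b1, b2 = self.find(a1), self.find(a2)
        if b1 == b2:
            return
        if self.size[b2] > self.size[b1]:
            b1, b2 = b2, b1      # tree balancing: attach smaller under larger
        self.rep[b2] = b1
        self.size[b1] += self.size[b2]

uf = UnionFind()
for x, y in [("a", "b"), ("c", "a"), ("u", "w"),
             ("v", "u"), ("x", "v"), ("x", "c")]:
    uf.union(x, y)
print(uf.find("b") == uf.find("w"))   # True: all seven variables end up equivalent
```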
35. Main idea
- Represent unifier as graph
- One variable represents each equivalence class
- Replace substitution by union & find operations
- Testing equality becomes testing node identity
Optimizations
- Path compression makes repeated lookups fast
- Tree balancing keeps paths short
Complexity
- Linear in space and almost linear in time (technically inverse Ackermann)
- Easy to extract triangular unifier from graph
- Postpone occurrence checks to prevent traversing (potentially) large terms
Union-Find
!33
Martelli, Montanari. An Efficient Unification Algorithm. TOPLAS, 1982
37. What is the meaning of constraints?
- Formally described by constraint semantics
- Semantics classify solutions, but do not compute them
- Semantics are expressed in terms of other theories
‣ Syntactic equality
‣ Scope graph resolution
What techniques can we use to implement solvers?
- Constraint Simplification
‣ Simplification rules
‣ Depends on built-in procedures to unify or resolve names
- Unification
‣ Unifiers make terms with variables equal
‣ Unification computes most general unifiers
What is the relation between solver and semantics?
- Soundness: any solution satisfies the semantics
- Completeness: if a solution exists, the solver finds it
- Principality: the solver computes most general solutions
Summary
!35