We define an equivalence relation on propositions and a proof system where equivalent propositions have the same proofs. The system obtained this way resembles several known non-deterministic and algebraic lambda-calculi.
Discussion of Fearnhead and Prangle, RSS, Dec. 14, 2011 – Christian Robert
The document discusses approximate Bayesian computation (ABC), a technique used when the likelihood function is intractable. ABC works by simulating data under different parameter values and accepting simulations that are close to the observed data according to a distance measure. The key challenges are choosing a sufficient summary statistic of the data and setting the tolerance level. Later sections discuss using a noisy ABC approach, where the summary statistic is perturbed, and calibrating the method so that the ABC posterior converges to the true parameter as the number of simulations increases. The document examines issues around choosing optimal summary statistics and tolerance levels to minimize errors in the ABC approximation.
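The simulate-and-accept loop described above can be sketched in a few lines of Python. This is a toy illustration with an assumed normal-mean model and the sample mean as summary statistic, not the setup analyzed in the paper:

```python
import random
import statistics

def abc_rejection(observed, prior_sample, simulate, summary, tolerance, n_sims=10000):
    """Basic rejection ABC: keep parameter draws whose simulated summary
    lands within `tolerance` of the observed summary."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_sims):
        theta = prior_sample()                # draw from the prior
        s_sim = summary(simulate(theta))      # summarize simulated data
        if abs(s_sim - s_obs) < tolerance:    # distance on the summary statistic
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a normal distribution with known sd = 1.
random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(50)]
posterior = abc_rejection(
    observed=data,
    prior_sample=lambda: random.uniform(-5, 5),
    simulate=lambda mu: [random.gauss(mu, 1.0) for _ in range(50)],
    summary=statistics.mean,   # the sample mean happens to be sufficient here
    tolerance=0.2,
)
```

Shrinking `tolerance` tightens the approximation at the cost of a lower acceptance rate, which is exactly the trade-off the discussion revolves around.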
Galois: A Language for Proofs Using Galois Connections and Fork Algebras – Paulo Silva
The document describes a language called Galois for performing proofs using Galois connections and fork algebras. It aims to build a proof assistant called Galculator based on these concepts. Galois would be the front-end language for writing proofs in a typed, first-order logical style and interacting with the Galculator proof engine. The document provides background on key theoretical concepts like indirect equality, Galois connections, fork algebras, and the point-free transform used in the language and system design. It also outlines the objectives and provides a brief example proof using Galois connections.
It covers the basics of calculus, including the formulas of derivatives and how to apply them, together with the definition of a function, the different types of functions, and relations.
This document discusses logarithmic and exponential functions, together with trigonometric functions such as f(x) = sin x, f(x) = cos x, and f(x) = tan x, the standard trigonometric identities, and their derivatives and integrals.
This document contains solutions to 5 mathematical problems presented in the form of proofs or justifications. The key details are:
1) Justifications are provided for whether certain logical statements are true or false, including that the contrapositive of A→B is ¬B→¬A, and that P∧Q being true implies P∨Q is true.
2) A proof is given that if a and b are odd integers, then their product a*b is also an odd integer.
3) It is proved that if statement A is true and B is false, then ¬(f3(A) ∨ f2(B)) is a true statement.
4
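The odd-product claim in item 2 admits a short direct proof; the following is a standard reconstruction, not necessarily the document's exact wording:

```latex
\textbf{Claim.} If $a$ and $b$ are odd integers, then $ab$ is odd.

\textbf{Proof.} Write $a = 2m + 1$ and $b = 2n + 1$ for integers $m, n$. Then
\[
ab = (2m+1)(2n+1) = 4mn + 2m + 2n + 1 = 2(2mn + m + n) + 1,
\]
which has the form $2k + 1$ with $k = 2mn + m + n \in \mathbb{Z}$, so $ab$ is odd. \qed
```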
Effects of the Discussion Groups Sizes on the Dynamics of Public Opinion – KNOWeSCAPE2014
Oleg Yordanov – Effects of the Discussion Groups Sizes on the Dynamics of Public Opinion (Talk at 2nd Annual KNOWeSCAPE Scientific Meeting, http://knowescape.org/knowescape2014-2/)
Abstract: An enhanced hybrid approach to OWL query answering that combines an RDF triple-store with an OWL reasoner in order to provide scalable pay-as-you-go performance. The enhancements presented here include an extension to deal with arbitrary OWL ontologies and optimisations that significantly improve scalability. We have implemented these techniques in a prototype system, a preliminary evaluation of which has produced very encouraging results.
This document discusses canonical forms for representing Boolean functions. It defines sum of products (SOP) and product of sums (POS) forms, which are standard representations. Minterms and maxterms are also defined as product and sum terms involving all variables. The canonical SOP form is defined as the logical sum of minterms where the function value is 1. Canonical POS form is the logical product of maxterms where the function value is 0. Procedures to convert between SOP, POS and canonical forms are presented. Canonical forms provide a unique representation and can be used to determine equivalency between functions.
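The minterm/maxterm construction can be illustrated with a short Python sketch (the helper name and the example function are mine, not from the document):

```python
from itertools import product

def canonical_forms(f, names):
    """Enumerate all input rows of a Boolean function and return the
    minterm indices (rows where f = 1) and maxterm indices (rows where f = 0)."""
    minterms, maxterms = [], []
    for row in product([0, 1], repeat=len(names)):
        index = int("".join(map(str, row)), 2)   # read the row as a binary number
        (minterms if f(*row) else maxterms).append(index)
    return minterms, maxterms

# Example: f(a, b, c) = a AND (b OR c)
f = lambda a, b, c: a & (b | c)
minterms, maxterms = canonical_forms(f, "abc")
# Canonical SOP is the sum of the minterms; canonical POS is the
# product of the maxterms, and together they partition all 2^n rows.
```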
Polynomial functions are functions that can be written as the sum of terms involving powers of x, with the powers being non-negative integers and the coefficients being real numbers. Examples of polynomial functions include F(x) = 6x^4 - 3x^2 + 2x, which is a polynomial of degree 4, and F(x) = x^12. Key terms related to polynomial functions are degree, leading coefficient, standard form, and factored form.
The document discusses AND/OR graphs and the AO* algorithm for searching AND/OR trees. Some problems can be represented as having subgoals that can be achieved simultaneously or independently (AND) or as OR options. The AO* algorithm extends A* search to AND/OR trees. It examines multiple nodes simultaneously, selecting the most promising path and expanding nodes to generate successors. It computes heuristic values (h) for nodes and propagates new information up the graph as the search progresses until a solution is found or all paths are determined to be unsolvable. An example demonstrates how AO* searches an AND/OR graph and labels nodes as it proceeds.
The document defines exponential functions as functions of the form f(x) = b^x, where b is a positive constant base. It provides examples and discusses key characteristics of exponential graphs such as their domains, ranges, and asymptotic behavior. The document also covers transformations of exponential graphs and using exponential functions to model compound interest over time.
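The compound-interest model mentioned at the end is the usual A = P(1 + r/n)^(nt); a minimal sketch (the figures are mine, purely illustrative):

```python
def compound_amount(principal, rate, times_per_year, years):
    """A = P * (1 + r/n)^(n*t): balance under compound interest,
    where r is the annual rate and n the compounding frequency."""
    return principal * (1 + rate / times_per_year) ** (times_per_year * years)

# $1000 at 5% annual interest, compounded quarterly for 10 years:
balance = compound_amount(1000, 0.05, 4, 10)   # roughly $1643.62
```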
The document discusses lines and planes in mathematics. It provides multiple ways to specify a line, including using two points, a point and slope, or a slope and y-intercept. Lines can also be described using vectors, with a line being the set of points a + tv, where a is a point on the line, v is a direction vector, and t is a real number. Planes are similarly defined as the set of points where the dot product of a normal vector p and the offset (x - a) is 0, where a is a point on the plane. An example shows how to check if three points lie on the same line by finding the line equation and checking if a third point satisfies it.
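The collinearity check in the example follows directly from the vector description of a line: r lies on the line through p and q exactly when q - p and r - p are parallel. A small 2D sketch with my own example points:

```python
def collinear(p, q, r):
    """Three points are collinear iff the direction vectors q - p and
    r - p are parallel, i.e. their 2D cross product is zero."""
    (px, py), (qx, qy), (rx, ry) = p, q, r
    return (qx - px) * (ry - py) - (qy - py) * (rx - px) == 0

# (0, 1), (1, 3), (2, 5) all lie on the line y = 2x + 1; (2, 6) does not.
on_line = collinear((0, 1), (1, 3), (2, 5))
off_line = collinear((0, 1), (1, 3), (2, 6))
```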
Analytic construction of elliptic curves and rational points – mmasdeu
This document summarizes a talk on constructing elliptic curves over number fields from automorphic forms using non-archimedean methods. It outlines computing the period lattice of an elliptic curve over a non-archimedean field by integrating rigid-analytic forms over the Bruhat-Tits tree, then using this to recover the Weierstrass equation of the curve from its j-invariant. An example construction over a quartic field is also given.
A new practical algorithm for volume estimation using annealing of convex bodies – Vissarion Fisikopoulos
We study the problem of estimating the volume of convex polytopes, focusing on H- and V-polytopes, as well as zonotopes. Although a lot of effort is devoted to practical algorithms for H-polytopes there is no such method for the latter two representations. We propose a new, practical algorithm for all representations, which is faster than existing methods. It relies on Hit-and-Run sampling, and combines a new simulated annealing method with the Multiphase Monte Carlo (MMC) approach. Our method introduces the following key features to make it adaptive: (a) It defines a sequence of convex bodies in MMC by introducing a new annealing schedule, whose length is shorter than in previous methods with high probability, and the need of computing an enclosing and an inscribed ball is removed; (b) It exploits statistical properties in rejection-sampling and proposes a better empirical convergence criterion for specifying each step; (c) For zonotopes, it may use a sequence of convex bodies for MMC different than balls, where the chosen body adapts to the input. We offer an open-source, optimized C++ implementation, and analyze its performance to show that it outperforms state-of-the-art software for H-polytopes by Cousins-Vempala (2016) and Emiris-Fisikopoulos (2018), while it undertakes volume computations that were intractable until now, as it is the first polynomial-time, practical method for V-polytopes and zonotopes that scales to high dimensions (currently 100). We further focus on zonotopes, and characterize them by their order (number of generators over dimension), because this largely determines sampling complexity. We analyze a related application, where we evaluate methods of zonotope approximation in engineering.
The document describes polar coordinates, which represent the location of a point P in a plane using two numbers: r, the distance from P to the origin O, and θ, the angle between the positive x-axis and the line from O to P. θ is positive for counter-clockwise angles and negative for clockwise angles. The polar coordinate (r, θ) uniquely identifies P's location. The document also provides the conversion formulas between polar coordinates (r, θ) and rectangular coordinates (x, y).
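The conversion formulas are x = r cos θ, y = r sin θ in one direction and r = √(x² + y²), θ = atan2(y, x) in the other; a small sketch:

```python
import math

def polar_to_rect(r, theta):
    """(r, θ) -> (x, y): x = r cos θ, y = r sin θ."""
    return r * math.cos(theta), r * math.sin(theta)

def rect_to_polar(x, y):
    """(x, y) -> (r, θ) with θ in (-π, π]; atan2 picks the right quadrant,
    which a bare arctan(y/x) would not."""
    return math.hypot(x, y), math.atan2(y, x)

x, y = polar_to_rect(2, math.pi / 2)   # points straight up: (0, 2)
```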
Stinespring’s theorem for maps on Hilbert C*-modules – wtyru1989
This document discusses Stinespring's theorem for completely positive maps on Hilbert C*-modules. It begins by introducing C*-algebras, Hilbert C*-modules, and completely positive maps. It then presents Stinespring's theorem for completely positive maps between C*-algebras. The document goes on to discuss Asadi's generalization of Stinespring's theorem to completely positive maps between a C*-algebra and bounded operators on a Hilbert space that are compatible with a Hilbert C*-module. It concludes by presenting a further generalization of Stinespring's theorem to completely positive maps between a C*-algebra and a Hilbert C*-module.
This document summarizes several integration techniques including the fundamental theorem of calculus, substitution, integration by parts, trigonometric integrals, partial fractions, and approximate integration. It explains that the fundamental theorem relates antiderivatives to definite integrals, substitution allows integrals with functions of functions to be evaluated, and integration by parts and partial fractions are used to decompose integrals that cannot be directly evaluated. Trigonometric integrals may use trigonometric substitutions or identities while approximate integration provides numerical approximations.
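As one concrete instance of approximate integration, here is composite Simpson's rule; this is a standard method chosen for illustration, and the document may instead use the trapezoidal or midpoint rule:

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with an even number of subintervals."""
    if n % 2:
        n += 1                      # Simpson's rule requires an even n
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)   # 4-2-4-2... weights
    return total * h / 3

# ∫_0^1 x^2 dx = 1/3; Simpson's rule is exact for polynomials up to degree 3.
approx = simpson(lambda x: x * x, 0, 1)
```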
Probability – Arunesh Chand Mankotia 2005 – Consultonmic
The document provides an overview of key probability concepts including:
- Sample space is the set of all possible outcomes of a random experiment.
- Mutually exclusive events cannot occur simultaneously.
- Venn diagrams can visually depict relationships between events like intersections.
- Classical probability is the ratio of favorable outcomes to total possible outcomes.
- Relative frequency probability is the limit of observed frequencies of an event over many trials.
- Bayes' theorem relates conditional and inverse conditional probabilities.
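The Bayes' theorem relationship in the last bullet can be made concrete with a standard screening-test example (the numbers are mine, purely illustrative):

```python
def bayes(prior, sensitivity, false_positive_rate):
    """P(H | E) = P(E | H) P(H) / P(E), with P(E) expanded
    by the law of total probability over H and not-H."""
    p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_evidence

# A test that is 99% sensitive with a 5% false-positive rate,
# applied to a condition with 1% prevalence:
posterior = bayes(prior=0.01, sensitivity=0.99, false_positive_rate=0.05)
# Despite the accurate test, the posterior is only about 1/6.
```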
This document provides instructions for graphing sine and cosine functions. It notes the difference between (bx-c) and b(x-c) and provides examples of graphing 3 periods of different sine and cosine functions. The document also lists homework problems from page 328 involving graphing sine and cosine functions.
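The (bx - c) versus b(x - c) distinction the document highlights amounts to whether the phase shift is c/b or c; a small sketch:

```python
import math

def sinusoid_features(a, b, c):
    """For y = a*sin(b*x - c): amplitude |a|, period 2π/b, phase shift c/b.
    Since sin(bx - c) = sin(b(x - c/b)), the graph shifts by c/b, not by c."""
    return abs(a), 2 * math.pi / b, c / b

# y = 3 sin(2x - π): shifted π/2 to the right, not π.
amp, period, shift = sinusoid_features(3, 2, math.pi)
```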
This document discusses properties of polynomial graphs and zeros. It states that for a polynomial f(x), the following are equivalent: k is a zero of f, x - k is a factor of f(x), and k is a solution to the equation f(x) = 0. It also notes that if k is real, k will be an x-intercept of the graph of f(x). Finally, it concludes that the graph of any polynomial function of degree n will have at most n - 1 turning points, and if the function has n distinct real zeros, there will be exactly n - 1 turning points.
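The zero/factor equivalence can be checked mechanically with synthetic division, since the remainder on dividing f(x) by (x - k) is f(k); the example polynomial below is my own:

```python
def synthetic_division(coeffs, k):
    """Divide a polynomial (coefficients in descending powers) by (x - k).
    Returns (quotient coefficients, remainder); the remainder is f(k),
    so it is 0 exactly when k is a zero, i.e. when x - k is a factor."""
    acc = [coeffs[0]]
    for c in coeffs[1:]:
        acc.append(c + k * acc[-1])   # bring down, multiply, add
    return acc[:-1], acc[-1]

# f(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3); divide by (x - 2):
q, r = synthetic_division([1, -6, 11, -6], 2)
```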
1. Prove that the function f(x) = x if x is rational, x^2 if x is irrational, is continuous at 1 and discontinuous at 2.
2. Show that if two continuous functions f and g agree on all rational numbers, then they are equal everywhere.
3. Show that if a function f is such that the sequence {f(x_n)} converges whenever {x_n} converges to c, then f is continuous at c.
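One standard way to argue Problem 1 (a sketch; the document's own solution may differ):

```latex
\textbf{Sketch for Problem 1.} For all $x$,
\[
|f(x) - 1| \le \max\bigl(|x - 1|,\; |x^2 - 1|\bigr),
\]
and both bounds tend to $0$ as $x \to 1$, so $f$ is continuous at $1$
(the branches agree there, since $1 = 1^2$). At $x = 2$ the branches
disagree ($2 \ne 4$): a rational sequence $x_n \to 2$ gives
$f(x_n) \to 2$, while an irrational sequence $y_n \to 2$ gives
$f(y_n) \to 4$, so $\lim_{x \to 2} f(x)$ does not exist.
```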
The document introduces linear logic and provides examples of proofs in linear logic using natural deduction. It discusses key concepts in linear logic including linear implication (⊸), contexts, the restriction that each hypothesis must be used exactly once, and the introduction and elimination rules for linear implication. It also covers other linear logic connectives like falsehood, AND, OR, and storage (!A), and provides examples of proofs using these connectives. Finally, it discusses computation in linear logic through a typed lambda calculus and a linear Lisp machine that avoids garbage collection by not allowing data sharing.
This document discusses deformations of twisted harmonic maps. It begins with background on harmonic maps and Higgs bundles. First order deformations of representations and harmonic maps are introduced, along with equivariant and harmonic deformations. It is shown that equivariant harmonic deformations correspond to harmonic 1-forms. The first variation of energy is computed, and critical points are characterized as complex variations of Hodge structure when the domain is Kähler. Second order deformations are then defined.
ECCV2008: MAP Estimation Algorithms in Computer Vision - Part 1 – zukun
The document describes various algorithms for maximum a posteriori (MAP) estimation in computer vision problems. It discusses how MAP estimation involves defining an energy function consisting of unary and pairwise potentials, and finding the labeling that minimizes this energy function. Common computer vision problems addressed include binary image segmentation, object detection using parts-based models, and stereo correspondence. Computational challenges are discussed as MAP estimation is NP-hard in general, though approximate algorithms can be used.
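Although MAP estimation is NP-hard in general, the chain-structured special case of a unary-plus-pairwise energy is minimized exactly by dynamic programming; a minimal binary-label sketch with my own toy numbers, not an algorithm taken from the talk:

```python
def chain_map(unary, pairwise):
    """Exact MAP on a chain of binary variables: minimize the sum of
    unary[i][x_i] plus a Potts cost `pairwise` whenever neighbors differ,
    via Viterbi-style dynamic programming over the two labels."""
    cost = list(unary[0])              # best energy ending at node 0, per label
    back = []
    for u in unary[1:]:
        choices, new_cost = [], []
        for label in (0, 1):
            cands = [cost[p] + (pairwise if p != label else 0.0) for p in (0, 1)]
            best = 0 if cands[0] <= cands[1] else 1
            choices.append(best)
            new_cost.append(cands[best] + u[label])
        back.append(choices)
        cost = new_cost
    label = 0 if cost[0] <= cost[1] else 1   # backtrack from the cheaper end
    labels = [label]
    for choices in reversed(back):
        label = choices[label]
        labels.append(label)
    return labels[::-1], min(cost)

# Three pixels whose unary costs prefer labels 0, 0, 1; smoothness weight 0.5.
labels, energy = chain_map([[0.0, 1.0], [0.2, 0.8], [0.9, 0.1]], 0.5)
```

For grid-structured problems like segmentation and stereo, exact minimization is intractable in general, which is where the approximate algorithms the talk surveys come in.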
The document discusses finding equations to model periodic functions from graphs. It works through examples of cosine, sine and other wave functions, identifying their amplitude and period and determining the specific equation from those characteristics. For the first example, the graph is a reflected cosine wave with amplitude 3 and period 2π, so the equation is y = -3cos(x).
- The document discusses one-shot entanglement theory, which considers manipulating entanglement without assuming many identical copies or an i.i.d. setting. This generalizes the traditional asymptotic i.i.d. entanglement theory.
- For pure states, the one-shot distillable entanglement is characterized by the smoothed min-entropy of the reduced state. The one-shot entanglement cost is characterized by the smoothed max-entropy.
- For mixed states, the one-shot results involve smoothed min- and max-entropies as well as smoothed relative entropies. These generalize the corresponding quantities in the asymptotic i.i.d. setting, providing a one-to-one correspondence between the two frameworks.
This document describes eliminators (also called induction principles) for dependent types in dependent type theory, including:
- Empty, unit, sum, product, and function types
- Dependent pairs and dependent functions
- Booleans, natural numbers, lists, and vectors
- Identity types
For each type, it provides the eliminator signature and definition, allowing values of that type to be analyzed in a dependent context.
19 - Scala. Eliminators into dependent types (induction) – Roman Brovko
This document describes eliminators (also called induction principles) for dependent types in dependent type theory, including:
- Empty, unit, sum, product, and function types
- Dependent pairs and dependent functions
- Boolean, natural numbers, lists, and vectors
- Identity types
For each type, it provides the eliminator signature and definition, describing how to eliminate values of that type into a dependent type.
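The eliminator idea carries over to any language with higher-order functions; as a non-dependent sketch in Python (the names are mine), the natural-number eliminator is just primitive recursion:

```python
def nat_elim(base, step, n):
    """Eliminator (recursion principle) for natural numbers:
        nat_elim base step 0        = base
        nat_elim base step (succ k) = step k (nat_elim base step k)
    In the dependent version, the result type may depend on n."""
    acc = base
    for k in range(n):
        acc = step(k, acc)
    return acc

# Addition and factorial defined purely through the eliminator:
add = lambda m, n: nat_elim(m, lambda _, acc: acc + 1, n)
fact = lambda n: nat_elim(1, lambda k, acc: (k + 1) * acc, n)
```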
This document provides an outline and definitions for fundamental concepts in set theory and discrete mathematics, including:
1. Definitions of sets, operations on sets like union and intersection, and relations.
2. Functions, relations, and properties like domains, ranges, and composition.
3. Partial orders, trees, groups, and other algebraic structures.
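The set operations and relation vocabulary in items 1–2 map directly onto Python's built-in sets (a small illustration with my own examples):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}

union = A | B              # all elements in A or B
intersection = A & B       # elements in both
difference = A - B         # elements of A not in B
is_subset = {1, 2} <= A    # subset test

# A relation as a set of ordered pairs, with its domain and range:
R = {(1, 'a'), (2, 'b'), (2, 'c')}
domain = {x for x, _ in R}
rng = {y for _, y in R}
```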
This document discusses methods for semiparametric estimation of heavy-tailed densities while including covariate information. It introduces a transformation that separates a density's tail from its bulk using a tail-identified base family. This allows modeling the bulk parametrically while leaving the tail nonparametric. It extends linear quantile regression to multiple covariates by jointly modeling quantile planes over arbitrary predictor spaces. Applications to hurricane intensity and species abundance data are presented.
1. The document discusses a library stock management system with entities like books, copies, readers, and loans. It defines relationships between these entities like what books are stocked, which copies are issued to readers, and overdue return dates.
2. Set theory concepts like intersection, union, subset, and complement are explained through examples like disjoint sets, inclusion relationships between sets.
3. An example algebraic expression is broken down step-by-step to show that a subset relationship holds true.
This document describes a higher-order logical framework called Hybrid that can reason about programming languages and logics. Hybrid uses higher-order abstract syntax to represent object logic expressions and implements inference rules in Coq. It consists of three layers: the object logic layer encodes the target language, the specification logic layer defines deductive rules for the object logic, and the reasoning logic layer is Coq. Hybrid improves on previous approaches by allowing more object logic judgments to be encoded and proves properties like cut elimination on the specification logic. The document provides an example encoding of the correspondence between HOAS and de Bruijn representations of lambda terms in Hybrid.
The document discusses description logics, which are decidable fragments of first-order logic used for knowledge representation. It presents the syntax and semantics of ALC, a basic description logic. It then introduces a labeled sequent calculus called SCALC for reasoning with ALC concepts. SCALC uses labeled formulas and includes structural, boolean, and generalization rules for reasoning over ALC concepts. An example proof in SCALC is provided.
This document discusses various theories of truth and paradoxes involving truth. It examines proposals by philosophers like Vann McGee, Hartry Field, and JC Beal regarding how to address paradoxes like the liar paradox within formal theories. It also discusses fuzzy logics and Łukasiewicz logic as possible frameworks for modeling graduality and comparative notions of truth. Adding a truth predicate to formal theories is shown to potentially lead to deviations from the intended ontology or revenge paradoxes.
1. Non determinism through type isomorphism
Alejandro Díaz-Caro Gilles Dowek
LIPN, Université Paris 13, Sorbonne Paris Cité INRIA – Paris–Rocquencourt
7th LSFA
Rio de Janeiro, September 29–30, 2012
2. Motivation: Di Cosmo’s isomorphisms [Di Cosmo’95]
- A ∧ B ≡ B ∧ A
- A ∧ (B ∧ C) ≡ (A ∧ B) ∧ C
- A ⇒ (B ∧ C) ≡ (A ⇒ B) ∧ (A ⇒ C)
- (A ∧ B) ⇒ C ≡ A ⇒ (B ⇒ C)
- A ⇒ (B ⇒ C) ≡ B ⇒ (A ⇒ C)
- A ∧ T ≡ A
- A ⇒ T ≡ T
- T ⇒ A ≡ A
- ∀X.∀Y.A ≡ ∀Y.∀X.A
- ∀X.A ≡ ∀Y.A[Y/X]
- ∀X.(A ⇒ B) ≡ A ⇒ ∀X.B   if X ∉ FV(A)
- ∀X.(A ∧ B) ≡ ∀X.A ∧ ∀X.B
- ∀X.T ≡ T
- ∀X.(A ∧ B) ≡ ∀X.∀Y.(A ∧ B[Y/X])

We want a proof system where isomorphic propositions have the same proofs.
4. Minimal second order propositional logic

ax:  Γ, x:A ⊢ x : A
⇒I:  from Γ, x:A ⊢ t : B, infer Γ ⊢ λx.t : A ⇒ B
⇒E:  from Γ ⊢ t : A ⇒ B and Γ ⊢ s : A, infer Γ ⊢ ts : B
∀I:  from Γ ⊢ t : A, with X ∉ FV(Γ), infer Γ ⊢ t : ∀X.A
∀E:  from Γ ⊢ t : ∀X.A, infer Γ ⊢ t : A[B/X]

Adding conjunction (pairing):
∧I:  from Γ ⊢ t : A and Γ ⊢ r : B, infer Γ ⊢ ⟨t, r⟩ : A ∧ B

We want  A ∧ B = B ∧ A  and  A ∧ (B ∧ C) = (A ∧ B) ∧ C,
so  ⟨t, r⟩ = ⟨r, t⟩  and  ⟨t, ⟨r, s⟩⟩ = ⟨⟨t, r⟩, s⟩.
We therefore write t + r for the pair, with
t + r = r + t
t + (r + s) = (t + r) + s
λx.(t + r) = λx.t + λx.r
Also A ⇒ (B ∧ C) = (A ⇒ B) ∧ (A ⇒ C) induces (t + r)s = ts + rs.
10. What about ∧-elimination?

From Γ ⊢ t + r : A ∧ B we would get Γ ⊢ π1(t + r) : A.
But A ∧ B = B ∧ A, so also Γ ⊢ t + r : B ∧ A, and hence Γ ⊢ π1(t + r) : B !!
Moreover, t + r = r + t, so π1(t + r) = π1(r + t) !!

Workaround: Church style. Project with respect to a type:
if Γ ⊢ t : A, then πA(t + r) → t

This induces non-determinism:
if Γ ⊢ t : A and Γ ⊢ r : A, then both πA(t + r) → t and πA(t + r) → r.

We are interested in the proof theory, and both t and r are valid proofs of A.
14. The calculus

Types
A, B, C ::= X | A ⇒ B | A ∧ B | ∀X.A

Equivalences
A ∧ B ≡ B ∧ A
(A ∧ B) ∧ C ≡ A ∧ (B ∧ C)
A ⇒ (B ∧ C) ≡ (A ⇒ B) ∧ (A ⇒ C)

Terms
t, r, s ::= x^A | λx^A.t | tr | ΛX.t | t{A} | t + r | πA(t)

Reduction rules
(λx^A.t)r → t[r/x]
(ΛX.t){A} → t[A/X]
πA(t + r) → t   (if Γ ⊢ t : A)

Structural equivalences (⇄)
t + r ⇄ r + t
(t + r) + s ⇄ t + (r + s)
(t + r)s ⇄ ts + rs
λx^A.(t + r) ⇄ λx^A.t + λx^A.r
πA⇒B(t)r ⇄ πB(tr)   (if Γ ⊢ t : A ⇒ (B ∧ C))

Typing rules
ax:  Γ, x:A ⊢ x : A
⇒I:  from Γ, x:A ⊢ t : B, infer Γ ⊢ λx^A.t : A ⇒ B
⇒E:  from Γ ⊢ t : A ⇒ B and Γ ⊢ s : A, infer Γ ⊢ ts : B
∀I:  from Γ ⊢ t : A, with X ∉ FV(Γ), infer Γ ⊢ ΛX.t : ∀X.A
∀E:  from Γ ⊢ t : ∀X.A, infer Γ ⊢ t{B} : A[B/X]
∧I:  from Γ ⊢ t : A and Γ ⊢ r : B, infer Γ ⊢ t + r : A ∧ B
∧E:  from Γ ⊢ t : A ∧ B, infer Γ ⊢ πA(t) : A
≡:   from Γ ⊢ t : A and A ≡ B, infer Γ ⊢ t : B

Theorem (Subject reduction)
If Γ ⊢ t : A and t →′ r, then Γ ⊢ r : A, where →′ stands for → or ⇄.
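Read operationally, πA(t + r) → t returns any summand of the right type. Below is a minimal Python sketch of this reading (our own illustration, not part of the slides; `flatten`, `project`, and the string-tagged term encoding are all invented names): sums are flattened modulo associativity and commutativity, and projection at a type yields the set of all possible non-deterministic outcomes.

```python
from itertools import chain

# Terms: a plain string is an atomic proof term; ('sum', [t1, t2, ...])
# stands for t1 + t2 + ...  (this encoding is ours, purely for illustration).

def flatten(t):
    """Flatten nested sums: since + is associative and commutative,
    only the multiset of summands matters."""
    if isinstance(t, tuple) and t[0] == 'sum':
        return list(chain.from_iterable(flatten(u) for u in t[1]))
    return [t]

def project(ty, t, type_of):
    """pi_ty(t): the SET of possible results -- every summand of type ty
    is a valid non-deterministic outcome."""
    return {u for u in flatten(t) if type_of(u) == ty}

type_of = {'t': 'A', 'r': 'A', 's': 'B'}.get
# pi_A(t + (r + s)) may reduce to t or to r:
print(project('A', ('sum', ['t', ('sum', ['r', 's'])]), type_of))
```

The point the sketch makes is the one the slides make: projection is well defined only up to the type, so when two summands share a type, both are legitimate results.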
19. Example (I)

λx^{A∧B}.x : (A ∧ B) ⇒ (A ∧ B)

(A ∧ B) ⇒ (A ∧ B) ≡ ((A ∧ B) ⇒ A) ∧ ((A ∧ B) ⇒ B)

Hence
π(A∧B)⇒A(λx^{A∧B}.x) : (A ∧ B) ⇒ A

Let r : A ∧ B. Then
π(A∧B)⇒A(λx^{A∧B}.x) r : A
π(A∧B)⇒A(λx^{A∧B}.x) r ⇄ πA((λx^{A∧B}.x) r) → πA(r)
25. Example (II)

TF = λx^B.λy^B.(x + y)
TF : B ⇒ B ⇒ (B ∧ B)

B ⇒ B ⇒ (B ∧ B) ≡ (B ⇒ B ⇒ B) ∧ (B ⇒ B ⇒ B)

πB⇒B⇒B(TF) : B ⇒ B ⇒ B

Let t : B and f : B. Then
πB⇒B⇒B(TF) t f ⇄ πB⇒B((TF)t) f ⇄ πB((TF)tf) → πB(t + f)
which reduces non-deterministically to t or to f.
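Example II can be mimicked executably. The following Python sketch (the names `TF` and `pi_all`, and the tuple encoding of sums, are our own assumptions) models the final step: projecting t + f at B may produce either summand, matching the branching at the end of the example.

```python
def TF(x):
    """TF = λx.λy.(x + y); the result x + y is a proof of B ∧ B."""
    return lambda y: ('sum', (x, y))

def pi_all(t):
    """All possible outcomes of projecting t at B: any summand may be chosen."""
    if isinstance(t, tuple) and t[0] == 'sum':
        return set(t[1])
    return {t}

outcomes = pi_all(TF('t')('f'))   # pi_B(t + f)
print(outcomes)                   # both 't' and 'f' are possible results
```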
33. Confluence (some ideas)

Of course, a non-deterministic calculus is not confluent!
Counterexample: πA(x^A + y^A) reduces both to x^A and to y^A, two distinct normal forms.

However, we can prove it keeps some coherence:
- Confluence of the deterministic fragment
- Confluence of the “term ensembles”: reductions from a term t can always be closed at the level of the ensemble {ri}i of its possible results [Arrighi, Díaz-Caro, Gadella, Grattage’08]
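The “term ensembles” idea can be sketched as reduction over sets of terms: each non-deterministic step fans out into all its possible results, and the resulting ensemble of normal forms does not depend on the order in which choices are resolved. A toy Python illustration (our encoding, not the cited paper’s formalism; `step` and `normal_forms` are invented names):

```python
def step(t):
    """One non-deterministic step: a projection over a sum, encoded
    ('pi', (summands...)), may yield any summand; everything else is
    treated as a normal form here."""
    if isinstance(t, tuple) and t[0] == 'pi':
        return set(t[1])
    return None

def normal_forms(t):
    """The ensemble {r_i} of normal forms reachable from t."""
    nxt = step(t)
    if nxt is None:
        return {t}
    return set().union(*(normal_forms(u) for u in nxt))

print(normal_forms(('pi', ('x', ('pi', ('y', 'z'))))))  # {'x', 'y', 'z'}
```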
35. Conclusions (with some examples)

Proof system
Let t be a proof of A and r be a proof of B;
then t + r is a proof of both A ∧ B and B ∧ A.

Non-deterministic calculus
tt = ΛX.λx^X.λy^X.x
ff = ΛX.λx^X.λy^X.y
B = ∀X.X ⇒ X ⇒ X
tt + ff : B ∧ B
πB(tt + ff) : B, reducing non-deterministically to tt or to ff

So far:
- Proof system where (three) isomorphic types get the same proofs
- Non-deterministic calculus
36. Future directions (open problems)

Can we continue adding Di Cosmo’s isomorphisms?
e.g. A ∧ T ≡ A induces t + 0 ⇄ t, and A ⇒ T ≡ T induces λx.0 ⇄ 0
But if T ⇒ T ≡ T, then (λx^T.xx)(λx^T.xx) : T (wrong)

A more interesting open question:
Can we use this non-determinism to define a probabilistic/quantum language?

Some clues:
- Similar to the linear-algebraic lambda-calculus [Arrighi, Dowek]
- We need call-by-value (no-cloning)
- In call-by-value, t(r + s) ⇄ tr + ts, but this would need (A ∧ B) ⇒ C ≡ (A ⇒ C) ∧ (B ⇒ C), which is not among the isomorphisms
- Workaround: use polymorphism: ∀X.X ⇒ CX [Arrighi, Díaz-Caro]