A type system for the vectorial aspects of the linear-algebraic lambda-calculus
Alejandro Díaz-Caro
We describe a type system for the linear-algebraic lambda-calculus. The type system accounts for the part of the language emulating linear operators and vectors, i.e. it is able to statically describe the linear combinations of terms resulting from the reduction of programs. This gives rise to an original type theory where types, in the same way as terms, can be superposed into linear combinations. We show that the resulting typed lambda-calculus is strongly normalizing and features a weak subject-reduction.
Scientific Computing with Python Webinar 9/18/2009: Curve Fitting
Enthought, Inc.
This webinar will provide an overview of the tools that SciPy and NumPy provide for regression analysis, including linear and non-linear least-squares, and a brief look at handling other error metrics. We will also demonstrate simple GUI tools that can make some problems easier, and provide a quick overview of the new scikits package statsmodels, whose API is maturing in a separate package but should be incorporated into SciPy in the future.
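As a companion to the webinar's topic, here is a minimal sketch of the simplest regression case, ordinary least-squares line fitting, using only the closed-form normal-equation solution (no SciPy/NumPy; the helper name `fit_line` is illustrative, not from the webinar):

```python
# Minimal sketch: ordinary least-squares fit of y ~ a*x + b.
# Closed-form solution for a single predictor via the normal equations.
def fit_line(xs, ys):
    """Return slope a and intercept b minimizing sum((a*x + b - y)**2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Centered sums of squares and cross-products.
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Noise-free data on the line y = 2x + 1 recovers the coefficients exactly.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

For non-linear models and robust error metrics, the webinar's actual tools (`scipy.optimize.curve_fit` and friends) generalize this idea to arbitrary model functions.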
The talk is concerned with a particular definition of statistical sparsity as a stochastic limit. The limit definition is satisfied by every example that has been proposed in the literature on sparse signal detection, so, in that sense, it is uncontroversial. Nonetheless, the definition has implications for sparse signal detection. For example, it puts very specific limits on the types of inferential questions (integrals or conditional expectations) that we can hope to address. It also implies that certain pairs of sparse models are first-order equivalent, or effectively equivalent.
(Joint work with Nick Polson)
Profº. Marcelo Santos Chaves - Cálculo I (Limites e Continuidades) - Exercíci...
MarcelloSantosChaves
1. The document discusses limits and continuities. It provides solutions to calculating the limits of 6 different functions as x approaches certain values.
2. The solutions involve algebraic manipulations such as factoring, simplifying, and applying limit properties. Various limit results are obtained, such as 1, -6, and 0.
3. The techniques demonstrated include making substitutions to simplify indeterminate forms, factoring, and taking limits of rational functions as the variables approach certain values.
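The factoring technique described above can be illustrated numerically (this is an illustrative example, not one of the document's own exercises): (x² - 4)/(x - 2) is a 0/0 indeterminate form at x = 2, but factoring gives x + 2 for x ≠ 2, so the limit is 4.

```python
# Resolving a 0/0 indeterminate form: (x**2 - 4)/(x - 2) = x + 2 for x != 2,
# so the limit as x -> 2 is 4.
def f(x):
    return (x**2 - 4) / (x - 2)

# Sampling ever closer to 2 shows the values approaching the limit 4.
samples = [f(2 + 10**-k) for k in range(1, 6)]
```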
Lesson 2: A Catalog of Essential Functions (slides)
Matthew Leingang
This document provides an overview of different types of functions, including linear, polynomial, rational, power, trigonometric, and exponential functions. It discusses representing functions verbally, numerically, visually, and symbolically. Key topics covered include transformations of functions through shifting graphs vertically and horizontally, as well as composing multiple functions.
The document discusses limits and the limit laws. It introduces the concept of a limit using an "error-tolerance" game. It then proves some basic limits, such as the limit of x as x approaches a equals a, and the limit of a constant c equals c. It also proves the limit laws, such as the fact that limits can be combined using arithmetic operations and the rules for limits of quotients and roots.
The dual geometry of Shannon information
Frank Nielsen
The document discusses the dual geometry of Shannon information. It covers:
1. Shannon entropy and related concepts like maximum entropy principle and exponential families.
2. The properties of Kullback-Leibler divergence including its interpretation as a statistical distance and relation to maximum entropy.
3. How maximum likelihood estimation for exponential families can be viewed as minimizing Kullback-Leibler divergence between the empirical distribution and model distribution.
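The Kullback-Leibler divergence mentioned above can be computed directly for discrete distributions; a minimal sketch (the helper name `kl_divergence` is illustrative):

```python
from math import log

# Discrete Kullback-Leibler divergence KL(p || q), where p and q are
# probability vectors over the same finite support.  Terms with p_i = 0
# contribute 0 by convention.
def kl_divergence(p, q):
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
d_pq = kl_divergence(p, q)  # positive: p differs from q
d_pp = kl_divergence(p, p)  # zero: KL(p || p) = 0
```

Note that KL is not symmetric (KL(p || q) ≠ KL(q || p) in general), which is exactly why it is a "statistical distance" rather than a metric.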
This document provides a calculus cheat sheet covering key topics in limits, derivatives, and integrals. It defines limits, including one-sided limits and limits at infinity. Properties of limits are listed. Derivatives are defined and basic rules like the power, constant multiple, sum, difference, and chain rules are covered. Common derivatives are provided. Higher order derivatives and the second derivative are defined. Evaluation techniques like L'Hospital's rule, polynomials at infinity, and piecewise functions are summarized.
This document provides summaries of common derivatives and integrals, including:
- Basic properties and formulas for derivatives and integrals of functions like polynomials, trig functions, inverse trig functions, exponentials/logarithms, and more.
- Standard integration techniques like u-substitution, integration by parts, and trig substitutions.
- How to evaluate integrals of products and quotients of trig functions using properties like angle addition formulas and half-angle identities.
- How to use partial fractions to decompose rational functions for the purpose of integration.
In summary, this document outlines essential derivatives and integrals for many common functions, along with standard integration strategies and techniques.
1) The document provides an overview of continuity, including defining continuity as a function having a limit equal to its value at a point.
2) It discusses several theorems related to continuity, such as the sum of continuous functions being continuous and various trigonometric, exponential, and logarithmic functions being continuous on their domains.
3) The document also covers inverse trigonometric functions and their domains of continuity.
Classification with mixtures of curved Mahalanobis metrics
Frank Nielsen
This document discusses curved Mahalanobis distances in Cayley-Klein geometries and their application to classification. Specifically:
1. It introduces Mahalanobis distances and generalizes them to curved distances in Cayley-Klein geometries, which can model both elliptic and hyperbolic geometries.
2. It describes how to learn these curved Mahalanobis metrics using an adaptation of Large Margin Nearest Neighbors (LMNN) to the elliptic and hyperbolic cases.
3. Experimental results on several datasets show that curved Mahalanobis distances can achieve comparable or better classification accuracy than standard Mahalanobis distances.
This calculus cheat sheet provides definitions and formulas for:
1) Integrals including definite integrals, anti-derivatives, and the Fundamental Theorem of Calculus.
2) Common integration techniques like u-substitution and integration by parts.
3) Standard integrals of common functions like polynomials, trigonometric functions, logarithms, and exponentials.
Geodesic Method in Computer Vision and Graphics
Gabriel Peyré
This document discusses geodesic methods in computer vision and graphics. It begins with an overview of topics including Riemannian data modelling, numerical computations of geodesics, geodesic image segmentation, geodesic shape representation, geodesic meshing, and inverse problems with geodesic fidelity. It then provides details on parametric surfaces, Riemannian manifolds, anisotropy and geodesics, the eikonal equation and viscosity solution, discretization methods, and numerical schemes for solving the fixed point equation.
This document summarizes Frank Nielsen's talk on divergence-based center clustering and their applications. Some key points:
- Center-based clustering aims to minimize an objective function that assigns data points to their closest cluster centers. This problem is NP-hard when both the dimension and the number of clusters are greater than 1.
- Mixed divergences use dual centroids per cluster to define cluster assignments. Total Jensen divergences are proposed as a way to make divergences more robust by incorporating a conformal factor.
- For clustering when centroids do not have closed-form solutions, initialization methods like k-means++ can be used which randomly select initial seeds without computing centroids. Total Jensen k-means++
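The k-means++ seeding mentioned above can be sketched in a few lines; this is a minimal 1-D illustration (the divergence-based variants in the talk replace the squared Euclidean distance with a divergence), with the helper name `kmeanspp_seeds` being illustrative:

```python
import random

# k-means++ seeding: each new seed is drawn with probability proportional
# to its squared distance to the nearest seed chosen so far.  Note that no
# centroid computation is needed, which is the point made in the talk.
def kmeanspp_seeds(points, k, rng=random.Random(0)):
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        # Squared distance of each point to its nearest current seed.
        d2 = [min((p - s) ** 2 for s in seeds) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                seeds.append(p)
                break
    return seeds

# Two tight groups and an outlier: seeding tends to spread across them.
seeds = kmeanspp_seeds([0.0, 0.1, 0.2, 10.0, 10.1, 20.0], k=3)
```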
Mathematics (from Greek μάθημα máthēma, “knowledge, study, learning”) is the study of topics such as quantity (numbers), structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics.
Patch Matching with Polynomial Exponential Families and Projective Divergences
Frank Nielsen
This document presents a method called Polynomial Exponential Family-Patch Matching (PEF-PM) to solve the patch matching problem. PEF-PM models patch colors using polynomial exponential families (PEFs), which are universal smooth positive densities. It estimates PEFs using a Score Matching Estimator and accelerates batch estimation using Summed Area Tables. Patch similarity is measured using a statistical projective divergence called the symmetrized γ-divergence. Experiments show PEF-PM handles noise robustly, symmetries, and outperforms baseline methods.
Bregman divergences from comparative convexity
Frank Nielsen
This document discusses generalized divergences and comparative convexity. It introduces Jensen divergences, Bregman divergences, and their generalizations to quasi-arithmetic and weighted means. Quasi-arithmetic Bregman divergences are defined for strictly (ρ,τ)-convex functions using two strictly monotone functions ρ and τ. Power mean Bregman divergences are obtained as a subfamily when ρ(x)=xδ1 and τ(x)=xδ2. A criterion is given to check (ρ,τ)-convexity by testing the ordinary convexity of the transformed function G=Fρ,τ.
The document discusses Taylor series and how they can be used to approximate functions. It provides an example of using Taylor series to approximate the cosine function. Specifically:
1) It derives the Taylor series for the cosine function centered at x=0.
2) It shows that this Taylor series converges absolutely for all x.
3) It demonstrates that the Taylor series equals the cosine function everywhere based on properties of the remainder term.
4) It provides an example of using the Taylor series to approximate cos(0.1) to within 10^-7, the accuracy of a calculator display.
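The cos(0.1) approximation described above is easy to reproduce with partial sums of the Maclaurin series cos(x) = Σ (-1)ⁿ x²ⁿ / (2n)!; a small sketch:

```python
import math

# Partial sum of the Maclaurin series for cosine.
def cos_taylor(x, terms):
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

# For x = 0.1, three terms (1 - x**2/2 + x**4/24) already land within 1e-7
# of cos(0.1): the alternating-series remainder is below 0.1**6/720 ~ 1.4e-9.
approx = cos_taylor(0.1, 3)
err = abs(approx - math.cos(0.1))
```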
Stuff You Must Know Cold for the AP Calculus BC Exam!
A Jorge Garcia
This document provides a summary of key concepts from AP Calculus that students must know, including:
- Differentiation rules like product rule, quotient rule, and chain rule
- Integration techniques like Riemann sums, trapezoidal rule, and Simpson's rule
- Theorems related to derivatives and integrals like the Mean Value Theorem, Fundamental Theorem of Calculus, and Rolle's Theorem
- Common trigonometric derivatives and integrals
- Series approximations like Taylor series and Maclaurin series
- Calculus topics for polar coordinates, parametric equations, and vectors
The document discusses curve sketching of functions by analyzing their derivatives. It provides:
1) A checklist for graphing a function which involves finding where the function is positive/negative/zero, its monotonicity from the first derivative, and concavity from the second derivative.
2) An example of graphing the cubic function f(x) = 2x^3 - 3x^2 - 12x through analyzing its derivatives.
3) Explanations of the increasing/decreasing test and concavity test to determine monotonicity and concavity from a function's derivatives.
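The derivative analysis for the example f(x) = 2x³ - 3x² - 12x can be checked directly: f'(x) = 6x² - 6x - 12 = 6(x - 2)(x + 1) vanishes at the critical points, and the sign of f''(x) = 12x - 6 classifies each one (the helper names below are illustrative, not from the slides):

```python
# f(x) = 2x^3 - 3x^2 - 12x and its first two derivatives.
def fprime(x):
    return 6 * x**2 - 6 * x - 12

def fsecond(x):
    return 12 * x - 6

# Solving 6(x**2 - x - 2) = 6(x - 2)(x + 1) = 0 gives critical points -1 and 2.
critical = [-1.0, 2.0]
# Second-derivative test: negative means local max, positive means local min.
kinds = ["max" if fsecond(x) < 0 else "min" for x in critical]
```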
Proximal Splitting and Optimal Transport
Gabriel Peyré
This document summarizes proximal splitting and optimal transport methods. It begins with an overview of topics including optimal transport and imaging, convex analysis, and various proximal splitting algorithms. It then discusses measure-preserving maps between distributions and defines the optimal transport problem. Finally, it presents formulations for optimal transport including the convex Benamou-Brenier formulation and discrete formulations on centered and staggered grids. Numerical examples of optimal transport between distributions on 2D domains are also shown.
Computational Information Geometry: A quick review (ICMS)
Frank Nielsen
From the workshop
Computational information geometry for image and signal processing
Sep 21, 2015 - Sep 25, 2015
ICMS, 15 South College Street, Edinburgh
http://www.icms.org.uk/workshop.php?id=343
We show how to provide a structure of probability space to the set of execution traces on a non-confluent abstract rewrite system, by defining a variant of a Lebesgue measure on the space of traces. Then, we show how to use this probability space to transform a non-deterministic calculus into a probabilistic one. We use as example λ+, a recently introduced calculus defined with techniques from deduction modulo.
We consider the non-deterministic extension of the call-by-value lambda calculus, which corresponds to the additive fragment of the linear-algebraic lambda-calculus. We define a fine-grained type system, capturing the right linearity present in such formalisms. After proving the subject reduction and the strong normalisation properties, we propose a translation of this calculus into System F with pairs, which corresponds to a non-linear fragment of linear logic. The translation provides a deeper understanding of the linearity in our setting.
Vectorial types, non-determinism and probabilistic systems: Towards a computa...
Alejandro Díaz-Caro
This document discusses finding a correspondence between quantum computing and logic, similar to the Curry-Howard correspondence between intuitionistic logic and typed lambda calculus. It presents an untyped algebraic extension of lambda calculus called Lineal that can encode quantum computing concepts like qubits and quantum gates. It then introduces a typed version called Typed Lineal that assigns linear types capturing the "vectorial" structure of terms, allowing verification of properties of probabilistic and quantum processes. The goal is to define a computational quantum logic whose proofs are quantum programs.
Call-by-value non-determinism in a linear logic type discipline
Alejandro Díaz-Caro
We consider the call-by-value λ-calculus extended with a may-convergent non-deterministic choice and a must-convergent parallel composition. Inspired by recent works on the relational semantics of linear logic and non-idempotent intersection types, we endow this calculus with a type system based on the so-called Girard's second translation of intuitionistic logic into linear logic. We prove that a term is typable if and only if it is converging, and that its typing tree carries enough information to give a bound on the length of its lazy call-by-value reduction. Moreover, when the typing tree is minimal, such a bound becomes the exact length of the reduction.
Slides used during my thesis defense "Du typage vectoriel" (On vectorial typing)
Alejandro Díaz-Caro
The objective of this thesis is to develop a type theory for the linear-algebraic λ-calculus, an extension of λ-calculus motivated by quantum computing. This algebraic extension encompasses all the terms of λ-calculus together with their linear combinations, so if t and r are two terms, so is α.t + β.r, with α and β being scalars from a given ring. The key idea and challenge of this thesis was to introduce a type system where the types, in the same way as the terms, form a vectorial space, providing information about the structure of the normal form of the terms. This thesis presents the system Lineal, and also three intermediate systems, each interesting in its own right: Scalar, Additive and λCA, all of them with their subject reduction and strong normalisation proofs.
We define an equivalence relation on propositions and a proof system where equivalent propositions have the same proofs. The system obtained this way resembles several known non-deterministic and algebraic lambda-calculi.
This document describes work on simply typed lambda calculus modulo type isomorphisms. It introduces an equivalence relation between types based on isomorphisms for conjunction, implication, and their interactions. This allows identifying isomorphic types and terms with the same type up to isomorphism. The document outlines the challenges of defining a type-isomorphic proof theory and presents solutions, such as Church-style projections and alpha equivalence rules, to develop a sound and complete operational semantics for the simply typed lambda calculus modulo type isomorphisms.
Quantum computing, non-determinism, probabilistic systems... and the logic be...
Alejandro Díaz-Caro
This document provides an introduction to quantum computing, λ-calculus, and typed λ-calculus. It discusses how these topics relate to intuitionistic logic through the Curry-Howard correspondence. The author is working on algebraic calculi and vectorial typing for non-determinism and probabilistic systems, and how this can be extended from non-determinism to probabilities. An example of Deutsch's algorithm for quantum computing is also presented.
The document discusses affine computation and affine automata. It introduces affine systems as a generalization of probabilistic and quantum systems that allows for negative amplitudes. Affine states are defined as linear combinations of basis states with coefficients summing to 1. Affine transformations must also map affine states to affine states. Affine automata operate similarly to quantum finite automata, but use affine transformations instead of unitary transformations. The document shows that bounded-error affine languages properly contain the regular languages, and examines a language over the alphabet {a, b} that counts the difference between the numbers of a's and b's.
We study a purely functional quantum extension of lambda calculus, that is, an extension of lambda calculus to express some quantum features, where the quantum memory is abstracted out. This calculus is a typed extension of the first-order linear-algebraic lambda-calculus. The type system is linear on superpositions, so as to forbid cloning them, while allowing basis vectors to be cloned. We provide examples of the Deutsch algorithm and teleportation, and prove the subject reduction of the calculus. In addition, we provide a denotational semantics where superposed types are interpreted as vector spaces and non-superposed types as their bases.
The linear-algebraic lambda-calculus (arXiv:quant-ph/0612199) extends the lambda-calculus with the possibility of making arbitrary linear combinations of lambda-calculus terms a.t+b.u. In this paper we provide a System F-like type system for the linear-algebraic lambda-calculus, which keeps track of 'the amount of a type' that is present in a term. We show that this scalar type system enjoys both the subject-reduction property and the strong-normalisation property, which constitute our main technical results. The latter yields a significant simplification of the linear-algebraic lambda-calculus itself, by removing the need for some restrictions in its reduction rules - and thus leaving it more intuitive. More importantly we show that our type system can readily be modified into a probabilistic type system, which guarantees that terms define correct probabilistic functions. Thus we are able to specialize the linear-algebraic lambda-calculus into a higher-order, probabilistic lambda-calculus. Finally we discuss the more long-term aims of this research in terms of establishing connections with linear logic, and building up towards a quantum physical logic through the Curry-Howard isomorphism. Thus we begin to investigate the logic induced by the scalar type system, and prove a no-cloning theorem expressed solely in terms of the possible proof methods in this logic.
The linear-algebraic λ-calculus and the algebraic λ-calculus are untyped λ-calculi extended with arbitrary linear combinations of terms. The former presents the axioms of linear algebra in the form of a rewrite system, while the latter uses equalities. When given by rewrites, algebraic λ-calculi are not confluent unless further restrictions are added. We provide a type system for the linear-algebraic λ-calculus enforcing strong normalisation, which gives back confluence. The type system allows an interpretation in System F.
This document defines and provides examples of metric spaces. It begins by introducing metrics as distance functions that satisfy certain properties like non-negativity and the triangle inequality. Examples of metric spaces given include the real numbers under the usual distance, the complex numbers, and the plane under various distance metrics like the Euclidean, taxi cab, and maximum metrics. It is noted that some functions like the minimum function are not valid metrics as they fail to satisfy all the required properties.
The document discusses key concepts in regression analysis including best-fit lines, correlation coefficients, linearization, residuals, weighted means, and using linear approximations and differentials. It provides examples of how to calculate coefficients for best-fit lines, determine correlation direction and strength, find linearized forms of functions, and use Newton's method to find where functions intersect. The goal of regression is to find a formula that minimizes residuals and best represents the relationship between variables.
Introduction to Artificial Intelligence - Manoj Harsule
Let R be a fuzzy relation defined on X and Y, and S a fuzzy relation defined on Y and Z.
To find the composite relation R o S on X and Z:
μR∘S(x, z) = max_y [ min(μR(x, y), μS(y, z)) ]
For each x and z, find the maximum membership grade obtained by considering all possible y values and taking the minimum of the membership grades of R and S.
This gives the generalized intersection-union definition of composition of fuzzy relations. It reduces to the usual composition rule when relations are crisp.
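The max-min composition rule above can be sketched directly. In the following Python snippet the relations R and S are toy examples of my own, not taken from the document; it implements μR∘S(x, z) = max_y min(μR(x, y), μS(y, z)):

```python
# Max-min composition of fuzzy relations given as nested dicts
# {x: {y: membership_grade}}. Toy relations, for illustration only.

def compose(R, S):
    """Max-min composition: mu_{R o S}(x, z) = max_y min(mu_R(x, y), mu_S(y, z))."""
    result = {}
    for x, row in R.items():
        for z in next(iter(S.values())):      # z ranges over Z
            result.setdefault(x, {})[z] = max(
                min(row[y], S[y][z]) for y in row
            )
    return result

R = {"x1": {"y1": 0.7, "y2": 0.5}}
S = {"y1": {"z1": 0.9}, "y2": {"z1": 0.6}}
print(compose(R, S))  # {'x1': {'z1': 0.7}}
```

For crisp relations (grades 0 or 1) the same code computes ordinary relational composition, as the summary notes.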
The document discusses limits and the limit laws. It introduces the concept of a limit using an "error-tolerance" game. It then proves some basic limits, such as the limit of x as x approaches a equals a, and the limit of a constant c equals c. It explains the limit laws for addition, subtraction, multiplication, division and nth roots of functions. It uses the error-tolerance game framework to justify the limit laws.
Quantum fields on the de Sitter spacetime - Ion Cotaescu (SEENET-MTP)
This document summarizes research on defining quantum fields on de Sitter spacetime. It discusses how external symmetries allow defining conserved observables like an energy operator, despite doubts in the literature. New quantum modes were obtained for scalar, Dirac, and vector fields on de Sitter spacetime using this energy operator. The paper reviews defining fields in local frames where spin is well-defined, and how isometries give rise to conserved quantities through external symmetry transformations that involve gauge transformations and diffeomorphisms. Generators of field representations are constructed from orbital and spin parts related to Killing vectors and structure functions.
This document provides definitions and formulas for vector algebra and calculus. It includes:
- Definitions of orthonormal vectors, scalar and vector products, equations of lines and planes
- Formulas for vector multiplication, scalar and vector triple products, and non-orthogonal bases
- Notation for vector and scalar functions, and identities for gradient, divergence, curl, and Laplacian operators
- Formulas for gradient, divergence, curl in Cartesian, cylindrical and spherical coordinates
- Definitions of eigenvalues/eigenvectors, and formulas for matrix operations like transpose, inverse, and determinant
- Theorems of Gauss, Stokes, and Green relating integrals over volumes, surfaces and curves
We propose a way to unify two approaches to non-cloning in quantum lambda-calculi. The first approach is to forbid duplicating variables, while the second is to consider all lambda-terms as algebraic-linear functions. We illustrate this idea by defining a quantum extension of the first-order simply-typed lambda-calculus, where typing is linear on superpositions but cloning of base vectors is allowed. In addition, we provide an interpretation of the calculus where superposed types are interpreted as vector spaces and non-superposed types as their bases.
Slides of LNCS 10687:281-293 paper (TPNC 2017). Full paper: https://doi.org/10.1007/978-3-319-71069-3_22
An implicit partial pivoting Gauss elimination algorithm for linear system of... - Alexander Decker
This document proposes a new method for solving fully fuzzy linear systems of equations (FFLS) using Gauss elimination with implicit partial pivoting. It begins by introducing concepts of fuzzy sets theory and arithmetic operations on fuzzy numbers. It then presents the FFLS problem and shows how it can be reduced to a crisp linear system. The key steps of the proposed Gauss elimination method with implicit partial pivoting are outlined. Finally, the method is illustrated on a numerical example, showing the steps to obtain solutions for the variables x, y and z.
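As an illustration of the crisp elimination step the summary describes, here is a minimal Gauss elimination with partial pivoting in Python. The 2×2 system is a made-up example, and the fuzzy-to-crisp reduction itself is not reproduced:

```python
# Gauss elimination with partial pivoting on a crisp linear system Ax = b.
# A sketch of the generic elimination step only; the FFLS-specific
# fuzzy-to-crisp reduction from the paper is not shown here.

def gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]       # work on copies
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # partial pivoting
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):   # eliminate column k below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                   # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```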
This document provides an overview of key calculus concepts and formulas taught in a Calculus I course at Miami Dade College - Hialeah Campus. The topics covered include limits and derivatives, integration, optimization techniques, and applications of calculus to economics, business, physics, and other fields. The document is intended as a study guide for students in the Calculus I class taught by Professor Mohammad Shakil.
This document provides an introduction to gauge theory. It discusses what a gauge is in quantum mechanics and how phase transformations lead to the idea of gauge symmetry. It defines what a gauge theory is, using electromagnetism as an example where the gauge field is the electromagnetic potential and gauge transformations change the phase of the electron wavefunction. It discusses how Yang-Mills generalized this to non-abelian gauge groups and the importance of principal and vector bundles. It covers connections, curvature, and gauge transformations as key mathematical concepts in gauge theory.
This document summarizes a presentation on matrix computations in machine learning. It discusses how matrix computations are used in traditional machine learning algorithms like regression, PCA, and spectral graph partitioning. It also covers specialized machine learning algorithms that involve numerical optimization and matrix computations, such as Lasso, non-negative matrix factorization, sparse PCA, matrix completion, and co-clustering. Finally, it discusses how machine learning can be applied to improve graph clustering algorithms.
This section discusses integer representations and arithmetic modulo m. It introduces base b expansions for representing integers using any base b greater than 1. Specifically, it covers binary expansions using base 2, octal expansions using base 8, and hexadecimal expansions using base 16. It defines the operations of addition and multiplication modulo m and describes how integers behave under these operations.
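A minimal sketch of the two constructions the section describes, base-b expansions and arithmetic modulo m; emitting digits least-significant-first is an arbitrary choice of this sketch, not fixed by the text:

```python
# Base-b expansion of a non-negative integer, and addition modulo m.
# Digits are returned least significant first (an illustrative convention).

def base_b_digits(n, b):
    """Digits of n in base b (b > 1), least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, d = divmod(n, b)
        digits.append(d)
    return digits

def add_mod(a, b, m):
    """Addition modulo m."""
    return (a + b) % m

print(base_b_digits(241, 16))  # [1, 15], i.e. 0xF1
print(add_mod(7, 9, 12))       # 4
```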
Density theorems for Euclidean point configurations - VjekoslavKovac1
1. The document discusses density theorems for point configurations in Euclidean space. Density theorems study when a measurable set A contained in Euclidean space can be considered "large".
2. One classical result is that for any measurable set A contained in R2 with positive upper Banach density, there exist points in A whose distance is any sufficiently large real number. This has been generalized to higher dimensions and other point configurations.
3. Open questions remain about determining all point configurations P for which one can show that a sufficiently large measurable set A contained in high dimensional Euclidean space must contain a scaled copy of P.
This document compares numerical integration methods for approximating definite integrals, including trapezoidal, Simpson's 1/3 rule, Simpson's 3/8 rule, and Weddle's rule. It presents the formulas for each method and applies them to evaluate several definite integrals. The results show that Weddle's rule provides the most accurate approximations compared to the other methods.
This document compares numerical integration methods for approximating definite integrals, including trapezoidal, Simpson's 1/3 rule, Simpson's 3/8 rule, and Weddle's rule. It presents the definitions and formulas for each method. Various definite integrals are evaluated using each method and the results are compared to the actual values, with Weddle's rule found to be the most accurate. The document proposes a new composite rule for numerical integration and concludes that Weddle's rule gives greater accuracy than the other methods tested.
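To illustrate the kind of comparison described, here is a sketch contrasting the composite trapezoidal rule with Simpson's 1/3 rule on the integral of sin x over [0, π], whose exact value is 2. The integrand and step count are illustrative choices, and Weddle's rule is omitted:

```python
# Composite trapezoidal rule vs Simpson's 1/3 rule on a test integral
# with a known exact value. Illustrative sketch, not from the documents.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def simpson13(f, a, b, n):
    """Simpson's 1/3 rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return s * h / 3

exact = 2.0  # integral of sin(x) over [0, pi]
print(abs(trapezoid(math.sin, 0, math.pi, 8) - exact))   # larger error
print(abs(simpson13(math.sin, 0, math.pi, 8) - exact))   # much smaller error
```

With the same 8 subintervals, Simpson's rule is several orders of magnitude more accurate here, consistent with the documents' finding that higher-order rules give better approximations.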
Density theorems for anisotropic point configurations - VjekoslavKovac1
The document summarizes recent work on density theorems for anisotropic point configurations. It discusses classical results on Euclidean density theorems and their study of "large" measurable sets. It then outlines the general approach taken, involving decomposing counting forms into structured, error, and uniform parts. Finally, it presents some of the author's recent results on anisotropic dilates of simplices, boxes, and trees, proving the existence of such configurations in sets with positive upper Banach density.
1. This document provides step-by-step instructions for solving various calculus problems including finding zeros, derivatives, integrals, limits, continuity, asymptotes, extrema, and solving differential equations.
2. For each type of problem, it lists the key steps to take in a "You think..." prompt followed by more detailed explanations and formulas.
3. The document is intended as a reference for a student to know the general approach and techniques for multiple calculus problems at a glance.
Intuitionistic First-Order Logic: Categorical semantics via the Curry-Howard ... - Marco Benini
A novel approach to giving an interpretation of logic inside category theory. This work has been developed as part of my sabbatical Marie Curie fellowship in Leeds.
Presented at the Logic Seminar, School of Mathematics, University of Leeds (2012).
Comparison Results of Trapezoidal, Simpson's 1/3 rule, Simpson's 3/8 rule, and ... - BRNSS Publication Hub
Numerical integration plays a very important role in mathematics. This paper gives an overview of the most common methods, namely the trapezoidal rule, Simpson's 1/3 rule, Simpson's 3/8 rule, and Weddle's rule. The different procedures are compared by evaluating the value of some definite integrals. A combined approach of different integral rules is proposed for a definite integral to obtain a more accurate value in all cases.
This document provides an introduction to optimization theory, beginning with an overview of different optimization problem types such as nonlinear equations, nonlinear least squares, constrained and unconstrained optimization. It then presents some key concepts in optimization theory including Taylor's theorem, positive definiteness, convexity, local and global minima, first and second order necessary/sufficient conditions, and uniqueness of minima for convex functions. The document concludes with an overview of the linear least squares problem and its properties.
Similar to Equivalence of algebraic λ-calculi (20)
1. Equivalence of Algebraic λ-calculi
Alejandro Díaz-Caro1 Simon Perdrix12
Christine Tasson3 Benoît Valiron1
1 LIG, Université de Grenoble
2 CNRS
3 CEA-LIST, MeASI
HOR • July 14th, 2010 • Edinburgh
2. Algebraic calculi
Algebraic ≡ Module structure
(λx.yx + 1/2 · λx.y(yx)) λz.z
Two origins
λalg −→ differential calculus
λlin −→ quantum computation
Differences in
the encoding of the module structure
how the reduction rules apply
A. Díaz-Caro, S. Perdrix, C. Tasson, B. Valiron • Equivalence of Algebraic λ-calculi 2 / 25
3. λalg and λlin
share the same set of terms
M, N, L ::= V | (M ) N | M + N | α · M
U, V, W ::= B | α · B | V + W | 0
B ::= x | λx.M
closed under associativity and commutativity
M + N = N + M    (M + N) + L = M + (N + L)
A base term is either a variable or an abstraction;
a value is either 0 or of the form Σi αi · Bi;
a general term can be anything.
4. Rules for λalg
Group E
α · 0 =a 0 0 + M =a M
α · (β · M ) =a (α × β) · M 0 · M =a 0
1 · M =a M α · (M + N ) =a α·M +α·N
α·M +β·M =a (α + β) · M
Group F
Group A
(M + N ) L =a (M ) L + (N ) L
(α · M ) N =a α · (M ) N
(0) M =a 0
Group B
(λx.M ) N →a M [x := N ]
5. Rules for λlin
Group E
α·0 → 0 0+M → M
α · (β · M ) → (α × β) · M 0·M → 0
1·M → M α · (M + N ) → α·M +α·N
α·M +β·M → (α + β) · M
Group F
α·M +M → (α + 1) · M
M +M → (1 + 1) · M
Group A
(M + N) L → (M) L + (N) L
(M) (N + L) → (M) N + (M) L
(α · M) N → α · (M) N
(M) α · N → α · (M) N
(0) M → 0
(M) 0 → 0
Group B
(λx.M) B → M[x := B]
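The Group E/F rules amount to normalising linear combinations of terms. A minimal sketch, assuming base terms are compared syntactically and scalars are plain numbers (neither assumption is made by the calculus itself):

```python
# Normalising a linear combination of base terms: representing the sum as a
# dict from base terms to scalars makes factorisation and zero-elimination
# happen by construction, mirroring the Group E/F rewrite rules.

def normalize(combo):
    """combo: list of (scalar, base_term) pairs; returns a canonical dict."""
    out = {}
    for scalar, term in combo:
        # alpha.M + beta.M -> (alpha + beta).M  (factorisation, Group F)
        out[term] = out.get(term, 0) + scalar
    # 0.M -> 0 and 0 + M -> M: drop summands with zero coefficient (Group E)
    return {t: s for t, s in out.items() if s != 0}

# (2.x + 3.y) + (-2).x normalises to 3.y
print(normalize([(2, "x"), (3, "y"), (-2, "x")]))  # {'y': 3}
```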
6. Context rules
Some common rules
If M → M′ then (M) N → (M′) N
If M → M′ then M + N → M′ + N
If N → N′ then M + N → M + N′
If M → M′ then α · M → α · M′
plus one additional rule for λlin
If M → M′ then (V) M → (V) M′
7. Modifications to the original calculi
The most important
No reduction under lambda (à la Plotkin)
λx.M is a base term for any M
λx.(α · M + β · N ) = α · λx.M + β · λx.N
8. Confluence
This system is not trivially confluent
Example
Let Yb = (λx.(b + (x) x)) (λx.(b + (x) x)), then
0 ← Yb − Yb → Yb + b − Yb → b
Several possible solutions
Type system → strong normalisation
Take scalars as positive reals
Restrict some of the reduction rules
For this talk, we simply assume that confluence holds.
9. Question
Can we simulate λlin with λalg and λalg with λlin ?
11. Simulating λalg with λlin
Thunks
Idea: encapsulate a term M as B = λf.M. Then M ≡ (B) f
(|x|)f = (x) f
(|0|)f = 0
(|λx.M |)f = λx.(|M |)f
(|(M ) N |)f = ((|M |)f ) λz.(|N |)f
(|M + N |)f = (|M |)f + (|N |)f
(|α · M |)f = α · (|M |)f
If the encoding λalg → λlin is denoted by (|−|)f, one could
expect the result
M →∗a N ⇒ (|M|)f →∗ (|N|)f
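The encoding (|−|)f can be sketched over a small tuple-encoded term AST. This is an illustration of the equations on the slide above, with an ad hoc tuple representation of terms, not the authors' implementation:

```python
# Thunk encoding (|-|)_f on tuple-encoded lambda-terms. Tags: "var", "zero",
# "lam", "app", "sum", "scal". "f" is the forcing variable, as on the slide.

def enc(t):
    tag = t[0]
    if tag == "var":                  # (|x|)_f = (x) f
        return ("app", t, ("var", "f"))
    if tag == "zero":                 # (|0|)_f = 0
        return t
    if tag == "lam":                  # (|lam x.M|)_f = lam x.(|M|)_f
        return ("lam", t[1], enc(t[2]))
    if tag == "app":                  # (|(M) N|)_f = ((|M|)_f) lam z.(|N|)_f
        return ("app", enc(t[1]), ("lam", "z", enc(t[2])))
    if tag == "sum":                  # (|M + N|)_f = (|M|)_f + (|N|)_f
        return ("sum", enc(t[1]), enc(t[2]))
    if tag == "scal":                 # (|a . M|)_f = a . (|M|)_f
        return ("scal", t[1], enc(t[2]))
    raise ValueError(tag)

identity = ("lam", "x", ("var", "x"))
print(enc(identity))  # ('lam', 'x', ('app', ('var', 'x'), ('var', 'f')))
```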
12. Problem
Example
(λx.λy.(y) x) I →a λy.(y) I,
(|(λx.λy.(y) x) I|)f = (λx.λy.((y) f) (λz.(x) f)) (λz.λx.(x) f)
    →∗ λy.((y) f) (λz.(λz.λx.(x) f) f),
(|λy.(y) I|)f = λy.((y) f) (λz.λx.(x) f)
“Administrative” redex hidden in the first one
13. Removing administrative redexes
Admin f 0 = 0
Admin f x = x
Admin f ((λf.M) f) = Admin f M
Admin f ((M) N) = (Admin f M) Admin f N
Admin f (λx.M) = λx.Admin f M   (x ≠ f)
Admin f (M + N) = Admin f M + Admin f N
Admin f (α · M) = α · Admin f M
Theorem (Simulation)
For any program (i.e. closed term) M and value V,
if M →∗a V then there exists a value W such that
(|M|)f →∗ W and Admin f W = Admin f (|V|)f
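Admin f can likewise be sketched on tuple-encoded terms; the clause collapsing ((λf.M) f) mirrors the definition above, and the tuple representation of terms is an illustrative choice of this sketch:

```python
# Erasing administrative redexes ((lam f.M) f) from tuple-encoded terms,
# following the slide's definition of Admin_f. Illustrative sketch only.

def admin(t, f="f"):
    tag = t[0]
    if tag in ("zero", "var"):
        return t
    if (tag == "app" and t[1][0] == "lam" and t[1][1] == f
            and t[2] == ("var", f)):
        return admin(t[1][2], f)       # Admin_f ((lam f.M) f) = Admin_f M
    if tag == "app":
        return ("app", admin(t[1], f), admin(t[2], f))
    if tag == "lam":                   # assumes the bound variable is not f
        return ("lam", t[1], admin(t[2], f))
    if tag == "sum":
        return ("sum", admin(t[1], f), admin(t[2], f))
    if tag == "scal":
        return ("scal", t[1], admin(t[2], f))
    raise ValueError(tag)

redex = ("app", ("lam", "f", ("var", "x")), ("var", "f"))
print(admin(redex))  # ('var', 'x')
```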
15. First,
[Diagram: if M ≡Admin f M′ and M → N, then M′ →∗ N′ with N ≡Admin f N′]
Lemma
If Admin f M = Admin f M′ and M → N, then there exists N′
such that Admin f N = Admin f N′ and such that M′ →∗ N′.
17. Then,
[Diagram: W ≡Admin f M, M →∗ V, and W ≡Admin f V]
Lemma
If W is a value and M a term such that
Admin f W = Admin f M, then there exists a value V such that
M →∗ V and Admin f W = Admin f V.
19. Finally,
[Diagram: M →a N; (|M|)f →∗ M′ with M′ ≡Admin f V, where (|N|)f →∗ V]
Lemma
For any program (i.e. closed term) M, if M →a N and
(|N|)f →∗ V for a value V, then there exists M′ such that
(|M|)f →∗ M′ and Admin f M′ = Admin f V.
21. Now, we want
[Diagram: M →∗a V, and (|M|)f →∗ W with W ≡Admin f (|V|)f]
22. Base case
[Diagram: V →∗a V in zero steps, and (|V|)f = (|V|)f, so take W = (|V|)f]
25. Inductive case
[Diagram: M →a M′ →∗a V. By the induction hypothesis, (|M′|)f →∗ W′ with W′ ≡Admin f (|V|)f. By Lemma 3, (|M|)f →∗ M′′ with M′′ ≡Admin f W′. By Lemma 2, M′′ →∗ W for some value W with W ≡Admin f W′, hence W ≡Admin f (|V|)f.]
26. Simulating λlin with λalg
CPS encoding
We define [|−|] : λlin → λalg
[|x|] = λf. (f ) x
[|0|] = 0
[|λx.M |] = λf.(f ) λx.[|M |]
[|(M ) N |] = λf.([|M |]) λg.([|N |]) λh.((g) h) f
[|α · M |] = λf.(α · [|M |]) f
[|M + N |] = λf.([|M |] + [|N |]) f
Now, given the continuation I = λx.x, if M →∗ V we want
([|M|]) I →∗a W for some W related to [|V|]
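The CPS encoding [|−|] can be sketched on tuple-encoded terms. The continuation variable names f, g, h are fixed exactly as in the displayed equations, so this sketch assumes they do not occur in the source term; the tuple representation is an illustrative choice, not the authors' implementation:

```python
# CPS encoding [|-|] : lambda_lin -> lambda_alg on tuple-encoded terms,
# following the equations on the slide. Assumes "f", "g", "h" are fresh.

def cps(t):
    tag = t[0]
    if tag == "var":      # [|x|] = lam f.(f) x
        return ("lam", "f", ("app", ("var", "f"), t))
    if tag == "zero":     # [|0|] = 0
        return t
    if tag == "lam":      # [|lam x.M|] = lam f.(f) lam x.[|M|]
        return ("lam", "f", ("app", ("var", "f"), ("lam", t[1], cps(t[2]))))
    if tag == "app":      # [|(M) N|] = lam f.([|M|]) lam g.([|N|]) lam h.((g) h) f
        body = ("app", cps(t[1]),
                ("lam", "g", ("app", cps(t[2]),
                 ("lam", "h", ("app", ("app", ("var", "g"), ("var", "h")),
                               ("var", "f"))))))
        return ("lam", "f", body)
    if tag == "scal":     # [|a . M|] = lam f.(a . [|M|]) f
        return ("lam", "f", ("app", ("scal", t[1], cps(t[2])), ("var", "f")))
    if tag == "sum":      # [|M + N|] = lam f.([|M|] + [|N|]) f
        return ("lam", "f", ("app", ("sum", cps(t[1]), cps(t[2])), ("var", "f")))
    raise ValueError(tag)

print(cps(("var", "x")))  # ('lam', 'f', ('app', ('var', 'f'), ('var', 'x')))
```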
27. The encoding for values
Ψ(x) = x,
Ψ(0) = 0,
Ψ(λx.M ) = λx.[|M |],
Ψ(α · V ) = α · Ψ(V ),
Ψ(V + W ) = Ψ(V ) + Ψ(W ).
Ψ(V) is such that [|V|] →∗a Ψ(V).
Theorem (Simulation)
M →∗ V ⇒ ([|M|]) λx.x →∗a Ψ(V).
28. An auxiliary definition
Note that
([|B|]) K = (λf.(f ) Ψ(B)) K →∗ (K) Ψ(B).
a
We define the term B : K = (K) Ψ(B).
We extend this definition to other terms in a not-so-easy
way, due to the algebraic properties of the language.
This is the main difficulty of the proof.
29. The auxiliary definition
Let (:) : λlin × λalg → λalg :
B : K = (K) Ψ(B)
(M + N) : K = M : K + N : K
α · M : K = α · (M : K)
0 : K = 0
(U) B : K = ((Ψ(U)) Ψ(B)) K
(U) (V + W) : K = ((U) V + (U) W) : K
(U) (α · B) : K = α · (U) B : K
(U) 0 : K = 0
(U) N : K = N : λf.((Ψ(U)) f) K
(M) N : K = M : λg.([|N|]) λh.((g) h) K
34. Proof
If K is a value:
For all M, ([|M|]) K →∗a M : K
If M → N then M : K →∗a N : K
If V is a value, V : K →∗a (K) Ψ(V)
If M →∗ V,
([|M|]) (λx.x) →∗a M : λx.x
              →∗a V : λx.x
              →∗a (λx.x) Ψ(V)
              →a Ψ(V)
35. Future and ongoing work
In classical lambda-calculus:
CPS can also be done to interpret call-by-name in
call-by-value [Plotkin 75].
The thunk construction can be related to CPS [Hatcliff and
Danvy 96].
Can we relate to these?
36. Future and ongoing work
Other interesting questions
The choice between equality and reduction rules seems
arbitrary.
Can we exchange them?
Are the simulations still valid?