The document describes a language designer's workbench that provides tools to support the implementation and verification of language designs. It allows language designers to define the syntax, name binding, type system, and semantics of a language and provides tools like parsers, type checkers, interpreters, and debuggers. The workbench integrates existing tools like SDF3, NaBL, TS (a type-system specification language), and Stratego under a common framework. A prototype implementation of the workbench is demonstrated by defining the syntax, types, and semantics of a simple functional language called PCF.
Separation of Concerns in Language Definition (Eelco Visser)
Slides for keynote at the Modularity 2014 conference in Lugano on April 23, 2014. (A considerable part of the talk consisted of live programming the syntax definition and name binding rules for a little language.)
Dynamic Semantics Specification and Interpreter Generation (Eelco Visser)
(1) The document describes a domain specific language called DynSem for specifying dynamic semantics of programming languages. DynSem allows defining semantics in a modular way using semantic rules.
(2) DynSem specifications can be used to generate high-performance interpreters. The document outlines various language features that can be modeled in DynSem, including arithmetic, booleans, control flow, functions, and mutable state.
(3) DynSem specifications are composed of modules that import language signatures and define semantic rules over them. Rules are used to reduce expressions to values in an environment and store. This allows modeling features like variables, functions, and mutable boxes.
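The "reduce expressions to values in an environment" style of semantic rule can be sketched in plain Java. This is an illustrative fragment only, not DynSem syntax; the `Expr`, `Num`, `Add`, and `Var` names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal environment-based evaluator, illustrating the idea of
// reducing an expression to a value in an environment. DynSem itself
// expresses this with declarative rules rather than Java code.
interface Expr { int eval(Map<String, Integer> env); }

record Num(int value) implements Expr {
    public int eval(Map<String, Integer> env) { return value; }
}

record Add(Expr left, Expr right) implements Expr {
    public int eval(Map<String, Integer> env) {
        // rule: e1 + e2 reduces to v1 + v2
        return left.eval(env) + right.eval(env);
    }
}

record Var(String name) implements Expr {
    public int eval(Map<String, Integer> env) {
        // rule: a variable reduces to its binding in the environment
        return env.get(name);
    }
}

public class Eval {
    public static void main(String[] args) {
        Map<String, Integer> env = new HashMap<>();
        env.put("x", 40);
        Expr program = new Add(new Var("x"), new Num(2));
        System.out.println(program.eval(env)); // prints 42
    }
}
```

Adding a new language feature means adding a new `Expr` variant with its own reduction rule, which mirrors the modular composition of DynSem rules.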
Declare Your Language: Dynamic Semantics (Eelco Visser)
This document provides an overview of DynSem, a domain-specific language for specifying the dynamic semantics of programming languages. It discusses the design goals of DynSem in being concise, modular, executable, portable, and high-performance. The document then provides an example DynSem specification for the semantics of a simple language with features like arithmetic, booleans, functions, and mutable variables. It describes how DynSem specifications are organized into modules that can be composed to define the semantics.
This document summarizes type checking algorithms for programming languages. It introduces constraint-based type checking, which separates type checking into constraint generation and constraint solving, providing a more declarative way to specify type checkers. The document discusses using variables and constraints to represent types during type checking. It introduces NaBL2, a domain-specific language for writing constraint generators that specify name and type constraints for a programming language's static semantics. NaBL2 uses scope graphs to represent name binding structures and supports features like type equality, subtyping, and type-dependent name resolution through constraint rules. An example scope graph and a constraint rule for let-bindings are provided.
Slides for invited talk at Dynamic Languages Symposium (DLS'15) at SPLASH 2015 in Pittsburgh
http://2015.splashcon.org/event/dls2015-papers-declare-your-language
In the Language Designer’s Workbench project we are extending the Spoofax Language Workbench with meta-languages to declaratively specify the syntax, name binding rules, type rules, and operational semantics of a programming language design such that a variety of artifacts including parsers, static analyzers, interpreters, and IDE editor services can be derived and properties can be verified automatically. In this presentation I will talk about declarative specification for two aspects of language design: syntax and name binding.
First, I discuss the idea of declarative syntax definition as supported by grammar formalisms based on generalized parsing, using the SDF3 syntax definition formalism as an example. With SDF3, the language designer defines syntax in terms of productions and declarative disambiguation rules. This requires understanding a language in terms of (tree) structure instead of the operational implementation of parsers. As a result, syntax definitions can be used for a range of language processors including parsers, formatters, syntax coloring, outline views, and syntactic completion.
Second, I discuss our recent work on the declarative specification of name binding rules, which takes inspiration from declarative syntax definition. The NaBL name binding language supports the definition of name binding rules in terms of its fundamental concepts: declarations, references, scopes, and imports. I will present the theory of name resolution that we have recently developed to provide a semantics for name binding languages such as NaBL.
This document provides an outline and overview of dynamic semantics and operational semantics. It discusses defining the meaning of programs through execution and transition systems. It introduces DynSem, a domain-specific language for specifying dynamic semantics in a modular way. DynSem specifications generate interpreters from language definitions. The document uses examples from arithmetic expressions and a language with boxes to illustrate DynSem specifications.
In this chapter we will explore strings. We are going to explain how they are implemented in Java and in what ways we can process text content. Additionally, we will go through different methods for manipulating text: we will learn how to compare strings, how to search for substrings, how to extract substrings upon previously set parameters and, last but not least, how to split a string by separator chars. We will demonstrate how to correctly build strings with the StringBuilder class. We will provide a short but very useful overview of the most commonly used regular expressions.
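The operations the chapter covers can be sketched in a few lines of Java (the sample string and values are invented for illustration):

```java
public class StringDemo {
    public static void main(String[] args) {
        String s = "one,two,three";

        // Comparison: use equals(), not ==, to compare content
        System.out.println(s.equals("one,two,three")); // true

        // Searching and extracting substrings
        System.out.println(s.indexOf("two"));          // 4
        System.out.println(s.substring(4, 7));         // two

        // Splitting by a separator character
        String[] parts = s.split(",");
        System.out.println(parts.length);              // 3

        // Building strings efficiently with StringBuilder
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p).append(' ');
        System.out.println(sb.toString().trim());      // one two three

        // A simple regular expression: does the string contain digits?
        System.out.println("abc123".matches(".*\\d+.*")); // true
    }
}
```

`StringBuilder` matters because Java strings are immutable, so repeated `+` concatenation in a loop copies the whole string each time.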
Declare Your Language: Syntactic (Editor) Services (Eelco Visser)
Lecture 3 on compiler construction course on definition of lexical syntax and syntactic services that can be derived from syntax definitions such as formatting and syntactic completion
This document provides an overview of Java collections basics, including arrays, lists, strings, sets, and maps. It defines each type of collection and provides examples of how to use them. Arrays allow storing fixed-length sequences of elements and can be accessed by index. Lists are like resizable arrays that allow adding, removing and inserting elements using the ArrayList class. Strings represent character sequences and provide various methods for manipulation and comparison. Sets store unique elements using HashSet or TreeSet. Maps store key-value pairs using HashMap or TreeMap.
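The collection types the overview names can be shown side by side; the element values below are invented for illustration:

```java
import java.util.*;

public class CollectionsDemo {
    public static void main(String[] args) {
        // Array: fixed length, indexed access
        int[] nums = {10, 20, 30};
        System.out.println(nums[1]); // 20

        // List: a resizable sequence (ArrayList supports insertion at an index)
        List<String> list = new ArrayList<>();
        list.add("a"); list.add("b"); list.add(1, "x");
        System.out.println(list); // [a, x, b]

        // Set: unique elements; TreeSet keeps them sorted and drops duplicates
        Set<Integer> set = new TreeSet<>(List.of(3, 1, 3, 2));
        System.out.println(set); // [1, 2, 3]

        // Map: key-value pairs
        Map<String, Integer> ages = new HashMap<>();
        ages.put("ann", 31);
        System.out.println(ages.get("ann")); // 31
    }
}
```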
Compiler Construction | Lecture 14 | Interpreters (Eelco Visser)
This document summarizes a lecture on interpreters for programming languages. It discusses how operational semantics can be used to define the meaning of a program through state transitions in an interpreter. It provides examples of defining the semantics of a simple language using DynSem, a domain-specific language for specifying operational semantics. DynSem specifications can be compiled to interpreters that execute programs in the defined language.
The document describes static name resolution in programming languages. It discusses how names are bound to declarations through lexical scoping and how references are resolved to declarations by following paths through a scope graph representation. It presents the concepts of scopes, declarations, references, resolution paths, imports, and parent scopes. It also discusses how name resolution can be formalized using a calculus based on scope graphs, separating reachability from visibility, and how this supports name resolution, disambiguation, and program transformations.
In this chapter we will get familiar with primitive types and variables in Java – what they are and how to work with them. First we will consider the data types – integer types, real types with floating-point, Boolean, character, string and object type. We will continue with the variables, with their characteristics, how to declare them, how they are assigned a value and what is variable initialization.
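The data types and the declare/assign/initialize distinction described above look like this in Java (all variable names and values are invented for the example):

```java
public class VariablesDemo {
    public static void main(String[] args) {
        // Integer types
        int count = 42;
        long big = 10_000_000_000L;    // too large for int; needs the L suffix

        // Floating-point type
        double price = 19.99;

        // Boolean, character, string, and object types
        boolean ready = true;
        char initial = 'J';
        String name = "Java";
        Object anything = name;        // every reference type is an Object

        // Declaration vs. initialization: declared here, assigned later
        int later;
        later = count + 1;

        System.out.println(name + " " + later + " " + ready);
    }
}
```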
03 and 04. Operators, Expressions, working with the console and conditional s... (Intro C# Book)
The document discusses Java syntax and concepts including:
1. It introduces primitive data types in Java like int, float, boolean and String.
2. It covers variables, operators, and expressions - how they are used to store and manipulate data in Java.
3. It explains console input and output using Scanner and System.out methods for reading user input and printing output.
4. It provides examples of using conditional statements like if and if-else to control program flow based on conditions.
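Points 3 and 4 above fit in one small Java program. Scanner normally wraps `System.in`; a fixed string stands in for console input here so the sketch runs unattended, and the age value is invented for the example:

```java
import java.util.Scanner;

public class ConditionalDemo {
    public static void main(String[] args) {
        // Scanner reads tokens from an input source; "17" substitutes
        // for what a user would type at the console.
        Scanner input = new Scanner("17");
        int age = input.nextInt();

        // if / if-else controls program flow based on the value read
        if (age >= 18) {
            System.out.println("adult");
        } else {
            System.out.println("minor");
        }
    }
}
```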
In this chapter we will explore strings. We are going to explain how they are implemented in C# and in what ways we can process text content. Additionally, we will go through different methods for manipulating text: we will learn how to compare strings, how to search for substrings, how to extract substrings upon previously set parameters and, last but not least, how to split a string by separator chars. We will demonstrate how to correctly build strings with the StringBuilder class. We will provide a short but very useful overview of the most commonly used regular expressions. We will discuss some classes for efficient construction of strings. Finally, we will take a look at the methods and classes for achieving more elegant and stricter formatting of text content.
Python quickstart for programmers: Python Kung Fu (climatewarrior)
The document provides an overview of key Python concepts including data types, operators, control flow statements, functions, objects and classes. It discusses lists in depth, covering creation, iteration, searching and common list methods. It also briefly touches on modules, exceptions, inheritance and other advanced topics.
Compiler Construction | Lecture 2 | Declarative Syntax Definition (Eelco Visser)
The document describes a lecture on declarative syntax definition. It discusses the perspective on declarative syntax definition explained in an Onward! 2010 essay. It also mentions an OOPSLA 2011 paper that introduced the Spoofax Testing (SPT) language used in the section on testing syntax definitions. Finally, it provides a link to documentation on the SDF3 syntax definition formalism.
The document provides an introduction to Python programming concepts including indentation rules, documentation, data types, variables, numbers, strings, lists, dictionaries, tuples, files, control structures, functions and modules. It discusses Python syntax and examples for working with different data types, control flow, functions and importing modules.
In this chapter we will get familiar with the console as a tool for data input and output. We will explain what it is, when and how to use it, and how most programming languages access the console. We will get familiar with some of the features in C# for user interaction: reading text and numbers from the console and printing text and numbers. We will also examine the main streams for input-output operations Console.In, Console.Out and Console.Error, the Console and the usage of format strings for printing data in various formats.
All data values in Python are encapsulated in relevant object classes. Everything in Python is an object, and every object has an identity, a type, and a value. Like other object-oriented languages such as Java or C++, Python has several built-in data types. Extension modules written in C, Java, or other languages can define additional types.
To determine a variable's type in Python you can use the type() function. The value of some objects can be changed. Objects whose value can be changed are called mutable and objects whose value is unchangeable (once they are created) are called immutable.
Python is a programming language created by Guido van Rossum in 1991. It is easy to learn, platform independent, and has a simple syntax. Python can be used for tasks like building websites, developing games, scientific computation, and artificial intelligence. Some key features of Python include being interpreted, having dynamic typing, and being open source. Common data types in Python include numbers, strings, lists, tuples, dictionaries, and sets. Python supports operators for arithmetic, comparison, logical, bitwise, and identity operations. Control flow statements like if/else, for loops, and while loops allow for conditionally executing blocks of code.
This document provides an overview of common Arduino programming structures, syntax, and functions. It covers control structures like if/else statements and loops, data types, math functions, I/O functions, libraries for analog/digital I/O, serial communication, interrupts, and more. Memory sizes and pinouts are listed for popular Arduino boards like the Uno, Nano, and Mega.
Here we are going to take a look how to use for loop, foreach loop and while loop. Also we are going to learn how to use and invoke methods and how to define classes in Java programming language.
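The three loop forms, a user-defined method, and a class come together in one small sketch (the names and values below are invented for the example):

```java
public class LoopDemo {
    // A user-defined method: sums an array with a foreach loop
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }

    public static void main(String[] args) {
        // Classic for loop with an index
        for (int i = 0; i < 3; i++) System.out.println("for: " + i);

        // While loop, repeating as long as the condition holds
        int n = 3;
        while (n > 0) { System.out.println("while: " + n); n--; }

        // Invoking the method defined above
        System.out.println(sum(new int[]{1, 2, 3})); // 6
    }
}
```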
The document provides an overview of the Arduino programming language and hardware. It describes the basic structure of an Arduino program with setup() and loop() functions. It lists the main data types and functions for digital and analog input/output, time, math, random numbers, serial communication and more. It also provides information on libraries, the Arduino board pins and components, and compares Arduino to the Processing language.
Presented at 8th Light University London (13th May 2016)
Do this, do that. Coding from assembler to shell scripting, from the mainstream languages of the last century to the mainstream languages now, is dominated by an imperative style. From how we teach variables — they vary, right? — to how we talk about databases, we are constantly looking at state as a thing to be changed and programming languages are structured in terms of the mechanics of change — assignment, loops and how code can be threaded (cautiously) with concurrency.
Functional programming, mark-up languages, schemas, persistent data structures and more are all based around a more declarative approach to code, where instead of reasoning in terms of who does what to whom and what the consequences are, relationships and uses are described, and the flow of execution follows from how functions, data and other structures are composed. This talk will look at the differences between imperative and declarative approaches, offering lessons, habits and techniques that are applicable from requirements through to code and tests in mainstream languages.
The document discusses different types of functions in programming. It explains that built-in functions are predefined, modular functions are contained in imported modules, and user-defined functions are created using the def keyword. It also discusses function definitions, calling functions, arguments, parameters, and the return statement. Functions can accept inputs, perform tasks, and produce outputs.
19. Data Structures and Algorithm Complexity (Intro C# Book)
In this chapter we will compare the data structures we have learned so far by the performance (execution speed) of their basic operations (addition, search, deletion, etc.). We will give specific tips on which data structures to use in which situations. We will explain how to choose between data structures like hash-tables, arrays, dynamic arrays, and sets implemented by hash-tables or balanced trees. Almost all of these structures are implemented as part of the .NET Framework, so to be able to write efficient and reliable code we have to learn to apply the most appropriate structure for each situation.
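The chapter's trade-offs target .NET collections, but the same choices exist elsewhere; a sketch using Java's analogous structures (the values below are invented for illustration):

```java
import java.util.*;

public class StructureChoice {
    public static void main(String[] args) {
        // Hash-table-backed set: O(1) average add/contains, no ordering
        Set<Integer> hashSet = new HashSet<>(List.of(5, 1, 3));
        System.out.println(hashSet.contains(3)); // true

        // Balanced-tree-backed set: O(log n) operations, but kept sorted
        Set<Integer> treeSet = new TreeSet<>(List.of(5, 1, 3));
        System.out.println(treeSet); // [1, 3, 5]

        // Dynamic array: cheap indexed access and append at the end
        List<Integer> dynamic = new ArrayList<>(List.of(1, 2));
        dynamic.add(3);
        System.out.println(dynamic.get(2)); // 3
    }
}
```

The rule of thumb the chapter builds toward: prefer the hash-backed structure unless you need sorted order or range queries, in which case pay the tree's logarithmic cost.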
A(n abridged) tour of the Rust compiler [PDX-Rust March 2014] (Tom Lee)
The document provides an overview of the major stages in Rust's compiler:
1) Scanning converts source code to tokens. Rust's scanner is in libsyntax/parse/lexer.rs.
2) Parsing converts tokens to an AST using grammar rules. Rust's parser is in libsyntax/parse/parser.rs and outputs an AST in libsyntax/ast.rs.
3) Semantic analysis applies Rust's rules through name resolution, type checking, borrow checking etc. This is handled by librustc/middle/*.rs.
4) Code generation translates the AST to LLVM IR. Rust uses librustc/middle/trans/base.rs and writes output with
Open Source in Big Business [LCA2011 Miniconf] (Tom Lee)
This document provides advice for consultants on promoting open source software to big businesses. It suggests focusing on delivering value to clients over dogmatism. Open source allows flexibility and community support. The document recommends providing support for established open source solutions, convincing clients of total cost savings, and encouraging them to contribute code back to open source projects for mutual benefit.
Declare Your Language: Syntactic (Editor) ServicesEelco Visser
Lecture 3 on compiler construction course on definition of lexical syntax and syntactic services that can be derived from syntax definitions such as formatting and syntactic completion
This document provides an overview of Java collections basics, including arrays, lists, strings, sets, and maps. It defines each type of collection and provides examples of how to use them. Arrays allow storing fixed-length sequences of elements and can be accessed by index. Lists are like resizable arrays that allow adding, removing and inserting elements using the ArrayList class. Strings represent character sequences and provide various methods for manipulation and comparison. Sets store unique elements using HashSet or TreeSet. Maps store key-value pairs using HashMap or TreeMap.
Compiler Construction | Lecture 14 | InterpretersEelco Visser
This document summarizes a lecture on interpreters for programming languages. It discusses how operational semantics can be used to define the meaning of a program through state transitions in an interpreter. It provides examples of defining the semantics of a simple language using DynSem, a domain-specific language for specifying operational semantics. DynSem specifications can be compiled to interpreters that execute programs in the defined language.
The document describes static name resolution in programming languages. It discusses how names are bound to declarations through lexical scoping and how references are resolved to declarations by following paths through a scope graph representation. It presents the concepts of scopes, declarations, references, resolution paths, imports, and parent scopes. It also discusses how name resolution can be formalized using a calculus based on scope graphs, separation reachability and visibility, and how this supports name resolution, disambiguation, and program transformations.
In this chapter we will get familiar with primitive types and variables in Java – what they are and how to work with them. First we will consider the data types – integer types, real types with floating-point, Boolean, character, string and object type. We will continue with the variables, with their characteristics, how to declare them, how they are assigned a value and what is variable initialization.
03 and 04 .Operators, Expressions, working with the console and conditional s...Intro C# Book
The document discusses Java syntax and concepts including:
1. It introduces primitive data types in Java like int, float, boolean and String.
2. It covers variables, operators, and expressions - how they are used to store and manipulate data in Java.
3. It explains console input and output using Scanner and System.out methods for reading user input and printing output.
4. It provides examples of using conditional statements like if and if-else to control program flow based on conditions.
In this chapter we will explore strings. We are going to explain how they are implemented in C# and in what way we can process text content. Additionally, we will go through different methods for manipulating a text: we will learn how to compare strings, how to search for substrings, how to extract substrings upon previously settled parameters and last but not least how to split a string by separator chars. We will demonstrate how to correctly build strings with the StringBuilder class. We will provide a short but very useful information for the most commonly used regular expressions. We will discuss some classes for efficient construction of strings. Finally, we will take a look at the methods and classes for achieving more elegant and stricter formatting of the text content.
Python quickstart for programmers: Python Kung Fuclimatewarrior
The document provides an overview of key Python concepts including data types, operators, control flow statements, functions, objects and classes. It discusses lists in depth, covering creation, iteration, searching and common list methods. It also briefly touches on modules, exceptions, inheritance and other advanced topics.
Compiler Construction | Lecture 2 | Declarative Syntax DefinitionEelco Visser
The document describes a lecture on declarative syntax definition. It discusses the perspective on declarative syntax definition explained in an Onward! 2010 essay. It also mentions an OOPSLA 2011 paper that introduced the SPoofax Testing (SPT) language used in the section on testing syntax definitions. Finally, it provides a link to documentation on the SDF3 syntax definition formalism.
The document provides an introduction to Python programming concepts including indentation rules, documentation, data types, variables, numbers, strings, lists, dictionaries, tuples, files, control structures, functions and modules. It discusses Python syntax and examples for working with different data types, control flow, functions and importing modules.
In this chapter we will get familiar with the console as a tool for data input and output. We will explain what it is, when and how to use it, and how most programming languages access the console. We will get familiar with some of the features in C# for user interaction: reading text and numbers from the console and printing text and numbers. We will also examine the main streams for input-output operations Console.In, Console.Out and Console.Error, the Console and the usage of format strings for printing data in various formats.
All data values in Python are encapsulated in relevant object classes. Everything in Python is an object and every object has an identity, a type, and a value. Like another object-oriented language such as Java or C++, there are several data types which are built into Python. Extension modules which are written in C, Java, or other languages can define additional types.
To determine a variable's type in Python you can use the type() function. The value of some objects can be changed. Objects whose value can be changed are called mutable and objects whose value is unchangeable (once they are created) are called immutable.
Python is a programming language created by Guido van Rossum in 1991. It is easy to learn, platform independent, and has a simple syntax. Python can be used for tasks like building websites, developing games, scientific computation, and artificial intelligence. Some key features of Python include being interpreted, having dynamic typing, and being open source. Common data types in Python include numbers, strings, lists, tuples, dictionaries, and sets. Python supports operators for arithmetic, comparison, logical, bitwise, and identity operations. Control flow statements like if/else, for loops, and while loops allow for conditionally executing blocks of code.
This document provides an overview of common Arduino programming structures, syntax, and functions. It covers control structures like if/else statements and loops, data types, math functions, I/O functions, libraries for analog/digital I/O, serial communication, interrupts, and more. Memory sizes and pinouts are listed for popular Arduino boards like the Uno, Nano, and Mega.
Here we are going to take a look how to use for loop, foreach loop and while loop. Also we are going to learn how to use and invoke methods and how to define classes in Java programming language.
The document provides an overview of the Arduino programming language and hardware. It describes the basic structure of an Arduino program with setup() and loop() functions. It lists the main data types and functions for digital and analog input/output, time, math, random numbers, serial communication and more. It also provides information on libraries, the Arduino board pins and components, and compares Arduino to the Processing language.
Presented at 8th Light University London (13th May 2016)
Do this, do that. Coding from assembler to shell scripting, from the mainstream languages of the last century to the mainstream languages now, is dominated by an imperative style. From how we teach variables — they vary, right? — to how we talk about databases, we are constantly looking at state as a thing to be changed and programming languages are structured in terms of the mechanics of change — assignment, loops and how code can be threaded (cautiously) with concurrency.
Functional programming, mark-up languages, schemas, persistent data structures and more are all based around a more declarative approach to code, where instead of reasoning in terms of who does what to whom and what the consequences are, relationships and uses are described, and the flow of execution follows from how functions, data and other structures are composed. This talk will look at the differences between imperative and declarative approaches, offering lessons, habits and techniques that are applicable from requirements through to code and tests in mainstream languages.
The document discusses different types of functions in programming. It explains that built-in functions are predefined, modular functions are contained in imported modules, and user-defined functions are created using the def keyword. It also discusses function definitions, calling functions, arguments, parameters, and the return statement. Functions can accept inputs, perform tasks, and produce outputs.
19. Data Structures and Algorithm ComplexityIntro C# Book
In this chapter we will compare the data structures we have learned so far by the performance (execution speed) of the basic operations (addition, search, deletion, etc.). We will give specific tips in what situations what data structures to use. We will explain how to choose between data structures like hash-tables, arrays, dynamic arrays and sets implemented by hash-tables or balanced trees. Almost all of these structures are implemented as part of NET Framework, so to be able to write efficient and reliable code we have to learn to apply the most appropriate structures for each situation.
A(n abridged) tour of the Rust compiler [PDX-Rust March 2014] (Tom Lee)
The document provides an overview of the major stages in Rust's compiler:
1) Scanning converts source code to tokens. Rust's scanner is in libsyntax/parse/lexer.rs.
2) Parsing converts tokens to an AST using grammar rules. Rust's parser is in libsyntax/parse/parser.rs and outputs an AST in libsyntax/ast.rs.
3) Semantic analysis applies Rust's rules through name resolution, type checking, borrow checking etc. This is handled by librustc/middle/*.rs.
4) Code generation translates the AST to LLVM IR. Rust uses librustc/middle/trans/base.rs to emit the output.
Open Source in Big Business [LCA2011 Miniconf] (Tom Lee)
This document provides advice for consultants on promoting open source software to big businesses. It suggests focusing on delivering value to clients over dogmatism. Open source allows flexibility and community support. The document recommends proving support for established open source solutions, convincing clients of total cost savings, and encouraging them to contribute code back to open source projects for mutual benefit.
The document discusses how the creator of "The Jam" magazine targeted their audience of teens. To attract teens, the magazine's masthead features a pun about jam and a quirky font. Competitions on the cover offer free tickets to popular teen festivals. Photographs depict relaxed, fun scenes to appeal to teens who reject perfection. More images than text are used to keep younger readers engaged. The main article layout presents interviews in a question-and-answer format to break up blocks of text and allow teens to choose what interests them.
My slides from "Inside Python", a talk about how to change the syntax of the Python programming language.
Modified Python 3.2 source code (with the "unless" keyword added during this presentation) is available here:
http://github.com/thomaslee/oscon2012-inside-python
Open Source Compiler Construction for the JVM [LCA2011 Miniconf] (Tom Lee)
The document discusses compiler construction for the JVM, including defining a grammar for a "Hello World" program, representing the program as an AST using Scala case classes, and compiling the AST to JVM bytecode using BCEL. Key points covered are defining grammars with parser combinators in Scala, representing the parse tree as an AST, and generating JVM bytecode from the AST.
Open Source Compiler Construction for the JVM (Tom Lee)
This document discusses building a compiler for a simple language called "Awesome" that targets the Java Virtual Machine (JVM). It recommends writing a stub code generator first for quick feedback before building the full compiler. The compiler will use Scala parser combinators to parse the input into an abstract syntax tree (AST) and then walk the AST to generate equivalent JVM bytecode using the Bytecode Engineering Library (BCEL). The document outlines the overall compiler architecture and next steps to expand the language features supported by the compiler.
My slides from "Inside PHP", a talk about how to change the syntax of the PHP programming language.
Modified PHP 5.4.4 source code (with the "until" keyword added during this presentation) is available here:
http://github.com/thomaslee/oscon2012-inside-php
Scala - where objects and functions meet (Mario Fusco)
The document provides an overview of a two-day training course on Scala that covers topics like object orientation, functional programming, pattern matching, generics, traits, case classes, tuples, collections, concurrency, options and monads. The course aims to show how Scala combines object-oriented and functional programming approaches and provides examples of commonly used Scala features like classes, traits, pattern matching, generics and collections.
The document discusses building compilers and domain-specific languages (DSLs) in F#. It describes using FParsec for parsing, building an abstract syntax tree (AST), and interpretation/execution. Examples include building parsers and interpreters for a turtle graphics language, an internal DSL for build automation, an external DSL for games, and a Small Basic compiler. It recommends resources like F# Koans, TryFSharp.org and the book for learning more.
This document provides an overview of concepts related to parsing and interpreting programming languages. It discusses syntax and semantics, context-free grammars, abstract syntax trees, parsing with parser combinators, and evaluating expressions by defining an interpreter that uses dynamic scoping semantics. Examples are provided for parsing arithmetic expressions and a simple language with numbers, functions, and let bindings.
The document discusses top-down and bottom-up parsing techniques, with top-down parsing constructing a parse tree starting from the root node and working downward, while bottom-up parsing uses shift-reduce parsing to shift input symbols onto a stack and reduce the stack based on grammar rules. It also covers topics like recursive descent parsing, predictive parsing tables, LL(1) grammars, and error recovery techniques used in parsing.
Model-Driven Software Development - Static Analysis & Error Checking (Eelco Visser)
The document discusses static analysis and error checking, including name resolution, type analysis, and checking for consistency. It describes analyzing syntax definitions, performing static analysis to check consistency beyond well-formedness, and reporting errors. Key aspects covered include type analysis, name resolution, reference resolution, and checking constraints.
The document discusses top-down and bottom-up parsing techniques. Top-down parsing constructs a parse tree starting from the root node and progresses depth-first. It can require backtracking. Bottom-up parsing uses shift-reduce parsing, shifting input symbols onto a stack until they can be reduced based on grammar rules.
Compiler Construction | Lecture 4 | Parsing (Eelco Visser)
This lecture covers parsing and turning syntax definitions into parsers. It discusses context-free grammars and derivations. Grammars can be ambiguous, allowing multiple parse trees for a sentence. Grammar transformations like disambiguation, eliminating left recursion, and left factoring can address issues while preserving the language. Associativity and priority can be defined through transformations. The reading material covers parsing schemata, classical compiler textbooks, and papers on disambiguation filters and parsing algorithms.
The document discusses top-down parsing and predictive parsing. It defines key concepts like scanners, parsers, syntax analyzers, parse trees, top-down parsing, and predictive parsing. It also describes the process of constructing a predictive parsing table using the First and Follow sets of a grammar's productions.
This document discusses tuples and sets in Python. It defines tuples as immutable sequences that can contain heterogeneous data types. Tuples can be indexed and sliced but not modified. Sets are unordered collections of unique and immutable elements. Common set operations include union, intersection, difference, and symmetric difference.
The document discusses several techniques for generating abstract syntax trees (ASTs) during syntax-directed translation, including:
1. Using synthesized attributes to build AST nodes recursively according to production rules
2. Implementing these techniques in Yacc by defining node and leaf types and building the AST in action code
3. Eliminating left recursion from grammar rules by introducing marker nonterminals and passing synthesized attributes up the parse tree
4. Generating ASTs with predictive parsers by using inherited and synthesized attributes to pass partial results between nonterminals
Declare Your Language: Name Resolution (Eelco Visser)
Scope graphs are used to represent the binding information in programs. They provide a language-independent representation of name resolution that can be used to conduct and represent the results of name resolution. Separating the representation of resolved programs from the declarative rules that define name binding allows language-independent tooling to be developed for name resolution and other tasks.
This document discusses F# and FParsec. It provides examples of parsing expressions in FParsec using lazy evaluation and references, as opposed to NParsec which uses bindings. FParsec allows defining recursive parsers in a natural way in F#.
This document discusses term rewriting and provides examples of how rewrite rules can be used to transform terms. Key points include:
- Rewrite rules define pattern matching and substitution to transform terms from a left-hand side to a right-hand side.
- Examples show desugaring language constructs like if-then statements, constant folding arithmetic expressions, and mapping/zipping lists with strategies as parameters to rules.
- Terms can represent programming language syntax and semantics domains. Signatures define the structure of terms.
- Rewriting systems provide a declarative way to define program transformations and semantic definitions through rewrite rules and strategies.
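The rule-plus-strategy idea summarized above can be sketched in a few lines of Python (my own illustration, not Stratego): a rewrite rule is a partial function on terms, and a strategy like bottom-up traversal decides where it is applied. Terms are tuples such as `("Add", ("Num", 1), ("Num", 2))`.

```python
# A minimal sketch of term rewriting with strategies (my own example,
# not Stratego): constant folding of Add(Num(i), Num(j)) -> Num(i + j).
def fold(t):
    """Rewrite rule: fires on Add of two Num leaves, else None."""
    if t[0] == "Add" and t[1][0] == "Num" and t[2][0] == "Num":
        return ("Num", t[1][1] + t[2][1])
    return None

def bottomup(rule, t):
    """Strategy: rewrite subterms first, then try the rule at the root."""
    if isinstance(t, tuple):
        t = (t[0],) + tuple(bottomup(rule, sub) for sub in t[1:])
        reduced = rule(t)
        if reduced is not None:
            return reduced
    return t
```

Applying `bottomup(fold, …)` to `Add(Add(Num(1), Num(2)), Num(3))` folds the inner addition first and then the outer one, yielding `Num(6)`; terms the rule does not match are left untouched.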
This document discusses static type checking in compilers. It begins by describing the structure of a compiler and how static checking fits in. It then contrasts static and dynamic checking. The rest of the document discusses various aspects of static type checking like type rules, type systems, and implementing static checking using syntax-directed definitions in Yacc. It provides examples of basic type checking for expressions and statements in a simple language. It also discusses type expressions, conversions, and functions.
The document describes algorithms for text searching and pattern matching. It presents several algorithms for tasks like simple text search, Rabin-Karp search, Knuth-Morris-Pratt search, Boyer-Moore search, edit distance, approximate matching, don't-care search, and epsilon-NFA matching using pattern trees. The algorithms are explained through pseudocode with input/output parameters and complexity analysis provided for some.
This document provides a summary of key elements of the C programming language including program structure, data types, operators, flow control statements, standard libraries, and common functions. It covers topics such as functions, variables, comments, preprocessor directives, constants, pointers, arrays, structures, I/O, math functions, and limits of integer and floating point types. The summary is presented in a reference card format organized by sections.
The document discusses static type checking in compilers. It describes how static checking is performed at compile time to enforce type safety, whereas dynamic checking occurs at runtime. It provides examples of common static checks for types, control flow, uniqueness, and names. It also contrasts one-pass versus multi-pass compilers and how they approach static checking. Finally, it introduces type systems and shows an example of type rules and syntax-directed definitions for a simple language.
chap09alg.ppt for string matching algorithm (SadiaSharmin40)
The document describes algorithms for text searching and pattern matching. It includes algorithms for simple text search, Rabin-Karp search, Knuth-Morris-Pratt search, Boyer-Moore search, edit distance, approximate matching, wildcard search, and finite automata construction and matching using pattern trees. The algorithms are described in pseudocode with input and output parameters and high-level explanations of the approaches.
Similar to A Language Designer’s Workbench. A one-stop shop for implementation and verification of language designs (20)
This document discusses syntactic editor services including formatting, syntax coloring, and syntactic completion. It describes how syntactic completion can be provided generically based on a syntax definition. The document also discusses how context-free grammars can be extended with templates to specify formatting layout when pretty-printing abstract syntax trees to text. Templates are used to insert whitespace, line breaks, and indentation to produce readable output.
This document provides an overview of parsing in compiler construction. It discusses context-free grammars and how they are used to generate sentences and parse trees through derivations. It also covers ambiguity that can arise from grammars and various grammar transformations used to eliminate ambiguity, including defining associativity and priority. The dangling else problem is presented as an example of an ambiguous grammar.
This document provides an overview of the Lecture 2 on Declarative Syntax Definition for the CS4200 Compiler Construction course. The lecture covers the specification of syntax definition from which parsers can be derived, the perspective on declarative syntax definition using SDF, and reading material on the SDF3 syntax definition formalism and papers on testing syntax definitions and declarative syntax. It also discusses what syntax is, both in linguistics and programming languages, and how programs can be described in terms of syntactic categories and language constructs. An example Tiger program for solving the n-queens problem is presented to illustrate syntactic categories in Tiger.
This document provides an overview of the CS4200 Compiler Construction course at TU Delft. It discusses the course organization, structure, and assessment. The course is split into two parts - CS4200-A which covers concepts and techniques through lectures, papers, and homework assignments, and CS4200-B which involves building a compiler for a subset of Java as a semester-long project. Students will use the Spoofax language workbench to implement their compiler and will submit assignments through a private GitLab repository.
A Direct Semantics of Declarative Disambiguation Rules (Eelco Visser)
This document discusses research into providing a direct semantics for declarative disambiguation of expression grammars. It aims to define what disambiguation rules mean, ensure they are safe and complete, and provide an effective implementation strategy. The document outlines key research questions around the meaning, safety, completeness and coverage of disambiguation rules. It also presents contributions around using subtree exclusion patterns to define safe and complete disambiguation for classes of expression grammars, and implementing this in SDF3.
Declarative Type System Specification with Statix (Eelco Visser)
In this talk I present the design of Statix, a new constraint-based language for the executable specification of type systems. Statix specifications consist of predicates that define the well-formedness of language constructs in terms of built-in and user-defined constraints. Statix has a declarative semantics that defines whether a model satisfies a constraint. The operational semantics of Statix is defined as a sound constraint solving algorithm that searches for a solution for a constraint. The aim of the design is that Statix users can ignore the execution order of constraint solving and think in terms of the declarative semantics.
A distinctive feature of Statix is its use of scope graphs, a language parametric framework for the representation and querying of the name binding facts in programs. Since types depend on name resolution and name resolution may depend on types, it is typically not possible to construct the entire scope graph of a program before type constraint resolution. In (algorithmic) type system specifications this leads to explicit staging of the construction and querying of the type environment (class table, symbol table). Statix automatically stages the construction of the scope graph of a program such that queries are never executed when their answers may be affected by future scope graph extension. In the talk, I will explain the design of Statix by means of examples.
https://eelcovisser.org/post/309/declarative-type-system-specification-with-statix
Compiler Construction | Lecture 17 | Beyond Compiler Construction (Eelco Visser)
Compiler construction techniques are applied beyond general-purpose languages through domain-specific languages (DSLs). The document discusses several DSLs developed using Spoofax including:
- WebDSL for web programming with sub-languages for entities, queries, templates, and access control.
- IceDust for modeling information systems with derived values computed on-demand, incrementally, or eventually consistently.
- PixieDust for client-side web programming with views as derived values updated incrementally.
- PIE for defining software build pipelines as tasks with dynamic dependencies computed incrementally.
The document also outlines several research challenges in compiler construction, such as high-level declarative language definition and verification.
Domain Specific Languages for Parallel Graph AnalytiX (PGX) (Eelco Visser)
This document discusses domain-specific languages (DSLs) for parallel graph analytics using PGX. It describes how DSLs allow users to implement graph algorithms and queries using high-level languages that are then compiled and optimized to run efficiently on PGX. Examples of DSL optimizations like multi-source breadth-first search are provided. The document also outlines the extensible compiler architecture used for DSLs, which can generate code for different backends like shared memory or distributed memory.
Compiler Construction | Lecture 15 | Memory Management (Eelco Visser)
The document discusses different memory management techniques:
1. Reference counting counts the number of pointers to each record and deallocates records with a count of 0.
2. Mark and sweep marks all reachable records from program roots and sweeps unmarked records, adding them to a free list.
3. Copying collection copies reachable records to a "to" space, allowing the original "from" space to be freed without fragmentation.
4. Generational collection focuses collection on younger object generations more frequently to improve efficiency.
Compiler Construction | Lecture 13 | Code Generation (Eelco Visser)
The document discusses code generation and optimization techniques, describing compilation schemas that define how language constructs are translated to target code patterns, and covers topics like ensuring correctness of generated code through type checking and verification of static constraints on the target format. It also provides examples of compilation schemas for Tiger language constructs like arithmetic expressions and control flow and discusses generating nested functions.
Compiler Construction | Lecture 12 | Virtual Machines (Eelco Visser)
The document discusses the architecture of the Java Virtual Machine (JVM). It describes how the JVM uses threads, a stack, heap, and method area. It explains JVM control flow through bytecode instructions like goto, and how the operand stack is used to perform operations and hold method arguments and return values.
Compiler Construction | Lecture 9 | Constraint Resolution (Eelco Visser)
This document provides an overview of constraint resolution in the context of a compiler construction lecture. It discusses unification, which is the basis for many type inference and constraint solving approaches. It also describes separating type checking into constraint generation and constraint solving, and introduces a constraint language that integrates name resolution into constraint resolution through scope graph constraints. Finally, it discusses papers on further developments with this approach, including addressing expressiveness and staging issues in type systems through the Statix DSL for defining type systems.
Compiler Construction | Lecture 8 | Type Constraints (Eelco Visser)
This lecture covers type checking with constraints. It introduces the NaBL2 meta-language for writing type specifications as constraint generators that map a program to constraints. The constraints are then solved to determine if a program is well-typed. NaBL2 supports defining name binding and type structures through scope graphs and constraints over names, types, and scopes. Examples show type checking patterns in NaBL2 including variables, functions, records, and name spaces.
Compiler Construction | Lecture 7 | Type Checking (Eelco Visser)
This document summarizes a lecture on type checking. It discusses using constraints to separate the language-specific type checking rules from the language-independent solving algorithm. Constraint-based type checking collects constraints as it traverses the AST, then solves the constraints in any order. This allows type information to be learned gradually and avoids issues with computation order.
Compiler Construction | Lecture 6 | Introduction to Static Analysis (Eelco Visser)
Lecture introducing the need for static analysis in addition to parsing, the complications caused by names, and an introduction to name resolution with scope graphs
Compiler Construction | Lecture 3 | Syntactic Editor Services (Eelco Visser)
The document discusses syntactic editor services for programming languages. It covers formatting specifications that define how abstract syntax trees are mapped to text using templates. It also discusses syntactic completion, which proposes valid completions in an editor based on the syntax definition. The lecture focuses on defining the lexical and context-free syntax of Tiger in SDF and on generating editor services like formatting and coloring from the syntax definition.
A Language Designer’s Workbench. A one-stop shop for implementation and verification of language designs
1. A Language Designer’s Workbench
A one-stop shop for implementation and verification of language designs
Eelco Visser, Guido Wachsmuth, Andrew Tolmach
Pierre Neron, Vlad Vergu, Augusto Passalaqua, Gabriël Konat
8. parser
type checker
code generator
interpreter
parser
error recovery
syntax highlighting
outline
code completion
navigation
type checker
debugger
syntax definition
static semantics
dynamic semantics
abstract syntax
type system
operational semantics
type soundness proof
12. Language Design
Syntax Definition
Name Binding
Type Constraints
Dynamic Semantics
Transform
Language Designer’s Workbench
13. Language Design
Syntax Definition
Name Binding
Type Constraints
Dynamic Semantics
Transform
Declarative Multi-purpose Meta-Languages
Language Designer's Workbench
14. Language Design
SDF3 NaBL TS DynSem Stratego
Language Designer’s Workbench: First Prototype
15. module PCF
sorts Exp Param Type
templates
Exp.Var = [[ID]]
Exp.App = [[Exp] [Exp]] {left}
Exp.Fun = [
fun [Param] (
[Exp]
)
]
Exp.Fix = [
fix [Param] (
[Exp]
)
]
Exp.Let = [
let [ID] : [Type] =
[Exp]
in [Exp]
]
Exp.Num = [[INT]]
Exp.Add = [[Exp] + [Exp]] {left}
Exp.Sub = [[Exp] - [Exp]] {left}
Exp.Mul = [[Exp] * [Exp]] {left}
Exp = [([Exp])] {bracket}
Exp.Ifz = [
ifz [Exp] then
[Exp]
else
[Exp]
]
Type.IntType = [int]
Type.FunType = [[Type] -> [Type]]
Param.Param = [[ID] : [Type]]
context-free priorities
Exp.App > Exp.Mul > {left: Exp.Add Exp.Sub}
> Exp.Ifz
First Little (Big) Step: PCF in Spoofax
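The abstract syntax that this SDF3 definition induces can be sketched in Python (my own transcription, not part of the slides) with one dataclass per constructor; field names are my choice, the constructor names follow the slides.

```python
# A possible Python rendering of the PCF abstract syntax above,
# one dataclass per SDF3 constructor (field names are assumptions).
from dataclasses import dataclass

class Exp: pass
class Type: pass

@dataclass
class Var(Exp): name: str
@dataclass
class App(Exp): fun: Exp; arg: Exp
@dataclass
class Param: name: str; type: Type
@dataclass
class Fun(Exp): param: Param; body: Exp
@dataclass
class Fix(Exp): param: Param; body: Exp
@dataclass
class Let(Exp): name: str; type: Type; bound: Exp; body: Exp
@dataclass
class Num(Exp): value: int
@dataclass
class Add(Exp): left: Exp; right: Exp
@dataclass
class Sub(Exp): left: Exp; right: Exp
@dataclass
class Mul(Exp): left: Exp; right: Exp
@dataclass
class Ifz(Exp): cond: Exp; then: Exp; other: Exp

@dataclass
class IntType(Type): pass
@dataclass
class FunType(Type): dom: Type; cod: Type

# (fun x : int (x + 1)) applied to 2
example = App(Fun(Param("x", IntType()), Add(Var("x"), Num(1))), Num(2))
```

Trees built this way correspond directly to terms like App(Fun(Param(...), ...), Num(2)) over which the type rules and semantic rules below are stated.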
module types
type rules
Var(x) : t
where definition of x : t
Param(x, t) : t
Fun(p, e) : FunType(tp, te)
where p : tp and e : te
App(e1, e2) : tr
where e1 : FunType(tf, tr) and e2 : ta
and tf == ta
else error "type mismatch" on e2
Fix(p, e) : tp
where p : tp and e : te
and tp == te
else error "type mismatch" on p
Let(x, tx, e1, e2) : t2
where e2 : t2 and e1 : t1
and t1 == tx
else error "type mismatch" on e1
Num(i) : IntType()
Ifz(e1, e2, e3) : t2
where e1 : IntType() and e2 : t2 and e3 : t3
and t2 == t3
else error "types not compatible" on e3
e@Add(e1, e2) + e@Sub(e1, e2) + e@Mul(e1, e2) : IntType()
where e1 : IntType()
else error "Int type expected" on e
and e2 : IntType()
else error "Int type expected" on e
module semantics
rules
E env |- Var(x) --> v
where env[x] => T(e, env'),
E env' |- e --> v
E env |- Fun(Param(x, t), e) --> C(x, e, env)
E env |- App(e1, e2) --> v
where E env |- e1 --> C(x, e, env'),
E {x |--> T(e2, env), env'} |- e --> v
E env |- Fix(Param(x, t), e) --> v
where
E {x |--> T(Fix(Param(x,t),e),env), env} |- e --> v
E env |- Let(x, t, e1, e2) --> v
where E {x |--> T(e1, env), env} |- e2 --> v
rules
Num(i) --> I(i)
Ifz(e1, e2, e3) --> v
where e1 --> I(i), i = 0, e2 --> v
Ifz(e1, e2, e3) --> v
where e1 --> I(i), i != 0, e3 --> v
Add(e1, e2) --> I(addInt(i, j))
where e1 --> I(i), e2 --> I(j)
Sub(e1, e2) --> I(subInt(i, j))
where e1 --> I(i), e2 --> I(j)
Mul(e1, e2) --> I(mulInt(i, j))
where e1 --> I(i), e2 --> I(j)
module names
namespaces Variable
binding rules
Var(x) :
refers to Variable x
Param(x, t) :
defines Variable x of type t
Fun(p, e) :
scopes Variable
Fix(p, e) :
scopes Variable
Let(x, t, e1, e2) :
defines Variable x of type t in e2
SDF3 NaBL TS DynSem
17. Syntax = Tree Structure
parse(prettyprint(t)) = t
No need to understand how parse works!
(Diagram: a derivation tree pairing each production with its template, e.g. Exp.Fun over "fun Param ( Exp )", Exp.App over "Exp Exp", Exp.Ifz over "ifz Exp then Exp else Exp", Exp.Add over "Exp + Exp", and Exp.Var over "ID".)
22. let fac : int -> int =
fix f : int -> int (
fun n : int (
ifz n then 1 else n * f (n - 1)
)
)
in (fac 3)
is this expression well-typed?
23. let fac : int -> int =
fix f : int -> int (
fun n : int (
ifz n then 1 else n * f (n - 1)
)
)
in (fac 3)
type rules
Var(x) : t
where definition of x : t
Param(x, t) : t
Fun(p, e) : FunType(tp, te)
where p : tp and e : te
App(e1, e2) : tr
where e1 : FunType(tf, tr) and e2 : ta
and tf == ta
else error "type mismatch" on e2
e@Sub(e1, e2) : IntType()
where e1 : IntType()
else error "Int type expected" on e
and e2 : IntType()
else error "Int type expected" on e
TS: Type analysiS
24.–28. (Animation frames repeating the fac example and the TS rules of slide 23, highlighting in turn how each rule applies during type analysis.)
30. Multi-Purpose Type Constraints
Inline type error reports
Type annotations
Interaction with name binding
Semantic code completion
Refactorings
Incremental type analysis
type rules
Var(x) : t
where definition of x : t
Param(x, t) : t
Fun(p, e) : FunType(tp, te)
where p : tp and e : te
App(e1, e2) : tr
where e1 : FunType(tf, tr) and e2 : ta
and tf == ta
else error "type mismatch" on e2
e@Sub(e1, e2) : IntType()
where e1 : IntType()
else error "Int type expected" on e
and e2 : IntType()
else error "Int type expected" on e
32. let fac : int -> int =
fix f : int -> int (
fun n : int (
ifz n then 1 else n * f (n - 1)
)
)
in (fac 3)
what does this variable refer to?
33. module names
namespaces Variable
binding rules
Var(x) :
refers to Variable x
Param(x, t) :
defines Variable x of type t
Fun(p, e) :
scopes Variable
Fix(p, e) :
scopes Variable
Let(x, t, e1, e2) :
defines Variable x of type t in e2
let fac : int -> int =
fix f : int -> int (
fun n : int (
ifz n then 1 else n * f (n - 1)
)
)
in (fac 3)
NaBL: Name Binding Language
34.–40. (Animation frames repeating the NaBL binding rules and the fac example of slide 33, highlighting in turn how each reference resolves to its definition.)
41. Multi-Purpose Name Binding Rules
Incremental name resolution algorithm
Name checks
Reference resolution
Semantic code completion
Refactorings
module names
namespaces Variable
binding rules
Var(x) :
refers to Variable x
Param(x, t) :
defines Variable x of type t
Fun(p, e) :
scopes Variable
Fix(p, e) :
scopes Variable
Let(x, t, e1, e2) :
defines Variable x of type t in e2
43. let fac : int -> int =
fix f : int -> int (
fun n : int (
ifz n then 1 else n * f (n - 1)
)
)
in (fac 3)
What is the value of this expression?
44. let fac : int -> int =
fix f : int -> int (
fun n : int (
ifz n then 1 else n * f (n - 1)
)
)
in (fac 3)
rules
Num(i) --> I(i)
Sub(e1, e2) --> I(subInt(i, j))
where e1 --> I(i), e2 --> I(j)
Ifz(e1, e2, e3) --> v
where e1 --> I(i), i = 0, e2 --> v
Ifz(e1, e2, e3) --> v
where e1 --> I(i), i != 0, e3 --> v
DynSem: Dynamic Semantics
45. (Repeat of the DynSem rules and fac example of slide 44.)
46. rules
E env |- Var(x) --> v
where env[x] => T(e, env'),
E env' |- e --> v
E env |- Fun(Param(x, t), e) --> C(x, e, env)
E env |- App(e1, e2) --> v
where E env |- e1 --> C(x, e, env'),
E {x |--> T(e2, env), env'} |- e --> v
let fac : int -> int =
fix f : int -> int (
fun n : int (
ifz n then 1 else n * f (n - 1)
)
)
in (fac 3)
DynSem: Dynamic Semantics
47.–48. (Animation frames repeating the DynSem rules and fac example of slide 46, stepping through the evaluation.)
49. Implicitly-Modular Structural Operational Semantics (I-MSOS)*
rules
E env |- Var(x) --> v
where env[x] => T(e, env'),
E env' |- e --> v
Add(e1, e2) --> I(addInt(i, j))
where e1 --> I(i),
e2 --> I(j)
explicate
rules
E env |- Var(x) --> v
where env[x] => T(e, env'),
E env' |- e --> v
E env |- Add(e1, e2) --> I(addInt(i, j))
where E env |- e1 --> I(i),
E env |- e2 --> I(j)
* P. D. Mosses. Modular structural operational semantics. JLP, 60-61:195–228, 2004.
M. Churchill, P. D. Mosses, and P. Torrini. Reusable components of semantic specifications. In MODULARITY, April 2014.
50. Interpreter Generation
rules
Ifz(e1, e2, e3) --> v
where e1 --> I(i), i = 0, e2 --> v
Ifz(e1, e2, e3) --> v
where e1 --> I(i), i != 0, e3 --> v
explicate & merge
rules
E env |- Ifz(e1, e2, e3) --> v
where E env |- e1 --> I(i),
[i = 0, E env |- e2 --> v] +
[i != 0, E env |- e3 --> v]
51. Multi-Purpose Dynamic Semantics
Generate interpreter
Verification
Type soundness
Semantics preservation
rules
E env |- Var(x) --> v
where env[x] => T(e, env'),
E env' |- e --> v
E env |- Fun(Param(x, t), e) --> C(x, e, env)
E env |- App(e1, e2) --> v
where E env |- e1 --> C(x, e, env'),
E {x |--> T(e2, env), env'} |- e --> v
53. From PCF in Spoofax …
(The full SDF3 syntax definition, TS type rules, DynSem semantic rules, and NaBL binding rules, identical to slide 15.)
SDF3 NaBL TS DynSem
54. (Fragments of the SDF3 templates, TS type rules, DynSem rules, and NaBL binding rules from slide 15, shown alongside their Coq counterparts.)
Inductive has_type (C: Context) : term -> term -> Prop :=
| VarC_ht ns k0 t x k1 : lookup C x ns k0 t -> has_type C (Co VarC [Id x k0] k1) t
| ParamC_ht x t k0 : has_type C (Co ParamC [x;t] k0) t
| FunC_ht k0 t_p t_e p e k1 : has_type C p t_p -> has_type C e t_e -> has_type C (Co FunC [p;e] k1) (Co FunTypeC [t_p;t_e] k0)
| FixC_ht t_p t_e p e k0 : has_type C p t_p -> has_type C e t_e -> (t_p = t_e) -> has_type C (Co FixC [p;e] k0) t_p
| AppC_ht t_r k0 t_f t_a e1 e2 k1 : has_type C e1 (Co FunTypeC [t_f;t_r] k0) -> has_type C e2 t_a -> (t_f = t_a) -> has_type C (Co AppC [e1;e2] k1) t_r
| LetC_ht t2 t1 x t_x e1 e2 k0 : has_type C e2 t2 -> has_type C e1 t1 -> (t1 = t_x) -> has_type C (Co LetC [x;t_x;e1;e2] k0) t2
| NumC_ht k0 i k1 : has_type C (Co NumC [i] k1) (Co IntTypeC [] k0)
| IfzC_ht k0 t2 t3 e1 e2 e3 k1 : has_type C e1 (Co IntTypeC [] k0) -> has_type C e2 t2 -> has_type C e3 t3 -> (t2 = t3) -> has_type C (Co IfzC [e1;e2;e3] k1) t2
| AddC_ht k2 k0 k1 e1 e2 k3 : has_type C e1 (Co IntTypeC [] k0) -> has_type C e2 (Co IntTypeC [] k1) -> has_type C (Co AddC [e1;e2] k3) (Co IntTypeC [] k2)
| SubC_ht k2 k0 k1 e1 e2 k3 : has_type C e1 (Co IntTypeC [] k0) -> has_type C e2 (Co IntTypeC [] k1) -> has_type C (Co SubC [e1;e2] k3) (Co IntTypeC [] k2)
| MulC_ht k2 k0 k1 e1 e2 k3 : has_type C e1 (Co IntTypeC [] k0) -> has_type C e2 (Co IntTypeC [] k1) -> has_type C (Co MulC [e1;e2] k3) (Co IntTypeC [] k2)
| HT_eq e ty1 ty2 (hty1: has_type C e ty1) (tyeq: term_eq ty1 ty2) : has_type C e ty2
.
Inductive semantics_cbn : Env -> term -> value -> Prop :=
| Var0C_sem env' e env x k0 v : get_env x env e env' -> semantics_cbn env' e v -> semantics_cbn env (Co VarC [x] k0) v
| Fun0C_sem t k1 k0 x e env : semantics_cbn env (Co FunC [Co ParamC [x;t] k1;e] k0) (Clos x e env)
| Fix0C_sem k1 k0 env x t k3 e k2 v : semantics_cbn { x |--> (Co FixC [Co ParamC [x;t] k1;e] k0,env), env } e v -> semantics_cbn env (Co FixC [Co ParamC [x;t] k3;e] k2) v
| App0C_sem env' x e env e1 e2 k0 v : semantics_cbn env e1 (Clos x e env') -> semantics_cbn { x |--> (e2,env), env' } e v -> semantics_cbn env (Co AppC [e1;e2] k0) v
| Let0C_sem env x t e1 e2 k0 v : semantics_cbn { x |--> (e1,env), env } e2 v -> semantics_cbn env (Co LetC [x;t;e1;e2] k0) v
| Num0C_sem env k0 i : semantics_cbn env (Co NumC [i] k0) (Natval i)
| Ifz0C_sem i env e1 e2 e3 k0 v : semantics_cbn env e1 (Natval i) -> (i = 0) -> semantics_cbn env e2 v -> semantics_cbn env (Co IfzC [e1;e2;e3] k0) v
| Ifz1C_sem i env e1 e2 e3 k0 v : semantics_cbn env e1 (Natval i) -> (i <> 0) -> semantics_cbn env e3 v -> semantics_cbn env (Co IfzC [e1;e2;e3] k0) v
| Add0C_sem env e1 e2 k0 i j : semantics_cbn env e1 (Natval i) -> semantics_cbn env e2 (Natval j) -> semantics_cbn env (Co AddC [e1;e2] k0) (plus i j)
| Sub0C_sem env e1 e2 k0 i j : semantics_cbn env e1 (Natval i) -> semantics_cbn env e2 (Natval j) -> semantics_cbn env (Co SubC [e1;e2] k0) (minus i j)
| Mul0C_sem env e1 e2 k0 i j : semantics_cbn env e1 (Natval i) -> semantics_cbn env e2 (Natval j) -> semantics_cbn env (Co MulC [e1;e2] k0) (mult i j)
.
Inductive ID_NS : Set :=
| VariableNS
.
Definition NS :=
ID_NS
.
Inductive scopesR : term -> NS -> Prop :=
| Fun_scopes_Variable p e k0 : scopesR (Co FunC [p;e] k0) VariableNS
| Fix_scopes_Variable p e k0 : scopesR (Co FixC [p;e] k0) VariableNS
.
Definition scopes_R :=
scopesR
.
Inductive definesR : term -> Ident -> NS -> key -> Prop :=
| Param_defines_Variable x k1 t k0 : definesR (Co ParamC [Id x k1;t] k0) x VariableNS k1
.
Definition defines_R :=
definesR
.
Inductive refers_toR : term -> Ident -> NS -> key -> Prop :=
| Var_refers_to_Variable x k1 k0 : refers_toR (Co VarC [Id x k1] k0) x VariableNS k1
.
Definition refers_to_R :=
refers_toR
.
Inductive typed_definesR : term -> Ident -> NS -> term -> key -> Prop :=
| Param_typed_defines_Variable x k1 t k0 : typed_definesR (Co ParamC [Id x k1;t] k0) x VariableNS t k1
.
Definition typed_defines_R :=
typed_definesR
.
Inductive sorts : Set :=
| Param_S
| ID_S
| INT_S
| Exp_S
| Type_S
.
Parameter Ident : Set.
Definition sort :=
sorts
.
Definition Ident_Sort :=
ID_S
.
Inductive Constructors :=
| INTC (n: nat)
| VarC
| FunC
| FixC
| AppC
| LetC
| ParamC
| NumC
| AddC
| SubC
| MulC
| DivC
| IfzC
| IntTypeC
| FunTypeC
.
Definition constructors :=
Constructors
.
Fixpoint
get_sig (x: constructors) : list sort * sort :=
match x with
| INTC n => ([],INT_S)
| VarC => ([ID_S],Exp_S)
| FunC => ([Param_S;Exp_S],Exp_S)
| FixC => ([Param_S;Exp_S],Exp_S)
| AppC => ([Exp_S;Exp_S],Exp_S)
| LetC => ([ID_S;Type_S;Exp_S;Exp_S],Exp_S)
| ParamC => ([ID_S;Type_S],Param_S)
| NumC => ([INT_S],Exp_S)
| AddC => ([Exp_S;Exp_S],Exp_S)
| SubC => ([Exp_S;Exp_S],Exp_S)
| MulC => ([Exp_S;Exp_S],Exp_S)
| DivC => ([Exp_S;Exp_S],Exp_S)
| IfzC => ([Exp_S;Exp_S;Exp_S],Exp_S)
| IntTypeC => ([],Type_S)
| FunTypeC => ([Type_S;Type_S],Type_S)
end.
… to PCF in Coq (+ manual proof of type preservation)
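The manual proof of type preservation mentioned above would be stated over these definitions roughly as follows (a sketch only: env_consistent and value_has_type are hypothetical placeholders for the invariant relating the runtime environment to the typing context and for the value-typing judgment that the actual Coq development would define):

```coq
(* Sketch; env_consistent and value_has_type are assumed, not defined here. *)
Theorem preservation :
  forall C env e v t,
    env_consistent C env ->
    has_type C e t ->
    semantics_cbn env e v ->
    value_has_type C v t.
```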