Lecture introducing the need for static analysis in addition to parsing, the complications caused by names, and an introduction to name resolution with scope graphs
Compiler Construction | Lecture 3 | Syntactic Editor Services
Eelco Visser
The document discusses syntactic editor services for programming languages. It covers formatting specifications that define how abstract syntax trees are mapped to text using templates. It also discusses syntactic completion, which proposes valid completions in an editor by using the syntax definition. The lecture focuses on defining the lexical and context-free syntax of Tiger using SDF, and on generating editor services like formatting and coloring from the syntax definitions.
This document provides an overview of the CS4200 Compiler Construction course at TU Delft. It discusses the course organization, structure, and assessment. The course is split into two parts - CS4200-A which covers concepts and techniques through lectures, papers, and homework assignments, and CS4200-B which involves building a compiler for a subset of Java as a semester-long project. Students will use the Spoofax language workbench to implement their compiler and will submit assignments through a private GitLab repository.
This document provides an overview of parsing in compiler construction. It discusses context-free grammars and how they are used to generate sentences and parse trees through derivations. It also covers ambiguity that can arise from grammars and various grammar transformations used to eliminate ambiguity, including defining associativity and priority. The dangling else problem is presented as an example of an ambiguous grammar.
This document discusses imperative and object-oriented programming languages. It covers basic concepts like state, variables, expressions, assignments, and control flow in imperative languages. It also discusses procedures and functions, including passing parameters, stack frames, and recursion. Finally, it briefly mentions the differences between call by value and call by reference.
Declare Your Language: Syntax Definition
Eelco Visser
This document provides information about syntax definition and the lab organization for a compiler construction course. It includes links to papers on declarative syntax definition and the SDF3 syntax definition formalism. It also provides details about submitting lab assignments through GitHub, including instructions to fork a repository template and submit solutions as pull requests. Grades will be published on WebLab, and early feedback may be provided on pull requests and pushed changes.
This document provides an introduction to compiler construction using LEX and YACC. It discusses the general organization of compilers into frontend, optimizer, and backend stages. It then focuses on the frontend, explaining lexical analysis with LEX and parsing with YACC. Specific topics covered include using LEX to generate a scanner, defining grammars and semantic actions in YACC, and getting LEX and YACC to work together to build a parser.
Flex and Bison are tools for building programs that handle structured input. They were originally designed for writers of compilers and interpreters, and are replacements for the classic Lex and Yacc tools developed at Bell Labs in the 1970s. Flex handles lexical analysis by scanning input and breaking it into tokens, while Bison handles syntax analysis by parsing the tokens according to grammar rules.
This document provides an overview of Lex and Yacc. It describes Lex as a tool that generates scanners to tokenize input streams based on regular expressions. Yacc is described as a tool that generates parsers to analyze tokens based on grammar rules. The document outlines the compilation process for Lex and Yacc, describes components of a Lex source file including regular expressions and transition rules, and provides examples of Lex and Yacc usage.
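For reference, the classic build sequence that such documents outline looks roughly like this (file names are hypothetical, and the support-library flags -ly/-ll vary between systems):

    yacc -d parser.y    # writes y.tab.c plus the token header y.tab.h
    lex scanner.l       # writes lex.yy.c (the scanner typically includes y.tab.h)
    cc y.tab.c lex.yy.c -ly -ll -o parser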
The objectives of the seminar are to shed light on the premises of FP and to give you a basic understanding of the pillars of FP, so that you feel enlightened at the end of the session. When you walk away from the seminar, you should feel an inner light about this new way of programming and an urge and motivation to code like you never did before!
Functional programming should not be confused with imperative (or procedural) programming, nor is it like object-oriented programming. It is something different, though not radically so, since the concepts we will be exploring are familiar programming concepts, just expressed in a different way. The philosophy behind how these concepts are applied to solving problems is also a little different. We shall learn and talk about the essential, fundamental elements of Functional Programming.
This document provides an introduction to Yacc and Lex, which are parser and lexer generator tools. Yacc is used to describe the grammar of a language and automatically generate a parser. The user provides grammar rules in BNF format. Lex is used to generate a lexer (also called a tokenizer) that recognizes tokens in the input based on regular expressions. It returns tokens to the parser. The document gives examples of Yacc and Lex grammar files and explains how they are compiled and used to build a parser for an input language.
This document provides an introduction to using Lex and Yacc to build compilers. Lex is used to generate a lexical analyzer from input patterns, which converts strings to tokens. Yacc generates a parser from a grammar, which analyzes tokens to build a syntax tree. The document describes building a calculator as an example, which can be converted to a compiler by changing the code generation. It also discusses additional Lex and Yacc features like strings, reserved words, debugging, recursion, and attributes.
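A sketch of what such a calculator's Yacc grammar might look like (hypothetical, not the document's exact code; it assumes a Lex scanner that returns NUMBER tokens with their value in yylval):

    /* calc.y: a minimal Yacc calculator sketch */
    %{
    #include <stdio.h>
    int yylex(void);
    int yyparse(void);
    void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); }
    %}
    %token NUMBER
    %left '+' '-'
    %left '*' '/'
    %%
    input : /* empty */
          | input expr '\n'     { printf("= %d\n", $2); }
          ;
    expr  : NUMBER
          | expr '+' expr       { $$ = $1 + $3; }
          | expr '-' expr       { $$ = $1 - $3; }
          | expr '*' expr       { $$ = $1 * $3; }
          | expr '/' expr       { $$ = $1 / $3; }
          | '(' expr ')'        { $$ = $2; }
          ;
    %%
    int main(void) { return yyparse(); }

Turning the calculator into a compiler, as the document notes, amounts to replacing the arithmetic in the actions with code emission.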
Before 1975, writing a compiler was a very time-consuming process. Then Lesk [1975] and Johnson published papers on Lex and Yacc, utilities that greatly simplify compiler writing. We use their modern counterparts, Flex and Bison, to build the code.
The document discusses syntax analysis and creating parsers using Yacc/Bison. It provides examples of writing grammars and semantic actions for simple arithmetic expressions. Key points covered include:
- Yacc/Bison generate LALR(1) parsers from grammar specifications
- A Yacc specification contains declarations, translation rules (productions with semantic actions), and auxiliary C functions
- Productions define the grammar, semantic actions associate values with terminals/nonterminals
- Attributes refer to semantic values, precedence levels resolve ambiguity
- Lex/Flex is used to generate a lexical analyzer from regular expressions to interface with the parser
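To illustrate the attributes point above: semantic values can be typed with Yacc's %union, so that $-references carry the right C type (a minimal sketch with hypothetical names):

    %union { int num; char *id; }
    %token <num> NUMBER
    %token <id>  IDENT
    %type  <num> expr
    %left '+' '-'
    %%
    expr : NUMBER         { $$ = $1; }        /* $1 has type int */
         | expr '+' expr  { $$ = $1 + $3; }   /* %left resolves the ambiguity */
         ;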
This document discusses language processing development tools (LPDTs) that focus on generating the analysis phase of language processors. It specifically describes Lexical Analyzers and the LEX tool. LEX accepts an input specification with regular expressions that define the tokens/terminals of a language. It then generates a lexical analyzer that tokenizes the input stream. The analyzer identifies lexical units, classifies them, and builds descriptor tokens containing a class code and number. It also performs semantic actions like entering identifiers and numbers into symbol tables.
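One plausible C shape for the descriptor tokens described here, assuming a class code plus an index into the corresponding table (names hypothetical):

    enum token_class { TC_IDENT, TC_NUMBER, TC_KEYWORD, TC_OP };
    struct token {
        enum token_class tclass;  /* which lexical class was recognized */
        int number;               /* entry number in that class's table */
    };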
LEX and YACC are software development tools used for lexical analysis and parsing. LEX is a lexical analyzer generator that accepts an input specification defining lexical units and associated semantic actions. It generates a translator containing tables of lexical units and tokens. YACC is a parser generator that accepts a grammar specification and actions for the language being compiled. It produces a bottom-up parser that uses shift-reduce parsing. These tools allow programmers to specify the syntax of a language and generate code to analyze programs in that language.
The document outlines the course content for a C++ introductory course, including introductions to OOP concepts like classes and objects, pointers, functions, inheritance, and polymorphism. It also covers basic C++ programming concepts like I/O, data types, operators, and data structures. The course aims to provide students with fundamental C++ programming skills through explanations and examples of key C++ features.
Lex is a program generator designed for lexical processing of character input streams. It works by translating a table of regular expressions and corresponding program fragments provided by the user into a program. This program then reads an input stream, partitions it into strings matching the given expressions, and executes the associated program fragments in order. Flex is a fast lexical analyzer generator that is an alternative to Lex. It generates scanners that recognize lexical patterns in text based on pairs of regular expressions and C code provided by the user.
Yacc is a general tool for describing the input to computer programs. It generates a LALR parser that analyzes tokens from Lex and creates a syntax tree based on the grammar rules specified. Yacc was originally developed in the 1970s and generates C code for the syntax analyzer from a grammar similar to BNF. It has been used to build compilers for languages like C, Pascal, and APL as well as for other programs like document retrieval systems.
Lexical analysis is the first step in compiler construction and involves breaking a program into individual tokens. A Lex program has three sections: declarations, translation rules, and auxiliary code. The declarations section defines regular expressions and global variables. The translation rules section matches regular expressions to tokens and executes C code actions. When a regular expression is matched in the input stream, its corresponding action is executed to return the appropriate token.
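The three-section layout is easiest to see in a minimal standalone Lex program (a sketch, not the document's own example):

    %{
    /* 1. declarations: C prologue with globals used by the rules */
    #include <stdio.h>
    int lines = 0, chars = 0;
    %}
    %%
    \n    { lines++; chars++; }   /* 2. translation rules: pattern + action */
    .     { chars++; }
    %%
    /* 3. auxiliary code: helper functions and, here, the driver */
    int yywrap(void) { return 1; }
    int main(void) {
        yylex();
        printf("%d lines, %d chars\n", lines, chars);
        return 0;
    }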
The document discusses Lex and Yacc, which are programs that generate lexical analyzers and parsers. Lex uses regular expressions to generate a scanner that tokenizes input into tokens, while Yacc uses context-free grammars to generate a parser that analyzes tokens based on syntax rules. The document provides examples of how Lex and Yacc can be used together, with Lex generating tokens from input that are then passed to a Yacc-generated parser to build a syntax tree based on the grammar.
This document discusses flex and bison tools for lexical analysis and parsing. It covers:
1. How flex returns tokens with values and bison assigns token numbers starting from 258.
2. The basics of writing flex rules and scanners, and bison grammars, rules, and parsers.
3. An example bison calculator grammar and combining the flex scanner and bison parser.
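Point 1 in code form, showing the two sides of the scanner/parser interface (a sketch; the exact first token number depends on the bison version):

    /* parser.y (bison): NUMBER gets a generated token code, e.g. 258 */
    %token NUMBER

    /* scanner.l (flex): return the code, pass the value through yylval */
    [0-9]+   { yylval = atoi(yytext); return NUMBER; }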
This document discusses context-free grammars and parsing. It defines context-free grammars and how they are used to specify the syntactic structure of programming languages. Key points include:
- Context-free grammars use recursive rules and regular expressions to define a language's syntax.
- Parsing involves using a context-free grammar to derive a syntax tree from a sequence of tokens.
- Derivations show how strings of tokens are generated from a grammar's start symbol through replacements.
- The language defined by a grammar consists of all strings that can be derived from the start symbol.
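As a worked example of these points, take the standard expression grammar (illustrative, not necessarily the document's own):

    E -> E + T | T
    T -> T * F | F
    F -> ( E ) | id

A leftmost derivation of the token string id + id * id replaces the leftmost nonterminal at every step:

    E => E + T => T + T => F + T => id + T => id + T * F
      => id + F * F => id + id * F => id + id * id

Every string derivable from the start symbol E in this way belongs to the grammar's language.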
This document contains information about Lex, Yacc, Flex, and Bison. It provides definitions and descriptions of each tool. Lex is a lexical analyzer generator that reads input specifying a lexical analyzer and outputs C code implementing a lexer. Yacc is a parser generator that takes a grammar description and snippets of C code as input and outputs a shift-reduce parser in C. Flex is a tool similar to Lex for generating scanners based on regular expressions. Bison is compatible with Yacc and can be used to develop language parsers.
This document compares the programming languages Java and C#. It discusses that C# was developed with the .NET framework in mind and is intended to be the primary language for .NET development. It outlines some subtle syntactic differences between the languages, like how print statements and inheritance are defined. It also examines some concepts that are modified in C# compared to Java, such as polymorphism and operator overloading. Finally, it presents some new concepts in C# that do not exist in Java, including enums, foreach loops, and properties.
The document provides an overview of C# and .NET concepts including:
- C# versions from 1.0 to 5.0 and new features introduced in each version such as generics, LINQ, lambda expressions etc.
- .NET Framework concepts such as Common Language Runtime (CLR), Just-In-Time (JIT) compilation, garbage collection.
- Value types vs reference types, stack vs heap memory.
- Language Integrated Query (LINQ) and expression trees.
- Various C# language concepts are demonstrated through code examples.
Programming is hard. Programming correct C and C++ is particularly hard. Indeed, both in C and certainly in C++, it is uncommon to see a screenful containing only well-defined and conforming code. Why do professional programmers write code like this? Because most programmers do not have a deep understanding of the language they are using. While they sometimes know that certain things are undefined or unspecified, they often do not know why it is so. In these slides we will study small code snippets in C and C++, and use them to discuss the fundamental building blocks, limitations and underlying design philosophies of these wonderful but dangerous programming languages.
This content has a CC license. Feel free to use it for whatever you want. You may download the original PDF file from: http://www.pvv.org/~oma/DeepC_slides_oct2012.pdf
The document discusses combinator libraries and domain-specific languages (DSLs) in F#. It explains that combinator libraries allow defining DSLs that embed in a host language. It provides examples of using F# to define DSLs for expressions, HTML, and forms. The document also discusses how F# features like algebraic data types and quotations enable easily defining and compiling DSLs to other languages.
Illustrated Code: Building Software in a Literate Way
Andreas Zeller, CISPA Helmholtz Center for Information Security
Notebooks – rich, interactive documents that join together code, documentation, and outputs – are all the rage with data scientists. But can they be used for actual software development? In this talk, I share experiences from authoring two interactive textbooks – fuzzingbook.org and debuggingbook.org – and show how notebooks not only serve for exploring and explaining code and data, but also how they can be used as software modules, integrating self-checking documentation, tests, and tutorials all in one place. The resulting software focuses on the essential, is well-documented, highly maintainable, easily extensible, and has a much higher shelf life than the "duct tape and wire" prototypes frequently found in research and beyond.
#1 The diversity of terminology shows the large spectrum of shapes DSLs can take.
#2 As syntax and development environment matter, DSLs should allow the user to choose the right shape according to their usage or task.
#3 A metamorphic DSL vision is proposed where DSLs can adapt to the most appropriate shape, including transitioning between shapes based on usage or task.
Compiler Construction | Lecture 17 | Beyond Compiler Construction
Eelco Visser
Compiler construction techniques are applied beyond general-purpose languages through domain-specific languages (DSLs). The document discusses several DSLs developed using Spoofax including:
- WebDSL for web programming with sub-languages for entities, queries, templates, and access control.
- IceDust for modeling information systems with derived values computed on-demand, incrementally, or eventually consistently.
- PixieDust for client-side web programming with views as derived values updated incrementally.
- PIE for defining software build pipelines as tasks with dynamic dependencies computed incrementally.
The document also outlines several research challenges in compiler construction like high-level declarative language definition, verification of
An introduction to SKOS (Simple Knowledge Organization System), a W3C recommendation/standard for interoperability of controlled vocabularies. Presented at Taxonomy Boot Camp London 2018
This document provides an overview of Elasticsearch including:
- Elasticsearch is an easily scalable database server with a RESTful HTTP/JSON interface. It is based on Lucene.
- Features include being schema-free, real-time, easy to extend with plugins, automatic peer discovery in clusters, failover and replication, and community support.
- Terminology includes index, type, document, and field which make up the data structure inside Elasticsearch. Searches can be performed across multiple indices.
- Elasticsearch works using full-text searching via inverted indexing and analysis. Analysis extracts terms from text through techniques like removing stopwords, lowercase conversion, and stemming.
- Elasticsearch can be accessed in a RESTful manner
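A worked micro-example of the inverted-index idea sketched above, using two hypothetical documents:

    doc1: "The QUICK brown fox"        doc2: "The lazy dog"

    after analysis (lowercasing, removing the stopword "the"):
      quick -> {doc1}
      brown -> {doc1}
      fox   -> {doc1}
      lazy  -> {doc2}
      dog   -> {doc2}

A full-text query for "Fox" is analyzed the same way (to "fox") and answered by a direct lookup in the index rather than a scan over the documents.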
This document provides an overview of objects, classes, messaging, and the Objective-C runtime system. It discusses key concepts including:
- Objects associate instance variables (data) with methods (operations). Objects encapsulate their data and methods.
- The id type can hold any object regardless of class. Objects are dynamically typed at runtime based on their class.
- Messages in the form [receiver message] are used to send objects messages to invoke their methods. Method names in messages are called selectors.
- Polymorphism and dynamic binding allow objects to respond differently to the same message depending on their class. The runtime looks up the appropriate method implementation at runtime based on the object's class.
The document discusses XML and related technologies like XML databases and MPEG-7. It defines XML and describes how XML documents can be stored and queried using native XML databases. It also explains the key components and applications of the MPEG-7 standard for describing multimedia content.
Decoding and developing the online finding aid
kgerber
Workshop for the Library Technology Conference on Encoded Archival Description, and the mark-up languages involved in its use including HTML, XML, and XSLT.
This document provides an overview of key concepts related to programming languages. It discusses the definition of a programming language and the history and evolution of popular languages from 1951 to present. It covers programming language paradigms like procedural, object-oriented, functional, and logic-based languages. It also discusses factors that influence language design like efficiency, regularity, and issues in language translation. Finally, it summarizes the structure and operation of computers and how different programming models map to underlying computer architectures.
Language Server Protocol - Why the Hype?
mikaelbarbero
The Language Server Protocol, developed by Microsoft for Visual Studio Code, is a language- and IDE-agnostic protocol which clearly separates language semantics from UI presentation. Language developers can implement the protocol and benefit from immediate support in all IDEs, while IDE developers who implement the protocol get automatic support for all these languages without having to write any language-specific code. This session will let you learn more about the innards of the LSP. We will also give an overview of the current implementations, both in Eclipse and beyond.
Internal DSLs in Scala allow creating domain-specific languages within Scala by leveraging features like lightweight syntax, implicit conversions, by-name parameters, and the advanced type system. The presentation provides examples of internal DSLs in Scala for task automation (Baysick), date/time manipulation (Time DSL), and message processing (Spring Integration DSL). It also recommends books and links for further reading on implementing internal DSLs in Scala.
Haystack 2018 - Algorithmic Extraction of Keywords, Concepts, and Vocabularies
Max Irwin
Presentation as given to the Haystack Conference, which outlines research and techniques for automatic extraction of keywords, concepts, and vocabularies from text corpora.
This document provides an introduction and overview of the Scala programming language. It begins with a brief history of Scala's creation at EPFL by Martin Odersky in 2001. It then discusses some of the key reasons for using Scala, including its ability to scale with demands through a combination of object-oriented and functional programming concepts. The document outlines some of Scala's main features, such as its static typing, interoperability with Java, and support for modularity. It also provides examples of Scala's data types, operations, control structures, functions, and pattern matching capabilities. Overall, the summary provides a high-level introduction to Scala's origins, design philosophy, and programming paradigms.
Runtime Environment Of .Net - Divya Rathore
Esha Yadav
The document discusses the .NET Framework and how it aims to unify programming models, simplify development, and introduce a common language runtime. It covers the design goals of the CLR including making development and deployment easier while providing a robust and secure execution environment. Examples are provided of how tasks like splitting a string that were complex in SQL can be simplified with the .NET Framework.
This document discusses programming languages and their key concepts. It defines a programming language as a set of rules that tells a computer what operations to perform. It describes the syntax, semantics, and pragmatics of languages. It also discusses different language paradigms like imperative, functional, object-oriented, and rule-based languages. Finally, it outlines criteria for evaluating languages, including readability, writability, reliability, and cost.
STATICMOCK: A Mock Object Framework for Compiled Languages
ijseajournal
Mock object frameworks are very useful for creating unit tests. However, purely compiled languages lack robust frameworks for mock objects. The frameworks that do exist rely on inheritance, compiler directives, or linker manipulation. Such techniques limit the applicability of the existing frameworks, especially when dealing with legacy code.
We present a tool, StaticMock, for creating mock objects in compiled languages. This tool uses source-to-source compilation together with Aspect Oriented Programming to deliver a unique solution that does not rely on the previous, commonly used techniques. We evaluate the compile-time and run-time overhead incurred by this tool, and we demonstrate the effectiveness of the tool by showing that it can be applied to new and existing code.
This document provides an overview of the "Programming in Scala" course, including:
- The course is divided into lectures introducing Scala features and an individual project to develop a non-trivial Scala application.
- It covers programming paradigms like object-oriented programming, functional programming, and compares imperative vs functional styles.
- Scala is introduced as a multi-paradigm language that blends OOP and FP on the JVM, making it appealing for its concise syntax and ability to seamlessly use Java libraries.
- The first Scala application example shows the object keyword, Unit return type, and string interpolation.
- Primitive types in Scala include integral,
This document discusses various programming language trends including functional programming languages like Haskell, purely object-oriented languages, and Scala. It also covers database trends like NoSQL databases Cassandra and MongoDB. Programming tools like Docker and Vagrant are mentioned. The document discusses paradigms like static vs dynamic typing and strong vs weak typing. It provides examples and resources for languages including Haskell, Erlang, Scala, and databases like Cassandra.
This document discusses syntactic editor services including formatting, syntax coloring, and syntactic completion. It describes how syntactic completion can be provided generically based on a syntax definition. The document also discusses how context-free grammars can be extended with templates to specify formatting layout when pretty-printing abstract syntax trees to text. Templates are used to insert whitespace, line breaks, and indentation to produce readable output.
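A rough sketch of what such a template production looks like in SDF3-like notation (treat the exact syntax as approximate): the literal text, line breaks, and indentation inside the template specify the layout the pretty-printer emits, while the <...> holes are filled with the pretty-printed subtrees.

    context-free syntax
      Stat.While = <
        while <Exp> do
          <Stat>
      >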
This document provides an overview of the Lecture 2 on Declarative Syntax Definition for the CS4200 Compiler Construction course. The lecture covers the specification of syntax definition from which parsers can be derived, the perspective on declarative syntax definition using SDF, and reading material on the SDF3 syntax definition formalism and papers on testing syntax definitions and declarative syntax. It also discusses what syntax is, both in linguistics and programming languages, and how programs can be described in terms of syntactic categories and language constructs. An example Tiger program for solving the n-queens problem is presented to illustrate syntactic categories in Tiger.
A Direct Semantics of Declarative Disambiguation Rules
Eelco Visser
This document discusses research into providing a direct semantics for declarative disambiguation of expression grammars. It aims to define what disambiguation rules mean, ensure they are safe and complete, and provide an effective implementation strategy. The document outlines key research questions around the meaning, safety, completeness and coverage of disambiguation rules. It also presents contributions around using subtree exclusion patterns to define safe and complete disambiguation for classes of expression grammars, and implementing this in SDF3.
Declarative Type System Specification with Statix
Eelco Visser
In this talk I present the design of Statix, a new constraint-based language for the executable specification of type systems. Statix specifications consist of predicates that define the well-formedness of language constructs in terms of built-in and user-defined constraints. Statix has a declarative semantics that defines whether a model satisfies a constraint. The operational semantics of Statix is defined as a sound constraint solving algorithm that searches for a solution for a constraint. The aim of the design is that Statix users can ignore the execution order of constraint solving and think in terms of the declarative semantics.
A distinctive feature of Statix is its use of scope graphs, a language parametric framework for the representation and querying of the name binding facts in programs. Since types depend on name resolution and name resolution may depend on types, it is typically not possible to construct the entire scope graph of a program before type constraint resolution. In (algorithmic) type system specifications this leads to explicit staging of the construction and querying of the type environment (class table, symbol table). Statix automatically stages the construction of the scope graph of a program such that queries are never executed when their answers may be affected by future scope graph extension. In the talk, I will explain the design of Statix by means of examples.
https://eelcovisser.org/post/309/declarative-type-system-specification-with-statix
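To give a concrete, if drastically simplified, picture of a scope graph in ordinary code: scopes connected by lexical parent edges, declarations attached to scopes, and resolution as a search along edges. This is a sketch only; real scope graphs also have imports, associated scopes, and the staged query discipline described above. All names are hypothetical.

    #include <stdio.h>
    #include <string.h>

    struct scope {
        struct scope *parent;    /* lexical enclosing scope (an edge) */
        const char   *decls[8];  /* declarations visible in this scope */
        int           ndecls;
    };

    /* Resolve a reference by walking edges outward to a matching declaration. */
    const struct scope *resolve(const struct scope *s, const char *name) {
        for (; s != NULL; s = s->parent)
            for (int i = 0; i < s->ndecls; i++)
                if (strcmp(s->decls[i], name) == 0)
                    return s;    /* found the declaring scope */
        return NULL;             /* unresolved reference */
    }

    int main(void) {
        struct scope global = { NULL, { "print" }, 1 };
        struct scope local  = { &global, { "x" }, 1 };
        printf("x declared in %s scope\n",
               resolve(&local, "x") == &local ? "local" : "global");
        printf("print found: %s\n", resolve(&local, "print") ? "yes" : "no");
        return 0;
    }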
Domain Specific Languages for Parallel Graph AnalytiX (PGX)
Eelco Visser
This document discusses domain-specific languages (DSLs) for parallel graph analytics using PGX. It describes how DSLs allow users to implement graph algorithms and queries using high-level languages that are then compiled and optimized to run efficiently on PGX. Examples of DSL optimizations like multi-source breadth-first search are provided. The document also outlines the extensible compiler architecture used for DSLs, which can generate code for different backends like shared memory or distributed memory.
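As a taste of the multi-source idea in plain C: the simplest variant seeds one traversal with several sources at once and computes each vertex's distance to its nearest source. This is a sketch only; the actual PGX optimization batches many independent BFS runs, which is more involved. The graph below is hypothetical.

    #include <stdio.h>

    #define N 6
    /* adjacency matrix of a small undirected path graph 0-1-2-3-4-5 */
    static const int adj[N][N] = {
        {0,1,0,0,0,0}, {1,0,1,0,0,0}, {0,1,0,1,0,0},
        {0,0,1,0,1,0}, {0,0,0,1,0,1}, {0,0,0,0,1,0},
    };

    int main(void) {
        int dist[N], queue[N], head = 0, tail = 0;
        for (int v = 0; v < N; v++) dist[v] = -1;   /* -1 = unvisited */
        /* seed the frontier with several sources at once */
        int sources[] = {0, 5};
        for (int i = 0; i < 2; i++) { dist[sources[i]] = 0; queue[tail++] = sources[i]; }
        while (head < tail) {
            int u = queue[head++];
            for (int v = 0; v < N; v++)
                if (adj[u][v] && dist[v] < 0) { dist[v] = dist[u] + 1; queue[tail++] = v; }
        }
        for (int v = 0; v < N; v++)
            printf("distance of vertex %d to nearest source: %d\n", v, dist[v]);
        return 0;
    }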
Compiler Construction | Lecture 15 | Memory Management
Eelco Visser
The document discusses different memory management techniques:
1. Reference counting counts the number of pointers to each record and deallocates records with a count of 0.
2. Mark and sweep marks all reachable records from program roots and sweeps unmarked records, adding them to a free list.
3. Copying collection copies reachable records to a "to" space, allowing the original "from" space to be freed without fragmentation.
4. Generational collection focuses collection on younger object generations more frequently to improve efficiency.
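A minimal C sketch of the first technique, reference counting (hypothetical names; as the classic caveat goes, pure reference counting cannot reclaim cycles):

    #include <stdlib.h>
    #include <stdio.h>

    /* A record with an embedded reference count. */
    struct record {
        int refcount;
        int payload;
    };

    struct record *retain(struct record *r) { r->refcount++; return r; }

    void release(struct record *r) {
        if (--r->refcount == 0) {   /* last pointer dropped: deallocate */
            printf("freeing record %d\n", r->payload);
            free(r);
        }
    }

    int main(void) {
        struct record *r = malloc(sizeof *r);
        r->refcount = 1; r->payload = 42;   /* one owning pointer */
        struct record *alias = retain(r);   /* second pointer: count = 2 */
        release(alias);                     /* count back to 1 */
        release(r);                         /* count 0: freed here */
        return 0;
    }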
Compiler Construction | Lecture 14 | Interpreters
Eelco Visser
This document summarizes a lecture on interpreters for programming languages. It discusses how operational semantics can be used to define the meaning of a program through state transitions in an interpreter. It provides examples of defining the semantics of a simple language using DynSem, a domain-specific language for specifying operational semantics. DynSem specifications can be compiled to interpreters that execute programs in the defined language.
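To give the flavor of such a specification, here is a generic big-step rule for addition, written in standard inference-rule notation rather than actual DynSem syntax:

    E |- e1 --> IntV(i)    E |- e2 --> IntV(j)
    ------------------------------------------
    E |- Plus(e1, e2) --> IntV(i + j)

Read: under environment E, if e1 evaluates to the integer value i and e2 to j, then Plus(e1, e2) evaluates to i + j. An interpreter derived from such rules performs exactly these state transitions.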
Compiler Construction | Lecture 13 | Code Generation
Eelco Visser
The document discusses code generation and optimization techniques. It describes compilation schemas that define how language constructs are translated to target code patterns, and it covers ensuring the correctness of generated code through type checking and verification of static constraints on the target format. It also provides example compilation schemas for Tiger language constructs like arithmetic expressions and control flow, and discusses generating nested functions.
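The general shape of a compilation schema, in the usual semantic-bracket notation (a generic stack-machine sketch with made-up instruction names, not the document's exact rules): [[ . ]] maps a construct to a target-code pattern assembled from the translations of its subterms.

    [[ e1 + e2 ]]              = [[ e1 ]] ; [[ e2 ]] ; add
    [[ if e then s1 else s2 ]] = [[ e ]] ; jump-if-false Lelse ;
                                 [[ s1 ]] ; jump Lend ;
                                 Lelse: [[ s2 ]] ; Lend: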
Compiler Construction | Lecture 12 | Virtual Machines
Eelco Visser
The document discusses the architecture of the Java Virtual Machine (JVM). It describes how the JVM uses threads, a stack, heap, and method area. It explains JVM control flow through bytecode instructions like goto, and how the operand stack is used to perform operations and hold method arguments and return values.
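For example, an instance method int add(int a, int b) { return a + b; } compiles to a body that works entirely through the operand stack (javap-style listing; slot 0 holds this, so the arguments occupy slots 1 and 2):

    iload_1    // push argument a onto the operand stack
    iload_2    // push argument b
    iadd       // pop both operands, push their sum
    ireturn    // pop the sum and return it to the caller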
Compiler Construction | Lecture 9 | Constraint Resolution
Eelco Visser
This document provides an overview of constraint resolution in the context of a compiler construction lecture. It discusses unification, which is the basis for many type inference and constraint solving approaches. It also describes separating type checking into constraint generation and constraint solving, and introduces a constraint language that integrates name resolution into constraint resolution through scope graph constraints. Finally, it discusses papers on further developments with this approach, including addressing expressiveness and staging issues in type systems through the Statix DSL for defining type systems.
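A worked micro-example of the unification at the heart of this approach: solving the equation f(X, g(Y)) = f(a, g(b)) matches the terms structurally and yields the substitution X = a, Y = b, whereas f(X, X) = f(a, b) has no unifier, since X cannot be bound to both a and b. Constraint-based type checkers apply exactly this mechanism to type terms such as T1 -> T2.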
Compiler Construction | Lecture 8 | Type Constraints
Eelco Visser
This lecture covers type checking with constraints. It introduces the NaBL2 meta-language for writing type specifications as constraint generators that map a program to constraints. The constraints are then solved to determine if a program is well-typed. NaBL2 supports defining name binding and type structures through scope graphs and constraints over names, types, and scopes. Examples show type checking patterns in NaBL2 including variables, functions, records, and name spaces.
Compiler Construction | Lecture 7 | Type Checking
Eelco Visser
This document summarizes a lecture on type checking. It discusses using constraints to separate the language-specific type checking rules from the language-independent solving algorithm. Constraint-based type checking collects constraints as it traverses the AST, then solves the constraints in any order. This allows type information to be learned gradually and avoids issues with computation order.
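A micro-example of the collect-then-solve split: for a definition f(x) = x + 1, traversing the AST might emit the constraints T_f = T_x -> T_b, T_x = int, and T_b = int, where T_f, T_x, and T_b are fresh type variables (names hypothetical). Solving these in any order yields T_f = int -> int; no evaluation order had to be fixed during the traversal.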
Compiler Construction | Lecture 4 | Parsing
Eelco Visser
This lecture covers parsing and turning syntax definitions into parsers. It discusses context-free grammars and derivations. Grammars can be ambiguous, allowing multiple parse trees for a sentence. Grammar transformations like disambiguation, eliminating left recursion, and left factoring can address issues while preserving the language. Associativity and priority can be defined through transformations. The reading material covers parsing schemata, classical compiler textbooks, and papers on disambiguation filters and parsing algorithms.
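As a worked example of one such transformation, the standard elimination of left recursion rewrites the left-recursive rules

    E -> E + T | T

into

    E  -> T E'
    E' -> + T E' | ε

which define the same language but can be parsed top-down, since the parser no longer expands E without consuming input.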
Compiler Construction | Lecture 2 | Declarative Syntax Definition
Eelco Visser
The document describes a lecture on declarative syntax definition. It discusses the perspective on declarative syntax definition explained in an Onward! 2010 essay. It also mentions an OOPSLA 2011 paper that introduced the Spoofax Testing (SPT) language used in the section on testing syntax definitions. Finally, it provides a link to documentation on the SDF3 syntax definition formalism.
Compiler Construction | Lecture 1 | What is a compiler?
Eelco Visser
This document provides an overview of the CS4200 Compiler Construction course at TU Delft. It discusses the organization of the course into two parts: CS4200-A which covers compiler concepts and techniques through lectures, papers, and homework assignments; and CS4200-B which involves building a compiler for a subset of Java as a semester-long project. Key topics covered include the components of a compiler like parsing, type checking, optimization, and code generation; intermediate representations; and different types of compilers.
Declare Your Language: Virtual Machines & Code GenerationEelco Visser
The document summarizes virtual machines and code generation. It discusses how high-level programming languages are abstracted from low-level machine details through virtual machines. The Java Virtual Machine architecture and bytecode instructions are described, including its stack-based design, threads, heap, and method area. Code generation mechanics like string operations are also covered.
Manyata Tech Park Bangalore_ Infrastructure, Facilities and Morenarinav14
Located in the bustling city of Bangalore, Manyata Tech Park stands as one of India’s largest and most prominent tech parks, playing a pivotal role in shaping the city’s reputation as the Silicon Valley of India. Established to cater to the burgeoning IT and technology sectors
3. Reading Material
!3
The following papers add background, conceptual exposition,
and examples to the material from the slides. Some notation and
technical details have been changed; check the documentation.
4. !4
Type checkers are algorithms that check names and types in programs.
This paper introduces the NaBL Name Binding Language, which
supports the declarative definition of the name binding and scope rules
of programming languages through the definition of scopes,
declarations, references, and imports associated with language constructs.
http://dx.doi.org/10.1007/978-3-642-36089-3_18
SLE 2012
Background
5. !5
This paper introduces scope graphs as a language-independent
representation for the binding information in programs.
http://dx.doi.org/10.1007/978-3-662-46669-8_9
ESOP 2015
Best EAPLS paper at ETAPS 2015
6. !6
Separating type checking into constraint generation and
constraint solving provides a more declarative definition of type
checkers. This paper introduces a constraint language
integrating name resolution into constraint resolution through
scope graph constraints.
This is the basis for the design of the NaBL2 static semantics
specification language.
https://doi.org/10.1145/2847538.2847543
PEPM 2016
7. !7
Documentation for NaBL2 at the metaborg.org website.
http://www.metaborg.org/en/latest/source/langdev/meta/lang/nabl2/index.html
8. !8
This paper describes the next generation of the approach.
Addresses (previously) open issues in the expressiveness of scope graphs
for type systems:
- Structural types
- Generic types
Addresses an open issue with the staging of information in type systems.
Introduces the Statix DSL for the definition of type systems.
A prototype of Statix is available in Spoofax HEAD, but is not yet ready
for use in the project.
The future
OOPSLA 2018
To appear
10. Dynamically Typed vs Statically Typed
- Dynamic: type checking at run-time
- Static: type checking at compile-time (before run-time)
What does it mean to type check?
- Type safety: guarantee absence of run-time type errors
Why static type checking?
- Avoid overhead of run-time type checking
- Fail faster: find (type) errors at compile time
- Find all (type) errors: some errors may not be triggered by testing
- But: not all errors can be found statically (e.g. array bounds checking)
!10
Why Type Checking? Some Discussion Points
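As a minimal illustration of the trade-off (a Python sketch; the function names are made up for this example): the untyped call only fails when it actually runs, while a static checker such as mypy can flag the annotated version without running it.

def double(x):
    return x + x

# Dynamically typed: this only fails when executed.
#   double(None)   ->  TypeError at run time

# With static annotations, a checker rejects the bad call before the
# program runs, even if no test ever exercises it:
def double_typed(x: int) -> int:
    return x + x

#   double_typed(None)   ->  error reported at check time, not run time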
15. A Generic Syntax of XML Documents
!15
context-free start-symbols Document
sorts Tag Word
lexical syntax
Tag = [a-zA-Z][a-zA-Z0-9]*
Word = ~[<>]+
lexical restrictions
Word -/- ~[<>]
sorts Document Elem
context-free syntax
Document.Doc = Elem
Elem.Node = [
<[Tag]>
[Elem*]
</[Tag]>
]
Elem.Text = Word+ {longest-match}
<catalogue>
<book>
<title>Modern Compiler Implementation in ML</title>
<author>Andrew Appel</author>
<publisher>Cambridge</publisher>
</book>
<book>
<title>Parsing Schemata</title>
<author>Klaas Sikkel</author>
<publisher>Springer</publisher>
</book>
</catalogue>
What is the problem with this approach?
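To make the checks on the following slides concrete, here is a hypothetical Python rendering of the AST these constructors describe, for a one-book catalogue. The tuple encoding is an assumption of this sketch, not Spoofax's term representation. Note that Node records both the opening and the closing tag, which is what makes the tag-consistency check below possible.

# Tuple encoding of constructors Doc, Node, Text (hypothetical):
doc = ("Doc",
  ("Node", "catalogue", [
    ("Node", "book", [
      ("Node", "title",     [("Text", ["Parsing", "Schemata"])],    "title"),
      ("Node", "author",    [("Text", ["Klaas", "Sikkel"])],        "author"),
      ("Node", "publisher", [("Text", ["Springer"])],               "publisher"),
    ], "book"),
  ], "catalogue"))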
16. Generic Syntax is Too Liberal!
!16
context-free start-symbols Document
sorts Tag Word
lexical syntax
Tag = [a-zA-Z][a-zA-Z0-9]*
Word = ~[<>]+
lexical restrictions
Word -/- ~[<>]
sorts Document Elem
context-free syntax
Document.Doc = Elem
Elem.Node = [
<[Tag]>
[Elem*]
</[Tag]>
]
Elem.Text = Word+ {longest-match}
<catalogue>
<book>
<title>Modern Compiler Implementation in ML</title>
<author>Andrew Appel</author>
<publisher>Cambridge</publisher>
<year></year>
</book>
<language>
<name>SDF3</name>
<purpose>Syntax Definition</purpose>
<implementedin>SDF3</implementedin>
</language>
<book>
//<title>Parsing Schemata</title>
<author>Klaas Sikkel</author>
<publisher>Springer</publisher>
</book>
<book>
</koob>
</catalogue>
Year is not a valid element of book
Only books in catalogue
Book should have a title
Closing tag is not consistent with starting tag
17. Context-free grammar is … context-free!
- Cannot express agreement (e.g. between opening and closing tags) or alignment
Languages have context-sensitive properties
How can we have our cake and eat it too?
- Generic (liberal) syntax
- Forbid programs/documents that are not well-formed
!17
Context-Sensitive Properties
19. Generic (liberal) syntax
- Allow more programs/documents
Check properties on AST
- Reject programs/documents that are not well-formed
!19
Approach: Checking Context-Sensitive Properties
21. Check Violations using Constraint-Error Rules
!21
<catalogue>
<book>
<title>Modern Compiler Implementation in ML</title>
<author>Andrew Appel</author>
<publisher>Cambridge</publisher>
<year></year>
</book>
<language>
<name>SDF3</name>
<purpose>Syntax Definition</purpose>
<implementedin>SDF3</implementedin>
</language>
<book>
//<title>Parsing Schemata</title>
<author>Klaas Sikkel</author>
<publisher>Springer</publisher>
</book>
<book>
</koob>
</catalogue>
rules // Analysis
editor-analyze:
(ast, path, project-path) -> (ast', error*, warning*, info*)
with
ast' := <id> ast
; error* := <collect-all(constraint-error)> ast'
; warning* := <collect-all(constraint-warning)> ast'
; info* := <collect-all(constraint-info)> ast'
Find all sub-terms that are not consistent
with the context-sensitive rules
collect-all: type-unifying
generic traversal
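A minimal Python sketch of this analysis scheme, assuming the tuple AST encoding sketched earlier: collect-all visits every sub-term and gathers the results of a partial check, modelled here as a function that returns None when it does not apply.

def subterms(t):
    yield t
    if isinstance(t, tuple):            # constructor application
        for child in t[1:]:
            yield from subterms(child)
    elif isinstance(t, list):           # list of terms
        for child in t:
            yield from subterms(child)

def collect_all(check, ast):
    # type-unifying generic traversal: apply a partial check everywhere
    return [r for t in subterms(ast) if (r := check(t)) is not None]

def editor_analyze(ast, checks):
    return [err for check in checks for err in collect_all(check, ast)]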
22. Check Violations using Constraint-Error Rules
!22
<catalogue>
<book>
<title>Modern Compiler Implementation in ML</title>
<author>Andrew Appel</author>
<publisher>Cambridge</publisher>
<year></year>
</book>
<language>
<name>SDF3</name>
<purpose>Syntax Definition</purpose>
<implementedin>SDF3</implementedin>
</language>
<book>
//<title>Parsing Schemata</title>
<author>Klaas Sikkel</author>
<publisher>Springer</publisher>
</book>
<book>
</koob>
</catalogue>
Closing tag does not match starting tag
rules
constraint-error :
Node(tag1, elems, tag2) -> (tag2, $[Closing tag does not match starting tag])
where <not(eq)>(tag1, tag2)
The rule yields a pair of an origin (tag2) and an error message.
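The same rule as a partial function for the collect_all sketch above; the Python names are hypothetical.

def constraint_error_tag(t):
    if isinstance(t, tuple) and t[0] == "Node":
        _, tag1, _elems, tag2 = t
        if tag1 != tag2:                 # opening and closing tag disagree
            return (tag2, "Closing tag does not match starting tag")
    return None

# editor_analyze(doc, [constraint_error_tag]) is [] for the well-formed
# catalogue, and flags the <book>...</koob> element in the broken one.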
23. Check Violations using Constraint-Error Rules
!23
<catalogue>
<book>
<title>Modern Compiler Implementation in ML</title>
<author>Andrew Appel</author>
<publisher>Cambridge</publisher>
<year></year>
</book>
<language>
<name>SDF3</name>
<purpose>Syntax Definition</purpose>
<implementedin>SDF3</implementedin>
</language>
<book>
//<title>Parsing Schemata</title>
<author>Klaas Sikkel</author>
<publisher>Springer</publisher>
</book>
<book>
</koob>
</catalogue>
Book should have a title
rules
constraint-error :
n@Node(tag@"book", elems, _) -> (tag, $[Book should have title])
where <not(has(|"title"))> elems
has(|tag) = fetch(?Node(tag, _, _))
Containment checks
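The containment check in the same style; the rules on the next two slides (only books in a catalogue, only title/author/publisher inside a book) translate analogously.

def has_child(elems, tag):
    # does the element list contain a Node with the given tag?
    return any(isinstance(e, tuple) and e[0] == "Node" and e[1] == tag
               for e in elems)

def constraint_error_title(t):
    if isinstance(t, tuple) and t[0] == "Node" and t[1] == "book":
        if not has_child(t[2], "title"):
            return (t[1], "Book should have a title")
    return None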
24. Check Violations using Constraint-Error Rules
!24
<catalogue>
<book>
<title>Modern Compiler Implementation in ML</title>
<author>Andrew Appel</author>
<publisher>Cambridge</publisher>
<year></year>
</book>
<language>
<name>SDF3</name>
<purpose>Syntax Definition</purpose>
<implementedin>SDF3</implementedin>
</language>
<book>
//<title>Parsing Schemata</title>
<author>Klaas Sikkel</author>
<publisher>Springer</publisher>
</book>
<book>
</koob>
</catalogue>
Catalogue can only have books
rules
constraint-error :
Node("catalogue", elems, _) -> error
where <filter(not-a-book)> elems => [error | _]
not-a-book :
Node(tag, _, _) -> (tag, $[Catalogue can only have books])
where <not(eq)> (tag, "book")
Containment checks
25. Check Violations using Constraint-Error Rules
!25
<catalogue>
<book>
<title>Modern Compiler Implementation in ML</title>
<author>Andrew Appel</author>
<publisher>Cambridge</publisher>
<year></year>
</book>
<language>
<name>SDF3</name>
<purpose>Syntax Definition</purpose>
<implementedin>SDF3</implementedin>
</language>
<book>
//<title>Parsing Schemata</title>
<author>Klaas Sikkel</author>
<publisher>Springer</publisher>
</book>
<book>
</koob>
</catalogue>
Book can only have title, author, publisher
rules
constraint-error :
Node("book", elems, _) -> error
where <filter(not-a-book-elem)> elems => [error | _]
not-a-book-elem :
Node(tag, _, _) -> (tag, $[Book can only have title, author, publisher])
where <not(elem)> (tag, ["title", "author", "publisher"])
Containment checks
26. Generic (liberal) syntax
- Allow more programs/documents
- ‘permissive syntax’
Check properties on AST
- Reject programs/documents that are not well-formed
Advantage
- Smaller syntax definition
- Parser does not fail (so often)
- Better error messages than parser can give
!26
Approach: Checking Context-Sensitive Properties
28. XML checking
- Tag consistency
- Schema consistency
- These are structural properties
Programming languages
- Type consistency (similar to schema?)
- Name consistency: declarations and references should correspond
- Name dependent type consistency: names carry types
!28
Programming Languages vs XML
30. Scope
!30
let
var x : int := 0 + z
var y : int := x + 1
var z : int := x + y + 1
in
x + y + z
end
31. Scope: Definition before Use
!31
let
var x : int := 0 + z // z not in scope
var y : int := x + 1
var z : int := x + y + 1
in
x + y + z
end
32. Mutual Recursion
!32
let
function odd(x : int) : int =
if x > 0 then even(x - 1) else false
function even(x : int) : int =
if x > 0 then odd(x - 1) else true
in
even(34)
end
let
function odd(x : int) : int =
if x > 0 then even(x - 1) else false
var x : int
function even(x : int) : int =
if x > 0 then odd(x - 1) else true
in
even(34)
end
33. Mutually Recursive Functions should be Adjacent
!33
let
function odd(x : int) : int =
if x > 0 then even(x - 1) else false
function even(x : int) : int =
if x > 0 then odd(x - 1) else true
in
even(34)
end
let
function odd(x : int) : int =
if x > 0 then even(x - 1) else false
var x : int
function even(x : int) : int =
if x > 0 then odd(x - 1) else true
in
even(34)
end
35. Functions and Variables in Same Name Space
!35
let
type foo = int
function foo(x : foo) : foo = 3
var foo : foo := foo(4)
in foo(56) + foo // both refer to the variable foo
end
Functions and variables are in the same namespace
36. Type Dependent Name Resolution
!36
let
type point = {x : int, y : int}
var origin : point := point { x = 1, y = 2 }
in origin.x
end
37. Type Dependent Name Resolution
!37
let
type point = {x : int, y : int}
var origin : point := point { x = 1, y = 2 }
in origin.x
end
Resolving origin.x requires the type of origin
38. Name Correspondence
!38
let
type point = {x : int, y : int}
type errpoint = {x : int, x : int}
var p : point
var e : errpoint
in
p := point{ x = 3, y = 3, z = "a" }
p := point{ x = 3 }
end
39. Name Set Correspondence
!39
let
type point = {x : int, y : int}
type errpoint = {x : int, x : int}
var p : point
var e : errpoint
in
p := point{ x = 3, y = 3, z = "a" }
p := point{ x = 3 }
end
Field “y” not initialized
Reference “z” not resolved
Duplicate Declaration of Field “x”
40. Recursive Types
!40
let
type intlist = {hd : int, tl : intlist}
type tree = {key : int, children : treelist}
type treelist = {hd : tree, tl : treelist}
var l : intlist
var t : tree
var tl : treelist
in
l := intlist { hd = 3, tl = l };
t := tree {
key = 2,
children = treelist {
hd = tree{ key = 3, children = 3 },
tl = treelist{ }
}
};
t.children.hd.children := t.children
end
41. Recursive Types
!41
let
type intlist = {hd : int, tl : intlist}
type tree = {key : int, children : treelist}
type treelist = {hd : tree, tl : treelist}
var l : intlist
var t : tree
var tl : treelist
in
l := intlist { hd = 3, tl = l };
t := tree {
key = 2,
children = treelist {
hd = tree{ key = 3, children = 3 },
tl = treelist{ }
}
};
t.children.hd.children := t.children
end
type mismatch
Field "tl" not initialized
Field "hd" not initialized
43. Testing Name Resolution
!43
test inner name [[
let type t = u
type u = int
var x: u := 0
in
x := 42 ;
let type [[u]] = t
var y: [[u]] := 0
in
y := 42
end
end
]] resolve #2 to #1
test outer name [[
let type t = u
type [[u]] = int
var x: [[u]] := 0
in
x := 42 ;
let type u = t
var y: u := 0
in
y := 42
end
end
]] resolve #2 to #1
44. Testing Type Checking
!44
test variable reference [[
let type t = u
type u = int
var x: u := 0
in
x := 42 ;
let type u = t
var y: u := 0
in
y := [[x]]
end
end
]] run get-type to INT()
test integer constant [[
let type t = u
type u = int
var x: u := 0
in
x := 42 ;
let type u = t
var y: u := 0
in
y := [[42]]
end
end
]] run get-type to INT()
45. Testing Errors
!45
test undefined variable [[
let type t = u
type u = int
var x: u := 0
in
x := 42 ;
let type u = t
var y: u := 0
in
y := [[z]]
end
end
]] 1 error
test type error [[
let type t = u
type u = string
var x: u := 0
in
x := 42 ;
let type u = t
var y: u := 0
in
y := [[x]]
end
end
]] 1 error
48. Compute Type of Expression
!48
rules
type-check :
Mod(e) -> t
where <type-check> e => t
type-check :
String(_) -> STRING()
type-check :
Int(_) -> INT()
type-check :
Plus(e1, e2) -> INT()
where
<type-check> e1 => INT();
<type-check> e2 => INT()
type-check :
Times(e1, e2) -> INT()
where
<type-check> e1 => INT();
<type-check> e2 => INT()
1 + 2 * 3
Mod(
Plus(Int("1"),
Times(Int("2"),
Int("3")))
)
INT()
49. Compute Type of Variable?
!49
rules
type-check :
Let([VarDec(x, e)], e_body) -> t
where
<type-check> e => t_e;
<type-check> e_body => t
type-check :
Var(x) -> t // ???
let
var x := 1
in
x + 1
end
Mod(
Let(
[VarDec("x", Int("1"))]
, [Plus(Var("x"), Int("1"))]
)
)
?
50. Type Checking Variable Bindings
!50
rules
type-check(|env) :
Let([VarDec(x, e)], e_body) -> t
where
<type-check(|env)> e => t_e;
<type-check(|[(x, t_e) | env])> e_body => t
type-check(|env) :
Var(x) -> t
where
<fetch(?(x, t))> env
let
var x := 1
in
x + 1
end
INT()
Store association between variable and type in type environment
Mod(
Let(
[VarDec("x", Int("1"))]
, [Plus(Var("x"), Int("1"))]
)
)
51. Pass Environment to Sub-Expressions
!51
rules
type-check :
Mod(e) -> t
where <type-check(|[])> e => t
type-check(|env) :
String(_) -> STRING()
type-check(|env) :
Int(_) -> INT()
type-check(|env) :
Plus(e1, e2) -> INT()
where
<type-check(|env)> e1 => INT();
<type-check(|env)> e2 => INT()
type-check(|env) :
Times(e1, e2) -> INT()
where
<type-check(|env)> e1 => INT();
<type-check(|env)> e2 => INT()
let
var x := 1
in
x + 1
end
INT()
Mod(
Let(
[VarDec("x", Int("1"))]
, [Plus(Var("x"), Int("1"))]
)
)
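A Python sketch of the environment-passing checker on these slides, over the tuple ASTs used earlier. It assumes a single VarDec per let and a single body expression, which is all the slides exercise.

def type_check(t, env):
    ctor = t[0]
    if ctor == "Mod":
        return type_check(t[1], {})      # start with the empty environment
    if ctor == "Int":
        return "INT"
    if ctor == "String":
        return "STRING"
    if ctor in ("Plus", "Times"):
        assert type_check(t[1], env) == "INT"
        assert type_check(t[2], env) == "INT"
        return "INT"
    if ctor == "Var":
        return env[t[1]]                 # KeyError if the variable is unbound
    if ctor == "Let":
        [(_, x, e)] = t[1]               # a single VarDec(x, e)
        [body] = t[2]                    # a single body expression
        return type_check(body, {**env, x: type_check(e, env)})
    raise ValueError(f"no rule for {ctor}")

ast = ("Mod", ("Let", [("VarDec", "x", ("Int", "1"))],
                      [("Plus", ("Var", "x"), ("Int", "1"))]))
assert type_check(ast, {}) == "INT"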
52. Type checking ill-typed/named programs?
- add rules for ‘bad’ cases
More complicated name binding patterns?
- Hoisting of variables in JavaScript functions
- Mutually recursive bindings
- Possible approaches
‣ Multiple traversals over program
‣ Defer checking until entire scope is processed
‣ …
!52
But what about?
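One way to make the multiple-traversals idea from the list above concrete: a first pass enters all declared function types into the environment, a second pass checks the bodies, so mutually recursive bindings see each other. The representation of declarations and function types here is hypothetical, and type_check is the sketch from the previous slide.

def check_fun_decls(decls, env):
    # decls: a list of (name, params, return_type, body) tuples,
    # where params is a list of (name, type) pairs
    env2 = dict(env)
    for name, params, ret, _body in decls:       # pass 1: declare all
        env2[name] = ("FUN", [t for _, t in params], ret)
    for _name, params, ret, body in decls:       # pass 2: check bodies
        local = {**env2, **{x: t for x, t in params}}
        assert type_check(body, local) == ret
    return env2

# Bodies may now mention functions declared later in the same group
# (checking calls would additionally need a Call case in type_check).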
62. !62
function prettyprint(tree: tree) : string =
let
var output := ""
function write(s: string) =
output := concat(output, s)
function show(n: int, t: tree) =
let function indent(s: string) =
(write("\n");
for i := 1 to n
do write(" ");
output := concat(output, s))
in if t = nil then indent(".")
else (indent(t.key);
show(n+1, t.left);
show(n+1, t.right))
end
in show(0, tree);
output
end
Nested Scopes
(Shadowing)
63. !63
let
type point = {
x : int,
y : int
}
var origin := point {
x = 1,
y = 2
}
in
origin.x := 10;
origin := nil
end
Type References
64. !64
let
type point = {
x : int,
y : int
}
var origin := point {
x = 1,
y = 2
}
in
origin.x := 10;
origin := nil
end
Record Fields
65. !65
let
type point = {
x : int,
y : int
}
var origin := point {
x = 1,
y = 2
}
in
origin.x := 10;
origin := nil
end
Type Dependent
Name Resolution
66. Used in many different language artifacts
- compiler, interpreter, semantics, IDE, refactoring
Binding rules encoded in many different and ad-hoc ways
- symbol tables, environments, substitutions
No standard approach to formalization
- there is no BNF for name binding
No reuse of binding rules between artifacts
- how do we know substitution respects binding rules?
!66
Name Binding is Pervasive
67. What are the name binding rules of a language?
- What are introductions of names?
- What are uses of names?
- What are the shadowing rules?
- What are the name spaces?
How can we define the name binding rules of a language?
- What is the BNF for name binding?
!67
Name Binding Rules
69. Representation
- Standardized representation for <aspect> of programs
- Independent of specific object language
Specification Formalism
- Language-specific declarative rules
- Abstract from implementation concerns
Language-Independent Interpretation
- Formalism interpreted by language-independent algorithm
- Multiple interpretations for different purposes
- Reuse between implementations of different languages
!69
Separation of Concerns in Definition of Programming Languages
70. Representation: (Abstract Syntax) Trees
- Standardized representation for structure of programs
- Basis for syntactic and semantic operations
Formalism: Syntax Definition
- Productions + Constructors + Templates + Disambiguation
- Language-specific rules: structure of each language construct
Language-Independent Interpretation
- Well-formedness of abstract syntax trees
‣ provides declarative correctness criterion for parsing
- Parsing algorithm
!70
Separation of Concerns in Syntax Definition
71. Representation
- To conduct name resolution and represent its results
Declarative Rules
- To define name binding rules of a language
Language-Independent Tooling
- Name resolution
- Code completion
- Refactoring
- …
!71
Wanted: Separation of Concerns in Name Binding
72. DSLs for specifying binding structure of a (target) language
- Ott [Sewell+10]
- Romeo [StansiferWand14]
- Unbound [Weirich+11]
- Cαml [Pottier06]
- NaBL [Konat+12]
Generate code to do resolution and record results
What is the semantics of such a name binding language?
!72
Name Binding Languages
73. Attempt: NaBL Name Binding Language
!73
binding rules // variables
Param(t, x) :
defines Variable x of type t
Let(bs, e) :
scopes Variable
Bind(t, x, e) :
defines Variable x of type t
Var(x) :
refers to Variable x
Declarative Name Binding and Scope Rules
Gabriël D. P. Konat, Lennart C. L. Kats, Guido Wachsmuth, Eelco Visser
SLE 2012
Declarative specification
Abstracts from implementation
Incremental name resolution
But:
How to explain it to Coq?
What is the semantics of NaBL?
74. Attempt: NaBL Name Binding Language
!74
Declarative Name Binding and Scope Rules
Gabriël D. P. Konat, Lennart C. L. Kats, Guido Wachsmuth, Eelco Visser
SLE 2012
Especially:
What is the semantics of imports?
binding rules // classes
Class(c, _, _, _) :
defines Class c of type ClassT(c)
scopes Field, Method, Variable
Extends(c) :
imports Field, Method from Class c
ClassT(c) :
refers to Class c
New(c) :
refers to Class c
75. What is semantics of binding language?
- the meaning of a binding specification for language L should be given by
- a function from L programs to their “resolution structures”
So we need
- a (uniform, language-independent) method for describing such resolution structures ...
- … that can be used to compute the resolution of each program identifier
- (or to verify that a claimed resolution is valid)
That supports
- Handle a broad range of language binding features ...
- … using a minimal number of constructs
- Make resolution structure language-independent
- Handle named collections of names (e.g. modules, classes, etc.) within the theory
- Allow description of programs with resolution errors
!75
Approach
77. Representation
- ?
Declarative Rules
- To define name binding rules of a language
Language-Independent Tooling
- Name resolution
- Code completion
- Refactoring
- …
!77
Separation of Concerns in Name Binding
Scope Graphs
78. !78
let function fact(n : int) : int =
if n < 1 then
1
else
n * fact(n - 1)
in
fact(10)
end
[Figure: program (left) and scope graph (right). Scope S1 contains the
declaration of fact and the reference fact of the call fact(10); scope S2,
the function body, has a parent edge to S1 and contains the declaration of
n, the references to n, and the recursive reference to fact.]
Name Resolution
79. A Calculus for Name Resolution
!79
Steps in a resolution path:
(R) a reference x_R in scope S : x_R -> S
(D) a declaration x_D in scope S : S -> x_D
(P) S' the parent scope of S : S -> S'
(I) an import reference R1 in S that resolves to a declaration R2 with
associated scope S_R : S -> S_R via I(R1)
Path in scope graph connects reference to declaration
Scopes, References, Declarations, Parents, Imports
Neron, Tolmach, Visser, Wachsmuth
A Theory of Name Resolution
ESOP 2015
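A small Python sketch of a scope graph restricted to lexical scope, where every resolution path has the shape R.P*.D; imports, visibility ordering, and ambiguity are deliberately left out.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Scope:
    decls: dict = field(default_factory=dict)    # name -> declaration
    parent: Optional["Scope"] = None             # P edge

def resolve(name, scope):
    """Resolve a reference by following parent edges outward."""
    steps = ["R"]
    while scope is not None:
        if name in scope.decls:
            return scope.decls[name], steps + ["D"]
        steps.append("P")
        scope = scope.parent
    return None, None                            # unresolved reference

# The fact example: S1 declares fact; S2 (the body) declares n, parent S1.
s1 = Scope({"fact": "fact:decl"})
s2 = Scope({"n": "n:decl"}, parent=s1)
assert resolve("n", s2)    == ("n:decl", ["R", "D"])
assert resolve("fact", s2) == ("fact:decl", ["R", "P", "D"])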
81. Lexical Scoping
!81
def x1 = z2 5
def z1 =
fun y1 {
x2 + y2
}
Parent Step rule: S -> S' when S' is the parent scope of S.
[Figure: scope graph with scope S0 (declarations x1 and z1, reference z2)
and scope S1 (the body of fun y1: declaration y1, references x2 and y2)
with a parent edge from S1 to S0. Step by step, x2 resolves via an R step
into S1, a P step to S0, and a D step to x1; y2 resolves to y1 within S1;
z2 resolves to z1 within S0.]
82. Imports
!82
module A1 {
def z1 = 5
}
module B1 {
import A2
def x1 = 1 + z2
}
Import Step rule: if import reference R1 in scope S resolves to a
declaration with associated scope S_R, then S -> S_R via I(R1).
[Figure: root scope S0 declares A1 and B1. SA is the scope associated with
module A1 and declares z1; SB is associated with B1 and contains the import
reference A2, the declaration x1, and the reference z2. Resolving z2 takes
an R step into SB, an import step I(A2) into SA (the import reference A2
itself resolves via R.P.D to A1, whose associated scope is SA), and a D
step to z1.]
83. Qualified Names
!83
module N1 {
def s1 = 5
}
module M1 {
def x1 = 1 + N2.s2
}
[Figure: root scope S0 contains the module declaration N1 (with associated
scope SN declaring s1) and the declaration x1. The qualified name N2.s2
resolves in two stages: N2 resolves in S0 to N1 via R.D; s2 then resolves
through the import of N2 into SN to s1 via R.I(N2).D.]
84. A Calculus for Name Resolution
!84
Steps in a resolution path:
(R) a reference x_R in scope S : x_R -> S
(D) a declaration x_D in scope S : S -> x_D
(P) S' the parent scope of S : S -> S'
(I) an import reference R1 in S that resolves to a declaration R2 with
associated scope S_R : S -> S_R via I(R1)
Reachability of declarations from
references through scope graph edges
How about ambiguities?
References with multiple paths
85. A Calculus for Name Resolution
!85
Reachability: steps in a resolution path:
(R) a reference x_R in scope S : x_R -> S
(D) a declaration x_D in scope S : S -> x_D
(P) S' the parent scope of S : S -> S'
(I) an import reference R1 in S that resolves to a declaration R2 with
associated scope S_R : S -> S_R via I(R1)
Well formed path: R.P*.I(_)*.D
Visibility: specificity ordering on paths:
D < I(_).p'
I(_).p' < P.p
D < P.p
p < p' implies s.p < s.p'
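A sketch of this specificity ordering as a comparison of step-label lists; it approximates the rules above by ranking D before I before P at the first differing step.

RANK = {"D": 0, "I": 1, "P": 2}

def shadows(p, q):
    """p < q: the resolution along p hides the one along q."""
    for a, b in zip(p, q):
        if a != b:
            return RANK[a] < RANK[b]
    return False            # equal paths (well-formed paths both end in D)

assert shadows(["R", "P", "D"], ["R", "P", "P", "D"])   # lexical shadowing
assert shadows(["R", "I", "D"], ["R", "P", "D"])        # import beats parent
assert shadows(["R", "D"], ["R", "I", "D"])             # local beats import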
86. Ambiguous Resolutions
!86
match t with
| A x | B x => …
def x1 = 5
def x2 = 3
def z1 = x3 + 1
[Figure: scope S0 contains the declarations x1 and x2 and the reference x3.
Both R.D paths from x3, to x1 and to x2, are visible, so the resolution of
x3 is ambiguous (cf. the or-pattern above, which binds x twice).]
87. Shadowing
!87
D < P.p
p < p' implies s.p < s.p'
def x3 = z2 5 7
def z1 =
fun x1 {
fun y1 {
x2 + y2
}
}
[Figure: x2 in scope S2 reaches the declaration x1 in S1 via R.P.D and x3
in S0 via R.P.P.D; by the ordering, the path with fewer parent steps is
more specific, so x1 shadows x3.]
R.P.D < R.P.P.D
88. Imports shadow Parents
!88
I(_).p' < P.p   so   R.I(A2).D < R.P.D
def z3 = 2
module A1 {
def z1 = 5
}
module B1 {
import A2
def x1 = 1 + z2
}
[Figure: z2 in SB reaches z3 via R.P.D and z1 via R.I(A2).D; import steps
are more specific than parent steps, so z1 in module A shadows the
top-level z3.]
89. Imports vs. Includes
!89
def z3 = 2
module A1 {
def z1 = 5
}
import A2
def x1 = 1 + z2
D < I(_).p'   so   R.D < R.I(A2).D
[Figure: with import A2, the top-level declaration z3 shadows the imported
z1, so z2 resolves to z3. With include A2 (below), there is no such
ordering between local and included declarations, so both candidates for
z2 remain visible.]
def z3 = 2
module A1 {
def z1 = 5
}
include A2
def x1 = 1 + z2
90. Import Parents
!90
def s1 = 5
module N1 {
}
def x1 = 1 + N2.s2
Well formed path: R.P*.I(_)*.D
[Figure: s1 is declared in the root scope S0, outside module N1's scope SN.
After the import step I(N2) into SN, a well-formed path may not take
further parent steps, so s2 cannot reach s1 through the parent of SN and
the reference does not resolve.]
91. Transitive vs. Non-Transitive
!91
With transitive imports, a well formed path is R.P*.I(_)*.D
With non-transitive imports, a well formed path is R.P*.I(_)?.D
[Figure: z2 in module C1's scope SC reaches z1 in SA only via two import
steps, R.I(B2).I(A2).D. With transitive imports this path is well formed;
with non-transitive imports, which allow at most one import step, z2 does
not resolve.]
module A1 {
def z1 = 5
}
module B1 {
import A2
}
module C1 {
import B2
def x1 = 1 + z2
}
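The two well-formedness conditions can be read literally as regular expressions over step labels, as this small sketch shows (using the single letter I for an I(_) step).

import re

transitive     = r"RP*I*D"    # R . P* . I(_)* . D
non_transitive = r"RP*I?D"    # R . P* . I(_)? . D

path = "RIID"                 # z2 -> I(B2) -> I(A2) -> z1
assert re.fullmatch(transitive, path)
assert re.fullmatch(non_transitive, path) is None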
92. A Calculus for Name Resolution
!92
Reachability: steps in a resolution path:
(R) a reference x_R in scope S : x_R -> S
(D) a declaration x_D in scope S : S -> x_D
(P) S' the parent scope of S : S -> S'
(I) an import reference R1 in S that resolves to a declaration R2 with
associated scope S_R : S -> S_R via I(R1)
Well formed path: R.P*.I(_)*.D
Visibility: specificity ordering on paths:
D < I(_).p'
I(_).p' < P.p
D < P.p
p < p' implies s.p < s.p'
93. Visibility Policies
!93
Lexical scope:
L := {P}   E := P*   D < P
Non-transitive imports:
L := {P, I}   E := P* . I?   D < P, D < I, I < P
Transitive imports:
L := {P, TI}   E := P* . TI*   D < P, D < TI, TI < P
Transitive includes:
L := {P, Inc}   E := P* . Inc*   D < P, Inc < P
Transitive includes and imports, and non-transitive imports:
L := {P, Inc, TI, I}   E := P* . (Inc | TI)* . I?   D < P, D < TI, TI < P, Inc < P, D < I, I < P
Example reachability and visibility policies (Figure 10 of the ESOP 2015
paper; the environment-based resolution algorithm Env_re[I, S] is omitted
here).
102. Static analysis
- Properties that can be checked on the (‘static’) program text
- Decide which properties to encode in syntax definition vs checker
- Strict vs liberal syntax
Context-sensitive properties
- Some properties are intrinsically context-sensitive
- In particular: names
Declarative specification of name binding rules
- Common (language agnostic) understanding of name binding
!102
Static Analysis
103. Representation: Scope Graphs
- Standardized representation for lexical scoping structure of programs
- Path in scope graph relates reference to declaration
- Basis for syntactic and semantic operations
Formalism: Name Binding Constraints
- References + Declarations + Scopes + Reachability + Visibility
- Language-specific rules map AST to constraints
Language-Independent Interpretation
- Resolution calculus: correctness of path with respect to scope graph
- Name resolution algorithm
- Alpha equivalence
- Mapping from graph to tree (to text)
- Refactorings
- And many other applications
!103
A Theory of Name Resolution
104. We have modeled a large set of example binding patterns
- definition before use
- different let binding flavors
- recursive modules
- imports and includes
- qualified names
- class inheritance
- partial classes
Next goal: fully model some real languages
- In progress: Go, Rust, TypeScript
- Java, ML
!104
Validation
105. Scope graph semantics for binding languages [OOPSLA18]
- starting with NaBL
- or rather: a redesign of NaBL based on scope graphs
Dynamic analogs to static scope graphs [ECOOP16]
- how does scope graph relate to memory at run-time?
Supporting mechanized language meta-theory [POPL18]
- relating static and dynamic bindings
Resolution-sensitive program transformations
- renaming, refactoring, substitution, …
!105
Ongoing/Future Work
107. Representation
- To conduct name resolution and represent its results
Declarative Rules
- To define name binding rules of a language
Language-Independent Tooling
- Name resolution
- Code completion
- Refactoring
- …
!107
Separation of Concerns in Name Binding
108. Representation
- ?
Declarative Rules
- To define name binding rules of a language
Language-Independent Tooling
- Name resolution
- Code completion
- Refactoring
- …
!108
Separation of Concerns in Name Binding
Scope Graphs
109. Representation
- ?
Declarative Rules
- ?
Language-Independent Tooling
- Name resolution
- Code completion
- Refactoring
- …
!109
Separation of Concerns in Name Binding
Scope & Type Constraint Rules [PEPM16]
Scope Graphs
110. Next: Constraint-Based Type Checkers
!110
[Pipeline: Program -> Parse -> AST -> Generate Constraints
(language-specific) -> Constraints -> Resolve Constraints
(language-independent) -> Solution]
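As a first taste of that split, a toy Python sketch: constraint generation is syntax-directed and language-specific, solving is generic. Real constraint languages add type variables, scope graph constraints, and unification, none of which appear here.

def gen(t, env, out):
    # language-specific: walk the AST, emit equality constraints
    ctor = t[0]
    if ctor == "Int":
        return "INT"
    if ctor == "Plus":
        out.append((gen(t[1], env, out), "INT"))   # operands must be INT
        out.append((gen(t[2], env, out), "INT"))
        return "INT"
    if ctor == "Var":
        return env[t[1]]
    raise ValueError(ctor)

def solve(constraints):
    # language-independent: here only ground equalities, no unification
    return all(a == b for a, b in constraints)

cs = []
gen(("Plus", ("Var", "x"), ("Int", "2")), {"x": "INT"}, cs)
assert solve(cs)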