The document discusses syntactic editor services for programming languages. It covers formatting specifications that define how abstract syntax trees are mapped to text using templates. It also discusses syntactic completion, which proposes valid completions in an editor by using the syntax definition. The lecture focuses on defining the lexical and context-free syntax of Tiger using SDF, and on generating editor services such as formatting and coloring from the syntax definition.
This document provides an overview of parsing in compiler construction. It discusses context-free grammars and how they are used to generate sentences and parse trees through derivations. It also covers ambiguity that can arise from grammars and various grammar transformations used to eliminate ambiguity, including defining associativity and priority. The dangling else problem is presented as an example of an ambiguous grammar.
Compiler Construction | Lecture 6 | Introduction to Static Analysis (Eelco Visser)
Lecture introducing the need for static analysis in addition to parsing, the complications caused by names, and an introduction to name resolution with scope graphs
Compiler Construction | Lecture 12 | Virtual Machines (Eelco Visser)
The document discusses the architecture of the Java Virtual Machine (JVM). It describes how the JVM uses threads, a stack, heap, and method area. It explains JVM control flow through bytecode instructions like goto, and how the operand stack is used to perform operations and hold method arguments and return values.
Compiler Construction | Lecture 8 | Type Constraints (Eelco Visser)
This lecture covers type checking with constraints. It introduces the NaBL2 meta-language for writing type specifications as constraint generators that map a program to constraints. The constraints are then solved to determine if a program is well-typed. NaBL2 supports defining name binding and type structures through scope graphs and constraints over names, types, and scopes. Examples show type checking patterns in NaBL2 including variables, functions, records, and name spaces.
This document discusses syntactic editor services including formatting, syntax coloring, and syntactic completion. It describes how syntactic completion can be provided generically based on a syntax definition. The document also discusses how context-free grammars can be extended with templates to specify formatting layout when pretty-printing abstract syntax trees to text. Templates are used to insert whitespace, line breaks, and indentation to produce readable output.
Declare Your Language: Syntax Definition (Eelco Visser)
This document provides information about syntax definition and the lab organization for a compiler construction course. It includes links to papers on declarative syntax definition and the SDF3 syntax definition formalism. It also provides details about submitting lab assignments through GitHub, including instructions to fork a repository template and submit solutions as pull requests. Grades will be published on WebLab, and early feedback may be provided on pull requests and pushed changes.
Compiler Construction | Lecture 13 | Code Generation (Eelco Visser)
The document discusses code generation and optimization techniques, describing compilation schemas that define how language constructs are translated to target code patterns, and covers topics like ensuring correctness of generated code through type checking and verification of static constraints on the target format. It also provides examples of compilation schemas for Tiger language constructs like arithmetic expressions and control flow and discusses generating nested functions.
This document discusses imperative and object-oriented programming languages. It covers basic concepts like state, variables, expressions, assignments, and control flow in imperative languages. It also discusses procedures and functions, including passing parameters, stack frames, and recursion. Finally, it briefly mentions the differences between call by value and call by reference.
Lex is a tool that generates lexical analyzers (scanners) that are used to break input text streams into tokens. It allows rapid development of scanners by specifying patterns and actions in a lex source file. The lex source file contains three sections - definitions, translation rules, and user subroutines. The translation rules specify patterns and corresponding actions. Lex compiles the source file to a C program that performs the tokenization. Example lex programs are provided to tokenize input based on regular expressions and generate output.
Compiler Construction | Lecture 9 | Constraint Resolution (Eelco Visser)
This document provides an overview of constraint resolution in the context of a compiler construction lecture. It discusses unification, which is the basis for many type inference and constraint solving approaches. It also describes separating type checking into constraint generation and constraint solving, and introduces a constraint language that integrates name resolution into constraint resolution through scope graph constraints. Finally, it discusses papers on further developments with this approach, including addressing expressiveness and staging issues in type systems through the Statix DSL for defining type systems.
Declarative Type System Specification with Statix (Eelco Visser)
In this talk I present the design of Statix, a new constraint-based language for the executable specification of type systems. Statix specifications consist of predicates that define the well-formedness of language constructs in terms of built-in and user-defined constraints. Statix has a declarative semantics that defines whether a model satisfies a constraint. The operational semantics of Statix is defined as a sound constraint solving algorithm that searches for a solution for a constraint. The aim of the design is that Statix users can ignore the execution order of constraint solving and think in terms of the declarative semantics.
A distinctive feature of Statix is its use of scope graphs, a language parametric framework for the representation and querying of the name binding facts in programs. Since types depend on name resolution and name resolution may depend on types, it is typically not possible to construct the entire scope graph of a program before type constraint resolution. In (algorithmic) type system specifications this leads to explicit staging of the construction and querying of the type environment (class table, symbol table). Statix automatically stages the construction of the scope graph of a program such that queries are never executed when their answers may be affected by future scope graph extension. In the talk, I will explain the design of Statix by means of examples.
https://eelcovisser.org/post/309/declarative-type-system-specification-with-statix
Flex and Bison are tools for building programs that handle structured input. They were originally designed for writers of compilers and interpreters, and are replacements for the classic Lex and Yacc tools developed at Bell Labs in the 1970s. Flex handles lexical analysis by scanning input and breaking it into tokens, while Bison handles syntax analysis by parsing the tokens according to grammar rules.
This document discusses Flex and Bison, which are tools for generating lexical analyzers and parsers.
Flex is used for recognizing regular expressions and dividing input streams into tokens. Bison is used for building programs that handle structured input by taking tokens from Flex and grouping them logically based on context-free grammars.
The document explains key concepts such as shift-reduce parsing, lookahead, leftmost and rightmost derivations, and how to specify grammars, tokens, types, rules and actions in a Bison specification to build a parser. It also covers ambiguity, conflicts and how precedence and associativity help resolve shift-reduce conflicts.
This document provides an overview of Lecture 2 on Declarative Syntax Definition for the CS4200 Compiler Construction course. The lecture covers the specification of syntax definitions from which parsers can be derived, the perspective on declarative syntax definition using SDF, and reading material on the SDF3 syntax definition formalism and papers on testing syntax definitions and declarative syntax. It also discusses what syntax is, both in linguistics and in programming languages, and how programs can be described in terms of syntactic categories and language constructs. An example Tiger program for solving the n-queens problem is presented to illustrate syntactic categories in Tiger.
This document provides an overview and introduction to PLY, a Python implementation of lex and yacc parsing tools. PLY allows writing parsers and compilers in Python by providing modules that handle lexical analysis (ply.lex) and parsing (ply.yacc) in a similar way to traditional lex and yacc tools. The document demonstrates how to define tokens and grammar rules with PLY and discusses why PLY is useful for building parsers and compilers in Python.
This document provides an introduction to compiler construction using LEX and YACC. It discusses the general organization of compilers into frontend, optimizer, and backend stages. It then focuses on the frontend, explaining lexical analysis with LEX and parsing with YACC. Specific topics covered include using LEX to generate a scanner, defining grammars and semantic actions in YACC, and getting LEX and YACC to work together to build a parser.
Compiler Construction | Lecture 4 | Parsing (Eelco Visser)
This lecture covers parsing and turning syntax definitions into parsers. It discusses context-free grammars and derivations. Grammars can be ambiguous, allowing multiple parse trees for a sentence. Grammar transformations like disambiguation, eliminating left recursion, and left factoring can address issues while preserving the language. Associativity and priority can be defined through transformations. The reading material covers parsing schemata, classical compiler textbooks, and papers on disambiguation filters and parsing algorithms.
This document summarizes and discusses type checking algorithms for programming languages. It introduces constraint-based type checking, which separates type checking into constraint generation and constraint solving. This provides a more declarative way to specify type checkers. The document discusses using variables and constraints to represent types during type checking. It introduces NaBL2, a domain-specific language for writing constraint generators to specify name and type constraints for programming language static semantics. NaBL2 uses scope graphs to represent name binding structures and supports features like type equality, subtyping, and type-dependent name resolution through constraint rules. An example scope graph and constraint rule for let-bindings is provided.
Declare Your Language: Name Resolution (Eelco Visser)
Scope graphs are used to represent the binding information in programs. They provide a language-independent representation of name resolution that can be used to conduct and represent the results of name resolution. Separating the representation of resolved programs from the declarative rules that define name binding allows language-independent tooling to be developed for name resolution and other tasks.
This document provides an introduction to Yacc and Lex, which are parser and lexer generator tools. Yacc is used to describe the grammar of a language and automatically generate a parser. The user provides grammar rules in BNF format. Lex is used to generate a lexer (also called a tokenizer) that recognizes tokens in the input based on regular expressions. It returns tokens to the parser. The document gives examples of Yacc and Lex grammar files and explains how they are compiled and used to build a parser for an input language.
The document discusses syntax analysis and creating parsers using Yacc/Bison. It provides examples of writing grammars and semantic actions for simple arithmetic expressions. Key points covered include:
- Yacc/Bison generate LALR(1) parsers from grammar specifications
- A Yacc specification contains declarations, translation rules (productions with semantic actions), and auxiliary C functions
- Productions define the grammar, semantic actions associate values with terminals/nonterminals
- Attributes refer to semantic values, precedence levels resolve ambiguity
- Lex/Flex is used to generate a lexical analyzer from regular expressions to interface with the parser
This document provides additional information about LEX and describes several LEX patterns, variables, functions, start conditions, and examples. It explains that LEX is used for text processing and scanner generation. It also describes pattern matching symbols, special directives like ECHO and REJECT, variables like yytext, and functions like yylex(), yyless(), and yywrap(). Examples are provided for multi-file scanning, counting character sequences, and generating HTML.
This document provides an overview of Lex and Yacc, which are tools used for generating scanners and parsers. Lex is used for lexical analysis by generating a scanner from regular expressions and actions. Yacc is used for syntactic analysis by generating a parser from a context-free grammar and semantic actions. Together, the scanner generated by Lex and parser generated by Yacc can be used to build a compiler. The document discusses various aspects of Lex and Yacc like grammar rules, precedence, shift-reduce conflicts and provides examples of Lex and Yacc files.
This document provides an overview of Lex and Yacc. It describes Lex as a tool that generates scanners to tokenize input streams based on regular expressions. Yacc is described as a tool that generates parsers to analyze tokens based on grammar rules. The document outlines the compilation process for Lex and Yacc, describes components of a Lex source file including regular expressions and transition rules, and provides examples of Lex and Yacc usage.
Before 1975, writing a compiler was a very time-consuming process. Then Lesk [1975] and Johnson published papers on lex and yacc, utilities that greatly simplify compiler writing. We've used flex and bison to compile the code.
Declare Your Language: Syntactic (Editor) Services (Eelco Visser)
Lecture 3 of the compiler construction course, on the definition of lexical syntax and the syntactic services that can be derived from syntax definitions, such as formatting and syntactic completion.
ITU - MDD - Textural Languages and Grammars (Tonny Madsen)
This presentation describes the use and design of textual domain-specific languages (DSLs). It has two basic purposes:
Introduce you to some of the more important design criteria in language design
Introduce you to BNF
This presentation is developed for MDD 2010 course at ITU, Denmark.
This document discusses syntax definition and provides examples using various syntax definition formalisms including Backus-Naur Form (BNF), Extended Backus-Naur Form (EBNF), and SDF3. It introduces concepts of lexical syntax, context-free syntax, abstract syntax, disambiguation, and testing syntax definitions. Specific examples are provided for defining the syntax of an expression language using BNF, EBNF, and SDF3. Testing syntax definitions using Spoofax is also discussed with examples of test cases for lexical and context-free syntax.
The document provides guidelines for Ruby coding conventions including file names, file organization, comments, definitions, statements, white space, naming conventions, and pragmatic programming practices. It recommends lower case class/module names with '.rb' suffixes for files, using require and include at the top of files, and organizing files with a beginning comment, classes/modules, main, and optional testing code. It provides examples of different types of comments and statements.
The document discusses how Smalltalk can support functional programming through existing features and extensions. It describes how the Pharo-Functional repository adds syntactic sugar like the compose operator and collection literals to make functional code more succinct. New classes and methods are also introduced to facilitate functional patterns. The author argues Smalltalk already has the fundamentals for functional programming and some simple extensions could make it more pleasant, hoping some may become mainstream features. In the meantime, anyone can add these capabilities to their Pharo image.
The document contains questions and answers related to compiler design topics such as parsing, grammars, syntax analysis, error handling, derivation, sentential forms, parse trees, ambiguity, left and right recursion elimination etc. Key points discussed are:
1. The role of parser is to verify the string of tokens generated by lexical analyzer according to the grammar rules and detect syntax errors. It outputs a parse tree.
2. Common parsing methods are top-down, bottom-up, and universal. Top-down methods include LL parsing; bottom-up methods include LR and LALR parsing.
3. Errors can be lexical, syntactic, semantic, or logical, and are detected by different compiler phases. Error recovery strategies include panic mode.
F# is a functional programming language that can also support imperative programming. It compiles to .NET code and allows interoperability with other .NET languages. Some key features include type inference, pattern matching, immutable data structures, and support for functions as first-class values. The presentation provides examples of common F# concepts like recursion, tuples, lists, objects, and using F# for GUI programming with WinForms.
The document introduces a convention-based syntax notation designed for both humans and parser generators. Key aspects include using conventions to simplify EBNF notation, defining rules without delimiters, inferring token sets from rules, and representing abstract syntax trees with variables. The approach was motivated by limitations of existing formalisms. Examples are provided for microsyntax, macrosyntax, and representing abstract syntax trees. Future work discussed expanding the approach to support additional language features.
This document discusses assembly language directives and mixed-mode programming. It provides examples of assembly directives like .byte, .word, .section that reserve data locations and section code. It also discusses using inline assembly in C/C++ programs and the rules for calling assembly routines from high-level languages.
This document provides an overview of topics related to compiler design and implementation including: lexical analysis, parsing, semantic analysis, code generation, assembly language programming, finite automata, parsing algorithms, and the WL programming language. It lists specific tasks like performing semantic analysis on WL programs, generating MIPS assembly code from WL parse trees, drawing finite automata, and parsing strings using various grammars. It also defines key concepts and compares programming languages.
CS6660 Compiler Design May/June 2016 Answer Key (appasami)
The document describes the various phases of a compiler:
1. Lexical analysis breaks the source code into tokens.
2. Syntax analysis generates a parse tree from the tokens.
3. Semantic analysis checks for semantic correctness using the parse tree and symbol table.
4. Intermediate code generation produces machine-independent code.
5. Code optimization improves the intermediate code.
6. Code generation translates the optimized code into target machine code.
LEX is a tool that allows users to specify a lexical analyzer by defining patterns for tokens using regular expressions. The LEX compiler transforms these patterns into a transition diagram and generates C code. It takes a LEX source program as input, compiles it to produce lex.yy.c, which is then compiled with a C compiler to generate an executable that takes an input stream and returns a sequence of tokens. LEX programs have declarations, translation rules that map patterns to actions, and optional auxiliary functions. The actions are fragments of C code that execute when a pattern is matched.
Poznań Ruby User Group presentation, which took place 31.03.2022.
Regex is used for Google Analytics and much more!
We dig into the amenities of Ruby's default Regexp class, but go beyond it (so it's for more advanced developers):
- explaining how backtracking and deterministic finite automata work
- why to reach for Google's RE2 library, or for regular expressions at all
Regular expressions (regex) allow defining patterns to match in text strings using special characters like ., *, + to match types of characters or ranges. They have components like literal characters, character classes, shorthand classes, quantifiers and modifiers that define what is matched. PHP supports both PCRE and POSIX regex with functions like preg_match() and ereg_replace() as well as online tools for testing regular expressions.
Regular expressions (regex) allow defining patterns to match in text strings using special characters like ., *, + to match types of characters or ranges. They have components like literal characters, character classes, shorthand classes, quantifiers and modifiers that define what is matched. PHP supports both PCRE and POSIX regex with functions like preg_match() and ereg_replace() and there are online tools to test and learn regular expressions.
Compiler Engineering Lab #5: Symbol Table, Flex Tool (MashaelQ)
This document discusses symbol tables and their use in compilers. It explains that symbol tables keep track of identifiers and their declarations in different scopes to resolve naming conflicts. The document also provides examples of Flex and Bison files for lexical and syntax analysis of simple calculator and BASIC languages. It outlines the installation of Flex and Bison through Cygwin on Windows systems.
Lex is a tool that generates lexical analyzers based on regular expressions. It reads a specification file defining patterns and actions, and outputs C code implementing the lexical analyzer. The generated code uses a finite state machine to efficiently scan input and recognize tokens based on the defined patterns. Lex is often used with Yacc, which generates parsers based on context-free grammars to recognize higher-level syntactic structures. Together, Lex and Yacc allow defining the lexical and syntactic rules of a language to build a compiler or interpreter.
The document discusses instruction set architecture and the MIPS architecture. It introduces key concepts like assembly language, registers, instructions, and how assembly code maps to high-level languages like C. Specific MIPS instructions for arithmetic like addition and subtraction are presented, showing how a single line of C code can require multiple assembly instructions. Register usage and conventions are also covered.
HES2011 - James Oakley and Sergey Bratus - Exploiting the Hard-Working DWARF (Hackito Ergo Sum)
The document describes how DWARF bytecode, included in GCC-compiled binaries to support exception handling, can be exploited to insert trojan payloads. DWARF bytecode interpreters are included in the standard C++ runtime and are Turing-complete, allowing the bytecode to perform arbitrary computations by influencing program flow. A demonstration shows how DWARF bytecode can be used to hijack exceptions and execute malicious payloads without requiring native code.
Similar to Compiler Construction | Lecture 3 | Syntactic Editor Services (20)
This document provides an overview of the CS4200 Compiler Construction course at TU Delft. It discusses the course organization, structure, and assessment. The course is split into two parts - CS4200-A which covers concepts and techniques through lectures, papers, and homework assignments, and CS4200-B which involves building a compiler for a subset of Java as a semester-long project. Students will use the Spoofax language workbench to implement their compiler and will submit assignments through a private GitLab repository.
A Direct Semantics of Declarative Disambiguation Rules (Eelco Visser)
This document discusses research into providing a direct semantics for declarative disambiguation of expression grammars. It aims to define what disambiguation rules mean, ensure they are safe and complete, and provide an effective implementation strategy. The document outlines key research questions around the meaning, safety, completeness and coverage of disambiguation rules. It also presents contributions around using subtree exclusion patterns to define safe and complete disambiguation for classes of expression grammars, and implementing this in SDF3.
Compiler Construction | Lecture 17 | Beyond Compiler Construction (Eelco Visser)
Compiler construction techniques are applied beyond general-purpose languages through domain-specific languages (DSLs). The document discusses several DSLs developed using Spoofax including:
- WebDSL for web programming with sub-languages for entities, queries, templates, and access control.
- IceDust for modeling information systems with derived values computed on-demand, incrementally, or eventually consistently.
- PixieDust for client-side web programming with views as derived values updated incrementally.
- PIE for defining software build pipelines as tasks with dynamic dependencies computed incrementally.
The document also outlines several research challenges in compiler construction like high-level declarative language definition, verification of
Domain Specific Languages for Parallel Graph AnalytiX (PGX) (Eelco Visser)
This document discusses domain-specific languages (DSLs) for parallel graph analytics using PGX. It describes how DSLs allow users to implement graph algorithms and queries using high-level languages that are then compiled and optimized to run efficiently on PGX. Examples of DSL optimizations like multi-source breadth-first search are provided. The document also outlines the extensible compiler architecture used for DSLs, which can generate code for different backends like shared memory or distributed memory.
Compiler Construction | Lecture 15 | Memory Management (Eelco Visser)
The document discusses different memory management techniques:
1. Reference counting counts the number of pointers to each record and deallocates records with a count of 0.
2. Mark and sweep marks all reachable records from program roots and sweeps unmarked records, adding them to a free list.
3. Copying collection copies reachable records to a "to" space, allowing the original "from" space to be freed without fragmentation.
4. Generational collection focuses collection on younger object generations more frequently to improve efficiency.
Compiler Construction | Lecture 14 | Interpreters (Eelco Visser)
This document summarizes a lecture on interpreters for programming languages. It discusses how operational semantics can be used to define the meaning of a program through state transitions in an interpreter. It provides examples of defining the semantics of a simple language using DynSem, a domain-specific language for specifying operational semantics. DynSem specifications can be compiled to interpreters that execute programs in the defined language.
Compiler Construction | Lecture 7 | Type Checking (Eelco Visser)
This document summarizes a lecture on type checking. It discusses using constraints to separate the language-specific type checking rules from the language-independent solving algorithm. Constraint-based type checking collects constraints as it traverses the AST, then solves the constraints in any order. This allows type information to be learned gradually and avoids issues with computation order.
Compiler Construction | Lecture 2 | Declarative Syntax Definition (Eelco Visser)
The document describes a lecture on declarative syntax definition. It discusses the perspective on declarative syntax definition explained in an Onward! 2010 essay. It also mentions an OOPSLA 2011 paper that introduced the Spoofax Testing (SPT) language used in the section on testing syntax definitions. Finally, it provides a link to documentation on the SDF3 syntax definition formalism.
Compiler Construction | Lecture 1 | What is a compiler? (Eelco Visser)
This document provides an overview of the CS4200 Compiler Construction course at TU Delft. It discusses the organization of the course into two parts: CS4200-A which covers compiler concepts and techniques through lectures, papers, and homework assignments; and CS4200-B which involves building a compiler for a subset of Java as a semester-long project. Key topics covered include the components of a compiler like parsing, type checking, optimization, and code generation; intermediate representations; and different types of compilers.
Declare Your Language: Virtual Machines & Code Generation (Eelco Visser)
The document summarizes virtual machines and code generation. It discusses how high-level programming languages are abstracted from low-level machine details through virtual machines. The Java Virtual Machine architecture and bytecode instructions are described, including its stack-based design, threads, heap, and method area. Code generation mechanics like string operations are also covered.
Declare Your Language: Dynamic Semantics (Eelco Visser)
This document provides an overview of DynSem, a domain-specific language for specifying the dynamic semantics of programming languages. It discusses the design goals of DynSem in being concise, modular, executable, portable, and high-performance. The document then provides an example DynSem specification for the semantics of a simple language with features like arithmetic, booleans, functions, and mutable variables. It describes how DynSem specifications are organized into modules that can be composed to define the semantics.
Declare Your Language: Constraint Resolution 2 (Eelco Visser)
1) The document discusses unification and the union-find algorithm for efficiently solving unification problems.
2) Unification involves resolving constraints between terms to find a substitution that makes the terms equal. This can have exponential time and space complexity.
3) The union-find algorithm uses set representatives and linking to maintain disjoint sets in near-constant time complexity, improving on naive recursive algorithms.
4) An example shows how union-find can be applied to fully unify a complex expression by progressively linking variables and subterms between the two sides.
Declare Your Language: Constraint Resolution 1 (Eelco Visser)
This document provides an overview of constraint resolution and type checking with constraints. It discusses unification theory and the semantics and implementation of constraint solving. Specifically, it covers:
- Unification algorithms and their properties of soundness, completeness, and principality
- The meaning of constraints and how constraint satisfaction is formally defined using constraint semantics
- How to write a constraint solver using the Constraint Handling Rules (CHR) formalism and the semantics of solving subtyping constraints
- The evaluation rules for CHR-based constraint solving, including solving built-in constraints, introducing new constraints, and simplifying/propagating constraints using rewrite rules
- How the CHR rules for subtyping constraints are evaluated using
14 th Edition of International conference on computer visionShulagnaSarkar2
About the event
14th Edition of International conference on computer vision
Computer conferences organized by ScienceFather group. ScienceFather takes the privilege to invite speakers participants students delegates and exhibitors from across the globe to its International Conference on computer conferences to be held in the Various Beautiful cites of the world. computer conferences are a discussion of common Inventions-related issues and additionally trade information share proof thoughts and insight into advanced developments in the science inventions service system. New technology may create many materials and devices with a vast range of applications such as in Science medicine electronics biomaterials energy production and consumer products.
Nomination are Open!! Don't Miss it
Visit: computer.scifat.com
Award Nomination: https://x-i.me/ishnom
Conference Submission: https://x-i.me/anicon
For Enquiry: Computer@scifat.com
Alluxio Webinar | 10x Faster Trino Queries on Your Data PlatformAlluxio, Inc.
Alluxio Webinar
June. 18, 2024
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Jianjian Xie (Staff Software Engineer, Alluxio)
As Trino users increasingly rely on cloud object storage for retrieving data, speed and cloud cost have become major challenges. The separation of compute and storage creates latency challenges when querying datasets; scanning data between storage and compute tiers becomes I/O bound. On the other hand, cloud API costs related to GET/LIST operations and cross-region data transfer add up quickly.
The newly introduced Trino file system cache by Alluxio aims to overcome the above challenges. In this session, Jianjian will dive into Trino data caching strategies, the latest test results, and discuss the multi-level caching architecture. This architecture makes Trino 10x faster for data lakes of any scale, from GB to EB.
What you will learn:
- Challenges relating to the speed and costs of running Trino in the cloud
- The new Trino file system cache feature overview, including the latest development status and test results
- A multi-level cache framework for maximized speed, including Trino file system cache and Alluxio distributed cache
- Real-world cases, including a large online payment firm and a top ridesharing company
- The future roadmap of Trino file system cache and Trino-Alluxio integration
Superpower Your Apache Kafka Applications Development with Complementary Open...Paul Brebner
Kafka Summit talk (Bangalore, India, May 2, 2024, https://events.bizzabo.com/573863/agenda/session/1300469 )
Many Apache Kafka use cases take advantage of Kafka’s ability to integrate multiple heterogeneous systems for stream processing and real-time machine learning scenarios. But Kafka also exists in a rich ecosystem of related but complementary stream processing technologies and tools, particularly from the open-source community. In this talk, we’ll take you on a tour of a selection of complementary tools that can make Kafka even more powerful. We’ll focus on tools for stream processing and querying, streaming machine learning, stream visibility and observation, stream meta-data, stream visualisation, stream development including testing and the use of Generative AI and LLMs, and stream performance and scalability. By the end you will have a good idea of the types of Kafka “superhero” tools that exist, which are my favourites (and what superpowers they have), and how they combine to save your Kafka applications development universe from swamploads of data stagnation monsters!
Orca: Nocode Graphical Editor for Container OrchestrationPedro J. Molina
Tool demo on CEDI/SISTEDES/JISBD2024 at A Coruña, Spain. 2024.06.18
"Orca: Nocode Graphical Editor for Container Orchestration"
by Pedro J. Molina PhD. from Metadev
Enhanced Screen Flows UI/UX using SLDS with Tom KittPeter Caitens
Join us for an engaging session led by Flow Champion, Tom Kitt. This session will dive into a technique of enhancing the user interfaces and user experiences within Screen Flows using the Salesforce Lightning Design System (SLDS). This technique uses Native functionality, with No Apex Code, No Custom Components and No Managed Packages required.
WMF 2024 - Unlocking the Future of Data Powering Next-Gen AI with Vector Data...Luigi Fugaro
Vector databases are transforming how we handle data, allowing us to search through text, images, and audio by converting them into vectors. Today, we'll dive into the basics of this exciting technology and discuss its potential to revolutionize our next-generation AI applications. We'll examine typical uses for these databases and the essential tools
developers need. Plus, we'll zoom in on the advanced capabilities of vector search and semantic caching in Java, showcasing these through a live demo with Redis libraries. Get ready to see how these powerful tools can change the game!
The Power of Visual Regression Testing_ Why It Is Critical for Enterprise App...kalichargn70th171
Visual testing plays a vital role in ensuring that software products meet the aesthetic requirements specified by clients in functional and non-functional specifications. In today's highly competitive digital landscape, users expect a seamless and visually appealing online experience. Visual testing, also known as automated UI testing or visual regression testing, verifies the accuracy of the visual elements that users interact with.
Unlock the Secrets to Effortless Video Creation with Invideo: Your Ultimate G...The Third Creative Media
"Navigating Invideo: A Comprehensive Guide" is an essential resource for anyone looking to master Invideo, an AI-powered video creation tool. This guide provides step-by-step instructions, helpful tips, and comparisons with other AI video creators. Whether you're a beginner or an experienced video editor, you'll find valuable insights to enhance your video projects and bring your creative ideas to life.
Nashik's top web development company, Upturn India Technologies, crafts innovative digital solutions for your success. Partner with us and achieve your goals
Streamlining End-to-End Testing Automation with Azure DevOps Build & Release Pipelines
Automating end-to-end (e2e) test for Android and iOS native apps, and web apps, within Azure build and release pipelines, poses several challenges. This session dives into the key challenges and the repeatable solutions implemented across multiple teams at a leading Indian telecom disruptor, renowned for its affordable 4G/5G services, digital platforms, and broadband connectivity.
Challenge #1. Ensuring Test Environment Consistency: Establishing a standardized test execution environment across hundreds of Azure DevOps agents is crucial for achieving dependable testing results. This uniformity must seamlessly span from Build pipelines to various stages of the Release pipeline.
Challenge #2. Coordinated Test Execution Across Environments: Executing distinct subsets of tests using the same automation framework across diverse environments, such as the build pipeline and specific stages of the Release Pipeline, demands flexible and cohesive approaches.
Challenge #3. Testing on Linux-based Azure DevOps Agents: Conducting tests, particularly for web and native apps, on Azure DevOps Linux agents lacking browser or device connectivity presents specific challenges in attaining thorough testing coverage.
This session delves into how these challenges were addressed through:
1. Automate the setup of essential dependencies to ensure a consistent testing environment.
2. Create standardized templates for executing API tests, API workflow tests, and end-to-end tests in the Build pipeline, streamlining the testing process.
3. Implement task groups in Release pipeline stages to facilitate the execution of tests, ensuring consistency and efficiency across deployment phases.
4. Deploy browsers within Docker containers for web application testing, enhancing portability and scalability of testing environments.
5. Leverage diverse device farms dedicated to Android, iOS, and browser testing to cover a wide range of platforms and devices.
6. Integrate AI technology, such as Applitools Visual AI and Ultrafast Grid, to automate test execution and validation, improving accuracy and efficiency.
7. Utilize AI/ML-powered central test automation reporting server through platforms like reportportal.io, providing consolidated and real-time insights into test performance and issues.
These solutions not only facilitate comprehensive testing across platforms but also promote the principles of shift-left testing, enabling early feedback, implementing quality gates, and ensuring repeatability. By adopting these techniques, teams can effectively automate and execute tests, accelerating software delivery while upholding high-quality standards across Android, iOS, and web applications.
Penify - Let AI do the Documentation, you write the Code.KrishnaveniMohan1
Penify automates the software documentation process for Git repositories. Every time a code modification is merged into "main", Penify uses a Large Language Model to generate documentation for the updated code. This automation covers multiple documentation layers, including InCode Documentation, API Documentation, Architectural Documentation, and PR documentation, each designed to improve different aspects of the development process. By taking over the entire documentation process, Penify tackles the common problem of documentation becoming outdated as the code evolves.
https://www.penify.dev/
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
Secure-by-Design Using Hardware and Software Protection for FDA ComplianceICS
This webinar explores the “secure-by-design” approach to medical device software development. During this important session, we will outline which security measures should be considered for compliance, identify technical solutions available on various hardware platforms, summarize hardware protection methods you should consider when building in security and review security software such as Trusted Execution Environments for secure storage of keys and data, and Intrusion Detection Protection Systems to monitor for threats.
Ensuring Efficiency and Speed with Practical Solutions for Clinical OperationsOnePlan Solutions
Clinical operations professionals encounter unique challenges. Balancing regulatory requirements, tight timelines, and the need for cross-functional collaboration can create significant internal pressures. Our upcoming webinar will introduce key strategies and tools to streamline and enhance clinical development processes, helping you overcome these challenges.
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
React.js, a JavaScript library developed by Facebook, has gained immense popularity for building user interfaces, especially for single-page applications. Over the years, React has evolved and expanded its capabilities, becoming a preferred choice for mobile app development. This article explores why React.js is an excellent choice for a mobile app development company in Noida.
For more information: https://www.linkedin.com/pulse/what-makes-reactjs-stand-out-mobile-app-development-rajesh-rai-pihvf/
Compiler Construction | Lecture 3 | Syntactic Editor Services
1. Lecture 3: Syntactic Editor Services
CS4200 Compiler Construction
Eelco Visser
TU Delft
September 2018
2. Lexical syntax
- defining the syntax of tokens / terminals including layout
- making lexical syntax and layout explicit
Syntactic editor services
- more interpretations of syntax definitions
Formatting specification
- how to map (abstract syntax) trees to text
Syntactic completion
- proposing valid syntactic completions in an editor
!2
This Lecture
4. !4
The inverse of parsing is unparsing, pretty-printing, or formatting: mapping a tree representation of a program to a textual representation. A plain context-free grammar can be used as a specification of an unparser; however, it is then unclear where the whitespace should go.
This paper extends context-free grammars with templates
that provide hints for layout of program text when
formatting.
https://doi.org/10.1145/2427048.2427056
Based on master’s thesis project of Tobi Vollebregt
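To make the contrast concrete, here is a minimal SDF3 sketch (the production is illustrative, echoing the Tiger while-expression that appears later in this lecture). A plain production says nothing about whitespace; in a template production the line break and indentation in the quoted text are the formatting hints:
context-free syntax // plain production: no layout information
Exp.While = "while" Exp "do" Exp
context-free syntax // template production: whitespace guides the formatter
Exp.While = <
  while <Exp> do
    <Exp>
>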
5. !5
Syntax definitions can be used not just for parsing, but for many other operations as well. This paper shows how syntactic completion can be provided generically, given a syntax definition.
https://doi.org/10.1145/2427048.2427056
Part of PhD thesis work Eduardo Amorim
6. !6
The SDF3 syntax definition formalism is documented at the metaborg.org website.
http://www.metaborg.org/en/latest/source/langdev/meta/lang/sdf3/index.html
14. Core language
- context-free grammar productions
- with constructors
- only character classes as terminals
- explicit definition of layout
Desugaring
- express lexical syntax in terms of character classes
- explicate layout between context-free syntax symbols
- separate lexical and context-free syntax non-terminals
!14
Explication of Lexical Syntax
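As a sketch of what explication does (the Add production is an invented example): normalization to the core language renames sorts with -CF and -LEX suffixes and inserts optional layout between the symbols of every context-free production, roughly:
context-free syntax
Exp.Add = Exp "+" Exp
syntax // after explication (roughly)
Exp-CF.Add = Exp-CF LAYOUT?-CF "+" LAYOUT?-CF Exp-CF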
16. Separation of Lexical and Context-free Syntax
!16
syntax
Id-LEX = [\65-\90\97-\122] [\48-\57\65-\90\95\97-\122]*-LEX
Id-LEX = "if" {reject}
Id-LEX = "then" {reject}
Id-CF = Id-LEX
Exp-CF.Var = Id-CF
lexical syntax
Id = [a-zA-Z] [a-zA-Z0-9\_]*
Id = "if" {reject}
Id = "then" {reject}
context-free syntax
Exp.Var = Id
17. syntax
Id = [\65-\90\97-\122] [\48-\57\65-\90\95\97-\122]*
Id = "if" {reject}
Id = "then" {reject}
Exp.Var = Id
Why Separation of Lexical and Context-Free Syntax?
!17
Homework: what would go wrong if we did not do this?
lexical syntax
Id = [a-zA-Z] [a-zA-Z0-9\_]*
Id = "if" {reject}
Id = "then" {reject}
context-free syntax
Exp.Var = Id
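A hint for the homework (the explicated form below is a rough sketch): if Id were declared in context-free syntax, layout explication would insert optional layout between its symbols:
syntax // rough sketch of the unwanted explication
Id-CF = [\65-\90\97-\122] LAYOUT?-CF [\48-\57\65-\90\95\97-\122]*-CF
With that production, layout (spaces, line breaks, even comments) could appear inside a single identifier, so "f oo" could parse as one Id.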
25. Generated Syntax Coloring
!25
// compute the n-th fibonacci number
let function fib(n: int): int =
if n <= 1 then 1
else fib(n - 1) + fib(n - 2)
in fib(10)
end
module libspoofax/color/default
imports
libspoofax/color/colors
colorer // Default, token-based highlighting
keyword : 127 0 85 bold
identifier : default
string : blue
number : darkgreen
var : 139 69 19 italic
operator : 0 0 128
layout : 63 127 95 italic
26. Customized Syntax Coloring
!26
module Tiger-Colorer
colorer
red = 255 0 0
green = 0 255 0
blue = 0 0 255
TUDlavender = 123 160 201
colorer token-based highlighting
keyword : red
Id : TUDlavender
StrConst : darkgreen
TypeId : blue
layout : green
// compute the n-th fibonacci number
let function fib(n: int): int =
if n <= 1 then 1
else fib(n - 1) + fib(n - 2)
in fib(10)
end
28. !28
The inverse of parsing is unparsing, pretty-printing, or formatting: mapping a tree representation of a program to a textual representation. A plain context-free grammar can be used as a specification of an unparser; however, it is then unclear where the whitespace should go.
This paper extends context-free grammars with templates
that provide hints for layout of program text when
formatting.
https://doi.org/10.1145/2427048.2427056
Based on master’s thesis project of Tobi Vollebregt
31. From ASTs to text
- insert keywords
- insert layout: spaces, line breaks, indentation
- insert parentheses to preserve tree structure
Unparser
- derive transformation rules from context-free grammar
- keywords, literals defined in grammar productions
- parentheses determined by priority, associativity rules
- separate all symbols by a space => output is neither pretty nor even readable
Pretty-printer
- introduce spaces, line breaks, and indentation to produce readable text
- doing that manually is tedious
!31
Pretty-Printing
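A small worked example of parenthesization (the term notation is illustrative). Assume multiplication has higher priority than addition; unparsing the following tree without parentheses yields text that re-parses to a different tree:
Times(Plus(Var("a"), Var("b")), Var("c"))
// naive unparsing: a + b * c, which re-parses as Plus(Var("a"), Times(Var("b"), Var("c")))
// correct unparsing: (a + b) * c, preserving the tree structure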
32. Specifying Formatting Layout with Templates
!32
context-free syntax
Exp.Seq = <
  (
    <{Exp ";\n"}*>
  )
>
Exp.If = <
  if <Exp> then
    <Exp>
  else
    <Exp>
>
Exp.IfThen = <
  if <Exp> then
    <Exp>
>
Exp.While = <
  while <Exp> do
    <Exp>
>
Inverse quotation
- template quotes literal text with <>
- anti-quotations insert non-terminals with <>
Layout directives
- whitespace (line breaks, indentation, spaces) in the template guides formatting
- is interpreted as LAYOUT? for parsing
Formatter generation
- generate rules for mapping AST to text (via box expressions)
Applications
- code generation; pretty-printing generated AST
- syntactic completions
- formatting
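For example (the term notation is illustrative), the generated formatter maps an If term to text following the whitespace of the Exp.If template above:
If(Var("x"), Int("1"), Int("2"))
// pretty-prints as:
if x then
  1
else
  2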
34. Templates for Tiger: Functions
!34
context-free syntax
Dec.FunDecs = <<{FunDec "\n"}+>> {longest-match}
FunDec.ProcDec = <
  function <Id>(<{FArg ", "}*>) =
    <Exp>
>
FunDec.FunDec = <
  function <Id>(<{FArg ", "}*>) : <Type> =
    <Exp>
>
FArg.FArg = <<Id> : <Type>>
Exp.Call = <<Id>(<{Exp ", "}*>)>
No space after function name in call
Space after comma!
Function declarations separated by newline
Indent body of function
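For instance (the term notation is illustrative), the Exp.Call template yields:
Call("fib", [Int("10"), Var("n")])
// formats as: fib(10, n)
// no space before the parenthesis, one space after each comma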
35. Templates for Tiger: Bindings and Records
!35
context-free syntax
Exp.Let = <
  let
    <{Dec "\n"}*>
  in
    <{Exp ";\n"}*>
  end
>
context-free syntax // records
Type.RecordTy = <
  {
    <{Field ",\n"}*>
  }
>
Field.Field = <<Id> : <TypeId>>
Exp.NilExp = <nil>
Exp.Record = <<TypeId>{ <{InitField ", "}*> }>
InitField.InitField = <<Id> = <Exp>>
LValue.FieldVar = <<LValue>.<Id>>
Note spacing / layout in separators
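As an example (the term notation is illustrative), the Type.RecordTy template puts each field on its own line, with the comma attached to the preceding field:
RecordTy([Field("name", "string"), Field("age", "int")])
// formats as:
{
  name : string,
  age : int
}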
43. Tiger Syntax: Whitespace & Comments
!43
module Whitespace
lexical syntax
LAYOUT = [\ \t\n\r]
context-free restrictions
LAYOUT? -/- [\ \t\n\r]
module Comments
lexical syntax // multiline comments
CommentChar = [\*]
LAYOUT = "/*" InsideComment* "*/"
InsideComment = ~[\*]
InsideComment = CommentChar
lexical restrictions
CommentChar -/- [\/]
context-free restrictions
LAYOUT? -/- [\/].[\/]
lexical syntax // single line comments
LAYOUT = "//" ~[nr]* NewLineEOF
NewLineEOF = [nr]
NewLineEOF = EOF
EOF =
// end of file since it cannot be followed by any character
// avoids the need for a newline to close a single line comment
// at the last line of a file
lexical restrictions
EOF -/- ~[]
context-free restrictions
LAYOUT? -/- [\/].[\*]
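How the restrictions fit together (the comment strings below are illustrative examples): the lexical restriction CommentChar -/- [\/] forbids a '*' inside a comment to be followed by '/', so the only way to consume '*/' is as the closing literal. The context-free restrictions LAYOUT? -/- [\/].[\/] and LAYOUT? -/- [\/].[\*] enforce longest match: layout may not stop right before '//' or '/*', so comments are always absorbed into the surrounding layout.
/* a * b */ is accepted: the inner '*' is followed by ' ', not '/'
/* a */ x /* b */ contains two comments: the first one closes at the first '*/'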
48. !48
Syntax definitions can be used not just for parsing, but for many other operations as well. This paper shows how syntactic completion can be provided generically, given a syntax definition.
https://doi.org/10.1145/2427048.2427056
Part of PhD thesis work Eduardo Amorim
49. !49
S. Amann, S. Proksch, S. Nadi, and M. Mezini. A study of Visual Studio usage in practice. In SANER, 2016.
96. Syntax coloring
- ESV: mapping token sorts to colors
Formatting
- Unparsing: mapping from abstract syntax to concrete syntax
- Pretty-printing: derived from templates
- Parenthesization: derived from disambiguation declarations
Completion
- Make incompleteness explicit, part of the structure
- Generating proposals: derived from structure
- Pretty-printing proposals: derived from templates
!96
Syntactic Services from Syntax Definition
98. !98
Compilers: Principles, Techniques, and Tools, 2nd Edition
Alfred V. Aho, Columbia University
Monica S. Lam, Stanford University
Ravi Sethi, Avaya Labs
Jeffrey D. Ullman, Stanford University
2007 | Pearson
Classical compiler textbook
Chapter 4: Syntax Analysis