The document discusses a lecture on pretty printing and declarative syntax definition. It compares SDF to context-free grammars, noting that SDF provides additional descriptive means such as regular expressions and layout, and explains why these additional features are needed for compiler construction.
The document provides information about a bioinformatics practicum covering topics like installing and using TextPad, regular expressions (regex), arrays/hashes, variables, flow control, loops, input/output, subroutines, and the three basic Perl data types of scalars, arrays, and hashes. It also discusses customizing TextPad, calculating Pi using random numbers, Buffon's needle problem, programming concepts like warnings and strict mode, what a regex is and why you would use one, regex atoms and quantifiers, anchors, grouping, alternation, and variable interpolation. Pattern matching, memory parentheses, finding all matches, greediness, the substitute function, and the translate function are also summarized.
This document provides an introduction and overview of regular expressions (regex). It begins with an introduction to regex and how they are used. It then outlines the table of contents which covers literal characters, character classes, anchors, boundaries, alternation, optional and repetitive items, and grouping. The document discusses topics like literal characters, special characters, character classes, shorthand classes, negated classes, metacharacters inside classes, dot matching, and start/end of string anchors. It provides examples and explanations of how regex engines work to match patterns.
Introduction to Regular Expressions (RootsTech 2013), by Ben Brumfield
This document provides an introduction to regular expressions. It defines regular expressions as a small language for describing text patterns. Regular expressions allow for powerful search and replace operations in text. The document outlines some basic regular expression syntax including characters, character classes, quantifiers, anchors, grouping, alternation, and capture groups. It also discusses using regular expressions for tasks like validation, reformatting, and search/replace.
Regular expressions (regex) allow defining patterns to match in text strings using special characters like ., *, + to match types of characters or ranges. They have components like literal characters, character classes, shorthand classes, quantifiers and modifiers that define what is matched. PHP supports both PCRE and POSIX regex with functions like preg_match() and ereg_replace() and there are online tools to test and learn regular expressions.
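The special characters mentioned above behave the same way across PCRE-based engines. As a minimal sketch (in Python rather than PHP, though the semantics of `.`, `*`, `+`, and character classes carry over to `preg_match()`):

```python
import re

# '.' matches any single character except newline, '*' means zero or
# more of the preceding item, '+' means one or more, and [0-9] is a
# character class matching any digit.
assert re.fullmatch(r"a.c", "abc") is not None      # '.' matches the 'b'
assert re.fullmatch(r"ab*c", "ac") is not None      # '*' permits zero 'b's
assert re.fullmatch(r"ab+c", "abbc") is not None    # '+' needs at least one 'b'
assert re.fullmatch(r"ab+c", "ac") is None          # ...so zero 'b's fails
assert re.fullmatch(r"[0-9]+", "2013") is not None  # repeated digit class
```

The same patterns can be pasted into the online regex testers the document mentions to watch the engine step through each match.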
Regular expressions are patterns used to match strings in programs like sed, awk, grep, and others. They descend from finite automata theory and are used for tasks like searching text files, lexical analysis of programming languages, and early web search engines. In Linux systems, regular expressions are used by commands like sed, grep, and vi. Sed can perform editing operations on lines that match regular expression addresses.
Tutorial on Regular Expression in Perl (perldoc perlretut), by FrescatiStory
This document provides a tutorial on regular expressions (regexps) in Perl. It begins by explaining that regexps describe patterns that can be used to search and extract parts of strings. The tutorial then covers basic regexp concepts like word matching, character classes that match multiple characters, and anchor characters "^" and "$" to match starts or ends of strings. It provides many examples of using simple regexps to search strings and emulates the Unix grep command as an example Perl program.
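The grep emulation the tutorial builds in Perl can be sketched in a few lines of Python (used here only for illustration; the original is a Perl program). A line is printed when the pattern matches anywhere in it, and the `^` anchor restricts matches to the start of the line:

```python
import re

def grep(pattern, lines):
    # Keep the lines that contain a match, like a minimal Unix grep.
    regex = re.compile(pattern)
    return [line for line in lines if regex.search(line)]

text = ["cat and dog", "dogma", "catalog", "a dog"]
print(grep(r"^cat", text))  # → ['cat and dog', 'catalog']
```

Swapping in `r"dog$"` shows the other anchor: only lines ending in "dog" survive.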
This document provides an overview of regular expressions and the grep command in Unix/Linux. It defines what regular expressions are, describes common regex patterns like characters, character classes, anchors, repetition, and groups. It also explains the differences between the grep, egrep, and fgrep commands and provides examples of using grep with regular expressions to search files.
Lexing and parsing involves breaking down input like code, markup languages, or configuration files into individual tokens and analyzing the syntax and structure according to formal grammars. Common techniques include using lexer generators to tokenize input and parser generators to construct parse trees and abstract syntax trees based on formal grammars. While regular expressions are sometimes useful, lexers and parsers are better suited for many formal language tasks and ensure well-formed syntax.
This document introduces regular expressions (regex), which are patterns used to match character combinations in strings. Regex can be used for text search/replace, validation, and other string manipulation tasks. The basics covered include matching characters, character classes, quantifiers, grouping, alternation, anchors, and capturing subgroups for replacement. Examples demonstrate matching names, dates, URLs, and other patterns.
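One of the patterns such introductions typically walk through is date matching with capturing subgroups. A hedged sketch in Python (the exact examples in the document may differ):

```python
import re

# Match an ISO-style date (YYYY-MM-DD) and capture year, month, day
# as separate groups for later replacement or reformatting.
date_re = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

m = date_re.search("RootsTech ran on 2013-03-21 in Salt Lake City")
year, month, day = m.groups()
```

The captured groups are what make search-and-replace reformatting possible, e.g. rewriting the date as `21/03/2013` from the same match.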
This document discusses context-free grammars and context-free languages. It provides examples of context-free grammars and derivations of sentences using those grammars. A context-free grammar is defined as having productions of the form A → α, where A is a single variable (nonterminal) and α is a string of variables and terminals. A language is context-free if there exists a context-free grammar that generates it.
Regular expressions (regex) allow complex pattern matching in text. The document discusses regex basics like literals, character classes, quantifiers, and flags in Python. It explains how to use the re module to compile patterns into RegexObjects and search/match strings. RegexObjects provide reusable compiled patterns, while the re module's shortcut functions compile on the fly and cache the compiled patterns internally.
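The two styles of using the `re` module look like this (note that what older Python documentation called a RegexObject is the `re.Pattern` type in current Python):

```python
import re

# Explicit compilation gives a reusable pattern object...
word = re.compile(r"\w+")
assert word.findall("to be or not") == ["to", "be", "or", "not"]

# ...while the module-level shortcuts compile on first use and cache
# the compiled pattern internally, so repeated calls stay cheap.
assert re.findall(r"\w+", "to be or not") == ["to", "be", "or", "not"]
```

For a pattern used in a hot loop, compiling once up front also keeps the pattern's name and intent visible at the top of the module.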
Regular expressions provide a concise way to match patterns in text. They work by converting the regex into a state machine that can efficiently search a string to find matches. Important regex syntax includes quantifiers like *, +, ?, character classes like [a-z], and anchors like ^ and $. Regular expression engines turn the regex pattern into a program that can search strings. Thompson's NFA construction algorithm is commonly used to build the state machine from a regex for efficient matching.
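The heart of a Thompson-style engine is simulating a set of NFA states in lockstep over the input. The sketch below hand-builds the NFA for `a(b|c)*d` rather than deriving it from a parsed regex (the states and transitions are illustrative, not the output of a real Thompson construction), but the state-set simulation is the same idea that gives linear-time matching:

```python
# Hand-built NFA for the regex a(b|c)*d, with epsilon transitions.
EPS = ""
nfa = {
    0: {"a": {1}},
    1: {EPS: {2, 4}},          # enter the (b|c)* loop or skip it
    2: {"b": {3}, "c": {3}},
    3: {EPS: {2, 4}},          # loop back or leave the star
    4: {"d": {5}},
}
ACCEPT = {5}

def eps_closure(states):
    # All states reachable via epsilon transitions alone.
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in nfa.get(s, {}).get(EPS, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def matches(text):
    current = eps_closure({0})
    for ch in text:
        step = set()
        for s in current:
            step |= nfa.get(s, {}).get(ch, set())
        current = eps_closure(step)
    return bool(current & ACCEPT)
```

Because the simulation tracks a *set* of states, it never backtracks: each input character is examined once, regardless of how ambiguous the pattern is.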
Brogramming - Python, Bash for Data Processing, and Git, by Ron Reiter
The document discusses Python coding conventions and best practices including PEP 8. It covers topics like formatting, spacing, imports, and other stylistic guidelines to make Python code more readable and consistent across projects. The Zen of Python philosophy emphasizes simple and explicit code over complex solutions. Git concepts like merge vs rebase and Git flow are also briefly mentioned.
This document provides an introduction to regular expressions (regexes). It explains that regexes describe patterns of text that can be used to search for and replace text. It covers basic regex syntax like literals, wildcards, anchors, quantifiers, character sets, flags, backreferences, and the RegExp object. It also discusses using regexes in JavaScript string methods, text editors, and command line tools.
Regular expressions allow matching and manipulation of textual data. They were first described by mathematician Stephen Kleene, and an efficient regex search algorithm was published by Ken Thompson in 1968 and later used in tools like ed, grep, and sed. Regular expressions follow certain grammars and use metacharacters to match patterns in text. They are used for tasks like validation, parsing, and data conversion.
This document provides an overview of regular expressions including what they are, their history and usage, common patterns and syntax, and examples of using regular expressions in Java. Regular expressions allow complex searches and text manipulation through special pattern syntax. They are very powerful for tasks like validation, extraction, replacement and more. The document covers topics such as character classes, quantifiers, capturing groups, boundaries, and internationalization considerations.
This document provides an overview and introduction to using regular expressions (regex) in Perl. It discusses the basic building blocks of regex patterns including characters, character classes, quantifiers, anchors, grouping, alternation, and interpolation. It explains how to use the binding operator (=~) to apply a regex to a variable. It also covers retrieving matched substrings using pattern memory ($1, $2 etc.) and finding all matches using the 'g' modifier. The document demonstrates substituting text using the s/// function and provides an example of the tr/// function.
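The Perl idioms summarized above have close counterparts in other languages. For consistency with the rest of this page, here is a hedged Python rendering of the same four ideas (pattern memory `$1`/`$2`, the `/g` modifier, `s///`, and `tr///`); the sample string is invented for illustration:

```python
import re

s = "gatc gattaca"

# Pattern memory: Perl's $1 and $2 correspond to match groups.
m = re.search(r"(ga)(tc)", s)
assert m.group(1) == "ga" and m.group(2) == "tc"

# Perl's /g modifier: collect every match, not just the first.
assert re.findall(r"ga", s) == ["ga", "ga"]

# s/// is substitution; tr/// is character-for-character translation.
assert re.sub(r"a", "A", s) == "gAtc gAttAcA"
assert s.translate(str.maketrans("at", "ta")) == "gtac gtaatct"
```

As in Perl, the translation step is not a regex at all: it maps individual characters, which is why it is a separate operator (`tr///` there, `str.translate` here).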
This document contains multiple Bible verses from Romans, 1 Peter, Galatians, and Philippians. It discusses being conformed to the image of God's son and lists the fruit of the Spirit, including love, joy, peace, patience, kindness, goodness, faithfulness, gentleness and self-control. It then provides definitions for each of the fruits of the Spirit.
In this soliloquy from Shakespeare's Hamlet, the protagonist contemplates suicide as an escape from his troubles. He considers whether it is better to endure life's struggles or take arms against them through death. While Hamlet hates life and desires death, he is uncertain about what may come after death. Through the use of structure, diction, and metaphors, Shakespeare depicts Hamlet's dislike of life, attraction to death, and fear of the unknown afterlife. It is this uncertainty that prevents Hamlet from committing suicide to end his suffering.
This document provides context and analysis for William Shakespeare's play Hamlet. It begins with assignment details for an essay on the play. It then provides background on Shakespeare, the time period, and an overview of the major plot points and characters in Hamlet. The document analyzes and summarizes key scenes and speeches to unpack themes of deception, grief, madness, and the difficulty of discerning truth. It cautions against oversimplified readings of Hamlet and encourages engaging one's imagination to understand the complexities of the characters and their situations.
This document summarizes the temptation scene in John Milton's epic poem Paradise Lost. It describes how Satan, in the form of a serpent, convinces Eve to eat the forbidden fruit from the Tree of Knowledge in the Garden of Eden. Satan is attracted to Eve's beauty and uses flattery and deception and questions God's warnings to tempt her into eating the fruit, which leads to the fall of man.
Annotations are more than phpdoc comments, they're a fully-featured way of including additional information alongside your code. We might have rejected an RFC to add support into the PHP core, but the community has embraced this tool anyway! This session shows you who is doing what with annotations, and will give you some ideas on how to use the existing tools in your own projects to keep life simple. Developers, architects and anyone responsible for the technical direction of an application should attend this session.
The document discusses formal grammars and their applications in language specification and parsing. It introduces key concepts such as formal grammars, derivation, terminal and non-terminal symbols, and different types of grammars including context-sensitive, context-free and regular grammars. It also discusses applications of formal grammars in parsing and how they relate to theoretical computer science concepts like decidability and complexity of the word problem for different grammar types.
Declarative Syntax Definition - Grammars and Trees, by Guido Wachsmuth
This lecture lays the theoretic foundations for declarative syntax formalisms and syntax-based language processors, which we will discuss later in the course. We introduce the notions of formal languages, formal grammars, and syntax trees, starting from Chomsky's work on formal grammars as generative devices.
We start with a formal model of languages and investigate formal grammars and their derivation relations as finite models of infinite productivity. We further discuss several classes of formal grammars and their corresponding classes of formal languages. In a second step, we introduce the word problem, analyse its decidability and complexity for different classes of formal languages, and discuss consequences of this analysis on language processing. We conclude the lecture with a discussion about parse tree construction, abstract syntax trees, and ambiguities.
This document discusses communicating with aliens using lambda calculus. It explains that lambda calculus is a minimal functional programming language that can represent logic, mathematics, computation, and theorems. Everything in lambda calculus is a function, and it provides a way to define and apply functions. Basic concepts like boolean logic and arithmetic can be modeled. The document suggests that if aliens understand logic and modus ponens, they would understand lambda calculus as it is a universal system discovered from fundamentals of logic. It then provides examples of lambda calculus functions and applications.
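The claim that booleans and arithmetic can be modeled with nothing but functions is easy to demonstrate. Below is a sketch of the standard Church encodings written as Python lambdas (Python stands in for raw lambda calculus here; the encodings themselves are the classical ones):

```python
# Church booleans: a boolean is a function choosing between two arguments.
TRUE  = lambda t: lambda f: t
FALSE = lambda t: lambda f: f
AND   = lambda p: lambda q: p(q)(p)
NOT   = lambda p: lambda t: lambda f: p(f)(t)

# Church numerals: the number n is "apply f to x, n times".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

def to_bool(p):
    # Decode a Church boolean by letting it choose between True and False.
    return p(True)(False)

def to_int(n):
    # Decode a Church numeral by counting how many times f is applied.
    return n(lambda k: k + 1)(0)
```

Everything here is a one-argument function returning another function, which is exactly the discipline of the pure lambda calculus.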
Shows some advanced REXX techniques to make your programs more efficient and more readable for easier debugging. Also describes some tips for creating file and program structures not discussed in a typical REXX class.
Eitaro Fukamachi presents CL21, a redesign of Common Lisp for the 21st century. CL21 aims to improve Common Lisp's consistency, expressiveness, compatibility, and efficiency. It focuses on simplifying naming conventions, removing unnecessary symbols, and making the language more suitable for modern use while maintaining 100% compatibility with Common Lisp code and libraries. The project is still in development with discussions ongoing about final syntax and standard library decisions. CL21 hopes to make Lisp a premier language for prototyping by building on Common Lisp's strengths.
This document describes the structure and components of a simple one-pass compiler to generate code for the Java Virtual Machine (JVM). It discusses lexical analysis, syntax-directed translation, predictive parsing, and code generation. The compiler consists of a lexical analyzer, syntax-directed translator using a context-free grammar, and parser/code generator to develop for the translator. It provides examples of attribute grammars, translation schemes, and techniques for handling ambiguity, precedence, and left recursion in parsing.
This document discusses parsing techniques for programming languages. It begins by defining regular expressions and context-free grammars. It then describes LL(1) parsing which is a top-down parsing technique that uses prediction sets to parse in linear time. The document provides an example of an LL(1) grammar and parsing table. It also discusses problems that can arise with making grammars LL(1) and techniques for resolving them. The document concludes by introducing LR parsing as a bottom-up technique that uses states and shift-reduce actions to parse in linear time.
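LL(1) parsing can be sketched concretely as recursive descent over the classic expression grammar with left recursion removed; a single token of lookahead decides every production. This is an illustrative sketch, not the grammar or table from the document:

```python
# Grammar (left recursion eliminated):
#   E  -> T E'      E' -> + T E' | epsilon
#   T  -> F T'      T' -> * F T' | epsilon
#   F  -> ( E ) | digit
def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(tok):
        nonlocal pos
        assert peek() == tok, f"expected {tok!r}, got {peek()!r}"
        pos += 1

    def expr():
        value = term()
        while peek() == "+":     # lookahead '+' predicts E' -> + T E'
            eat("+")
            value += term()
        return value

    def term():
        value = factor()
        while peek() == "*":     # lookahead '*' predicts T' -> * F T'
            eat("*")
            value *= factor()
        return value

    def factor():
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        value = int(peek())      # single-digit operands keep the sketch short
        eat(peek())
        return value

    result = expr()
    assert peek() is None, "trailing input"
    return result
```

Note how eliminating the left recursion turned `E -> E + T` into a loop: the `while` on the lookahead token is exactly the `E'` tail production, and it also makes `+` and `*` associate and bind with the expected precedence.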
LEX and YACC are software development tools used for lexical analysis and parsing. LEX is a lexical analyzer generator that accepts an input specification defining lexical units and associated semantic actions. It generates a translator containing tables of lexical units and tokens. YACC is a parser generator that accepts a grammar specification and actions for the language being compiled. It produces a bottom-up parser that uses shift-reduce parsing. These tools allow programmers to specify the syntax of a language and generate code to analyze programs in that language.
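The specification half of a LEX input, an ordered list of pattern/action rules, can be imitated with one combined regex of named groups. This is a sketch in Python of the idea, not LEX's actual generated tables:

```python
import re

# Ordered token rules, first match wins at each position (like LEX,
# minus its longest-match tie-breaking across rules).
RULES = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+*=()-]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in RULES))

def tokenize(src):
    # Emit (token name, lexeme) pairs, discarding whitespace.
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(src)
            if m.lastgroup != "SKIP"]
```

The resulting token stream is exactly what a YACC-style parser consumes: the parser never sees characters, only named tokens.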
The document discusses syntax definition in programming languages. It provides examples of lexical syntax, context-free syntax, abstract syntax trees, disambiguation, and testing syntax definitions using SDF3 and Spoofax. Key topics covered include regular expressions, Backus-Naur Form, Extended Backus-Naur Form, SDF3 syntax, Spoofax architecture for language implementation, and syntax processing tools like parsers, pretty-printers and compilers.
This document discusses Flex and Bison, which are tools for generating lexical analyzers and parsers.
Flex is used for recognizing regular expressions and dividing input streams into tokens. Bison is used for building programs that handle structured input by taking tokens from Flex and grouping them logically based on context-free grammars.
The document explains key concepts such as shift-reduce parsing, lookahead, leftmost and rightmost derivations, and how to specify grammars, tokens, types, rules and actions in a Bison specification to build a parser. It also covers ambiguity, conflicts and how precedence and associativity help resolve shift-reduce conflicts.
Unicode regular expression tutorial with examples in Perl, PHP, and JavaScript.
Presented at: Shutterstock “Brown Bag Lunch” Tech Talk, 23 January 2013, New York, NY
Compiler Components and their Generators - Traditional Parsing Algorithms, by Guido Wachsmuth
This document discusses parsing algorithms for compilers. It begins with an overview of topics to be covered, including lexical analysis, parsing algorithms like predictive and LR parsing, grammar classes, and an assignment on implementing a MiniJava compiler. It then covers predictive parsing in more detail, including how to generate parsing tables from grammars and how to use these tables in a predictive parsing automaton. Finally, it discusses LR parsing and how it can handle issues like left recursion that predictive parsing cannot. It provides an example of an LR parsing step involving expression evaluation.
Declarative Semantics Definition - Term Rewriting, by Guido Wachsmuth
This document discusses term rewriting and its applications in compiler construction. It covers term rewriting systems, rewrite rules that transform terms, and rewrite strategies that control rule application. Examples are provided for desugaring code using rewrite rules and constant folding arithmetic expressions using rewrite rules and strategies. Stratego is presented as a domain-specific language for program transformation based on term rewriting.
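Constant folding by term rewriting can be sketched in a few lines: terms are nested tuples, rewrite rules match term shapes, and an innermost strategy applies rules bottom-up. This is a hand-rolled illustration of the idea, not Stratego syntax:

```python
# Terms look like ("Add", ("Int", 1), ("Int", 2)) or ("Var", "x").
def fold(term):
    # Innermost strategy: rewrite the children first, then the root.
    if not isinstance(term, tuple):
        return term
    op, *args = term
    term = (op, *[fold(a) for a in args])

    # Rewrite rules: Add(Int(a), Int(b)) -> Int(a + b), same for Mul.
    if term[0] == "Add" and term[1][0] == term[2][0] == "Int":
        return ("Int", term[1][1] + term[2][1])
    if term[0] == "Mul" and term[1][0] == term[2][0] == "Int":
        return ("Int", term[1][1] * term[2][1])
    return term                  # no rule applies: leave the term as-is
```

Separating the rules (the two `if` branches) from the strategy (the recursive descent) mirrors Stratego's central design decision: the same rules could be reused under an outermost or single-pass strategy without changing them.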
/Regex makes me want to (weep|give up|(╯°□°)╯︵ ┻━┻)\.?/i, by brettflorio
REGEX! Love it or hate it, sometimes you actually need it. And when that time comes, there's no reason to be afraid or to ask help from that one weirdo on your team who actually loves regular expressions. (I'm that weirdo, fwiw.)
This session is geared towards beginning and intermediate regex users, as well as experienced programmers and developers who just don't really grok regex. We'll cover the following topics using practical examples that you might encounter in your own projects. (ie. No matching against "dog" and "cat".)
* What is regex? How's it work? A brief history.
* Syntax, special characters, character classes
* Grouping, capturing, and common gotchas
* Use cases for matching, validating, and replacing
* More advanced topics like backreferences and lookarounds
Ruby is a dynamic, open source programming language that was created in 1993 by Yukihiro Matsumoto who wanted to ensure that programming is simple, practical and enjoyable. It combines object-oriented and imperative programming and provides automatic memory management. Some key aspects of Ruby include being dynamically typed, following the principle of least surprise, and being multi-paradigm supporting object-oriented, functional and imperative programming.
The document discusses syntax analysis and parsing. It covers context-free grammars, Backus-Naur Form (BNF), Extended BNF, and different parsing techniques like recursive descent parsing and LL parsing. It also discusses Scala's combinator parser, which uses parser combinators to parse input based on a grammar.
The document provides a quick reference guide and cheat sheet for basic regular expressions (regex). It includes tables that list common regex syntax elements, with examples and explanations. The tables are meant to serve as an accelerated course for regex beginners to have as a reference. They cover basic characters, quantifiers, character classes, anchors and other syntax. The full document encourages printing the tables for easy reference when working with regex.
5-Introduction to Parsing and Context Free Grammar-09-05-2023.pptxvenkatapranaykumarGa
The document provides information about parsing and context-free grammars. It defines key concepts such as nonterminals, terminals, productions, derivations, ambiguity, left recursion, left factoring, LL(1) parsing, and computing first sets. It also lists different types of parsing including top-down parsing, bottom-up parsing, backtracking, predictive parsing, LR parsing, operator precedence parsing, and recursive descent parsing.
The document discusses the history and development of the Haskell programming language over 15 years. It summarizes key events like the initial meeting that kicked off Haskell in 1987, the release of the first Haskell report in 1990, and the standardization of Haskell 98. It also reflects on important aspects of Haskell like laziness, monads, and the process of language design by committee and community.
Similar to Declarative Syntax Definition - Pretty Printing (20)
We start with a linguistic discussion of language, its properties, and the study of language in philosophy and linguistics. We then investigate natural languages, controlled languages, and artificial languages to emphasise the human ability to control and construct languages. At the end, we arrive at the notion of software languages as means to communicate software between people.
The document discusses domain-specific type systems. It provides examples of type rules for defining the types of expressions in a domain-specific language. The type rules cover aspects like declaring types, renaming variables, determining the type of expressions, and type checking. It also discusses implementing type analysis using a language-independent task engine that can perform the analysis incrementally by tracking dependencies between tasks.
Compiler Components and their Generators - LR ParsingGuido Wachsmuth
The document discusses traditional parsing algorithms used in compiler construction. It covers predictive parsing algorithms and LR parsing algorithms, which can parse LL(k) and LR(k) grammars respectively. LR parsing uses parse tables that are generated from LR(0) items, closures and goto functions. The document also mentions LR parse table generation and the SLR and LALR algorithms.
Compiling Imperative and Object-Oriented Languages - Garbage CollectionGuido Wachsmuth
The document discusses garbage collection techniques. It describes mark and sweep garbage collection, which involves two steps: 1) marking all reachable records from program roots like variables; and 2) sweeping through and deleting any unmarked records. Reference counting is also covered, where records with a reference count of 0 are deleted. Copy collection and generational garbage collection are briefly mentioned.
Compiler Components and their Generators - Lexical AnalysisGuido Wachsmuth
The document discusses lexical analysis in compiler construction, including an overview of the topics covered such as regular languages represented as regular grammars, regular expressions, and finite state automata. It also discusses the equivalence between these formalisms and techniques for constructing tools for lexical analysis.
Compiling Imperative and Object-Oriented Languages - Register AllocationGuido Wachsmuth
The document discusses register allocation in compiler construction. It begins by introducing interference graphs, which are constructed during liveness analysis to represent variables that cannot be assigned to the same register. It then discusses graph coloring, where the goal is to assign registers to variables and temporaries represented as nodes in the interference graph, or store them in memory if not enough registers are available. The document provides examples of constructing interference graphs from code and using graph coloring to assign registers.
Compiling Imperative and Object-Oriented Languages - Dataflow AnalysisGuido Wachsmuth
The document discusses dataflow analysis techniques used in compiler construction. It provides an overview of control flow graphs and various dataflow analyses including liveness analysis. Liveness analysis determines which variables are live, or in use, at each point in the program by tracking variable definitions and uses through the control flow graph. The key concepts of liveness analysis are defined, including live-in, live-out, definition and usage. An example is provided to demonstrate how liveness information is computed for a simple program.
Introduction - Imperative and Object-Oriented LanguagesGuido Wachsmuth
This document provides an overview of imperative and object-oriented languages. It discusses the properties of imperative languages like state, statements, control flow, procedures and types. It then covers object-oriented concepts like objects, messages, classes, inheritance and polymorphism. Examples are given in various languages like C, Java bytecode, x86 assembly to illustrate concepts like variables, expressions, functions and object-oriented features. Finally, it provides an outlook on upcoming lectures covering declarative language definition.
Compiling Imperative and Object-Oriented Languages - Activation RecordsGuido Wachsmuth
1. Activation records, also known as stack frames, contain information about the execution of methods.
2. They are placed on the call stack and include the method's local variables, partial results, and return address.
3. As methods are invoked, new activation records are pushed onto the stack and popped off when the method returns.
4. The Java Virtual Machine uses a stack-based design where operands are pushed onto the stack, the operation is performed, and results are left on the stack or in local variables.
The document describes the process of code generation from source code to machine code. It shows source code being parsed and checked for errors before code generation produces machine code. It then shows the different components involved in code generation like the operand stack, constant pool, local variables, heap and bytecode instructions. It provides examples of how values are loaded onto the operand stack and stored in local variables and heap as the bytecode instructions are executed step-by-step.
Declarative Semantics Definition - Static Analysis and Error CheckingGuido Wachsmuth
The document discusses static analysis and error checking in compiler construction. It covers several key topics:
- The static analysis process of parsing source code, checking for errors, and generating machine code.
- Name analysis, binding, and scoping during static checking and for editor services like refactoring and code generation.
- Testing static semantics including name binding, type systems, and constraints.
- Restricting context-free languages using static semantics and judgements of well-formedness and well-typedness.
- Formal type systems including those for Tiger language examples involving types, expressions, and scoping.
This introduction lecture sets the scene for the course. We introduce the notions of software languages and language software from a bigger, interdisciplinary picture.
We start with a linguistic discussion of language, its properties, and the study of language in philosophy and linguistics. We then investigate natural languages, controlled languages, and artificial languages to emphasise the human ability to control and construct languages. At the end of the first part of the lecture, we arrive at the notion of software languages as means to communicate software between people.
In the second part of the lecture, we extend the notion of software languages as means to realise processes on machines. We give an overview of language software, starting from interpreters and compilers. We then introduce various language processors as basic building blocks of compilers. We continue with a comparison of traditional compilers and modern compilers in IDEs. Finally, we introduce traditional compiler compilers and modern language workbenches as tools to construct compilers.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
Film vocab for eal 3 students: Australia the movie
Declarative Syntax Definition - Pretty Printing
1. Declarative Syntax Definition
pretty printing
Guido Wachsmuth
Delft University of Technology
Course IN4303 Compiler Construction, 2012/13
Challenge the future
4-10. Assessment
last lecture
Compare SDF with the plain formalism of context-free grammars.
Discuss additional description means provided by SDF and why
they are needed in compiler construction.
• regular expressions
• layout
• AST constructors
• follow restrictions
• annotations for associativity
• priorities
Pretty Printing 2
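The description means listed above can be illustrated together in one small SDF fragment. This is a sketch for illustration only; the module, sorts, and constructor names are invented, not taken from the lecture:

```sdf
module Exp
exports
  sorts Exp
  lexical syntax
    [a-zA-Z]+ -> Id          %% regular expression
    [0-9]+    -> IntConst    %% regular expression
    [\ \t\n]  -> LAYOUT      %% layout definition
  lexical restrictions
    Id -/- [a-zA-Z]          %% follow restriction: longest match for identifiers
  context-free syntax
    IntConst    -> Exp {cons("Int")}          %% AST constructor
    Exp "+" Exp -> Exp {left, cons("Plus")}   %% associativity annotation
    Exp "*" Exp -> Exp {left, cons("Times")}
  context-free priorities
    Exp "*" Exp -> Exp >
    Exp "+" Exp -> Exp       %% priority: * binds stronger than +
```

Each annotation corresponds to one bullet above; a plain context-free grammar offers none of them.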
13-14. Overview
today’s lecture
plagues of traditional parsing algorithms
• paradise lost
• paradise regained
pretty-printing
• from trees to text
• box layout in pretty-print tables
template language
• alternative to SDF
• generation of pretty-printing strategies
• generation of completion templates
159. public boolean authenticate(String user, String pw) {
       SQL stm = <| SELECT id FROM Users
                    WHERE name = ${user}
                    AND password = ${pw} |>;
       return executeQuery(stm).size() != 0;
     }
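The embedded SQL syntax above rules out injection by construction: ${user} is inserted as a syntax tree node, never spliced into the query text. With a plain database API, the analogous guarantee comes from parameterized queries. A minimal Python sketch of the same authenticate check (sqlite3, with an invented Users table; this is not the slide's Java setup):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (id INTEGER, name TEXT, password TEXT)")
conn.execute("INSERT INTO Users VALUES (1, 'alice', 'secret')")

def authenticate(user, pw):
    # '?' placeholders: input is passed as data, never parsed as SQL
    rows = conn.execute(
        "SELECT id FROM Users WHERE name = ? AND password = ?",
        (user, pw)).fetchall()
    return len(rows) != 0

print(authenticate("alice", "secret"))       # a correct login succeeds
print(authenticate("alice", "' OR '1'='1"))  # an injection attempt is just a wrong password
```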
210. Recap: Parsing & AST construction
let
function fact(n : int): int = if n < 1 then 1 else n * fact(n - 1)
in
printint(fact(10))
end
Let(
[ FunDec(
"fact"
, [FArg("n", Tid("int"))]
, Tid("int")
, IfThenElse(
Lt(Var("n"), Int("1"))
, Int("1")
, Times(Var("n"), Call("fact", [Minus(Var("n"), Int("1"))]))
)
)
]
, [Call("printint", [Call("fact", [Int("10")])])]
)
211. Pretty Printing
from ASTs to text
• keywords
• layout: spaces, line breaks, indentation
specification
• partially defined in grammar
• missing layout
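In its simplest form, a pretty printer is a recursive walk over the AST that reinserts keywords and layout. A minimal Python sketch for the expression part of the recap AST (constructor names follow the slides; the spacing choices are invented):

```python
def pp(term):
    """Turn an expression term (constructor, children...) back into text."""
    con, *kids = term
    if con == "Int":   return kids[0]
    if con == "Var":   return kids[0]
    if con == "Lt":    return f"{pp(kids[0])} < {pp(kids[1])}"
    if con == "Minus": return f"{pp(kids[0])} - {pp(kids[1])}"
    if con == "Times": return f"{pp(kids[0])} * {pp(kids[1])}"
    if con == "Call":  return kids[0] + "(" + ", ".join(pp(a) for a in kids[1]) + ")"
    if con == "IfThenElse":
        c, t, e = kids
        return f"if {pp(c)} then {pp(t)} else {pp(e)}"
    raise ValueError("unknown constructor: " + con)

# the body of fact from the recap slide, as nested terms
fact_body = ("IfThenElse",
             ("Lt", ("Var", "n"), ("Int", "1")),
             ("Int", "1"),
             ("Times", ("Var", "n"),
              ("Call", "fact", [("Minus", ("Var", "n"), ("Int", "1"))])))
print(pp(fact_body))   # if n < 1 then 1 else n * fact(n - 1)
```

A real pretty printer would also handle priorities (inserting parentheses only where needed) and line breaking, which is what the box layout and pretty-print tables mentioned above address.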
229-231. Literature
learn more
declarative syntax definition
Lennart C. L. Kats, Eelco Visser, Guido Wachsmuth: Pure and Declarative Syntax Definition - Paradise Lost and Regained. SPLASH 2010
pretty-printing
Marc G. J. van den Brand, Eelco Visser: Generation of Formatters for Context-Free Languages. ACM TOSEM 5(1):1-41, 1996
Merijn de Jonge: Pretty-Printing for Software Reengineering. ICSM 2002
template language
Tobi Vollebregt, Lennart C. L. Kats, Eelco Visser: Declarative Specification of Template-Based Textual Editors. LDTA 2012
232. Outlook
beyond IN4303
Model-driven Software Development
• domain-specific languages
• build your own language with Spoofax
Seminar Metaprogramming
• science behind the scenes
Master theses
• theory & practice
• WebDSL & mobl
• Spoofax
233. Outlook
coming next
declarative semantics definition
• Lecture 4: Term Rewriting
• Lecture 5: Static Analysis and Error Checking
• Lecture 6: Code Generation
Labs
• Sep 27 syntax definition
• Oct 4 code completion & pretty printing
236. Pictures
attribution & copyrights
Slide 1:
Latin Bible by Gerard Brils (photo by Adrian Pingstone), public domain
Slide 188:
West Cornwall Pasty Co. by Dominica Williamson, some rights reserved
Slide 199-200:
Ostsee by Mario Thiel, some rights reserved
Slides 5-180:
Pure and Declarative Syntax Definition: Paradise Lost and Regained, some rights reserved
Editor's Notes
My name is Guido Wachsmuth, and I will tell you a sad story. A story about the paradise of pure and declarative syntax definition, about how this paradise once was lost, and about how it was regained later, but also about how this paradise is denied until these days.
In the beginning were the words, and the words were trees, and the trees were words. All words were made through grammars, and without grammars was not any word made that was made.
Those were the days of the garden of Eden. And there were language engineers strolling through the garden. They made languages which were sets of words by making grammars full of beauty. And with these grammars, they turned words into trees and trees into words. And the trees were natural, and pure, and beautiful, as were the grammars.
Among them were software engineers who made software as the language engineers made languages. And they dwelt with them and they were one people. The language engineers were software engineers and the software engineers were language engineers.
And the language engineers made language software. They made recognisers to know words, and generators to make words, and parsers to turn words into trees, and formatters to turn trees into words. But the software they made was not as natural, and pure, and beautiful as the grammars they made.
So they made software to make language software and began to make language software by making syntax definitions. And the syntax definitions were grammars and grammars were syntax definitions. With their software, they turned syntax definitions into language software. And the syntax definitions were language software and language software were syntax definitions. And the syntax definitions were natural, and pure, and beautiful, as were the grammars.
Now the serpent was more crafty than any other beast of the field. He said to the language engineers:
Did you actually decide not to build any parsers?
And the language engineers said to the serpent:
We build parsers, but we decided not to build others than general parsers, nor shall we try it, lest we lose our syntax definitions to be natural, and pure, and beautiful.
But the serpent said to the language engineers:
You will not surely lose your syntax definitions to be natural, and pure, and beautiful. For you know that when you build particular parsers your benchmarks will be improved, and your parsers will be the best, running fast and efficient.
So when the language engineers saw that restricted parsers were good for efficiency, and that they were a delight to the benchmarks, they made software to make efficient parsers and began to make efficient parsers by making parser definitions.
Those days, the language engineers went out from the garden of Eden. In pain they made parser definitions all the days of their life. But the parser definitions were not grammars and grammars were not parser definitions. And by the sweat of their faces they turned parser definitions into efficient parsers. But the parser definitions were not natural, nor pure, nor beautiful, as the grammars had been before.
Their software was full of plagues.
The first plague was grammar classes. Only few grammars could be turned directly into parser definitions. And language engineers massaged their grammars all the days of their life to make them fit into a grammar class. And the parser definitions became unnatural, and impure, and ugly. And there was weeping and mourning.
The second plague was disambiguation. Their new parsers were deterministic. So the language engineers encoded precedence in parser definitions. And the parser definitions became unnatural, and impure, and ugly.
The third plague was lexical syntax. The new software could not handle lexical syntax definitions. So the language engineers made another software to turn lexical syntax definitions into scanners. But lexical syntax definitions were less expressive than the grammars they used before. And they were separated from parser definitions, as were scanners from parsers. And there was weeping and wailing.
The fourth plague was tree construction. The language engineers wanted the efficient parsers to turn words into trees, as their old parsers did. So they added code to their parser definitions. And the parser definitions became unnatural, and impure, and ugly. And those who were oblivious to the working of the efficient parsers made parsers that turn the right words into the wrong trees.
The fifth and sixth plagues were evolution and composition. Once the language engineers added a new rule to their parser definitions, those tended to break. And they massaged them by the sweat of their faces to make them fit again into the grammar class. And they were not able to compose two parser definitions to a single parser definition because of grammar classes and separate scanners. And there was weeping and groaning.
The seventh plague was the restriction to parsers. The language engineers turned parser definitions into recognizers and into parsers. But they could not turn them into generators or formatters. That was because parser definitions were not grammars.
Many have undertaken to compile a narrative of the things that have been accomplished among us. It seemed good to us also, having followed all things closely for some time past, to tell an orderly account for you that you may have certainty concerning the things you have been taught.
It all starts with the beauty of grammars and trees, and it all starts with Noam Chomsky. Like other linguists, he was thinking about the infinite productivity of language: we can produce sentences that no one produced before. But how can we put this productivity into finite models? Chomsky came up with grammars as generative devices: finite models, declaring how words are made from letters and sentences are made from words.
This is a grammar for generating binary numbers. It consists of four production rules. On the right-hand side of these rules we see terminal symbols: the symbols binary numbers are made of. On the left-hand side of each rule we see a nonterminal symbol. Nonterminal symbols are variables in the generation; they can also occur on the right-hand side of a production rule. We say: the LHS symbol produces the symbols on the RHS. The start symbol is a particular nonterminal symbol.
When we generate words through grammars, we start with this start symbol and apply production rules by replacing nonterminal symbols: we replace a nonterminal symbol from the LHS of a rule with the symbols of the RHS. We do this as long as there are nonterminal symbols for which we can find production rules. We are free to choose any nonterminal symbol and apply any rule, so let's take another rule for NUM this time. Now we can replace one of the two symbols for digits; let's apply the first rule. There is still one nonterminal symbol left; let's take the other rule for digits this time. We are done: we have generated the word 10.
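The derivation described here can be replayed mechanically. A small Python sketch, assuming one plausible reconstruction of the four rules (NUM -> NUM DIG | DIG, DIG -> 0 | 1; the actual slide content is not in the text):

```python
# Assumed reconstruction of the four production rules for binary numbers:
#   NUM -> NUM DIG | DIG      DIG -> 0 | 1
rules = {"NUM": [["NUM", "DIG"], ["DIG"]], "DIG": [["0"], ["1"]]}

def derive(form, choices):
    """Replay a derivation: at each step, replace the leftmost nonterminal
    using the rule alternative selected by `choices`."""
    steps = [form[:]]
    for pick in choices:
        i = next(i for i, s in enumerate(form) if s in rules)
        form = form[:i] + rules[form[i]][pick] + form[i + 1:]
        steps.append(form[:])
    return steps

# NUM => NUM DIG => DIG DIG => 1 DIG => 1 0  (the word "10")
for step in derive(["NUM"], [0, 1, 1, 0]):
    print(" ".join(step))
```

Different choices of rules yield different words; the grammar finitely describes the infinite set of all of them.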
But with grammars we can generate not only words but also sentences. Here is a grammar for generating expressions. Again we have production rules, with terminal symbols occurring on the RHS. Here, we have two different kinds of terminal symbols: + and *, each representing a particular morpheme, and NUM, representing a whole class of morphemes, let's assume in this case decimal numbers. We have only a single nonterminal symbol, which is also the start symbol.
We can now generate an expression. We replace the start symbol by applying the first rule for EXP. Now let's replace the first EXP, and let's use the second rule this time. Let's take again the first EXP and replace it by a morpheme from the morpheme class NUM. When we do this two more times, we have generated an expression.
We can use grammars to describe languages by making a grammar that generates all the words of a given language. For example, we can come up with a grammar for Latin, which should generate all these Latin sentences, and all other correct Latin sentences.
But a grammar can also define a language: it defines the language of sentences that it generates. We can make new languages by making grammars, and we can ask the grammar if a sentence is in the language or not. The grammar is the only truth.
For this, we read the production rules of a grammar as reduction rules: the symbols of the RHS can be reduced to the nonterminal symbol of the LHS. To emphasise this reading, we can write the rules in a reductive way, switching LHS and RHS.
Now let's see if this is a valid expression according to the grammar we have seen before. We apply reduction rules wherever we can, finding the LHS of a rule and replacing it with the nonterminal symbol from the RHS. Here we can reduce 21 to EXP, the same for 7 and 3. Now we can replace EXP*EXP by EXP, and finally EXP+EXP by EXP. What remains is the start symbol of our grammar, telling us the truth: 3*7+21 is a valid expression.
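This reduction is easy to mechanise for the one example at hand. A Python sketch (the order of reductions differs from the notes, which reduce 21 first; for recognition, any order that reaches EXP suffices):

```python
import re

# The production rules read reductively:
#   NUM => EXP,   EXP * EXP => EXP,   EXP + EXP => EXP
REDUCTIONS = [(r"\b\d+\b", "EXP"), (r"EXP\*EXP", "EXP"), (r"EXP\+EXP", "EXP")]

def recognize(sentence):
    """Reduce until no rule applies; the sentence is valid iff only EXP remains."""
    form = sentence
    while True:
        for pat, repl in REDUCTIONS:
            new = re.sub(pat, repl, form, count=1)
            if new != form:
                form = new
                break
        else:
            return form

print(recognize("3*7+21"))   # reduces all the way to EXP: a valid expression
```

This greedy strategy works for this example; real parsers have to choose reductions far more carefully.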
We can also use grammars to turn words into trees. We do so because we are not only interested in whether a sentence is an element of a language, but also in its structure. As we know from primary school, this structure can typically be represented as trees.
We have already seen two readings of grammars; here we have a third one: rules as tree construction rules, with the nonterminal symbol as the root and the symbols constituting it as leaves. What the scanner did before, the parser now does.
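Under this third reading, each reduction step builds a node whose root is the LHS nonterminal and whose children are the reduced symbols. A Python sketch of the resulting tree for 3*7+21 (the shape is chosen by hand here, reducing * before +, since the plain grammar is ambiguous):

```python
def node(label, *children):
    """A tree node: the rule's nonterminal as root, the reduced symbols as children."""
    return (label, children)

tree = node("EXP",
            node("EXP",
                 node("EXP", "3"), "*", node("EXP", "7")),
            "+",
            node("EXP", "21"))

def word(t):
    """Read the word back off the leaves of the tree (unparsing)."""
    if isinstance(t, str):
        return t
    return "".join(word(c) for c in t[1])

print(word(tree))   # 3*7+21 -- the leaves spell the original sentence
```

Turning such trees back into text, with keywords and layout restored, is exactly the pretty-printing problem of this lecture.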
closed under composition
Fear not! Stand firm! See the naturalness, and the pureness, and the beauty of declarative syntax definitions, which will work for you. For the parser definitions that you see today, you shall never see again.
Go out to the promised land! Make new software to make parsers and begin to make parsers by making syntax definitions again. Let the syntax definitions be grammars again and grammars be syntax definitions again. And the syntax definitions will be natural, and pure, and beautiful again, as will the grammars.
WebDSL & mobl: Flash to HTML5, query optimisation, data synchronisation, partitioning, e-learning platform. Spoofax: debugging, integration, testing, visualisation, incremental compilation, DSLs for robotics.
Round-up on every lecture: what to take with you; check yourself, pre- and post-paration.