The document discusses static analysis and error checking in compiler construction. It covers several key topics:
- The compilation pipeline of parsing source code, checking it for errors, and generating machine code.
- Name analysis, binding, and scoping during static checking and for editor services like refactoring and code generation.
- Testing static semantics including name binding, type systems, and constraints.
- Restricting context-free languages using static semantics and judgements of well-formedness and well-typedness.
- Formal type systems including those for Tiger language examples involving types, expressions, and scoping.
The document discusses static analysis and error checking in compiler construction. It introduces key concepts like parsing source code, performing static semantic checks, and generating machine code. Specific techniques covered include name analysis, type systems, formal semantics, and testing static analysis. Examples are provided using Tiger, a simple imperative language, to illustrate type rules and name binding. The document also discusses theoretical foundations in formal language theory and decidability/complexity.
The document discusses lexical analysis and regular languages. It begins with an overview of lexical analysis and its components, including regular languages defined via regular grammars, regular expressions, and finite state automata. It then covers the equivalence between these formalisms for describing regular languages and how to construct a nondeterministic finite automaton from a regular expression.
Declarative Syntax Definition - Grammars and Trees (Guido Wachsmuth)
This lecture lays the theoretical foundations for declarative syntax formalisms and syntax-based language processors, which we will discuss later in the course. We introduce the notions of formal languages, formal grammars, and syntax trees, starting from Chomsky's work on formal grammars as generative devices.
We start with a formal model of languages and investigate formal grammars and their derivation relations as finite models of infinite productivity. We further discuss several classes of formal grammars and their corresponding classes of formal languages. In a second step, we introduce the word problem, analyse its decidability and complexity for different classes of formal languages, and discuss consequences of this analysis on language processing. We conclude the lecture with a discussion about parse tree construction, abstract syntax trees, and ambiguities.
Compiler Components and their Generators - Traditional Parsing Algorithms (Guido Wachsmuth)
This document discusses parsing algorithms for compilers. It begins with an overview of topics to be covered, including lexical analysis, parsing algorithms like predictive and LR parsing, grammar classes, and an assignment on implementing a MiniJava compiler. It then covers predictive parsing in more detail, including how to generate parsing tables from grammars and how to use these tables in a predictive parsing automaton. Finally, it discusses LR parsing and how it can handle issues like left recursion that predictive parsing cannot. It provides an example of an LR parsing step involving expression evaluation.
Compiler Components and their Generators - Lexical Analysis (Guido Wachsmuth)
The document discusses lexical analysis in compiler construction, including an overview of the topics covered such as regular languages represented as regular grammars, regular expressions, and finite state automata. It also discusses the equivalence between these formalisms and techniques for constructing tools for lexical analysis.
The document discusses formal grammars and their applications in language specification and parsing. It introduces key concepts such as formal grammars, derivation, terminal and non-terminal symbols, and different types of grammars including context-sensitive, context-free and regular grammars. It also discusses applications of formal grammars in parsing and how they relate to theoretical computer science concepts like decidability and complexity of the word problem for different grammar types.
Declarative Semantics Definition - Term Rewriting (Guido Wachsmuth)
This document discusses term rewriting and its applications in compiler construction. It covers term rewriting systems, rewrite rules that transform terms, and rewrite strategies that control rule application. Examples are provided for desugaring code using rewrite rules and constant folding arithmetic expressions using rewrite rules and strategies. Stratego is presented as a domain-specific language for program transformation based on term rewriting.
This document discusses syntax definition and provides examples using various syntax definition formalisms including Backus-Naur Form (BNF), Extended Backus-Naur Form (EBNF), and SDF3. It introduces concepts of lexical syntax, context-free syntax, abstract syntax, disambiguation, and testing syntax definitions. Specific examples are provided for defining the syntax of an expression language using BNF, EBNF, and SDF3. Testing syntax definitions using Spoofax is also discussed with examples of test cases for lexical and context-free syntax.
This document discusses term rewriting and provides examples of how rewrite rules can be used to transform terms. Key points include:
- Rewrite rules define pattern matching and substitution to transform terms from a left-hand side to a right-hand side.
- Examples show desugaring language constructs like if-then statements, constant folding arithmetic expressions, and mapping/zipping lists with strategies as parameters to rules.
- Terms can represent programming language syntax and semantics domains. Signatures define the structure of terms.
- Rewriting systems provide a declarative way to define program transformations and semantic definitions through rewrite rules and strategies.
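The points above can be illustrated with a minimal sketch (in Python rather than Stratego): terms are nested tuples of the form `("Constructor", child, ...)`, a rule either returns a transformed term or `None` when it does not match, and a strategy controls where rules are applied. The constructor names (`IfThen`, `Add`, `Int`) are illustrative, not taken from the slides.

```python
def desugar_if_then(t):
    # IfThen(c, e) => IfThenElse(c, e, Skip()) -- a classic desugaring rule
    if isinstance(t, tuple) and t[0] == "IfThen":
        return ("IfThenElse", t[1], t[2], ("Skip",))
    return None

def const_fold(t):
    # Add(Int(a), Int(b)) => Int(a + b) -- constant folding
    if (isinstance(t, tuple) and t[0] == "Add"
            and t[1][0] == "Int" and t[2][0] == "Int"):
        return ("Int", t[1][1] + t[2][1])
    return None

def innermost(rule, t):
    """Apply `rule` exhaustively, children first (an innermost strategy)."""
    if isinstance(t, tuple):
        t = (t[0],) + tuple(innermost(rule, c) for c in t[1:])
    out = rule(t)
    return innermost(rule, out) if out is not None else t

term = ("Add", ("Int", 1), ("Add", ("Int", 2), ("Int", 3)))
print(innermost(const_fold, term))  # ('Int', 6)
```

Separating the rules from the strategy is the key design point: the same `const_fold` rule can be driven by innermost, outermost, or single-pass strategies without being rewritten.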
This document provides an overview of LL parsing algorithms. It begins with a recap of formal language theory concepts like regular grammars, regular expressions, finite state automata, context-free grammars, derivation, and language generation. It then discusses predictive (recursive descent) parsing and LL parsing in particular. Key concepts covered include LL(k) grammars, filling the LL parsing table based on FIRST and FOLLOW sets, and using the table to perform LL parsing. An example grammar and its parsing table are provided to illustrate the process.
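As a hedged sketch of the FIRST-set computation the summary mentions: FIRST sets can be computed by fixed-point iteration over the productions. The toy grammar below (an expression grammar with an epsilon production, encoded as `[]`) is illustrative and not taken from the slides.

```python
# Nonterminals are the dict keys; anything else is a terminal; "" marks epsilon.
GRAMMAR = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], []],      # [] is an epsilon production
    "T":  [["id"], ["(", "E", ")"]],
}

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                nullable = True        # stays True only if every symbol is nullable
                for sym in prod:
                    add = first[sym] - {""} if sym in grammar else {sym}
                    if not add <= first[nt]:
                        first[nt] |= add
                        changed = True
                    if sym in grammar and "" in first[sym]:
                        continue       # symbol can vanish; look at the next one
                    nullable = False
                    break
                if nullable and "" not in first[nt]:
                    first[nt].add("")
                    changed = True
    return first

print(first_sets(GRAMMAR))
```

FOLLOW sets are computed by a similar fixpoint over the same grammar, and together the two fill the LL(1) parsing table.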
The document discusses syntax definition in programming languages. It provides examples of lexical syntax, context-free syntax, abstract syntax trees, disambiguation, and testing syntax definitions using SDF3 and Spoofax. Key topics covered include regular expressions, Backus-Naur Form, Extended Backus-Naur Form, SDF3 syntax, Spoofax architecture for language implementation, and syntax processing tools like parsers, pretty-printers and compilers.
Introduction - Imperative and Object-Oriented Languages (Guido Wachsmuth)
This document provides an overview of imperative and object-oriented languages. It discusses the properties of imperative languages like state, statements, control flow, procedures and types. It then covers object-oriented concepts like objects, messages, classes, inheritance and polymorphism. Examples are given in various languages like C, Java bytecode, x86 assembly to illustrate concepts like variables, expressions, functions and object-oriented features. Finally, it provides an outlook on upcoming lectures covering declarative language definition.
This document provides an outline and overview of dynamic semantics and operational semantics. It discusses defining the meaning of programs through execution and transition systems. It introduces DynSem, a domain-specific language for specifying dynamic semantics in a modular way. DynSem specifications generate interpreters from language definitions. The document uses examples from arithmetic expressions and a language with boxes to illustrate DynSem specifications.
The document describes static name resolution in programming languages. It discusses how names are bound to declarations through lexical scoping and how references are resolved to declarations by following paths through a scope graph representation. It presents the concepts of scopes, declarations, references, resolution paths, imports, and parent scopes. It also discusses how name resolution can be formalized using a calculus based on scope graphs, separating reachability from visibility, and how this supports name resolution, disambiguation, and program transformations.
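A heavily simplified model of the scope-graph idea can be sketched as follows: each scope holds declarations and a parent edge, and a reference resolves by following parent edges (`P` steps) to the nearest visible declaration. Imports and the full reachability/visibility calculus are omitted; the names are illustrative.

```python
class Scope:
    def __init__(self, parent=None):
        self.parent = parent
        self.decls = {}              # name -> declaration site (any payload)

    def declare(self, name, site):
        self.decls[name] = site

    def resolve(self, name):
        """Return (declaration, resolution path of P-steps) for a reference."""
        scope, path = self, []
        while scope is not None:
            if name in scope.decls:
                return scope.decls[name], path
            path.append("P")         # follow the parent edge
            scope = scope.parent
        raise NameError(f"unresolved reference: {name}")

globals_ = Scope()
globals_.declare("x", "x@global")
inner = Scope(parent=globals_)
inner.declare("y", "y@inner")
print(inner.resolve("x"))   # ('x@global', ['P'])
```

The resolution path is what the calculus reasons about: visibility rules order competing paths, so that e.g. a local declaration shadows one reached through a parent edge.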
Declare Your Language: Syntactic (Editor) Services (Eelco Visser)
Lecture 3 of the compiler construction course, on the definition of lexical syntax and on syntactic services that can be derived from syntax definitions, such as formatting and syntactic completion.
This document summarizes and discusses type checking algorithms for programming languages. It introduces constraint-based type checking, which separates type checking into constraint generation and constraint solving. This provides a more declarative way to specify type checkers. The document discusses using variables and constraints to represent types during type checking. It introduces NaBL2, a domain-specific language for writing constraint generators to specify name and type constraints for programming language static semantics. NaBL2 uses scope graphs to represent name binding structures and supports features like type equality, subtyping, and type-dependent name resolution through constraint rules. An example scope graph and constraint rule for let-bindings are provided.
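The generate-then-solve split can be sketched in miniature: constraint generation walks the AST emitting type equalities over fresh type variables, and a substitution-based solver unifies them afterwards. This is an illustrative Python sketch of the general technique, not of NaBL2 itself; the tiny expression language (`IntLit`, `Add`) is an assumption.

```python
from itertools import count

fresh = count()          # fresh type variables are just integers
subst = {}               # solved substitution: variable -> type or variable

def find(t):
    while t in subst:
        t = subst[t]
    return t

def unify(a, b):
    a, b = find(a), find(b)
    if a == b:
        return
    if isinstance(a, int):           # a is a type variable: bind it
        subst[a] = b
    elif isinstance(b, int):
        subst[b] = a
    else:
        raise TypeError(f"cannot unify {a} and {b}")

def generate(expr, constraints):
    """Return a type variable for expr, appending equality constraints."""
    v = next(fresh)
    tag = expr[0]
    if tag == "IntLit":
        constraints.append((v, "Int"))
    elif tag == "Add":
        l = generate(expr[1], constraints)
        r = generate(expr[2], constraints)
        constraints += [(l, "Int"), (r, "Int"), (v, "Int")]
    return v

cs = []
root = generate(("Add", ("IntLit", 1), ("IntLit", 2)), cs)
for a, b in cs:                      # solving = unifying all constraints
    unify(a, b)
print(find(root))                    # Int
```

The declarative payoff is that generation never needs to know how or in what order constraints get solved.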
The document discusses lexical analysis in compilers. It describes how a lexical analyzer groups characters into tokens by recognizing patterns in the input based on regular expressions. It provides examples of token classes and structures. It also explains how lexical analysis is implemented using a lexical analyzer generator called LEX, which translates a LEX source file into a C program that performs lexical analysis.
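A LEX-like scanner can be approximated in a few lines with Python's `re` module: each token class is a regular expression, the classes are combined into one master pattern with named groups, and matches are reported in order (characters matching no class are silently skipped in this sketch). The token classes are illustrative.

```python
import re

TOKEN_SPEC = [
    ("NUM",  r"\d+"),
    ("ID",   r"[A-Za-z_]\w*"),
    ("OP",   r"[+\-*/=]"),
    ("SKIP", r"\s+"),
]
# One alternation of named groups; the group that matched tells us the class.
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    tokens = []
    for m in MASTER.finditer(text):
        kind = m.lastgroup
        if kind != "SKIP":           # whitespace is recognized but dropped
            tokens.append((kind, m.group()))
    return tokens

print(tokenize("x = 42 + y"))
# [('ID', 'x'), ('OP', '='), ('NUM', '42'), ('OP', '+'), ('ID', 'y')]
```

A real LEX-generated scanner compiles these patterns into a single DFA and attaches C actions to each class, but the rule structure is the same.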
This document discusses type-dependent name resolution in programming languages. It notes that sometimes type information is needed before name resolution can be performed, such as when resolving field names in records, where the available fields depend on the record's type. It gives an example where a program defines two records A and B, with B containing a field of type A, so that names must be resolved through the record types. The document observes that name resolution and type checking/inference can often be done in either order, but that type information is sometimes necessary for resolving certain names.
Dynamic Semantics Specification and Interpreter Generation (Eelco Visser)
(1) The document describes a domain-specific language called DynSem for specifying dynamic semantics of programming languages. DynSem allows defining semantics in a modular way using semantic rules.
(2) DynSem specifications can be used to generate high-performance interpreters. The document outlines various language features that can be modeled in DynSem, including arithmetic, booleans, control flow, functions, and mutable state.
(3) DynSem specifications are composed of modules that import language signatures and define semantic rules over them. Rules are used to reduce expressions to values in an environment and store. This allows modeling features like variables, functions, and mutable boxes.
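The reduce-to-a-value-under-environment-and-store shape of such rules can be sketched directly as an interpreter (a hand-written Python illustration of the evaluation style, not output generated from a DynSem spec; the constructs `Num`, `Add`, `Var`, `Let`, `SetVar` are assumptions):

```python
def evaluate(expr, env, store):
    """Reduce expr to a value under env (names -> locations) and store
    (locations -> values); mutable cells live only in the store."""
    tag = expr[0]
    if tag == "Num":
        return expr[1]
    if tag == "Add":
        return evaluate(expr[1], env, store) + evaluate(expr[2], env, store)
    if tag == "Var":
        return store[env[expr[1]]]
    if tag == "Let":                 # Let(name, bound, body): allocate a cell
        loc = len(store)
        store[loc] = evaluate(expr[2], env, store)
        return evaluate(expr[3], {**env, expr[1]: loc}, store)
    if tag == "SetVar":              # mutate the cell behind a name
        store[env[expr[1]]] = evaluate(expr[2], env, store)
        return store[env[expr[1]]]
    raise ValueError(f"unknown construct: {tag}")

prog = ("Let", "x", ("Num", 1),
        ("Add", ("SetVar", "x", ("Num", 41)), ("Var", "x")))
print(evaluate(prog, {}, {}))   # 82
```

Keeping the environment immutable while threading a mutable store is what lets rules for boxes and variables compose without interfering.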
Compiler Construction | Lecture 4 | Parsing (Eelco Visser)
This lecture covers parsing and turning syntax definitions into parsers. It discusses context-free grammars and derivations. Grammars can be ambiguous, allowing multiple parse trees for a sentence. Grammar transformations like disambiguation, eliminating left recursion, and left factoring can address issues while preserving the language. Associativity and priority can be defined through transformations. The reading material covers parsing schemata, classical compiler textbooks, and papers on disambiguation filters and parsing algorithms.
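One of the transformations mentioned above, eliminating immediate left recursion, can be sketched concretely: `A -> A a | b` becomes `A -> b A'` with `A' -> a A' | epsilon`, which preserves the language while making the grammar usable for top-down parsing. Grammars are encoded as dicts of symbol lists; the primed-name convention is an assumption of this sketch.

```python
def eliminate_left_recursion(grammar):
    """Rewrite A -> A a | b into A -> b A' ; A' -> a A' | epsilon ([])."""
    out = {}
    for nt, prods in grammar.items():
        rec = [p[1:] for p in prods if p and p[0] == nt]      # A -> A alpha
        base = [p for p in prods if not (p and p[0] == nt)]   # A -> beta
        if not rec:
            out[nt] = prods
            continue
        tail = nt + "'"
        out[nt] = [b + [tail] for b in base]
        out[tail] = [a + [tail] for a in rec] + [[]]          # [] = epsilon
    return out

# E -> E + T | T   becomes   E -> T E' ;  E' -> + T E' | epsilon
print(eliminate_left_recursion({"E": [["E", "+", "T"], ["T"]]}))
```

Note this handles only immediate left recursion; indirect left recursion needs the productions substituted through first.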
Regular languages can be described using regular grammars, regular expressions, or finite automata. A regular grammar contains productions of the form A->aB or A->a, where A and B are nonterminals and a is a terminal. A language is regular if it can be generated by a regular grammar. Regular expressions describe languages using operators like concatenation, union, and Kleene star. Finite automata are machines that accept or reject strings using a finite number of states. The three formalisms are equivalent in that they describe exactly the same class of languages, the regular languages.
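The grammar/automaton correspondence is direct enough to run: a right-linear grammar with productions A -> aB and A -> a maps onto an NFA whose states are the nonterminals, consuming one terminal per derivation step. The toy grammar below (generating a*b) is an illustrative assumption.

```python
# A -> aB encoded as ("a", "B");  A -> a encoded as ("a", None).
REG_GRAMMAR = {
    "S": [("a", "S"), ("b", None)],   # S -> aS | b, i.e. the language a*b
}

def accepts(grammar, start, word):
    """Simulate the NFA induced by a right-linear grammar."""
    states = {start}
    for i, ch in enumerate(word):
        last = (i == len(word) - 1)
        nxt = set()
        for st in states:
            for term, target in grammar.get(st, []):
                if term != ch:
                    continue
                if target is None and last:
                    return True       # an A -> a step finishes the derivation
                if target is not None:
                    nxt.add(target)
        states = nxt
    return False

print(accepts(REG_GRAMMAR, "S", "aab"))   # True
print(accepts(REG_GRAMMAR, "S", "aba"))   # False
```

Tracking a *set* of states is what makes this a nondeterministic simulation; determinizing it via the subset construction yields the equivalent DFA.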
This document discusses parsing and context-free grammars. It defines parsing as verifying that tokens generated by a lexical analyzer follow syntactic rules of a language using a parser. Context-free grammars are defined using terminals, non-terminals, productions and a start symbol. Top-down and bottom-up parsing are introduced. Techniques for grammar analysis and improvement like left factoring, eliminating left recursion, calculating first and follow sets are explained with examples.
1) The document discusses parsing methods for context-free grammars including top-down and bottom-up approaches. Top-down parsing starts with the start symbol and works towards the leaves, while bottom-up parsing begins at the leaves and works towards the root.
2) Key aspects of parsing covered include left recursion elimination, left factoring, shift-reduce parsing which uses a stack and parsing table, and constructing parse trees from the parsing process.
3) The output of parsers can include parse sequences, parse trees, and abstract syntax trees which abstract away implementation details.
This document provides examples and explanations of regular expressions. It covers basic regex syntax like | (or), [] (character sets), . (wildcards), ^/$ (start/end anchors), and *+? (quantifiers). Simple examples demonstrate matching numbers, ranges, and strings. More complex examples show grouping, repetition, and matching BGP autonomous system paths. The document concludes with examples of using regex to filter Cisco IOS show command output.
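The basic constructs listed above can be exercised with Python's `re` module (exact syntax details vary slightly across regex dialects, so these are representative rather than Cisco-IOS-specific examples):

```python
import re

assert re.fullmatch(r"cat|dog", "dog")      # |  : alternation
assert re.fullmatch(r"[0-9]+", "2024")      # [] and + : character set, repeat
assert re.search(r"^AS\d+$", "AS65001")     # ^ $ : start/end anchors
assert re.fullmatch(r"colou?r", "color")    # ?  : optional
assert re.fullmatch(r"(ab)*", "ababab")     # () * : grouping, Kleene star
print("all patterns matched")
```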
The document discusses the role of parsers in compilers. It explains that parsers check syntax and report errors, support semantic checks like type checking, and produce an intermediate representation of the source code. Parsers drive syntax-directed translation, producing structures such as abstract syntax trees. The document also covers error handling strategies, the viable prefix property, left recursion elimination, and constructing LL(1) parsing tables.
This document contains lecture notes on regular expressions from a compilers course taught by Rebaz Najeeb at Koya University. It discusses topics like specification of tokens using regular expressions, regular expression operations and examples, and regular definitions. Several examples of regular expressions to match certain string patterns are provided, such as strings of even/odd length, strings starting with a specific alphabet, and patterns involving numbers. Homework involves writing a regular expression for valid email addresses.
Slides for invited talk at Dynamic Languages Symposium (DLS'15) at SPLASH 2015 in Pittsburgh
http://2015.splashcon.org/event/dls2015-papers-declare-your-language
In the Language Designer’s Workbench project we are extending the Spoofax Language Workbench with meta-languages to declaratively specify the syntax, name binding rules, type rules, and operational semantics of a programming language design such that a variety of artifacts including parsers, static analyzers, interpreters, and IDE editor services can be derived and properties can be verified automatically. In this presentation I will talk about declarative specification for two aspects of language design: syntax and name binding.
First, I discuss the idea of declarative syntax definition as supported by grammar formalisms based on generalized parsing, using the SDF3 syntax definition formalism as an example. With SDF3, the language designer defines syntax in terms of productions and declarative disambiguation rules. This requires understanding a language in terms of (tree) structure instead of the operational implementation of parsers. As a result, syntax definitions can be used for a range of language processors, including parsers, formatters, syntax coloring, outline views, and syntactic completion.
Second, I discuss our recent work on the declarative specification of name binding rules, that takes inspiration from declarative syntax definition. The NaBL name binding language supports definition of name binding rules in terms of its fundamental concepts: declarations, references, scopes, and imports. I will present the theory of name resolution that we have recently developed to provide a semantics for name binding languages such as NaBL.
PrOntoLearn: Unsupervised Lexico-Semantic Ontology Generation using Probabili... (Rommel Carvalho)
Presentation given by Saminda Abeyruwan at the 6th Uncertainty Reasoning for the Semantic Web Workshop at the 9th International Semantic Web Conference on November 7, 2010.
Paper: PrOntoLearn: Unsupervised Lexico-Semantic Ontology Generation using Probabilistic Methods
Abstract: Manually formalizing an ontology for a domain is well known to be a tedious and cumbersome process, constrained by the knowledge acquisition bottleneck. Researchers have therefore developed algorithms and systems that help to automate the process, among them systems that draw on text corpora for the acquisition. Our idea is likewise based on vast amounts of text. Here, we provide a novel unsupervised bottom-up ontology generation method, based on lexico-semantic structures and Bayesian reasoning, to expedite the ontology generation process. We provide one quantitative and two qualitative results illustrating our approach, using a high-throughput screening assay corpus and two custom text corpora. This process could also provide evidence for domain experts building ontologies with top-down approaches.
This document summarizes a presentation by Alexander Panchenko on semantic similarity measures and hybrid measures for semantic relation extraction. The presentation covers pattern-based measures, comparisons of different measures, hybrid measures that combine multiple single measures, and applications of semantic similarity measures such as a lexico-semantic search engine and a file categorization system. Evaluation shows that supervised hybrid measures like Logit outperform single measures on precision and recall.
Second, I discuss our recent work on the declarative specification of name binding rules, that takes inspiration from declarative syntax definition. The NaBL name binding language supports definition of name binding rules in terms of its fundamental concepts: declarations, references, scopes, and imports. I will present the theory of name resolution that we have recently developed to provide a semantics for name binding languages such as NaBL.
PrOntoLearn: Unsupervised Lexico-Semantic Ontology Generation using Probabili...Rommel Carvalho
Presentation given by Saminda Abeyruwan at the 6th Uncertainty Reasoning for the Semantic Web Workshop at the 9th International Semantic Web Conference in November 7, 2010.
Paper: PrOntoLearn: Unsupervised Lexico-Semantic Ontology Generation using Probabilistic Methods
Abstract: Formalizing an ontology for a domain manually is well-known as a tedious and cumbersome process. It is constrained by the knowledge acquisition bottleneck. Therefore, researchers developed algorithms and systems that can help to automatize the process. Among them are systems that include text corpora for the acquisition. Our idea is also based on vast amount of text corpora. Here, we provide a novel unsupervised bottom-up ontology generation method. It is based on lexico-semantic structures and Bayesian reasoning to expedite the ontology generation process. We provide a quantitative and two qualitative results illustrating our approach using a high throughput screening assay corpus and two custom text corpora. This process could also provide evidence for domain experts to build ontologies based on top-down approaches.
This document discusses semantic similarity measures and hybrid measures for semantic relation extraction. It summarizes a presentation given by Alexander Panchenko on similarity measures. The presentation covers pattern-based measures, comparisons of different measures, hybrid measures that combine multiple single measures, and applications of semantic similarity measures like a lexico-semantic search engine and file categorization system. Evaluation shows that supervised hybrid measures like Logit outperform single measures based on precision-recall.
This document discusses strategies for improving teacher training programs for foreign language teachers, specifically English teachers, in Sudan and Saudi Arabia. It suggests abolishing colleges of education and replacing them with one-year training courses after undergraduate degrees for teachers. This would attract more motivated candidates into teaching. The training would equip teachers with pedagogical knowledge and classroom skills. The document also discusses the importance of language proficiency and communicative ability in teacher training programs, as well as balancing methodology training with language improvement. It provides historical context on the introduction and development of English language education in Sudan and Saudi Arabia.
This document discusses qualities of good and bad language teachers based on student surveys. Good teachers were described as friendly, helpful, and made learning enjoyable through games and humor. Bad teachers were strict, avoided questions, and created an uncomfortable learning environment. The document also provides tips for teachers, such as using space, students' names, and the board to engage students and help the class stay together. Maintaining student focus through techniques like cross-checking responses is also discussed.
This document provides an overview of error analysis in second language learning. It discusses key topics such as:
- Definitions of error analysis and the distinction between errors and mistakes. Errors reflect a learner's developing linguistic system, while mistakes are performance issues.
- The significance of analyzing learner errors for teachers, researchers, and learners themselves. Errors can indicate what remains to be learned and provide insights into the language learning process.
- Models for conducting error analysis, including data collection, error identification and classification, and explaining error sources.
- Procedures for collecting spontaneous and elicited language samples from learners and interpreting errors in context. The analysis involves identifying intended meanings to reconstruct errors.
Types of errors
Among the most frequent sources of errors Brown counts
(1) interlingual transfer,
(2) intralingual transfer,
(3) context of learning,
and (4) various communication strategies the learners use
This document provides an overview of the main branches of linguistics. It discusses phonetics, phonology, morphology, syntax, semantics, and pragmatics. Phonetics studies speech sounds and their production, transmission, and perception. Phonology examines sound systems and phonemes. Morphology analyzes the formation of words from morphemes. Syntax establishes rules for sentence structure. Semantics deals with meaning at various linguistic levels. Pragmatics studies meaning in context during communication.
Applied linguistics is the interdisciplinary study of language and its applications in real world contexts. It draws on linguistic theories and research to solve practical language-related problems. Key areas include second language acquisition, teaching methodology, testing, and the relationships between language and society, technology, and other fields. Throughout the 20th century, applied linguistics influenced the development of language teaching methods, shifting the focus from grammar translation to more communicative, meaning-based approaches grounded in theories of language acquisition and use.
Declare Your Language: Name ResolutionEelco Visser
Scope graphs are used to represent the binding information in programs. They provide a language-independent representation of name resolution that can be used to conduct and represent the results of name resolution. Separating the representation of resolved programs from the declarative rules that define name binding allows language-independent tooling to be developed for name resolution and other tasks.
The document discusses static type checking in compilers. It describes how static checking is performed at compile time to enforce type safety, whereas dynamic checking occurs at runtime. It provides examples of common static checks for types, control flow, uniqueness, and names. It also contrasts one-pass versus multi-pass compilers and how they approach static checking. Finally, it introduces type systems and shows an example of type rules and syntax-directed definitions for a simple language.
This presentation is from the 22nd Tcl Conference (Manassas, VA, 21-23 October 2015). It's where I describe where we've got up to with compiling Tcl to native machine code.
Theory of computation:Finite Automata, Regualr Expression, Pumping LemmaPRAVEENTALARI4
The document provides an overview of unit 1 of an introduction to automata course. It covers topics such as mathematical concepts for proofs including induction and deduction, formal languages involving strings and regular expressions, different types of finite automata including deterministic finite automata (DFAs) and nondeterministic finite automata, epsilon moves in finite automata, minimization of DFAs, properties of regular languages, and variants of finite automata. It also provides examples and problems related to many of these concepts.
The document discusses lexical analysis and lexical analyzer generators. It provides background on why lexical analysis is a separate phase in compiler design and how it simplifies parsing. It also describes how a lexical analyzer interacts with a parser and some key attributes of tokens like lexemes and patterns. Finally, it explains how regular expressions are used to specify patterns for tokens and how tools like Lex and Flex can be used to generate lexical analyzers from regular expression definitions.
This document discusses static type checking in compilers. It begins by describing the structure of a compiler and how static checking fits in. It then contrasts static and dynamic checking. The rest of the document discusses various aspects of static type checking like type rules, type systems, and implementing static checking using syntax-directed definitions in Yacc. It provides examples of basic type checking for expressions and statements in a simple language. It also discusses type expressions, conversions, and functions.
The document provides an outline of topics for a C/C++ tutorial, including a "Hello World" program, data types, variables, operators, conditionals, loops, arrays, strings, functions, pointers, command-line arguments, data structures, and memory allocation. It gives examples and explanations of key concepts in C/C++ programming.
This document provides an overview of building a simple one-pass compiler to generate bytecode for the Java Virtual Machine (JVM). It discusses defining a programming language syntax, developing a parser, implementing syntax-directed translation to generate intermediate code targeting the JVM, and generating Java bytecode. The structure of the compiler includes a lexical analyzer, syntax-directed translator, and code generator to produce JVM bytecode from a grammar and language definition.
Model-Driven Software Development - Static Analysis & Error CheckingEelco Visser
The document discusses static analysis and error checking, including name resolution, type analysis, and checking for consistency. It describes analyzing syntax definitions, performing static analysis to check consistency beyond well-formedness, and reporting errors. Key aspects covered include type analysis, name resolution, reference resolution, and checking constraints.
Ejercicios de estilo en la programaciónSoftware Guru
El escritor francés Raymond Queneau escribió a mediados del siglo XX un libro llamado "Ejercicios de Estilo" donde mostraba una misma historia corta, redactada de 99 formas distintas.
En esta plática realizaremos el mismo ejercicio con un programa de software. Abarcaremos distintos estilos y paradigmas: programación monolítica, orientada a objetos, relacional, orientada a aspectos, monadas, map-reduce, y muchos otros, a través de los cuales podremos apreciar la riqueza del pensamiento humano aplicado a la computación.
Esto va mucho más allá de un ejercicio académico; el diseño de sistemas de gran escala se alimenta de esta variedad de estilos. También platicaremos sobre los peligros de quedar atrapado bajo un conjunto reducido de estilos a lo largo de tu carrera, y la necesidad de verdaderamente entender distintos estilos al diseñar arquitecturas de sistemas de software.
Semblanza del conferencista:
Crista Lopez es profesora en la Facultad de Ciencias Computacionales de la Universidad de California en Irvine. Su investigación se enfoca en prácticas de ingeniería de software para sistemas de gran escala. Previamente, fue miembro fundador del equipo en Xerox PARC creador del paradigma de programación orientado a aspectos (AOP). Crista es una de las desarrolladoras principales de OpenSimulator, una plataforma open source para crear mundos virtuales 3D. También es fundadora de Encitra, empresa especializada en la utilización de la realidad virtual para proyectos de desarrollo urbano sustentable. @cristalopes
This document describes the structure and components of a simple one-pass compiler to generate code for the Java Virtual Machine (JVM). It discusses lexical analysis, syntax-directed translation, predictive parsing, and code generation. The compiler consists of a lexical analyzer, syntax-directed translator using a context-free grammar, and parser/code generator to develop for the translator. It provides examples of attribute grammars, translation schemes, and techniques for handling ambiguity, precedence, and left recursion in parsing.
Compiler Construction | Lecture 7 | Type CheckingEelco Visser
This document summarizes a lecture on type checking. It discusses using constraints to separate the language-specific type checking rules from the language-independent solving algorithm. Constraint-based type checking collects constraints as it traverses the AST, then solves the constraints in any order. This allows type information to be learned gradually and avoids issues with computation order.
This document discusses the process of compiling programs from source code to executable code. It covers lexical analysis, parsing, semantic analysis, code optimization, and code generation. The overall compilation process involves breaking the source code into tokens, generating an abstract syntax tree, performing semantic checks, translating to intermediate representations, optimizing the code, and finally generating target machine code.
Compiler Construction | Lecture 9 | Constraint ResolutionEelco Visser
This document provides an overview of constraint resolution in the context of a compiler construction lecture. It discusses unification, which is the basis for many type inference and constraint solving approaches. It also describes separating type checking into constraint generation and constraint solving, and introduces a constraint language that integrates name resolution into constraint resolution through scope graph constraints. Finally, it discusses papers on further developments with this approach, including addressing expressiveness and staging issues in type systems through the Statix DSL for defining type systems.
Chapter 2&3 (java fundamentals and Control Structures).ppthenokmetaferia1
The document discusses various Java fundamental concepts and control structures. It covers topics like identifiers, variables, constants, primitive data types, operators, expressions, and control flow statements including selection statements like if-else and switch statements, as well as looping statements like while, do-while, for loops, and for-each loops. Examples are provided for many of the concepts discussed.
This document provides a summary of key Java concepts including keywords, packages, data types, and common data structures and algorithms. It includes tables that define Java keywords and their usage, standard Java packages, primitive data types and conversions between them, and collections and algorithms from the Java utilities package. The document also provides examples of using regular expressions, formatted output, and MessageFormat in Java.
This slides describes the basic concepts of industrial-strength compiler design. This includes basic concept of static single-assignment form (SSA) and various optimizations such as dead code elimination, global value numbering, constant propagation, etc. This is intend for a 150 minutes undergraduate compiler class.
The document discusses the benefits of declarative programming using Scala. It provides examples of implementing algorithms and data structures declaratively in Scala. It also discusses the history and future of Scala, as well as how Scala encourages thinking about programs as transformations rather than changes to memory.
This document discusses type systems and their use in static analysis of programs. It begins by defining what a type is - a set of values that represents concrete program elements. It then presents the notation and rules for a simple typed lambda calculus. Key points are that type systems provide abstraction over program elements, type checking is done using inference rules, and environments map variables to their types. Type systems ensure soundness by guaranteeing programs evaluate without type errors. Constraints may also need to be solved, such as ensuring types are consistent in conditionals and recursion.
This document provides a summary of key elements of the C programming language including program structure, data types, operators, flow control statements, standard libraries, and common functions. It covers topics such as functions, variables, comments, preprocessor directives, constants, pointers, arrays, structures, I/O, math functions, and limits of integer and floating point types. The summary is presented in a reference card format organized by sections.
Similar to Declarative Semantics Definition - Static Analysis and Error Checking (20)
15. Static Analysis and Error Checking 4
static checking
editor services
transformation
refactoring
code generation
name analysis
name binding and scope
16. SDF3
NaBL
TS
Stratego
ESV
editor
Static Analysis and Error Checking
SPT
tests
5
syntax definition
concrete syntax
abstract syntax
static semantics
name binding
type system
dynamic semantics
translation
interpretation
18. Static Analysis and Error Checking 7
formal semantics
type system
name binding
testing
name binding
type system
constraints
specification
name binding
type system
constraints
27. Static Analysis and Error Checking 11
/* factorial function */
let
  var x := 0
  function fact(n : int) : int =
    if n < 1 then 1 else (n * fact(n - 1))
in
  for i := 1 to 3 do (
    x := x + fact(i);
    printint(x);
    print(" ")
  )
end
28. Static Analysis and Error Checking 12
#include <stdio.h>

/* factorial function */
int fac(int num) {
  if (num < 1)
    return 1;
  else
    return num * fac(num - 1);
}

int main() {
  printf("%d! = %d\n", 10, fac(10));
  return 0;
}
29. Static Analysis and Error Checking 13
class Main {
  public static void main(String[] args) {
    System.out.println(new Fac().fac(10));
  }
}

class Fac {
  public int fac(int num) {
    int num_aux;
    if (num < 1)
      num_aux = 1;
    else
      num_aux = num * this.fac(num - 1);
    return num_aux;
  }
}
35. restricting context-free languages
Static Analysis and Error Checking 14
static semantics
context-free superset
context-sensitive
language
context-free grammar
L(G) = {w∈Σ* | S ⇒G* w}
static semantics
L = {w∈ L(G) | ⊢ w}
judgements
well-formed ⊢ w
well-typed E ⊢ e : t
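The definition above, L = {w ∈ L(G) | ⊢ w}, can be illustrated with a toy sketch: a context-free-style check admits a superset of programs, and a separate well-formedness judgement filters out the ill-formed ones. The toy two-statement language and all names here are hypothetical.

```python
def in_cfl(program: str) -> bool:
    """Context-free-style check: every line is 'decl <id>' or 'use <id>'."""
    for line in program.strip().splitlines():
        parts = line.split()
        if len(parts) != 2 or parts[0] not in ("decl", "use"):
            return False
    return True

def well_formed(program: str) -> bool:
    """The judgement |- w: every use must follow a declaration (context-sensitive)."""
    declared = set()
    for line in program.strip().splitlines():
        kind, name = line.split()
        if kind == "decl":
            declared.add(name)
        elif name not in declared:
            return False
    return True

def in_language(program: str) -> bool:
    """L = { w in L(G) | |- w }"""
    return in_cfl(program) and well_formed(program)

ok, bad = "decl x\nuse x", "use x\ndecl x"
assert in_cfl(ok) and in_cfl(bad)              # both are in the superset
assert in_language(ok) and not in_language(bad)  # only one is in L
```

The point of the sketch: `bad` is perfectly grammatical, so no context-free grammar alone can reject it; only the added judgement can.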
37. Tiger
type system
Static Analysis and Error Checking 16
E ⊢ i : int
E ⊢ s : string
E ⊢ nil : ⊥
38. Tiger
type system
Static Analysis and Error Checking 17
E ⊢ () : ∅
E ⊢ e1 : t1
E ⊢ e2 : t2
E ⊢ e1 ; e2 : t2
39. Tiger
type system
E ⊢ e1 : array of t
E ⊢ e2 : int
E ⊢ e1[e2] : t
Static Analysis and Error Checking 18
E ⊢ e1 : int
E ⊢ e2 : int
E ⊢ e1 + e2 : int
E ⊢ e1 : int
E ⊢ e2 : int
E ⊢ e1 < e2 : int
E ⊢ e1 : string
E ⊢ e2 : string
E ⊢ e1 < e2 : int
40. Tiger
type system
E ⊢ e1 : t1
E ⊢ e2 : t2
t1 ≅ t2
E ⊢ e1 = e2 : int
t1 <: t2
t1 ≅ t2
t2 <: t1
t1 ≅ t2
t ≠ ∅
t ≅ t
Static Analysis and Error Checking 19
⊥<: {f1, …, fn}
⊥<: array of t
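The compatibility relation on this slide (t1 ≅ t2 via subtyping in either direction, with ⊥ — the type of nil — below every record and array type, and reflexivity excluding ∅) could be encoded roughly as follows; the type encodings are assumptions of this sketch, not the course's:

```python
def subtype(t1, t2):
    # bottom (the type of nil) is a subtype of every record and array type
    if t1 == "bottom" and isinstance(t2, tuple) and t2[0] in ("record", "array"):
        return True
    return t1 == t2

def compatible(t1, t2):
    # t1 ≅ t2 iff t1 <: t2 or t2 <: t1; reflexivity excludes the unit type
    if t1 == "unit" and t2 == "unit":
        return False
    return subtype(t1, t2) or subtype(t2, t1)

assert compatible("bottom", ("record", (("f1", "int"),)))  # nil vs. record
assert compatible("int", "int")
assert not compatible("int", "string")
assert not compatible("unit", "unit")                      # t != () required
```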
41. Tiger
type system
Static Analysis and Error Checking 20
E ⊢ e1 : t1
E ⊢ e2 : t2
t1 ≅ t2
E ⊢ e1 := e2 : ∅
E ⊢ e1 : int
E ⊢ e2 : t1
E ⊢ e3 : t2
E ⊢ if e1 then e2 else e3: sup<: {t1, t2}
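As a rough illustration of how rules like these turn into code, here is a minimal recursive checker for a few of them (literals, +, <, and a simplified if whose branches must have equal types). The tuple-based AST is invented for this sketch:

```python
def type_of(e):
    tag = e[0]
    if tag == "int":
        return "int"                      # E |- i : int
    if tag == "string":
        return "string"                   # E |- s : string
    if tag == "add":                      # E |- e1 + e2 : int, both operands int
        if type_of(e[1]) == "int" and type_of(e[2]) == "int":
            return "int"
        raise TypeError("operands of + must be int")
    if tag == "lt":                       # int < int or string < string, result int
        t1, t2 = type_of(e[1]), type_of(e[2])
        if t1 == t2 and t1 in ("int", "string"):
            return "int"
        raise TypeError("operands of < must both be int or both be string")
    if tag == "if":                       # condition int, branches compatible
        if type_of(e[1]) != "int":
            raise TypeError("condition must be int")
        t2, t3 = type_of(e[2]), type_of(e[3])
        if t2 != t3:
            raise TypeError("branches must have compatible types")
        return t2
    raise ValueError(f"unknown node {tag}")

assert type_of(("add", ("int", 1), ("int", 2))) == "int"
assert type_of(("lt", ("string", "a"), ("string", "b"))) == "int"
```

Each premise of a rule becomes a recursive call plus a check; the conclusion becomes the returned type.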
43. Tiger
scoping
Static Analysis and Error Checking 22
let
  type t = u
  type u = int
  var x: u := 0
in
  x := 42 ;
  let
    type u = t
    var y: u := 0
  in
    y := 42
  end
end
48. Tiger
variable names
E ⊢ e1 : t1
t ≅ t1
E ⊕ v ↦ t ⊢ e2 : t2
E ⊢ let var v : t = e1 in e2: t2
Static Analysis and Error Checking 23
E(v) = t
E ⊢ v : t
49. Tiger
function names
Static Analysis and Error Checking 24
E ⊕ v1 ↦ t1, …, vn ↦ tn ⊢ e1 : tf
E ⊕ f ↦ t1 × … × tn → tf ⊢ e2 : t
E ⊢ let
      function f (v1 : t1, …, vn : tn) = e1
    in e2 : t
E(f) = t1 × … × tn → tf
E ⊢ e1 : t1
…
E ⊢ en : tn
E ⊢ f (e1, …, en) : tf
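The variable and function rules can be sketched with an explicit environment E mapping names to types; the AST tuples and the function-type encoding (parameter types, result type) are assumptions of this sketch:

```python
def type_of(e, env):
    tag = e[0]
    if tag == "int":
        return "int"
    if tag == "var":                      # E(v) = t  =>  E |- v : t
        return env[e[1]]
    if tag == "call":                     # E(f) = t1 x ... x tn -> tf
        _, f, args = e
        params, result = env[f]
        actual = [type_of(a, env) for a in args]
        if actual != list(params):        # each E |- ei : ti premise
            raise TypeError(f"bad argument types for {f}")
        return result                     # E |- f(e1, ..., en) : tf
    raise ValueError(tag)

env = {"x": "int", "fact": (("int",), "int")}
assert type_of(("call", "fact", [("var", "x")]), env) == "int"
```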
51. Static Analysis and Error Checking 26
Testing
name binding
test outer name [[
  let type t = u
      type [[u]] = int
      var x: [[u]] := 0
  in
    x := 42 ;
    let type u = t
        var y: u := 0
    in
      y := 42
    end
  end
]] resolve #2 to #1
test inner name [[
  let type t = u
      type u = int
      var x: u := 0
  in
    x := 42 ;
    let type [[u]] = t
        var y: [[u]] := 0
    in
      y := 42
    end
  end
]] resolve #2 to #1
52. Static Analysis and Error Checking 27
Testing
type system
test integer constant [[
  let type t = u
      type u = int
      var x: u := 0
  in
    x := 42 ;
    let type u = t
        var y: u := 0
    in
      y := [[42]]
    end
  end
]] run get-type to IntTy()
test variable reference [[
  let type t = u
      type u = int
      var x: u := 0
  in
    x := 42 ;
    let type u = t
        var y: u := 0
    in
      y := [[x]]
    end
  end
]] run get-type to IntTy()
53. Static Analysis and Error Checking 28
Testing
constraints
test undefined variable [[
  let type t = u
      type u = int
      var x: u := 0
  in
    x := 42 ;
    let type u = t
        var y: u := 0
    in
      y := [[z]]
    end
  end
]] 1 error
test type error [[
  let type t = u
      type u = string
      var x: u := 0
  in
    x := 42 ;
    let type u = t
        var y: u := 0
    in
      y := [[x]]
    end
  end
]] 1 error
54. Static Analysis and Error Checking 29
testing
static semantics
context-free superset
language
56. Name Binding Language
Static Analysis and Error Checking 31
concepts
defines
refers
namespaces
scopes
imports
57. Name Binding Language
definitions and references
Static Analysis and Error Checking 32
TypeDec(t, _):
  defines Type t
Tid(t) :
  refers to Type t
let
  type t = u
  type u = int
  var x: u := 0
in
  x := 42 ;
  let
    type u = t
    var y: u := 0
  in
    y := 42
  end
end
58. Name Binding Language
Static Analysis and Error Checking 33
unique definitions
TypeDec(t, _):
  defines unique Type t
Tid(t) :
  refers to Type t
let
  type t = u
  type u = int
  var x: u := 0
in
  x := 42 ;
  let
    type u = t
    var y: u := 0
  in
    y := 42
  end
end
59. Name Binding Language
Static Analysis and Error Checking 34
namespaces
let
  type mt = int
  type rt = {f1: string, f2: int}
  type at = array of int

  var x := 42
  var y: int := 42

  function p() = print("foo")
  function sqr(x: int): int = x*x
in
  …
end
namespaces
Type Variable Function
TypeDec(t, _):
  defines unique Type t
FunDec(f, _, _):
  defines unique Function f
FunDec(f, _, _, _):
  defines unique Function f
Call(f, _) :
  refers to Function f
VarDec(v, _):
  defines unique Variable v
FArg(a, _):
  defines unique Variable a
Var(v):
  refers to Variable v
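A minimal sketch of namespaces combined with "defines unique": definitions are keyed by (namespace, name), so the same name in different namespaces (Type, Variable, Function) does not clash, while two definitions in the same namespace do. The encoding is invented for illustration:

```python
def check(defs):
    """defs: list of (namespace, name) pairs; returns duplicate-definition errors."""
    seen, errors = set(), []
    for ns, name in defs:
        if (ns, name) in seen:
            errors.append(f"duplicate {ns} {name}")
        seen.add((ns, name))
    return errors

# 'type x' and 'var x' live in different namespaces: no clash
assert check([("Type", "x"), ("Variable", "x")]) == []
# two 'type x' in one scope violate 'defines unique'
assert check([("Type", "x"), ("Type", "x")]) == ["duplicate Type x"]
```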
60. Name Binding Language
Static Analysis and Error Checking 35
scopes
FunDec(f, _, _):
  defines unique Function f
  scopes Variable

FunDec(f, _, _, _):
  defines unique Function f
  scopes Variable

Let(_, _):
  scopes Type, Function, Variable
let
  type t = u
  type u = int
  var x: u := 0
in
  x := 42 ;
  let
    type u = t
    var y: u := 0
  in
    y := 42
  end
end
65. Name Binding Language
Static Analysis and Error Checking 36
definition scopes
For(v, start, end, body):
  defines Variable v in body
for x := 0 to 42 do x;
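The scoping rules can be approximated with a stack of scopes: a construct that "scopes" a namespace pushes a fresh scope, and a reference resolves to the nearest enclosing definition. This mirrors the nested-let example, but the class and its API are invented for illustration, not NaBL's implementation:

```python
class Scopes:
    def __init__(self):
        self.stack = [{}]               # innermost scope last

    def enter(self):                    # e.g. a 'let' that scopes Type/Variable
        self.stack.append({})

    def leave(self):
        self.stack.pop()

    def define(self, name, site):
        self.stack[-1][name] = site

    def resolve(self, name):            # nearest enclosing definition wins
        for scope in reversed(self.stack):
            if name in scope:
                return scope[name]
        raise NameError(name)

s = Scopes()
s.define("u", "outer-u")                # outer let: type u = int
s.enter()                               # inner let
assert s.resolve("u") == "outer-u"      # before the inner redefinition
s.define("u", "inner-u")                # inner let: type u = t
assert s.resolve("u") == "inner-u"      # inner u shadows outer u
s.leave()
assert s.resolve("u") == "outer-u"      # shadowing ends with the scope
```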
66. Static Analysis and Error Checking 37
Spoofax
bound renaming
let
  type t = u
  type u = int
  var x: u := 0
in
  x := 42 ;
  let
    type u = t
    var y: u := 0
  in
    y := 42
  end
end
⇒
let
  type t0 = u0
  type u0 = int
  var x: u0 := 0
in
  x := 42 ;
  let
    type u1 = t0
    var y: u1 := 0
  in
    y := 42
  end
end
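Bound renaming can be sketched as a single pass that gives every definition a fresh name and rewrites each reference to the fresh name of the definition it resolves to, so shadowing disappears (the outer u becomes u0, the inner u becomes u1). The flat event-list encoding is a simplification invented for this sketch:

```python
def rename(events):
    """events: ('def', name) / ('ref', name) / ('enter',) / ('leave',).
    Returns the fresh name emitted for each def/ref, in order."""
    counter = {}        # how many times each name has been defined
    stack = [{}]        # scope stack: original name -> fresh name
    out = []
    for ev in events:
        if ev[0] == "enter":
            stack.append({})
        elif ev[0] == "leave":
            stack.pop()
        elif ev[0] == "def":
            n = ev[1]
            fresh = f"{n}{counter.get(n, 0)}"
            counter[n] = counter.get(n, 0) + 1
            stack[-1][n] = fresh
            out.append(fresh)
        else:  # ref: nearest enclosing definition wins
            n = ev[1]
            for scope in reversed(stack):
                if n in scope:
                    out.append(scope[n])
                    break
    return out

# mirrors the slide: outer 'type u', then an inner 'type u' shadowing it
evs = [("def", "u"), ("ref", "u"), ("enter",),
       ("def", "u"), ("ref", "u"), ("leave",), ("ref", "u")]
assert rename(evs) == ["u0", "u0", "u1", "u1", "u0"]
```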
67. Static Analysis and Error Checking 38
Spoofax
annotated terms
t{t1, ..., tn}
add additional information to a term but preserve its signature
69. Static Analysis and Error Checking 40
TS
axioms
type rules
  Int(_) : IntTy()
  String(_): StringTy()
signatures
  NilTy : Type
type rules
  Nil(): NilTy()
E ⊢ i : int
E ⊢ s : string
E ⊢ nil : ⊥
70. Static Analysis and Error Checking 41
TS
inference rules
type rules
  Add(e1,e2): IntTy()
    where e1: ty1
      and ty1 == IntTy()
      and e2: ty2
      and ty2 == IntTy()
E ⊢ e1 : int
E ⊢ e2 : int
E ⊢ e1 + e2 : int
71. Static Analysis and Error Checking 42
TS
inference rules
type rules
  Lt(e1,e2): IntTy()
    where e1: ty1
      and e2: ty2
      and ( ( ty1 == IntTy() and ty2 == IntTy() )
         or ( ty1 == StringTy() and ty2 == StringTy() ) )
E ⊢ e1 : int
E ⊢ e2 : int
E ⊢ e1 < e2 : int
E ⊢ e1 : string
E ⊢ e2 : string
E ⊢ e1 < e2 : int
72. Static Analysis and Error Checking 43
NaBL and TS
interaction
binding rules
  VarDec(x, ty):
    defines unique Variable x of type ty
type rules
  Var(x): ty
    where definition of x: ty
73. Static Analysis and Error Checking 44
NaBL and TS
interaction
FArg(a, t):
  defines unique Variable a of type t

FunDec(f, a*, e):
  defines unique Function f of type (t*, t)
  where a* has type t*
    and e has type t

Call(f, a*) :
  refers to Function f of type (t*, _)
  where a* has type t*
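The interaction in Call(f, a*) — resolving to a function definition whose parameter types match the argument types — can be sketched by keying definitions on both name and parameter types, so two functions f with different signatures coexist. The dictionary encoding is an assumption of this sketch:

```python
# definitions keyed by (name, parameter types), mapped to the result type
defs = {
    ("f", ("int",)): "int",
    ("f", ("string",)): "string",
}

def resolve_call(name, arg_types):
    """Resolve a call to the definition with matching parameter types;
    a KeyError means no definition matches (a name/type error)."""
    return defs[(name, tuple(arg_types))]

assert resolve_call("f", ["int"]) == "int"
assert resolve_call("f", ["string"]) == "string"
```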
75. Static Analysis and Error Checking 46
TS
type errors
type rules
  Add(e1,e2): IntTy()
    where e1: ty1
      and ty1 == IntTy()
      else error "…" on e1
      and e2: ty2
      and ty2 == IntTy()
      else error "…" on e2
E ⊢ e1 : int
E ⊢ e2 : int
E ⊢ e1 + e2 : int
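The "else error … on e" pattern reports an error attached to the offending subterm and then recovers with the expected type, so checking can continue past the first failure instead of aborting. A minimal sketch, with invented names:

```python
def check_add(e1_ty, e2_ty, errors):
    """Check the Add rule; append (message, subterm) errors but recover."""
    if e1_ty != "int":
        errors.append(("expected int", "e1"))   # error "..." on e1
    if e2_ty != "int":
        errors.append(("expected int", "e2"))   # error "..." on e2
    return "int"   # recover: the result of + is int regardless

errors = []
assert check_add("int", "string", errors) == "int"
assert errors == [("expected int", "e2")]       # only e2 is flagged
```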
76. Static Analysis and Error Checking 47
TS
missing definitions
type rules
  Var(x): ty
    where definition of x: ty
    else error "…" on x
77. Static Analysis and Error Checking 48
Spoofax
origin tracking
let var x := 21 in y * 2 end
Let([VarDec("x", Int("21"))], [Times(Var("y"), Int("2"))])
desugar: Times(e1, e2) -> Bop(MUL(), e1, e2)
Let([VarDec("x", Int("21"))], [Bop(MUL(), Var("y"), Int("2"))])
Let([VarDec("x"{"…"}, Int("21"))], [Bop(MUL(), Var("y"), Int("2"))])
Var(x): ty
  where definition of x: ty
  else error "…" on x
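Origin tracking can be sketched as a desugaring step that copies the source origin (a character span) of the old node onto its replacement, so errors raised on the desugared tree — like the undefined y above — still point into the original text. The dictionary node shapes are invented for this sketch, not Spoofax's term representation:

```python
def times(e1, e2, origin):
    """Build a Times node carrying its source span."""
    return {"cons": "Times", "kids": [e1, e2], "origin": origin}

def desugar(node):
    """Times(e1, e2) -> Bop(MUL(), e1, e2), preserving the origin."""
    if node["cons"] == "Times":
        return {"cons": "Bop",
                "kids": [{"cons": "MUL", "kids": [], "origin": node["origin"]},
                         *node["kids"]],
                "origin": node["origin"]}   # new node points at the old span
    return node

t = times({"cons": "Var", "kids": ["y"], "origin": (20, 21)},
          {"cons": "Int", "kids": ["2"], "origin": (24, 25)},
          origin=(20, 25))
b = desugar(t)
assert b["cons"] == "Bop" and b["origin"] == (20, 25)  # span survives desugaring
```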
91. Static Analysis and Error Checking 49
derivation of editor services
error checking
reference resolution
code completion
Spoofax
static analysis
95. Except where otherwise noted, this work is licensed under
Static Analysis and Error Checking 50
96. Static Analysis and Error Checking 51
attribution
slide             title                        author                  license
1                 Inspection                   Kent Wien               CC BY-NC 2.0
2, 3              PICOL icons                  Melih Bilgil            CC BY 3.0
10                Noam Chomsky                 Maria Castelló Solbes   CC BY-NC-SA 2.0
11, 16-20, 22-24  Tiger                        Bernard Landgraf        CC BY-SA 3.0
12                The C Programming Language   Bill Bradford           CC BY 2.0
13                Italian Java book cover
Editor's Notes
feedback loop
restrictions on production rules => grammar classes
round-up on every lecture: what to take with you; check yourself, pre- and post-paration