This document discusses composing source-to-source data-flow transformations with dependent dynamic rewrite rules. It outlines strategies for basic constant propagation, dead code elimination, and copy propagation. It also discusses issues that can arise with dynamic rewrite rules, such as insufficient dependencies, free variable capture, and escaping variables, and proposes solutions like undefining rules when variables are modified or go out of scope.
1. Composing Source-to-Source
Data-Flow Transformations with
Dependent Dynamic Rewrite Rules
Program Transformation 2004–2005
Eelco Visser
Institute of Information & Computing Sciences
Utrecht University,
The Netherlands
March 3, 2005
3. Part I
Data-Flow Transformation Strategies
http://www.strategoxt.org
4. Flow-Sensitive Constant Propagation

(x := 3;
 y := x + 1;
 if foo(x) then
   (y := 2 * x;
    x := y - 2)
 else
   (x := y;
    y := 23);
 z := x + y)

is transformed to

(x := 3;
 y := 4;
 if foo(3) then
   (y := 6;
    x := 4)
 else
   (x := 4;
    y := 23);
 z := 4 + y)
5. Trace of the example above, showing each statement, its transformed
   form, and the PropConst rule set in effect afterwards:

   x := 3          x := 3        x -> 3
   y := x + 1      y := 4        x -> 3, y -> 4
   if foo(x)       if foo(3)     x -> 3, y -> 4
   then:
     y := 2 * x    y := 6        x -> 3, y -> 6
     x := y - 2    x := 4        x -> 4, y -> 6
   else:
     x := y        x := 4        x -> 4, y -> 4
     y := 23       y := 23       x -> 4, y -> 23
   after the if (intersection)   x -> 4
   z := x + y      z := 4 + y    x -> 4
7. Strategy for Basic Constant Propagation

prop-const = PropConst <+ prop-const-assign
  <+ prop-const-declare <+ prop-const-let <+ prop-const-if
  <+ prop-const-while <+ (all(prop-const); try(EvalBinOp))

prop-const-assign =
  |[ x := <prop-const => e> ]|
  ; if <is-value> e
    then rules( PropConst.x : |[ x ]| -> |[ e ]| )
    else rules( PropConst.x :- |[ x ]| ) end

prop-const-declare =
  |[ var x := <prop-const => e> ]|
  ; if <is-value> e
    then rules( PropConst+x : |[ x ]| -> |[ e ]| )
    else rules( PropConst+x :- |[ x ]| ) end

prop-const-let =
  ?|[ let d* in e* end ]|; {| PropConst : all(prop-const) |}
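The composition ends with all(prop-const); try(EvalBinOp), which folds an operator once its arguments have become constants. EvalBinOp itself is not shown in the deck; a minimal sketch for addition, assuming i1, i2, i3 are meta-variables for integer constants and using addS from the Stratego library (addition on integers in string representation):

EvalBinOp : |[ i1 + i2 ]| -> |[ i3 ]|
  where <addS> (i1, i2) => i3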
8. Intersection of Rule Sets
prop-const-if =
  |[ if <prop-const> then <id> else <id> ]|
  ; ( |[ if <id> then <prop-const> else <id> ]|
      /PropConst\ |[ if <id> then <id> else <prop-const> ]| )
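The two branches are transformed with separate copies of the PropConst rule set; s1 /PropConst\ s2 then keeps only the rules on which both copies agree. Annotating the conditional from slide 4 with the rule sets from the slide 5 trace makes this concrete (an editorial restatement of that trace, not new analysis):

                /* entry: x -> 3, y -> 4 */
if foo(3) then
  (y := 6;      /* x -> 3, y -> 6 */
   x := 4)      /* x -> 4, y -> 6 */
else
  (x := 4;      /* x -> 4, y -> 4 */
   y := 23);    /* x -> 4, y -> 23 */
                /* intersection: x -> 4 (y -> 6 and y -> 23 clash) */
z := 4 + y      /* x folds to 4; y is no longer constant */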
9. Intersection of Rule Sets

let var x := 1 var y := z
    var z := 3 var a := 4
in x := x + z;
   a := 5;
   if y then
     (y := y + 5;
      z := 8)
   else
     (x := a + 21;
      y := x + 1;
      z := a + z);
   b := a + z;
   z := z + x
end

is transformed to

let var x := 1 var y := z
    var z := 3 var a := 4
in x := 4;
   a := 5;
   if y then
     (y := y + 5;
      z := 8)
   else
     (x := 26;
      y := 27;
      z := 8);
   b := 13;
   z := 8 + x
end

(The slide annotates each statement with the constant values of x, y, z, a,
and b known at that point. After the if, the branch rule sets agree only on
z -> 8 and a -> 5, so b := a + z folds to 13, while x and y are dropped.)
10. Fixed-Point Intersection of Rule Sets

let var w := 20 var x := 20
    var y := 20 var z := 10
in while SomethingUnknown() do
     (if x = 20 then w := 20 else w := 10;
      if y = 20 then x := 20 else x := 10;
      if z = 20 then y := 20 else y := 10);
   w; x; y; z
end

is transformed to

let var w := 20 var x := 20
    var y := 20 var z := 10
in while SomethingUnknown() do
     (if x = 20 then w := 20 else w := 10;
      if y = 20 then x := 20 else x := 10;
      y := 10);
   w; x; y; 10
end

(The slide traces the values of w, x, y, and z over four fixed-point
iterations: w, x, and y are invalidated one after another, while z -> 10
remains stable, so the test z = 20 is decided and the final use of z is 10.)
11. Fixed-Point Intersection of Rule Sets
prop-const-while =
  ?|[ while e1 do e2 ]|
  ; ( /PropConst\* |[ while <prop-const> do <prop-const> ]| )
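/PropConst\* is the fixed-point variant of the intersection: the loop is transformed repeatedly, and before each pass the rule set at loop entry is intersected with the set that survived the body, until no more rules are invalidated. For the example on slide 10 this runs roughly as follows (a summary of that slide's iteration trace):

   entry set initially:  w -> 20, x -> 20, y -> 20, z -> 10
   early passes:  each conditional whose test is no longer decidable
                  assigns different constants in its two branches, so the
                  branch intersections drop w, x, and y in turn
   final pass:    re-running the body under { z -> 10 } invalidates
                  nothing further, so the set is stable
   result:        the body is transformed under { z -> 10 }; the test
                  z = 20 is decided, and z is still 10 after the loop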
12. Unreachable Code Elimination

let var x := 0 var y := 0
in x := 10;
   while A do
     (if x = 10
      then dosomething()
      else (dosomethingelse();
            x := x + 1));
   y := x
end

is transformed to

let var x := 0
    var y := 0
in x := 10;
   while A do
     dosomething();
   y := 10
end
13. Unreachable Code Elimination
prop-const-if =
|[ if <prop-const> then <id> else <id> ]|
; (EvalIf; prop-const
<+ (|[ if <id> then <prop-const> else <id> ]|
/PropConst
|[ if <id> then <id> else <prop-const> ]|))
prop-const-while =
?|[ while e1 do e2 ]|
; (|[ while <prop-const> do <id> ]|; EvalWhile
<+ (/PropConst*
|[ while <prop-const> do <prop-const> ]|))
EvalIf : |[ if 0 then e1 else e2 ]| -> |[ e2 ]|

EvalIf : |[ if i then e1 else e2 ]| -> |[ e1 ]|
  where <not(eq)> (|[ i ]|, |[ 0 ]|)

EvalWhile : |[ while 0 do e ]| -> |[ () ]|
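
A small hedged usage example of these rewrite rules in isolation:

  // <EvalIf> |[ if 0 then a() else b() ]|   =>  |[ b() ]|
  // <EvalIf> |[ if 1 then a() else b() ]|   =>  |[ a() ]|
  // <EvalWhile> |[ while 0 do a() ]|        =>  |[ () ]|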
14. Dead Code Elimination
Set of needed variables before each statement, computed backwards:

(x := foo(b);      {c,b}
 y := bar(h);      {x,c}
 a := c + 23;      {x,c}
 if 4 > x then     {x,a}
   (d := b + a;    {a}
    g := 4 + y)    {a}
 else
   (b := 2;        {x}
    a := y + 3;    {x}
    a := 4 + x);   {x}
 print(a))         {a}

⇒

(x := foo(b);
 a := c + 23;
 if not(4 > x) then
   a := 4 + x;
 print(a))
15. Dead Code Elimination
dce = VarNeeded <+ ElimAssign <+ dce-assign
<+ dce-seq <+ dce-if <+ dce-while <+ all(dce)
ElimAssign :
|[ x := e ]| -> |[ () ]|
where <not(Needed)> |[ x ]|
VarNeeded =
?|[ x ]|
; rules(Needed : |[ x ]|)
dce-assign =
?|[ x := e ]|
; rules(Needed :- |[ x ]|)
; |[ <id> := <dce> ]|
16. Dead Code Elimination – Control-Flow
dce-seq =
  |[ (<* reverse-filter(dce; not(?|[ () ]|)) >) ]|

dce-if =
  (|[ if <id> then <dce> else <id> ]|
   \Needed/ |[ if <id> then <id> else <dce> ]|)
  ; |[ if <dce> then <id> else <id> ]|
  ; try(ElimIf)

dce-while =
  |[ while <id> do <id> ]|
  ; (\Needed/* |[ while <dce> do <dce> ]|)
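
A hedged reading of \Needed/, the union dual of /R\, with the sets
from the dead-code example:

  // s1 \Needed/ s2 transforms both branches with independent copies
  // of the Needed rule set and takes their union afterwards: a
  // variable needed in either branch is needed before the if.
  //   then-branch needs: {a}
  //   else-branch needs: {x}
  //   before the if:     {x,a}   (the condition contributes x as well)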
17. Part II
Dependencies in Data-Flow Transformation Rules
18. Copy Propagation
Replace a copy x, produced by an assignment of the form x := y,
by the original y:

a := b;            a := b;
c := d + a    ⇒    c := d + b
19. Copy Propagation
Replace a copy x, produced by an assignment of the form x := y,
by the original y:

a := b;            a := b;
c := d + a    ⇒    c := d + b

First attempt using dynamic rules (wrong):

copy-prop-assign =
  ?|[ x := y ]|
  ; if <not(eq)> (x, y) then
      rules( CopyProp.x : |[ x ]| -> |[ y ]| )
    else
      rules( CopyProp.x :- |[ x ]| )
    end
20. Problem: Insufficient Dependencies
(a := b;             (a := b;
 b := foo();    ⇒     b := foo();
 c := d + a)          c := d + b)   // wrong!

copy-prop-assign =
  ?|[ x := y ]|
  ; if <not(eq)> (x, y) then
      rules( CopyProp.x : |[ x ]| -> |[ y ]| )
    else
      rules( CopyProp.x :- |[ x ]| )
    end
21. Problem: Insufficient Dependencies
(a := b;             (a := b;
 b := foo();    ⇒     b := foo();
 c := d + a)          c := d + b)   // wrong!

Problem: the rule is not undefined when a variable in its
right-hand side is changed.
Solution: undefine the rule when any of its variables is modified.

copy-prop-assign =
  ?|[ x := y ]|
  ; if <not(eq)> (x, y) then
      rules( CopyProp.x : |[ x ]| -> |[ y ]| )
    else
      rules( CopyProp.x :- |[ x ]| )
    end
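
The full fix appears on a later slide ("Copy Propagation – Propagation
Rule"); a minimal sketch of the dependency declaration, simplified in
that it omits the innermost-scope computation:

  copy-prop-assign =
    ?|[ x := y ]|
    ; where( <not(eq)> (x, y) )
    ; rules(
        CopyProp.x : |[ x ]| -> |[ y ]|
          depends on [(x,x), (y,y)]   // assigning x or y undefines the rule
      )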
22. Problem: Free Variable Capture
let var a := bar()
    var b := baz()
in a := b;
   let var b := foo()
   in print(a)
   end
end

⇒

let var a := bar()
    var b := baz()
in a := b;
   let var b := foo()
   in print(b) // wrong!
   end
end

copy-prop-assign =
  ?|[ x := y ]|
  ; if <not(eq)> (x, y) then
      rules( CopyProp.x : |[ x ]| -> |[ y ]| )
    else
      rules( CopyProp.x :- |[ x ]| )
    end
23. Problem: Free Variable Capture
let var a := bar()
    var b := baz()
in a := b;
   let var b := foo()
   in print(a)
   end
end

⇒

let var a := bar()
    var b := baz()
in a := b;
   let var b := foo()
   in print(b) // wrong!
   end
end

Problem: the rule is not undefined when its variables become shadowed.
Solution: locally undefine the rule while one of its variables is
shadowed.

copy-prop-assign =
  ?|[ x := y ]|
  ; if <not(eq)> (x, y) then
      rules( CopyProp.x : |[ x ]| -> |[ y ]| )
    else
      rules( CopyProp.x :- |[ x ]| )
    end
24. Problem: Escaping Variables (1)
let var a := bar()
in let var b := foo()
   in a := b
   end;
   print(a)
end

⇒

let var a := bar()
in let var b := foo()
   in a := b
   end;
   print(b) // wrong!
end

copy-prop-assign =
  ?|[ x := y ]|
  ; if <not(eq)> (x, y) then
      rules( CopyProp.x : |[ x ]| -> |[ y ]| )
    else
      rules( CopyProp.x :- |[ x ]| )
    end
25. Problem: Escaping Variables (1)
let var a := bar()
in let var b := foo()
   in a := b
   end;
   print(a)
end

⇒

let var a := bar()
in let var b := foo()
   in a := b
   end;
   print(b) // wrong!
end

Problem: the rule is not undefined when a variable goes out of scope.
Solution: (re)define the rule in the local scope.

copy-prop-assign =
  ?|[ x := y ]|
  ; if <not(eq)> (x, y) then
      rules( CopyProp.x : |[ x ]| -> |[ y ]| )
    else
      rules( CopyProp.x :- |[ x ]| )
    end
26. Problem: Escaping Variables (2)
let var a := bar()
    var c := baz()
in let var b := foo()
   in a := b;
      a := c
   end;
   print(a)
end

⇒

let var a := bar()
    var c := baz()
in let var b := foo()
   in a := b;
      a := c
   end;
   print(c) // ok!
end

copy-prop-assign =
  ?|[ x := y ]|
  ; if <not(eq)> (x, y) then
      rules( CopyProp.x : |[ x ]| -> |[ y ]| )
    else
      rules( CopyProp.x :- |[ x ]| )
    end
27. Problem: Escaping Variables (2)
let var a := bar()
    var c := baz()
in let var b := foo()
   in a := b;
      a := c
   end;
   print(a)
end

⇒

let var a := bar()
    var c := baz()
in let var b := foo()
   in a := b;
      a := c
   end;
   print(c) // ok!
end

Problem: defining the rule in the local scope is too restrictive.
Solution: (re)define the rule in the innermost scope containing all
variables involved.

copy-prop-assign =
  ?|[ x := y ]|
  ; if <not(eq)> (x, y) then
      rules( CopyProp.x : |[ x ]| -> |[ y ]| )
    else
      rules( CopyProp.x :- |[ x ]| )
    end
28. Common-Subexpression Elimination
(x := a + b;
 y := a + b;
 z := a + c;
 a := 1;
 z := (a + c) + (a + b))

⇒

(x := a + b;
 y := x;
 z := a + c;
 a := 1;
 z := (a + c) + (a + b))
29. Common-Subexpression Elimination
(x := a + b;
 y := a + b;
 z := a + c;
 a := 1;
 z := (a + c) + (a + b))

⇒

(x := a + b;
 y := x;
 z := a + c;
 a := 1;
 z := (a + c) + (a + b))

Assignment:          x := e
Propagation rule:    |[ e ]| -> |[ x ]|
Dependencies in common-subexpression elimination:
all variables in the assignment x := e
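
A hedged reading of why the last statement is not rewritten:

  // x := a + b defines  |[ a + b ]| -> |[ x ]|  depending on x, a, b;
  // z := a + c defines  |[ a + c ]| -> |[ z ]|  depending on z, a, c.
  // The assignment a := 1 modifies a dependency of both rules, so
  // both are undefined, and z := (a + c) + (a + b) keeps its
  // subexpressions.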
30. Common-Subexpression Elimination
cse = cse-assign <+ (all(cse); try(ReplaceExp))

cse-assign =
  |[ x := <cse => e> ]|
  ; where( <undefine-subexpressions> |[ x ]| )
  ; if <not(is-subterm(||[ x ]|))> |[ e ]| then
      rules( ReplaceExp : |[ e ]| -> |[ x ]| )
      ; where( <register-subexpressions(|e)> |[ x := e ]| )
    end

register-subexpressions(|e) =
  get-vars
  ; map({y : ?|[ y ]|; rules( UsedInExp :+ |[ y ]| -> e )})

undefine-subexpressions =
  bagof-UsedInExp
  ; map({?e; rules( ReplaceExp :- |[ e ]| )})

get-vars = collect({?|[ x ]|})
31. Dependent Dynamic Rules
Declare rule dependencies:

  R.lab : p1 -> p2
    depends on [(lab1,dep1), ..., (labn,depn)]

Undefine all rules depending on dep:

  undefine-R(|dep)

Locally undefine all rules depending on dep,
and label the current scope with lab:

  new-R(|lab, dep)
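
A hedged usage sketch, anticipating the CopyProp instantiation on the
next slides:

  // Declaring a dependent rule: it is undefined automatically when
  // any of its dependencies is invalidated.
  //   rules( CopyProp.z : |[ x ]| -> |[ y ]| depends on [(x,x), (y,y)] )
  // On an assignment |[ x := e ]|:
  //   undefine-CopyProp(|x)    // kill all rules depending on x
  // On a declaration |[ var x := e ]|:
  //   new-CopyProp(|x, x)      // locally shadow rules depending on x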
32. Copy Propagation – Assignments
copy-prop =
repeat1(CopyProp)
<+ copy-prop-assign
<+ copy-prop-declare
<+ copy-prop-let <+ copy-prop-if <+ copy-prop-while
<+ all(copy-prop)
copy-prop-declare =
|[ var x ta := <copy-prop => e> ]|
; where( new-CopyProp(|x, x) )
; where( try(<copy-prop-assign-aux> |[ x := e ]|) )
copy-prop-assign =
|[ x := <copy-prop => e> ]|
; where( undefine-CopyProp(|x) )
; where( try(copy-prop-assign-aux) )
33. Copy Propagation – Propagation Rule
copy-prop-assign-aux =
? |[ x := y ]|
; where( <not(eq)>(x,y) )
; where( innermost-scope-CopyProp => z )
; rules(
CopyProp.z : |[ x ]| -> |[ y ]|
depends on [(x,x), (y,y)]
)
innermost-scope-CopyProp =
get-var-names => vars
; innermost-scope-CopyProp(elem-of(|vars))
34. Copy Propagation – Control-Flow
copy-prop-let =
|[ let <*id> in <*id> end ]|
; {| CopyProp : all(copy-prop) |}
copy-prop-if =
  |[ if <copy-prop> then <id> else <id> ]|
  ; ( |[ if <id> then <copy-prop> else <id> ]|
      /CopyProp\ |[ if <id> then <id> else <copy-prop> ]|)

copy-prop-while =
  |[ while <id> do <id> ]|
  ; (/CopyProp\* |[ while <copy-prop> do <copy-prop> ]|)
35. Common-Subexpression Elimination – Assignments
cse =
cse-assign <+ cse-vardec <+ cse-let <+ cse-if
<+ cse-while <+ all(cse); try(CSE)
cse-vardec =
|[ var x ta := <cse => e> ]|
; new-CSE(|x, x)
; where( try(<cse-assign-aux> |[ x := e ]|) )
cse-assign =
|[ x := <cse => e> ]|
; undefine-CSE(|x)
; where(try(cse-assign-aux))
36. Common-Subexpression Elimination – Propagation
cse-assign-aux =
? |[ x := e ]|
; where( <not(oncetd(?|[ x ]|)); pure> |[ e ]| )
; where( get-var-names; map(!(<id>,<id>)) => xs )
; where( innermost-scope-CSE => z )
; rules( CSE.z : |[ e ]| -> |[ x ]| depends on xs )
pure =
?|[ i ]| + ?|[ x ]| + |[ <bo:id>(<pure>, <pure>) ]|
innermost-scope-CSE =
get-var-names => vars
; innermost-scope-CSE(where(<elem>(<id>, vars)))
37. Common-Subexpression Elimination – Control-Flow
cse-let =
|[ let <*id> in <*id> end ]|
; {| CSE : all(cse) |}
cse-if =
  |[ if <cse> then <id> else <id> ]|
  ; ( |[ if <id> then <cse> else <id> ]|
      /CSE\ |[ if <id> then <id> else <cse> ]|)

cse-while =
  |[ while <id> do <id> ]|
  ; (/CSE\* |[ while <cse> do <cse> ]|)
38. Part III
Generic Data-Flow Transformation Strategies
39. Generic Data-Flow Transformation Strategies
Data-flow transformation strategies are similar, so the underlying
strategy can be factored out. This requires generalizing over the
dynamic-rule combinators:

  new-dynamic-rules(|Rs, x, x)
  undefine-dynamic-rules(|Rs, x)
  /~Rs1\~Rs2/

The result allows very concise specifications of specific
transformations, and of combinations of transformations (see the
sketch below).
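
A hedged sketch of how the generic hooks are used; the names fp,
before, after, RsDf and RsSc follow the "Generic Strategy" slides
that come next:

  // The generic assignment case, in outline: on |[ x := e ]| it
  // undefines, in every rule set listed in RsDf, the rules that
  // depend on x; on |[ var x := e ]| it instead opens a local
  // (scoped) redefinition in the rule sets listed in RsSc.
  //   undefine-dynamic-rules(|RsDf, x)
  //   new-dynamic-rules(|RsSc, x, x)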
41. Generic Strategy – Assignments
prop-assign =
  |[ <id> := <fp> ]|
  ; (transform(fp)
     <+ before
        ; ?|[ x := e ]|
        ; undefine-dynamic-rules(|RsDf, x)
        ; after)

prop-declare =
  |[ var <id> := <fp> ]|
  ; (transform(fp)
     <+ before
        ; ?|[ var x := e ]|
        ; new-dynamic-rules(|RsSc, x, x)
        ; after)

prop-let =
  ?|[ let d* in e* end ]|
  ; (transform(fp)
     <+ {| ~RsSc : before; all(fp); after |})
42. Generic Strategy – Control Flow
prop-if =
  |[ if <fp> then <id> else <id> ]|
  ; (transform(fp)
     <+ before
        ; (|[ if <id> then <fp> else <id> ]|
           /~Rs1\~Rs2/ |[ if <id> then <id> else <fp> ]|)
        ; after)

prop-while =
  ?|[ while e1 do e2 ]|
  ; (transform(fp)
     <+ before
        ; /~Rs1\~Rs2/* |[ while <fp> do <fp> ]|
        ; after)
43. Instantiation: Constant Propagation
prop-const =
  forward-prop(prop-const-transform, id, prop-const-after
               | ["PropConst"], [], [])

prop-const-transform(recur) =
  EvalFor
  <+ EvalIf; recur
  <+ |[ while <recur> do <id> ]|; EvalWhile

prop-const-after =
  try(prop-const-assign <+ prop-const-declare
      <+ PropConst <+ EvalBinOp)

prop-const-assign =
  ?|[ x := e ]|
  ; where( <is-value> e )
  ; rules( PropConst.x : |[ x ]| -> |[ e ]| depends on [(x,x)] )

prop-const-declare =
  ?|[ var x ta := e ]|
  ; where( <prop-const-assign> |[ x := e ]| )
44. Instantiation: Copy Propagation
copy-prop =
  forward-prop(no-transform, id, copy-prop-after
               | ["CopyProp"], [], [])
copy-prop-after =
try(copy-prop-assign <+ copy-prop-declare
<+ repeat1(CopyProp))
copy-prop-declare =
? |[ var x ta := e ]|
; where(try(<copy-prop-assign> |[ x := e ]|))
copy-prop-assign =
? |[ x := y ]|
; where( <not(eq)> (x, y) )
; where( get-var-dependencies => xs )
; where( innermost-scope-CopyProp => z )
; rules( CopyProp.z : |[ x ]| -> |[ y ]| depends on xs )
45. Instantiation: Common-Subexpression Elimination
cse =
forward-prop(no-transform, id, cse-after|["CSE"],[],[])
cse-after =
try(cse-assign <+ cse-declare <+ CSE)
cse-declare =
?|[ var x := e ]|; where( <cse-assign> |[ x := e ]| )
cse-assign =
  ?|[ x := e ]|
  ; where( <pure-and-not-trivial(|x)> |[ e ]| )
  ; where( get-var-dependencies => xs )
  ; where( innermost-scope-CSE => z )
  ; rules( CSE.z : |[ e ]| -> |[ x ]| depends on xs )