Is it easier to add functional programming features to a query language, or to add query capabilities to a functional language? In Morel, we have done the latter.
Functional and query languages have much in common, and yet much to learn from each other. Functional languages have a rich type system that includes polymorphism and functions-as-values and Turing-complete expressiveness; query languages have optimization techniques that can make programs several orders of magnitude faster, and runtimes that can use thousands of nodes to execute queries over terabytes of data.
Morel is an implementation of Standard ML on the JVM, with language extensions to allow relational expressions. Its compiler can translate programs to relational algebra and, via Apache Calcite’s query optimizer, run those programs on relational backends.
In this talk, we describe the principles that drove Morel’s design, the problems that we had to solve in order to implement a hybrid functional/relational language, and how Morel can be applied to implement data-intensive systems.
(A talk given by Julian Hyde at Strange Loop 2021, St. Louis, MO, on October 1st, 2021.)
Apache Calcite is a dynamic data management framework. Think of it as a toolkit for building databases: it has an industry-standard SQL parser, validator, highly customizable optimizer (with pluggable transformation rules and cost functions, relational algebra, and an extensive library of rules), but it has no preferred storage primitives. In this tutorial, the attendees will use Apache Calcite to build a fully fledged query processor from scratch with very few lines of code. This processor is a full implementation of SQL over an Apache Lucene storage engine. (Lucene does not support SQL queries and lacks a declarative language for performing complex operations such as joins or aggregations.) Attendees will also learn how to use Calcite as an effective tool for research.
If SQL is the universal language of data, why do we author our most important data applications (metrics, analytics, business intelligence) in languages other than SQL? Multidimensional databases and languages such as MDX, DAX and Tableau LOD solve these problems but introduce others: they require specialized knowledge, complicate the data pipeline and don’t integrate well. Is it possible to define and query business intelligence models in SQL?
Apache Calcite has extended SQL to support measures, evaluation context, and multidimensional expressions. With these concepts you can define data models that contain measures, use them in queries, and define new measures in queries.
A talk given by Julian Hyde to Apache Calcite's virtual meetup, 2023-03-15.
Apache Calcite (a tutorial given at BOSS '21), by Julian Hyde
Apache Calcite is a dynamic data management framework. Think of it as a toolkit for building databases: it has an industry-standard SQL parser, validator, highly customizable optimizer (with pluggable transformation rules and cost functions, relational algebra, and an extensive library of rules), but it has no preferred storage primitives.
In this tutorial (given at BOSS '21 in Copenhagen as part of VLDB '21) the attendees will use Apache Calcite to build a fully fledged query processor from scratch with very few lines of code. This processor is a full implementation of SQL over an Apache Lucene storage engine. (Lucene does not support SQL queries and lacks a declarative language for performing complex operations such as joins or aggregations.) Attendees will also learn how to use Calcite as an effective tool for research.
Presenters: Julian Hyde and Stamatis Zampetakis
Building a SIMD Supported Vectorized Native Engine for Spark SQL, by Databricks
Spark SQL works very well with structured row-based data. Vectorized readers and writers for Parquet/ORC can make I/O much faster. Spark SQL also uses WholeStageCodeGen to improve performance through Java JIT-compiled code. However, the Java JIT compiler is usually not very good at utilizing the latest SIMD instructions under complicated queries. Apache Arrow provides a columnar in-memory layout and SIMD-optimized kernels, as well as Gandiva, an LLVM-based SQL engine. These native libraries can accelerate Spark SQL by reducing CPU usage for both I/O and execution.
This talk provides an in-depth overview of the key concepts of Apache Calcite. It explores the Calcite catalog, parsing, validation, and optimization with various planners.
The evolution of Apache Calcite and its Community, by Julian Hyde
Apache Calcite is an open source framework for building databases, and includes a SQL parser, relational algebra, and a highly extensible query optimizer.
It has achieved wide adoption: it is used in many commercial products and open source projects, and serves as a test bed for computer science research.
But there is a bootstrap problem: If software is written by a community of contributors, and each contributor acts in their own self-interest, how do you get the first working version of the product? The answer is in the story of how the technology evolved, and how the community evolved with it, and in this talk we tell that story.
"SPARQL Cheat Sheet" is a short collection of slides intended to act as a guide to SPARQL developers. It includes the syntax and structure of SPARQL queries, common SPARQL prefixes and functions, and help with RDF datasets.
The "SPARQL Cheat Sheet" is intended to accompany the SPARQL By Example slides available at http://www.cambridgesemantics.com/2008/09/sparql-by-example/ .
Understanding and Improving Code Generation, by Databricks
Code generation is integral to Spark’s physical execution engine. The Spark engine creates optimized bytecode at runtime, improving performance compared to interpreted execution. Spark has taken the next step with whole-stage codegen, which collapses an entire query into a single function.
This session covers how to use the PySpark interface to develop Spark applications, from loading and ingesting data to applying transformations. It covers working with different data sources, applying transformations, and Python best practices for developing Spark apps. The demo covers integrating Apache Spark apps, in-memory processing capabilities, working with notebooks, and integrating analytics tools into Spark applications.
Slides for Data Syndrome's one-hour course on PySpark. Introduces basic operations, Spark SQL, Spark MLlib, and exploratory data analysis with PySpark. Shows how to use pylab with Spark to create histograms.
Using Apache Calcite for Enabling SQL and JDBC Access to Apache Geode and Oth..., by Christian Tzolov
When working with Big Data and IoT systems we often feel the need for a common query language. System-specific languages usually require longer adoption time and are harder to integrate within existing stacks.
To fill this gap, some NoSQL vendors are building SQL access to their systems. Building a SQL engine from scratch is a daunting job, and frameworks like Apache Calcite can help you with the heavy lifting. Calcite allows you to integrate a SQL parser, a cost-based optimizer, and JDBC with your NoSQL system.
We will walk through the process of building a SQL access layer for Apache Geode (an in-memory data grid). I will share my experience, pitfalls, and technical considerations such as balancing SQL/RDBMS semantics against the design choices and limitations of the data system.
Hopefully this will enable you to add SQL capabilities to your preferred NoSQL data system.
This was a short introduction to the Scala programming language.
My colleague and I presented these slides in the Programming Language Design and Implementation course at K.N. Toosi University of Technology.
Spark SQL Tutorial | Spark SQL Using Scala | Apache Spark Tutorial For Beginn..., by Simplilearn
This presentation about Spark SQL will help you understand what Spark SQL is, its features, architecture, the data frame API, the data source API, the catalyst optimizer, and running SQL queries, with a demo on Spark SQL. Spark SQL is Apache Spark's module for working with structured and semi-structured data. It originated to overcome the limitations of Apache Hive. Now, let us get started and understand Spark SQL in detail.
Below topics are explained in this Spark SQL presentation:
1. What is Spark SQL?
2. Spark SQL features
3. Spark SQL architecture
4. Spark SQL - Dataframe API
5. Spark SQL - Data source API
6. Spark SQL - Catalyst optimizer
7. Running SQL queries
8. Spark SQL demo
This Apache Spark and Scala certification training is designed to advance your expertise working with the Big Data Hadoop Ecosystem. You will master essential skills of the Apache Spark open source framework and the Scala programming language, including Spark Streaming, Spark SQL, machine learning programming, GraphX programming, and Shell Scripting Spark. This Scala Certification course will give you vital skillsets and a competitive advantage for an exciting career as a Hadoop Developer.
What is this Big Data Hadoop training course about?
The Big Data Hadoop and Spark developer course has been designed to impart in-depth knowledge of Big Data processing using Hadoop and Spark. The course is packed with real-life projects and case studies to be executed in the CloudLab.
What are the course objectives?
Simplilearn’s Apache Spark and Scala certification training is designed to:
1. Advance your expertise in the Big Data Hadoop Ecosystem
2. Help you master essential Apache and Spark skills, such as Spark Streaming, Spark SQL, machine learning programming, GraphX programming and Shell Scripting Spark
3. Help you land a Hadoop developer job requiring Apache Spark expertise by giving you a real-life industry project coupled with 30 demos
What skills will you learn?
By completing this Apache Spark and Scala course you will be able to:
1. Understand the limitations of MapReduce and the role of Spark in overcoming these limitations
2. Understand the fundamentals of the Scala programming language and its features
3. Explain and master the process of installing Spark as a standalone cluster
4. Develop expertise in using Resilient Distributed Datasets (RDD) for creating applications in Spark
5. Master Structured Query Language (SQL) using SparkSQL
6. Gain a thorough understanding of Spark streaming features
7. Master and describe the features of Spark ML programming and GraphX programming
Learn more at https://www.simplilearn.com/big-data-and-analytics/apache-spark-scala-certification-training
Morel, a data-parallel programming language, by Julian Hyde
What would the perfect data-parallel programming language look like? It would be as expressive as a general-purpose functional programming language, as powerful and concise as SQL, and run programs just as efficiently on a laptop or a thousand-node cluster.
We present Morel, a functional programming language with relational extensions, working towards that goal. Morel is implemented in the Apache Calcite community on top of Calcite’s relational algebra framework. In this talk, we describe Morel’s evolution, including how we are pushing Calcite’s capabilities with graph and recursive queries.
A talk given by Julian Hyde at ApacheCon, New Orleans, October 4th 2022.
Is there a perfect data-parallel programming language? (Experiments with More..., by Julian Hyde
The perfect data parallel language has not yet been invented. SQL queries can achieve great performance and scale, but there are many general purpose algorithms that it cannot express. In Morel, we build on the functional and relational roots of MapReduce in an elegant and strongly-typed general-purpose programming language. But Morel is, in a real sense, a query language; programs are executed on relational frameworks such as Google BigQuery and Spark.
In this talk, we describe the principles that drove Morel’s design, the problems that we had to solve in order to implement a hybrid functional/relational language, and how Morel can be applied to implement data-intensive systems.
We also introduce Apache Calcite, the popular open source framework for query planning, and describe how Morel's compiler uses Calcite's relational algebra and rewrite rules to generate efficient plans.
This presentation takes you on a functional programming journey. It starts from basic Scala language design concepts and leads to the concept of monads: how some of them are designed in Scala, and what their purpose is.
Exercises in Style in Programming, by Software Guru
In the mid-20th century, the French writer Raymond Queneau wrote a book called "Exercises in Style", which presents the same short story written in 99 different ways.
In this talk we carry out the same exercise with a software program. We will cover different styles and paradigms: monolithic programming, object-oriented, relational, aspect-oriented, monads, map-reduce, and many others, through which we can appreciate the richness of human thought applied to computing.
This goes far beyond an academic exercise; the design of large-scale systems draws on this variety of styles. We will also discuss the dangers of being trapped in a small set of styles throughout your career, and the need to truly understand different styles when designing software system architectures.
About the speaker:
Crista Lopes is a professor in the School of Computer Science at the University of California, Irvine. Her research focuses on software engineering practices for large-scale systems. Previously, she was a founding member of the Xerox PARC team that created the aspect-oriented programming (AOP) paradigm. Crista is one of the lead developers of OpenSimulator, an open source platform for creating 3D virtual worlds. She is also the founder of Encitra, a company specializing in the use of virtual reality for sustainable urban development projects. @cristalopes
Statistical computation using R: an intro, by Kamarudheen KV
This presentation covers some basics of the R language. It is very useful for beginners in R: it describes the basics in a very easy manner, so it would be very helpful for those who are not familiar with R.
Building a semantic/metrics layer using Calcite, by Julian Hyde
A semantic layer, also known as a metrics layer, lies between business users and the database, and lets those users compose queries in the concepts that they understand. It also governs access to the data, manages data transformations, and can tune the database by defining materializations.
Like many new ideas, the semantic layer is a distillation and evolution of many old ideas, such as query languages, multidimensional OLAP, and query federation.
In this talk, we describe the features we are adding to Calcite to define business views, query measures, and optimize performance.
A talk given at Community over Code, the annual conference of the Apache Software Foundation, in Halifax, NS, on 9th October, 2023.
If SQL is the universal language of data, why do we author our most important data applications (metrics, analytics, business intelligence) in languages other than SQL? Multidimensional databases and languages such as MDX, DAX and Tableau LOD solve these problems but introduce others: they require specialized knowledge, complicate the data pipeline and don’t integrate well. Is it possible to define and query business intelligence models in SQL?
Apache Calcite has extended SQL to support metrics (which we call ‘measures’), filter context, and analytic expressions. With these concepts you can define data models (which we call Analytic Views) that contain metrics, use them in queries, and define new metrics in queries.
In this talk by the original developer of Apache Calcite, we describe the SQL syntax extensions for metrics, and how to use them for cross-dimensional calculations such as period-over-period, percent-of-total, and non-additive and semi-additive measures. We describe how we got around fundamental limitations in SQL semantics, and approaches for optimizing queries that use metrics.
A talk given by Julian Hyde at Data Council, Austin, TX, on March 29, 2023.
The Hop project entered Apache Software Foundation as an Incubator project in 2020, and Julian Hyde, one of their mentors, gave this presentation to educate the initial committers on the Apache Way and what to expect during the Incubation process.
The talk was given by Julian Hyde on October 1st, 2020, with the original title "Apache Incubation - What's it all about?"
Efficient spatial queries on vanilla databases, by Julian Hyde
A talk given by Julian Hyde at the Apache Calcite online meetup, 2021/01/20.
Spatial and GIS applications have traditionally required specialized databases, or at least specialized data structures like r-trees. Unfortunately this means that hybrid applications such as spatial analytics are not well served, and many people are unaware of the power of spatial queries because their favorite database does not support them.
In this talk, we describe how Apache Calcite enables efficient spatial queries using generic data structures such as HBase’s key-sorted tables, using techniques like Hilbert space-filling curves and materialized views. Calcite implements much of the OpenGIS function set and recognizes query patterns that can be rewritten to use particular spatial indexes. Calcite is bringing spatial query to the masses!
Smarter Together - Bringing Relational Algebra, Powered by Apache Calcite, in..., by Julian Hyde
What if Looker saw the queries you just executed and could predict your next query? Could it make those queries faster, by smarter caching, or aggregate navigation? Could it read your past SQL queries and help you write your LookML model? Those are some of the reasons to add relational algebra into Looker’s query engine, and why Looker hired Julian Hyde, author of Apache Calcite, to lead the effort. In this talk about the internals of Looker’s query engine, Julian Hyde will describe how the engine works, how Looker queries are described in Calcite’s relational algebra, and some features that it makes possible.
A talk by Julian Hyde at JOIN 2019 in San Francisco.
A talk given by Julian Hyde at DataCouncil SF on April 18, 2019
How do you organize your data so that your users get the right answers at the right time? That question is a pretty good definition of data engineering, but it also describes the purpose of every DBMS (database management system). And it’s not a coincidence that these are so similar.
This talk looks at the patterns that reoccur throughout data management — such as caching, partitioning, sorting, and derived data sets. As the speaker is the author of Apache Calcite, we first look at these patterns through the lens of Relational Algebra and DBMS architecture. But then we apply these patterns to the modern data pipeline, ETL and analytics. As a case study, we look at how Looker’s “derived tables” blur the line between ETL and caching, and leverage the power of cloud databases.
Don't optimize my queries, organize my data! (by Julian Hyde)
Your queries won't run fast if your data is not organized right. Apache Calcite optimizes queries, but can we make it optimize data? We had to solve several challenges. Users are too busy to tell us the structure of their database, and the query load changes daily, so Calcite has to learn and adapt. We talk about new algorithms we developed for gathering statistics on massive databases, and how we infer and evolve the data model based on the queries.
A talk given by Julian Hyde at ApacheCon NA 2018 in Montreal on September 26th, 2018.
Data all over the place! How SQL and Apache Calcite bring sanity to streaming..., by Julian Hyde
The revolution has happened. We are living in the age of the deconstructed database. Modern enterprises are powered by data, and that data lives in many formats and locations, in-flight and at rest; but, somewhat surprisingly, the lingua franca for data remains SQL.
In this talk, Julian describes Apache Calcite, a toolkit for relational algebra that powers many systems including Apache Beam, Flink and Hive. He discusses some areas of development in Calcite: streaming SQL, materialized views, enabling spatial query on vanilla databases, and what a mash-up of all three might look like.
He also describes how SQL is being extended to handle streaming, and the challenges that will need to be solved if it is to become standard.
A talk given by Julian Hyde at Lyft, San Francisco, on 2018/06/27.
Apache Calcite: A Foundational Framework for Optimized Query Processing Over ..., by Julian Hyde
A talk given at ACM SIGMOD 2018 in support of the paper "Calcite: A Foundational Framework for Optimized Query Processing Over Heterogeneous Data Sources" (https://arxiv.org/abs/1802.10233).
Apache Calcite is a foundational software framework that provides query processing, optimization, and query language support to many popular open-source data processing systems such as Apache Hive, Apache Storm, Apache Flink, Druid, and MapD. Calcite's architecture consists of a modular and extensible query optimizer with hundreds of built-in optimization rules, a query processor capable of processing a variety of query languages, an adapter architecture designed for extensibility, and support for heterogeneous data models and stores (relational, semi-structured, streaming, and geospatial). This flexible, embeddable, and extensible architecture is what makes Calcite an attractive choice for adoption in big-data frameworks. It is an active project that continues to introduce support for the new types of data sources, query languages, and approaches to query processing and optimization.
A talk given by Julian Hyde at DataEngConf SF on April 17th 2018.
Did you know that databases often “cheat”? Even with a scalable query engine and smart optimizer, many real-world queries would be too slow if the engine read all the data, so the engine re-writes your query to use a pre-materialized result. B-tree indexes made the first relational databases possible, and there are now many flavors of materialization, from explicit materialized views to OLAP-style caching and spatial indexes. Materialization is more relevant than ever in today’s heterogeneous, distributed systems.
If you are evaluating data engines, we describe what materialization features to look for in your next engine. If you are implementing an engine, we describe the features provided by Apache Calcite to design, maintain and use materializations.
Don’t optimize my queries, optimize my data! (by Julian Hyde)
Your queries won't run fast if your data is not organized right. Apache Calcite optimizes queries, but can we evolve it so that it can optimize data? We had to solve several challenges. Users are too busy to tell us the structure of their database, and the query load changes daily, so Calcite has to learn and adapt.
We talk about new algorithms we developed for gathering statistics on massive databases, and how we infer and evolve the data model based on the queries, suggesting materialized views that will make your queries run faster without you changing them.
A talk given by Julian Hyde at DataEngConf NYC, Columbia University, on 2017/10/30.
Query optimizers and people have one thing in common: the better they understand their data, the better they can do their jobs. Optimizing queries is hard if you don't have good estimates for the sizes of the intermediate join and aggregate results. Data profiling is a technique that scans data, looking for patterns within the data such as keys, functional dependencies, and correlated columns. These richer statistics can be used in Apache Calcite's query optimizer, and the projects that use it, such as Apache Hive, Phoenix and Drill. We describe how we built a data profiler as a table function in Apache Calcite, review the recent research and algorithms that made it possible, and show how you can use the profiler to improve the quality of your data.
A talk given by Julian Hyde at DataWorks Summit, San Jose, on June 14th 2017.
A smarter Pig: Building a SQL interface to Apache Pig using Apache Calcite, by Julian Hyde
What if Apache Pig had a SQL front-end and query optimizer? What if Apache Calcite was able to use Pig and MapReduce to run queries? In this project, we aimed to answer both questions by adding a Pig adapter for Calcite. In this talk, we describe Calcite's adapter framework, how we used it to write a Pig adapter, and how you can use this SQL interface to Pig for interactive and long-running queries.
A talk given by Eli Levine and Julian Hyde at Apache: Big Data, Miami, on May 17th, 2017.
Query optimizers and people have one thing in common: the better they understand their data, the better they can do their jobs. Optimizing queries is hard if you don't have good estimates for the sizes of the intermediate join and aggregate results. Data profiling is a technique that scans data, looking for patterns within the data such as keys, functional dependencies, and correlated columns. These richer statistics can be used in Apache Calcite's query optimizer, and the projects that use it, such as Apache Hive, Phoenix and Drill. We describe how we built a data profiler as a table function in Apache Calcite, review the recent research and algorithms that made it possible, and show how you can use the profiler to improve the quality of your data.
A talk given by Julian Hyde at Apache: Big Data, Miami, on May 16th 2017.
Streaming is necessary to handle data rates and latency, but SQL is unquestionably the lingua franca of data. Is it possible to combine SQL with streaming, and if so, what does the resulting language look like? Apache Calcite is extending SQL to include streaming, and Apache Apex is using Calcite to support streaming SQL. In this talk, Julian Hyde describes streaming SQL in detail and shows how you can use streaming SQL in your application. He also describes how Calcite’s planner optimizes queries for throughput and latency.
Julian Hyde gave this talk at Apex Big Data World, Mountain View, on April 4, 2017.
A talk given by Julian Hyde at FlinkForward, Berlin, on 2016/09/12.
Streaming is necessary to handle data rates and latency, but SQL is unquestionably the lingua franca of data. Is it possible to combine SQL with streaming, and if so, what does the resulting language look like? Apache Calcite is extending SQL to include streaming, and Apache Flink is using Calcite to support both regular and streaming SQL. In this talk, Julian Hyde describes streaming SQL in detail and shows how you can use streaming SQL in your application. He also describes how Calcite’s planner optimizes queries for throughput and latency.
Cost-based Query Optimization in Apache Phoenix using Apache Calcite, by Julian Hyde
This talk, given by Maryann Xue and Julian Hyde at Hadoop Summit, San Jose on June 30th, 2016, describes how we re-engineered Apache Phoenix with a cost-based optimizer based on Apache Calcite.
Apache Phoenix has rapidly become a workhorse in many organizations, providing a convenient standard SQL interface to HBase suitable for a wide variety of workloads from transactions to ETL and analytics. But Phoenix's initial query optimizer was based on static optimization procedures and thus could not choose between several potential plans or indices based on cost metrics.
We describe how we rebuilt Phoenix's parser and query optimizer using the Calcite framework, improving Phoenix's performance and SQL compliance. The new architecture uses relational algebra as an intermediate language, and this enables you to switch in other engines, especially those also based on Calcite. As an example of this, we demonstrate querying a Phoenix database via Apache Drill.
Streaming is necessary to handle IoT data rates and latency but SQL is unquestionably the lingua franca of data. Apache Samza and Apache Storm have new high-level query interfaces based on standard SQL with streaming extensions, both powered by Apache Calcite. Calcite's relational algebra allows query optimization and federation with data-at-rest in databases, memory, or HDFS.
A talk given by Julian Hyde at Hadoop Summit, San Jose, on 2016/06/29.
7. Data-parallel programming
Batch processing
Input is a large, immutable data set
Processing can be performed in parallel pipelines
Significant chance of hardware failure during execution
Related programming models:
● Stream and incremental processing
● Deductive and graph programming
8. Data-parallel programming
Data-parallel framework (e.g. MapReduce, FlumeJava, Apache Spark)
(*) You write two small functions
fun wc_mapper line =
List.map (fn w => (w, 1)) (split line);
fun wc_reducer (key, values) =
List.foldl (fn (x, y) => x + y) 0 values;
(*) The framework provides mapReduce
fun mapReduce mapper reducer list = ...;
(*) Combine them to build your program
fun wordCount list = mapReduce wc_mapper wc_reducer list;
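The slide elides the body of mapReduce. As a rough single-process sketch (hypothetical code, not from the talk): shuffle groups the mappers' (key, value) pairs by key, and the reducer is then applied to each group. A real framework would shard the input and run the mappers and reducers in parallel across a cluster.
(* A hypothetical, single-process sketch of the framework's mapReduce. *)
fun shuffle pairs =  (* group (key, value) pairs by key *)
  let
    fun insert ((k, v), []) = [(k, [v])]
      | insert ((k, v), (k2, vs) :: rest) =
          if k = k2 then (k2, v :: vs) :: rest
          else (k2, vs) :: insert ((k, v), rest)
  in
    List.foldl insert [] pairs
  end;
fun mapReduce mapper reducer list =
  List.map (fn (k, vs) => (k, reducer (k, vs)))
    (shuffle (List.concat (List.map mapper list)));
In this sketch each group's key is kept alongside the reduced value, so wordCount returns a list of (word, count) pairs, matching the output of the SQL query on the next slide.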
9. Data-parallel programming
Data-parallel framework (e.g. MapReduce, FlumeJava, Apache Spark)
(*) You write two small functions
fun wc_mapper line =
List.map (fn w => (w, 1)) (split line);
fun wc_reducer (key, values) =
List.foldl (fn (x, y) => x + y) 0 values;
(*) The framework provides mapReduce
fun mapReduce mapper reducer list = ...;
(*) Combine them to build your program
fun wordCount list = mapReduce wc_mapper wc_reducer list;
SQL:
SELECT word, COUNT(*) AS c
FROM Documents AS d,
LATERAL TABLE (split(d.text)) AS word // requires a ‘split’ UDF
GROUP BY word;
10. Data-parallel programming
Data-parallel framework (e.g. MapReduce, FlumeJava, Apache Spark)
(*) You write two small functions
fun wc_mapper line =
List.map (fn w => (w, 1)) (split line);
fun wc_reducer (key, values) =
List.foldl (fn (x, y) => x + y) 0 values;
(*) The framework provides mapReduce
fun mapReduce mapper reducer list = ...;
(*) Combine them to build your program
fun wordCount list = mapReduce wc_mapper wc_reducer list;
SQL:
SELECT word, COUNT(*) AS c
FROM Documents AS d,
LATERAL TABLE (split(d.text)) AS word // requires a ‘split’ UDF
GROUP BY word;
Morel:
from line in lines,
word in split line (*) requires ‘split’ function - see later...
group word compute c = count
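As a usage sketch (hypothetical input; split is the tokenizing function defined later in the talk), the query above can be run in the Morel shell, yielding one {word, c} record per distinct word:
- val lines = ["a skunk sat on a stump",
=   "a skunk thunk the stump stunk"];
- from line in lines,
=     word in split line
=   group word compute c = count;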
11. What are our options?
Extend SQL. We’ll need to:
● Allow functions defined in queries
● Add relations-as-values
● Add functions-as-values
● Modernize the type system (adding type variables, function types, algebraic types)
● Write an optimizing compiler
Extend a functional programming language. We’ll need to:
● Add syntax for relational operations
● Map onto external data
● Write a query optimizer
Nice stuff we get for free:
● Algebraic types
● Pattern matching
● Inline function and value declarations
12. Morel is a functional programming language. It is derived from Standard ML, and is extended with list comprehensions and other relational operators. Like Standard ML, Morel has parametric and algebraic data types with Hindley-Milner type inference. Morel is implemented in Java, and is optimized and executed using a combination of techniques from functional programming language compilers and database query optimizers.
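For example (a hypothetical snippet, not from the slide), a comprehension-style Morel query over the emps relation that appears later in the talk:
from e in emps
  where #deptno e = 30
  yield #name e;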
14. Extend Standard ML
Standard ML is strongly typed, and can very frequently deduce every type in a program. Standard ML has record types. (These are important for queries.) Haven’t decided whether Morel is eager (like Standard ML) or lazy (like Haskell and SQL).
Haskell’s type system is more powerful. So, Haskell programs often have explicit types.
[Diagram: a timeline of language lineage from 1975 to 2019, including Lisp, Scheme, Clojure, ML, Standard ML, OCaml, F#, C#, Haskell, Scala, Java, SQL, and Morel]
15. Themes
Early stage language
The target audience is SQL users
Less emphasis on abstract algebra
Program compilation versus query optimization
Functional programming in-the-small versus in-the-large
16. Quick intro to Standard ML
(All of the following examples work in both Standard ML and Morel.)
17. Standard ML: values and types
- "Hello, world!";
val it = "Hello, world!" : string
- 1 + 2;
val it = 3 : int
- ~1.5;
val it = ~1.5 : real
- [1, 1, 2, 3, 5];
val it = [1,1,2,3,5] : int list
- fn i => i mod 2 = 1;
val it = fn : int -> bool
- (1, "a");
val it = (1,"a") : int * string
- {name = "Fred", empno = 100};
val it = {empno=100,name="Fred"} : {empno:int, name:string}
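(A side note, not from the slides: in Standard ML a tuple is sugar for a record with numeric labels, so the # selector syntax works on both tuples and records.)
- #1 (1, "a");
val it = 1 : int
- #name {name = "Fred", empno = 100};
val it = "Fred" : string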
18. Standard ML: variables and functions
- val x = 1;
val x = 1 : int
- val isOdd = fn i => i mod 2 = 1;
val isOdd = fn : int -> bool
- fun isOdd i = i mod 2 = 1;
val isOdd = fn : int -> bool
- isOdd x;
val it = true : bool
- let
= val x = 6
= fun isOdd i = i mod 2 = 1
= in
= isOdd x
= end;
val it = false : bool
val assigns a value to a variable.
fun declares a function.
● fun is syntactic sugar for val ... = fn ... => ...
let allows you to make several declarations before evaluating an expression.
19. - datatype 'a tree =
= EMPTY
= | LEAF of 'a
= | NODE of ('a * 'a tree * 'a tree);
Algebraic data types, case, and recursion
datatype declares an algebraic data type. 'a is a type variable, and therefore 'a tree is a polymorphic type.
20. - datatype 'a tree =
= EMPTY
= | LEAF of 'a
= | NODE of ('a * 'a tree * 'a tree);
- val t =
= NODE (1, LEAF 2, NODE (3, EMPTY, LEAF 7));
Algebraic data types, case, and recursion
datatype declares an algebraic data type. 'a is a type variable, and therefore 'a tree is a polymorphic type.
Define an instance of tree using its constructors NODE, LEAF and EMPTY.
[Diagram: the tree t; root NODE 1 has children LEAF 2 and NODE 3, and NODE 3 has children EMPTY and LEAF 7.]
21. - datatype 'a tree =
= EMPTY
= | LEAF of 'a
= | NODE of ('a * 'a tree * 'a tree);
- val t =
= NODE (1, LEAF 2, NODE (3, EMPTY, LEAF 7));
- val rec sumTree = fn t =>
= case t of EMPTY => 0
= | LEAF i => i
= | NODE (i, l, r) =>
= i + sumTree l + sumTree r;
val sumTree = fn : int tree -> int
- sumTree t;
val it = 13 : int
Algebraic data types, case, and recursion
datatype declares an algebraic data type. 'a is a type variable, and therefore 'a tree is a polymorphic type.
Define an instance of tree using its constructors NODE, LEAF and EMPTY.
case matches patterns, deconstructing data types, and binding variables as it goes.
sumTree uses case and calls itself recursively.
[Diagram: the tree t, as before.]
22. - datatype 'a tree =
= EMPTY
= | LEAF of 'a
= | NODE of ('a * 'a tree * 'a tree);
- val t =
= NODE (1, LEAF 2, NODE (3, EMPTY, LEAF 7));
- val rec sumTree = fn t =>
= case t of EMPTY => 0
= | LEAF i => i
= | NODE (i, l, r) =>
= i + sumTree l + sumTree r;
val sumTree = fn : int tree -> int
- sumTree t;
val it = 13 : int
- fun sumTree EMPTY = 0
= | sumTree (LEAF i) = i
= | sumTree (NODE (i, l, r)) =
= i + sumTree l + sumTree r;
val sumTree = fn : int tree -> int
Algebraic data types, case, and recursion
datatype declares an algebraic data type. 'a is a type variable, and therefore 'a tree is a polymorphic type.
Define an instance of tree using its constructors NODE, LEAF and EMPTY.
case matches patterns, deconstructing data types, and binding variables as it goes.
sumTree uses case and calls itself recursively.
[Diagram: the tree t, as before.]
fun is a shorthand for case and val rec.
23. Relations and higher-order functions
Relations are lists of records.
A higher-order function is a function whose arguments or return value are functions.
Common higher-order functions that act on lists:
● List.filter – equivalent to the Filter relational operator (SQL WHERE)
● List.map – equivalent to the Project relational operator (SQL SELECT)
● flatMap – as List.map, but may produce several output elements per input element
#deptno is a system-generated function that accesses the deptno field of a record.
- val emps = [
= {id = 100, name = "Fred", deptno = 10},
= {id = 101, name = "Velma", deptno = 20},
= {id = 102, name = "Shaggy", deptno = 30},
= {id = 103, name = "Scooby", deptno = 30}];
- List.filter (fn e => #deptno e = 30) emps;
val it = [{deptno=30,id=102,name="Shaggy"},
{deptno=30,id=103,name="Scooby"}]
: {deptno:int, id:int, name:string} list
SELECT *
FROM emps AS e
WHERE e.deptno = 30
Equivalent SQL
24. Implementing Join using higher-order functions
The List.map function is equivalent to the Project relational operator (SQL SELECT).
We also define a flatMap function.
- val depts = [
= {deptno = 10, name = "Sales"},
= {deptno = 20, name = "Marketing"},
= {deptno = 30, name = "R&D"}];
- fun flatMap f xs = List.concat (List.map f xs);
val flatMap =
fn : ('a -> 'b list) -> 'a list -> 'b list
- List.map
= (fn (d, e) => {deptno = #deptno d, name = #name e})
= (List.filter
= (fn (d, e) => #deptno d = #deptno e)
= (flatMap
= (fn e => (List.map (fn d => (d, e)) depts))
= emps));
val it =
[{deptno=10,name="Fred"}, {deptno=20,name="Velma"},
{deptno=30,name="Shaggy"}, {deptno=30,name="Scooby"}]
: {deptno:int, name:string} list
SELECT d.deptno, e.name
FROM emps AS e,
depts AS d
WHERE d.deptno = e.deptno
Equivalent SQL
26. List.map
(fn (d, e) => {deptno = #deptno d, name = #name e})
(List.filter
(fn (d, e) => #deptno d = #deptno e)
(flatMap
(fn e => (List.map (fn d => (d, e)) depts))
emps));
Implementing Join in Morel using from
Morel extensions to Standard ML:
● the from operator creates a list comprehension
● x.field is shorthand for #field x
● {#field} is shorthand for {field = #field x}
SELECT d.deptno, e.name
FROM emps AS e,
depts AS d
WHERE d.deptno = e.deptno
Equivalent SQL
27. List.map
(fn (d, e) => {deptno = #deptno d, name = #name e})
(List.filter
(fn (d, e) => #deptno d = #deptno e)
(flatMap
(fn e => (List.map (fn d => (d, e)) depts))
emps));
from e in emps,
d in depts
where #deptno e = #deptno d
yield {deptno = #deptno d, name = #name e};
Implementing Join in Morel using from
Morel extensions to Standard ML:
● the from operator creates a list comprehension
● x.field is shorthand for #field x
● {#field} is shorthand for {field = #field x}
SELECT d.deptno, e.name
FROM emps AS e,
depts AS d
WHERE d.deptno = e.deptno
Equivalent SQL
28. List.map
(fn (d, e) => {deptno = #deptno d, name = #name e})
(List.filter
(fn (d, e) => #deptno d = #deptno e)
(flatMap
(fn e => (List.map (fn d => (d, e)) depts))
emps));
from e in emps,
d in depts
where #deptno e = #deptno d
yield {deptno = #deptno d, name = #name e};
from e in emps,
d in depts
where e.deptno = d.deptno
yield {d.deptno, e.name};
Implementing Join in Morel using from
Morel extensions to Standard ML:
● the from operator creates a list comprehension
● x.field is shorthand for #field x
● {#field} is shorthand for {field = #field x}
SELECT d.deptno, e.name
FROM emps AS e,
depts AS d
WHERE d.deptno = e.deptno
Equivalent SQL
29. WordCount
fun wordCount lines =
  let
    fun split0 [] word words = word :: words
      | split0 (#" " :: s) word words = split0 s "" (word :: words)
      | split0 (c :: s) word words = split0 s (word ^ (String.str c)) words
    fun split s = List.rev (split0 (String.explode s) "" [])
  in
    from line in lines,
        word in split line
      group word compute c = count
  end;
30. WordCount
- fun wordCount lines =
=   let
=     fun split0 [] word words = word :: words
=       | split0 (#" " :: s) word words = split0 s "" (word :: words)
=       | split0 (c :: s) word words = split0 s (word ^ (String.str c)) words
=     fun split s = List.rev (split0 (String.explode s) "" [])
=   in
=     from line in lines,
=         word in split line
=       group word compute c = count
=   end;
val wordCount = fn : string list -> {c:int, word:string} list
- wordCount ["a skunk sat on a stump",
= "and thunk the stump stunk",
= "but the stump thunk the skunk stunk"];
val it =
[{c=2,word="a"},{c=3,word="the"},{c=1,word="but"},
{c=1,word="sat"},{c=1,word="and"},{c=2,word="stunk"},
{c=3,word="stump"},{c=1,word="on"},{c=2,word="thunk"},
{c=2,word="skunk"}] : {c:int, word:string} list
31. - fun wordCount lines =
=   let
=     fun split0 [] word words = word :: words
=       | split0 (#" " :: s) word words = split0 s "" (word :: words)
=       | split0 (c :: s) word words = split0 s (word ^ (String.str c)) words
=     fun split s = List.rev (split0 (String.explode s) "" [])
=   in
=     from line in lines,
=         word in split line
=       group word compute c = count
=   end;
val wordCount = fn : string list -> {c:int, word:string} list
- wordCount ["a skunk sat on a stump",
= "and thunk the stump stunk",
= "but the stump thunk the skunk stunk"];
val it =
[{c=2,word="a"},{c=3,word="the"},{c=1,word="but"},
{c=1,word="sat"},{c=1,word="and"},{c=2,word="stunk"},
{c=3,word="stump"},{c=1,word="on"},{c=2,word="thunk"},
{c=2,word="skunk"}] : {c:int, word:string} list
WordCount
Functional programming “in the small”
32. WordCount
- fun wordCount lines =
=   let
=     fun split0 [] word words = word :: words
=       | split0 (#" " :: s) word words = split0 s "" (word :: words)
=       | split0 (c :: s) word words = split0 s (word ^ (String.str c)) words
=     fun split s = List.rev (split0 (String.explode s) "" [])
=   in
=     from line in lines,
=         word in split line
=       group word compute c = count
=   end;
val wordCount = fn : string list -> {c:int, word:string} list
- wordCount ["a skunk sat on a stump",
= "and thunk the stump stunk",
= "but the stump thunk the skunk stunk"];
val it =
[{c=2,word="a"},{c=3,word="the"},{c=1,word="but"},
{c=1,word="sat"},{c=1,word="and"},{c=2,word="stunk"},
{c=3,word="stump"},{c=1,word="on"},{c=2,word="thunk"},
{c=2,word="skunk"}] : {c:int, word:string} list
Functional programming “in the large”
33. Functions as views, functions as values
- fun emps2 () =
= from e in emps
= yield {e.id,
= e.name,
= e.deptno,
= comp = fn revenue => case e.deptno of
= 30 => e.id + revenue / 2
= | _ => e.id};
val emps2 = fn : unit -> {comp:int -> int,
deptno:int, id:int, name:string} list
34. Functions as views, functions as values
emps2 is a function that returns a collection
- fun emps2 () =
= from e in emps
= yield {e.id,
= e.name,
= e.deptno,
= comp = fn revenue => case e.deptno of
= 30 => e.id + revenue / 2
= | _ => e.id};
val emps2 = fn : unit -> {comp:int -> int,
deptno:int, id:int, name:string} list
35. - fun emps2 () =
= from e in emps
= yield {e.id,
= e.name,
= e.deptno,
= comp = fn revenue => case e.deptno of
= 30 => e.id + revenue / 2
= | _ => e.id};
val emps2 = fn : unit -> {comp:int -> int,
deptno:int, id:int, name:string} list
Functions as views, functions as values
The comp field is a function value (in fact, it’s a closure).
emps2 is a function that returns a collection.
36. Functions as views, functions as values
- fun emps2 () =
= from e in emps
= yield {e.id,
= e.name,
= e.deptno,
= comp = fn revenue => case e.deptno of
= 30 => e.id + revenue / 2
= | _ => e.id};
val emps2 = fn : unit -> {comp:int -> int,
deptno:int, id:int, name:string} list
- from e in emps2 ()
= yield {e.name, e.id, c = e.comp 1000};
The comp field is a function value (in fact, it’s a closure).
emps2 is a function that returns a collection.
37. Functions as views, functions as values
- fun emps2 () =
= from e in emps
= yield {e.id,
= e.name,
= e.deptno,
= comp = fn revenue => case e.deptno of
= 30 => e.id + revenue / 2
= | _ => e.id};
val emps2 = fn : unit -> {comp:int -> int,
deptno:int, id:int, name:string} list
- from e in emps2 ()
= yield {e.name, e.id, c = e.comp 1000};
val it =
[{c=100,id=100,name="Fred"},
{c=101,id=101,name="Velma"},
{c=602,id=102,name="Shaggy"},
{c=603,id=103,name="Scooby"}]
: {c:int, id:int, name:string} list
38. Chaining relational operators
- from e in emps
= order e.deptno, e.id desc
= yield {e.name, nameLength = String.size e.name, e.id, e.deptno}
= where nameLength > 4
= group deptno compute c = count, s = sum of nameLength
= where s > 10
= yield c + s;
val it = [14] : int list
39. Chaining relational operators - step 1
- from e in emps;
val it =
[{deptno=10,id=100,name="Fred"},
{deptno=20,id=101,name="Velma"},
{deptno=30,id=102,name="Shaggy"},
{deptno=30,id=103,name="Scooby"}]
: {deptno:int, id:int, name:string} list
40. Chaining relational operators - step 2
- from e in emps
= order e.deptno, e.id desc;
val it =
[{deptno=10,id=100,name="Fred"},
{deptno=20,id=101,name="Velma"},
{deptno=30,id=103,name="Scooby"},
{deptno=30,id=102,name="Shaggy"}]
: {deptno:int, id:int, name:string} list
41. Chaining relational operators - step 3
- from e in emps
= order e.deptno, e.id desc
= yield {e.name, nameLength = String.size e.name, e.id, e.deptno};
val it =
[{deptno=10,id=100,name="Fred",nameLength=4},
{deptno=20,id=101,name="Velma",nameLength=5},
{deptno=30,id=103,name="Scooby",nameLength=6},
{deptno=30,id=102,name="Shaggy",nameLength=6}]
: {deptno:int, id:int, name:string, nameLength:int} list
42. Chaining relational operators - step 4
- from e in emps
= order e.deptno, e.id desc
= yield {e.name, nameLength = String.size e.name, e.id, e.deptno}
= where nameLength > 4;
val it =
[{deptno=20,id=101,name="Velma",nameLength=5},
{deptno=30,id=103,name="Scooby",nameLength=6},
{deptno=30,id=102,name="Shaggy",nameLength=6}]
: {deptno:int, id:int, name:string, nameLength:int} list
43. Chaining relational operators - step 5
- from e in emps
= order e.deptno, e.id desc
= yield {e.name, nameLength = String.size e.name, e.id, e.deptno}
= where nameLength > 4
= group deptno compute c = count, s = sum of nameLength;
val it =
[{c=1,deptno=20,s=5},
{c=2,deptno=30,s=12}]
: {c:int, deptno:int, s:int} list
44. Chaining relational operators - step 6
- from e in emps
= order e.deptno, e.id desc
= yield {e.name, nameLength = String.size e.name, e.id, e.deptno}
= where nameLength > 4
= group deptno compute c = count, s = sum of nameLength
= where s > 10;
val it =
[{c=2,deptno=30,s=12}]
: {c:int, deptno:int, s:int} list
45. Chaining relational operators
- from e in emps
= order e.deptno, e.id desc
= yield {e.name, nameLength = String.size e.name, e.id, e.deptno}
= where nameLength > 4
= group deptno compute c = count, s = sum of nameLength
= where s > 10
= yield c + s;
val it = [14] : int list
46. Chaining relational operators
Morel:
from e in emps
order e.deptno, e.id desc
yield {e.name, nameLength = String.size e.name, e.id, e.deptno}
where nameLength > 4
group deptno compute c = count, s = sum of nameLength
where s > 10
yield c + s;
SQL (almost equivalent):
SELECT c + s
FROM (SELECT deptno, COUNT(*) AS c, SUM(nameLength) AS s
FROM (SELECT e.name, CHAR_LENGTH(e.name) AS nameLength, e.id, e.deptno
FROM (SELECT *
FROM emps AS e
ORDER BY e.deptno, e.id DESC))
WHERE nameLength > 4
GROUP BY deptno
HAVING s > 10)
Java (very approximately):
for (Emp e : emps) {
String name = e.name;
int nameLength = name.length();
int id = e.id;
int deptno = e.deptno;
if (nameLength > 4) {
...
48. Apache Calcite
[Architecture diagram: a JDBC client connects to an Apache Calcite server containing a JDBC server, a SQL parser & validator, and a query planner; the planner is built around relational algebra, with pluggable rewrite rules, pluggable stats/cost, and a pluggable catalog; physical operators and adapters connect to storage.]
Toolkit for writing a DBMS
Many parts are optional or pluggable
Relational algebra is the core
49. SELECT d.name, COUNT(*) AS c
FROM Emps AS e
JOIN Depts AS d USING (deptno)
WHERE e.age > 50
GROUP BY d.deptno
HAVING COUNT(*) > 5
ORDER BY c DESC
Relational algebra
Based on set theory, plus operators: Project, Filter, Aggregate, Union, Join, Sort
Calcite provides:
● SQL to relational algebra
● Query planner
● Physical operators to execute the plan
● An adapter system to make external data sources look like tables
Sort [c DESC]
  Project [name, c]
    Filter [c > 5]
      Aggregate [deptno, COUNT(*) AS c]
        Filter [e.age > 50]
          Join [e.deptno = d.deptno]
            Scan [Emps]
            Scan [Depts]
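(For comparison, a hedged Morel sketch of roughly the same query, built from the operators shown earlier; it assumes emps has an age field, which the toy data above lacks, and it omits the final projection of name and c.)
from e in emps,
    d in depts
  where e.deptno = d.deptno andalso e.age > 50
  group d.deptno, d.name compute c = count
  where c > 5
  order c desc;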
50. Algebraic rewrite
Sort [c DESC]
  Project [name, c]
    Filter [c > 5]
      Aggregate [deptno, COUNT(*) AS c]
        Filter [e.age > 50]
          Join [e.deptno = d.deptno]
            Scan [Emps]
            Scan [Depts]
Calcite optimizes queries by applying rewrite rules that preserve semantics.
The planner uses dynamic programming, seeking the lowest total cost.
SELECT d.name, COUNT(*) AS c
FROM (SELECT * FROM Emps
      WHERE age > 50) AS e
JOIN Depts AS d USING (deptno)
GROUP BY d.deptno
HAVING COUNT(*) > 5
ORDER BY c DESC
51. Integration with Apache Calcite - schemas
Expose Calcite schemas as named records, each field of which is a table.
A default Morel connection has variables foodmart and scott (connections to hsqldb via Calcite’s JDBC adapter).
Connections to other Calcite adapters (Apache Cassandra, Druid, Kafka, Pig, Redis) are also possible.
- foodmart;
val it = {account=<relation>, currency=<relation>,
customer=<relation>, ...} : ...;
- scott;
val it = {bonus=<relation>, dept=<relation>,
emp=<relation>, salgrade=<relation>} : ...;
- scott.dept;
val it =
[{deptno=10,dname="ACCOUNTING",loc="NEW YORK"},
{deptno=20,dname="RESEARCH",loc="DALLAS"},
{deptno=30,dname="SALES",loc="CHICAGO"},
{deptno=40,dname="OPERATIONS",loc="BOSTON"}]
: {deptno:int, dname:string, loc:string} list
- from d in scott.dept
= where notExists (from e in scott.emp
= where e.deptno = d.deptno
= andalso e.job = "CLERK");
val it =
[{deptno=40,dname="OPERATIONS",loc="BOSTON"}]
: {deptno:int, dname:string, loc:string} list
52. - Sys.set ("hybrid", true);
val it = () : unit
- from d in scott.dept
= where notExists (from e in scott.emp
= where e.deptno = d.deptno
= andalso e.job = "CLERK");
val it = [{deptno=40,dname="OPERATIONS",loc="BOSTON"}]
: {deptno:int, dname:string, loc:string} list
Integration with Apache Calcite - relational algebra
In “hybrid” mode, Morel’s compiler identifies sections of Morel programs that are relational operations and converts them to Calcite relational algebra.
Calcite can then optimize these and execute them on their native systems.
53. - Sys.set ("hybrid", true);
val it = () : unit
- from d in scott.dept
= where notExists (from e in scott.emp
= where e.deptno = d.deptno
= andalso e.job = "CLERK");
val it = [{deptno=40,dname="OPERATIONS",loc="BOSTON"}]
: {deptno:int, dname:string, loc:string} list
- Sys.plan();
val it = "calcite(plan
LogicalProject(deptno=[$0], dname=[$1], loc=[$2])
LogicalFilter(condition=[IS NULL($4)])
LogicalJoin(condition=[=($0, $3)], joinType=[left])
LogicalProject(deptno=[$0], dname=[$1], loc=[$2])
JdbcTableScan(table=[[scott, DEPT]])
LogicalProject(deptno=[$0], $f1=[true])
LogicalAggregate(group=[{0}])
LogicalProject(deptno=[$1])
LogicalFilter(condition=[AND(=($5, 'CLERK'), IS NOT NULL($1))])
LogicalProject(comm=[$6], deptno=[$7], empno=[$0], ename=[$1], hiredate=[$4], job=[$2]
JdbcTableScan(table=[[scott, EMP]]))" : string
Integration with Apache Calcite - relational algebra
In “hybrid” mode, Morel’s compiler identifies sections of Morel programs that are relational operations and converts them to Calcite relational algebra.
Calcite can then optimize these and execute them on their native systems.
54. Optimization
Relational query optimization
● Applies to relational operators (~10 core operators + UDFs), not general-purpose code
● Transformation rules that match patterns (e.g. Filter on Project)
● Decisions based on cost & statistics
● Hybrid plans - choose target engine, sort order
● Materialized views
● Macro-level optimizations
Functional program optimization
● Applies to typed lambda calculus
● Inline constant lambda expressions
● Eliminate dead code
● Defend against mutually recursive groups
● Careful not to inline expressions that are used more than once (increasing code size) or which bind closures more often (increasing running time)
● Micro-level optimizations
[GM] “The Volcano Optimizer Generator: Extensibility and Efficient Search” (Graefe, McKenna 1993)
[PJM] “Secrets of the Glasgow Haskell Compiler inliner” (Peyton Jones, Marlow 2002)
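To make the inlining bullets concrete, here is a hand-worked sequence of rewrites (an illustration only, not actual Morel compiler output):
(* A lambda bound once can be inlined, then reduced away. *)
let val f = fn x => x + 1 in f 10 end;  (* original *)
(fn x => x + 1) 10;                     (* after inlining f *)
10 + 1;                                 (* after beta reduction *)
11;                                     (* after constant folding *)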
62. Morel
Functional query language
Rich type system, concise as SQL, Turing complete
Combines (relational) query optimization with (lambda) FP optimization techniques
Execute in Java interpreter and/or data-parallel engines
from e in emps,
    d in depts
  where e.deptno = d.deptno
  yield {d.deptno, e.name};
from line in lines,
    word in split line
  group word compute c = count;
66. Compared to other languages
Haskell – Haskell comprehensions are more general (monads vs lists). Morel is focused on relational algebra and probably benefits from a simpler type system.
Builder APIs (e.g. LINQ, FlumeJava, Cascading, Apache Spark) – Builder APIs involve two languages. For example, an Apache Spark program is Scala code that builds an algebraic expression; Scala code (especially lambdas) is sprinkled throughout the algebra, and Scala compilation is not integrated with algebra optimization.
SQL – SQL’s type system does not have parameterized types, so higher-order functions are awkward. Tables and columns have separate namespaces, which complicates handling of nested collections. Functions and temporary variables cannot be defined inside queries, so queries are not Turing-complete (unless you use recursive query gymnastics).
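To illustrate the last point, a small sketch (not from the talk) of something Morel allows and SQL does not: a helper function declared inside the query expression itself, reusing the emps value from earlier.
(* A function defined locally, then used inside the query. *)
let
  fun isShort name = String.size name <= 5
in
  from e in emps
    where isShort e.name
    yield e.name
end;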
67. Standard ML: types
Primitive types: bool, char, int, real, string, unit
  Examples: true: bool; #"a": char; ~1: int; 3.14: real; "foo": string; (): unit
Function types: string -> int; int * int -> int
  Examples: String.size; fn (x, y) => x + y * y
Tuple types: int * string
  Example: (10, "Fred")
Record types: {empno:int, name:string}
  Example: {empno=10, name="Fred"}
Collection types: int list; (bool * (int -> int)) list
  Examples: [1, 2, 3]; [(true, fn i => i + 1)]
Type variables: 'a
  Example: List.length: 'a list -> int
68. Functional programming ↔ relational programming
Functional programming in-the-small:
- fun squareList [] = []
= | squareList (x :: xs) = x * x :: squareList xs;
val squareList = fn : int list -> int list
- squareList [1, 2, 3];
val it = [1,4,9] : int list
Functional programming in-the-large:
- fun squareList xs = List.map (fn x => x * x) xs;
val squareList = fn : int list -> int list
- squareList [1, 2, 3];
val it = [1,4,9] : int list
Relational programming:
- fun squareList xs =
=   from x in xs
=   yield x * x;
- squareList [1, 2, 3];
val it = [1,4,9] : int list
69. wordCount again
wordCount in-the-small:
- fun wordCount list = ...;
val wordCount = fn : string list -> {count:int, word:string} list
wordCount in-the-large using mapReduce:
- fun mapReduce mapper reducer list = ...;
val mapReduce = fn : ('a -> ('b * 'c) list) ->
('b * 'c list -> 'd) -> 'a list -> ('b * 'd) list
- fun wc_mapper line =
= List.map (fn w => (w, 1)) (split line);
val wc_mapper = fn : string -> (string * int) list
- fun wc_reducer (key, values) =
= List.foldl (fn (x, y) => x + y) 0 values;
val wc_reducer = fn : 'a * int list -> int
- fun wordCount list = mapReduce wc_mapper wc_reducer list;
val wordCount = fn : string list -> {count:int, word:string} list
Relational implementation of mapReduce:
- fun mapReduce mapper reducer list =
= from e in list,
= (k, v) in mapper e
= group k compute c = (fn vs => reducer (k, vs)) of v;
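A usage sketch (not shown in the talk): with wc_mapper, wc_reducer and the split function from slide 29 in scope, this yields a (string * int) list, one pair per distinct word.
mapReduce wc_mapper wc_reducer ["a skunk sat on a stump"];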
70. group … compute
- fun median reals = ...;
val median = fn : real list -> real
- from e in emps
= group x = e.deptno mod 3,
= e.job
= compute c = count,
= sum of e.sal,
= m = median of e.sal + e.comm;
val it = ... : {c:int, job:string, m:real, sum:real, x:int} list
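The talk elides median’s body. One possible definition (an assumption, not the talk’s; it uses the SML/NJ library’s ListMergeSort and expects a non-empty list):
(* Sort ascending, then take the middle element, or the mean of the
   two middle elements when the count is even. *)
fun median reals =
  let
    val sorted = ListMergeSort.sort (fn (x, y) => x > y) reals
    val n = List.length sorted
  in
    if n mod 2 = 1 then List.nth (sorted, n div 2)
    else (List.nth (sorted, n div 2 - 1) + List.nth (sorted, n div 2)) / 2.0
  end;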