An introduction to structured data processing using the Data Source and DataFrame APIs of Spark. Presented at the Bangalore Apache Spark Meetup by Madhukara Phatak on 31/05/2015.
Anatomy of Data Frame API: A deep dive into the Spark DataFrame API (datamantra)
In this presentation, we discuss the internals of the Spark DataFrame API. All the code discussed in this presentation is available at https://github.com/phatak-dev/anatomy_of_spark_dataframe_api
Improving Mobile Payments With Real time Spark (datamantra)
A talk about a real-world Spark Streaming implementation for improving the mobile payments experience. Presented at the Target data meetup in Bangalore by Madhukara Phatak on 22/08/2015.
Case Study: Stream Processing on AWS using Kappa Architecture (Joey Bolduc-Gilbert)
In the summer of 2016, XpertSea decided to migrate its operations to AWS and to build a data processing system that is able to scale to the extent of our ambitions. Come see how we built our platform inspired by Kappa Architecture, able to support connected devices located all-around the globe and state-of-the-art machine learning algorithms.
Superworkflow of Graph Neural Networks with K8S and Fugue (Databricks)
When machine learning models are productionized, they are commonly formed as workflows with multiple tasks, managed by a task scheduler such as Airflow or Prefect. Traditionally, each task within the same workflow uses similar computing frameworks (e.g. Python, Spark, and PyTorch) in the same backend computing environment (e.g. AWS EMR, Google DataProc) with globally fixed settings (e.g. instances, cores, memory).
In complicated use cases, such traditional workflows create large resource and runtime inefficiencies, so it is highly desirable to use different computing frameworks in the same workflow, in different computing environments. Such workflows can be called superworkflows. Fugue is an open-source abstraction layer on top of different computing frameworks that creates uniform interfaces to use these frameworks without dealing with the complexities associated with them. To this end, Fugue can be viewed as a superframework.
In addition, Kubernetes (K8S) is a container orchestration system, and it is easy to create different computing environments (e.g. Spark, PyTorch) with different docker images as everything is containerized in K8S. It is natural to combine K8S and Fugue to create superworkflows for complicated machine learning problems. In this talk, we use a popular graph neural network named Node2Vec as an example to illustrate how to create an efficient superworkflow using Fugue and K8S on very large graphs with hundreds of millions of vertices and edges.
We also demonstrate how to partition the whole Node2Vec process into multiple tasks based on their complexities and parallelism. Benchmark testing is conducted for comparing performance and resource efficiency. Finally, it is easy to generalize this superworkflow concept to other deep learning problems.
Extending Spark SQL 2.4 with New Data Sources (Live Coding Session) (Databricks)
Spark SQL 2.4.x gives you two Data Source APIs that your structured queries can use to access data in custom formats, possibly in unsupported storage systems. There is the older, almost legacy DataSource API V1 and what you can consider the modern DataSource API V2. This talk introduces the main entities of each DataSource API and shows the steps to write a new data source live on stage. That should give you enough knowledge to expand the data sources available in Spark SQL with new ones.
When Apache Spark Meets TiDB with Xiaoyu Ma (Databricks)
During the past 10 years, big-data storage layers have mainly focused on analytical use cases. For analytics, users usually offload data onto a Hadoop cluster and run queries against HDFS files. People struggle to handle modifications on append-only storage and to maintain fragile ETL pipelines.
On the other hand, although Spark SQL has proven to be an effective parallel query processing engine, some tricks common in traditional databases are not available due to the characteristics of the underlying storage. TiSpark sits directly on top of the storage engine of a distributed database (TiDB), extends Spark SQL's planning with its own extensions, and uses unique features of the database storage engine to achieve functionality that is not possible for Spark SQL on HDFS. With TiSpark, users can query changing, fresh data in real time.
The takeaways from this talk are twofold:
— How to integrate Spark SQL with a distributed database engine, and the benefits of doing so
— How to leverage Spark SQL's experimental methods to extend its capabilities.
Real time Analytics with Apache Kafka and Apache Spark (Rahul Jain)
A presentation-cum-workshop on real-time analytics with Apache Kafka and Apache Spark. Apache Kafka is a distributed publish-subscribe messaging system, while Spark Streaming brings Spark's language-integrated API to stream processing, letting you write streaming applications very quickly and easily. It supports both Java and Scala. In this workshop we explore Apache Kafka, ZooKeeper and Spark with a web click-streaming example using Spark Streaming. A clickstream is the recording of the parts of the screen a computer user clicks on while web browsing.
Lightning talk showing various aspects of software system performance. It goes through: latency, data structures, garbage collection, troubleshooting methods such as the workload saturation method, quick diagnostic tools, flamegraphs and PerfView.
Apache Spark is an open-source framework developed by the AMPLab at the University of California, Berkeley, and subsequently donated to the Apache Software Foundation. Unlike Hadoop's disk-based, two-stage MapReduce paradigm, the in-memory primitives provided by Spark allow performance up to 100 times better.
This talk discusses Spark (http://spark.apache.org), the Big Data computation system that is emerging as a replacement for MapReduce in Hadoop systems, while also running outside of Hadoop. I discuss the issues that make MapReduce ripe for replacement and how Spark addresses them with better performance and a more powerful API.
Introduction to Apache Spark, with an emphasis on the RDD API, Spark SQL (DataFrame and Dataset APIs) and Spark Streaming.
Presented at the Desert Code Camp:
http://oct2016.desertcodecamp.com/sessions/all
In this webinar, we'll see how to use Spark to process data from various sources in R and Python and how new tools like Spark SQL and data frames make it easy to perform structured data processing.
Building a Next-gen Data Platform and Leveraging the OSS Ecosystem for Easy W... (StampedeCon)
This session will be a detailed recount of the design, implementation, and launch of the next-generation Shutterstock Data Platform, with strong emphasis on conveying clear, understandable learnings that can be transferred to your own organizations and projects. This platform was architected around the prevailing use of Kafka as a highly-scalable central data hub for shipping data across your organization in batch or streaming fashion. It also relies heavily on Avro as a serialization format and a global schema registry to provide structure that greatly improves quality and usability of our data sets, while also allowing the flexibility to evolve schemas and maintain backwards compatibility.
As a company, Shutterstock has always focused heavily on leveraging open source technologies in developing its products and infrastructure, and open source has been a driving force in big data more so than almost any other software sub-sector. With this plethora of constantly evolving data technologies, it can be a daunting task to select the right tool for your problem. We will discuss our approach for choosing specific existing technologies and when we made decisions to invest time in home-grown components and solutions.
We will cover advantages and the engineering process of developing language-agnostic APIs for publishing to and consuming from the data platform. These APIs can power some very interesting streaming analytics solutions that are easily accessible to teams across our engineering organization.
We will also discuss some of the massive advantages a global schema for your data provides for downstream ETL and data analytics. ETL into Hadoop and the creation and maintenance of Hive databases and tables become much more reliable and easily automated with historically compatible schemas. To complement this schema-based approach, we will cover results of performance testing various file formats and compression schemes in Hadoop and Hive, the massive performance benefits you can gain in analytical workloads by leveraging highly optimized columnar file formats such as ORC and Parquet, and how you can use good old-fashioned Hive as a tool for easily and efficiently converting existing datasets into these formats.
Finally, we will cover lessons learned in launching this platform across our organization, future improvements and further design, and the need for data engineers to understand and speak the languages of data scientists and web, infrastructure, and network engineers.
Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes.
Data Engineer's Lunch #77: Apache Arrow Flight SQL: A Universal Standard for ... (Anant Corporation)
This lunch covers why ODBC & JDBC don’t cut it in today’s data world and the problems solved by Arrow, Arrow Flight, and Arrow Flight SQL. Alex will go through how each of these building blocks works as well as an overview of universal ODBC & JDBC drivers built on Arrow Flight SQL, enabling clients to take advantage of this increased performance with zero application changes.
Stitch Fix aspires to help you find the style that you will love. Data, the backbone of the business, is used to help with styling recommendations, demand modeling, user acquisition, and merchandise planning and also to influence business decisions throughout the organization. These decisions are backed by algorithms and data collected and interpreted based on client preferences. This talk offers an overview of the compute infrastructure used by the data science team at Stitch Fix, covering the architecture, tools within the larger ecosystem, and the challenges that the team overcame along the way.
Apache Spark plays an important role in Stitch Fix's data platform, and the company's data scientists use Spark for their ETL and Presto for their ad hoc queries. The goal for the team running the compute infrastructure is to understand and make the data scientists' lives easier, particularly in terms of usability of Spark, by building tools that make it easier to get started with Spark and transition to a daily workflow. The compute infrastructure is the part of the data platform that is responsible for all the needs of data scientists at Stitch Fix.
In this talk, we look at Stitch Fix's journey, exploring its Spark setup and in-house tools and how they work in synergy with open source frameworks in a cloud environment. There are additional improvements to the infrastructure that help persist information for future use and optimization, and we look at how the implementation of Amazon's EMRFS has made it easier to read from the S3 source.
An engine to process big data in a faster (than MapReduce), easy and extremely scalable way. An open-source, parallel, in-memory, cluster computing framework. A solution for loading, processing and analyzing large-scale data end to end. Iterative and interactive, with Scala, Java, Python and R APIs and a command-line interface.
AWS Big Data Demystified #1: Big data architecture lessons learned (Omid Vahdaty)
AWS Big Data Demystified #1: Big data architecture lessons learned. A quick overview of the big data technologies that were selected or discarded in our company.
The video: https://youtu.be/l5KmaZNQxaU
Don't forget to subscribe to the YouTube channel.
The website: https://amazon-aws-big-data-demystified.ninja/
The meetup : https://www.meetup.com/AWS-Big-Data-Demystified/
The facebook group : https://www.facebook.com/Amazon-AWS-Big-Data-Demystified-1832900280345700/
The world has changed, and having one huge server won't do the job anymore. When you're talking about vast amounts of data that keep growing, the ability to scale out is your savior. Apache Spark is a fast and general engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing.
This lecture will be about the basics of Apache Spark and distributed computing and the development tools needed to have a functional environment.
Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De... (Databricks)
When Spark applications operate on distributed data coming from disparate data sources, they often have to directly query data sources external to Spark such as backing relational databases, or data warehouses. For that, Spark provides Data Source APIs, which are a pluggable mechanism for accessing structured data through Spark SQL. Data Source APIs are tightly integrated with the Spark Optimizer. They provide optimizations such as filter push down to the external data source and column pruning. While these optimizations significantly speed up Spark query execution, depending on the data source, they only provide a subset of the functionality that can be pushed down and executed at the data source. As part of our ongoing project to provide a generic data source push down API, this presentation will show our work related to join push down. An example is star-schema join, which can be simply viewed as filters applied to the fact table. Today, Spark Optimizer recognizes star-schema joins based on heuristics and executes star-joins using efficient left-deep trees. An alternative execution proposed by this work is to push down the star-join to the external data source in order to take advantage of multi-column indexes defined on the fact tables, and other star-join optimization techniques implemented by the relational data source.
Levelwise PageRank with Loop-Based Dead End Handling Strategy: SHORT REPORT ... (Subhajit Sahu)
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components and processes them in topological order, one level at a time. This enables ranks to be calculated in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It does, however, come with the precondition that the input graph have no dead ends. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by a large number of small workload submissions, and is expected to be a non-issue when the computation is performed on massive graphs.
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. for more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time, and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, no. of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
2. ● Madhukara Phatak
● Big data consultant and
trainer at datamantra.io
● Consult in Hadoop, Spark
and Scala
● www.madhukaraphatak.com
3. Agenda
● Variety in Big data
● Structured data
● Structured data analysis in M/R
● Datasource API
● DataFrame
● SQL in Spark
● Smart sources
4. 3 V's of Big data
● Volume
○ TBs and PBs of files
○ Driving need for batch processing systems
● Velocity
○ TBs of stream data
○ Driving need for stream processing systems
● Variety
○ Structured, semi-structured and unstructured
○ Driving need for SQL and graph processing systems
5. Why care about structured data?
● Isn't big data all about unstructured data?
● Most real world problems work with structured / semi
structured data 80% of the time
● Sources
○ JSON from API data
○ RDBMS input
○ NoSQL db inputs
● ETL processes convert data from unstructured to structured
6. Structured data in M/R world
● Both structured and unstructured data are treated the same way, as text files
● Even higher-level frameworks like Pig/Hive interpret the structured data using a user-provided schema
● Let's take an example of processing CSV data in Spark in Map/Reduce style
● Ex: CsvInRDD (sketched below)
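Below is a minimal sketch of what a CsvInRDD-style example looks like: the CSV is handled as plain text with a hand-rolled schema. The file name, case class and column layout are illustrative assumptions, not the exact code from the repository.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical record type; the schema lives only in our heads and in this case class.
case class Sales(transactionId: String, customerId: String, itemId: String, amountPaid: Double)

object CsvInRDD {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CsvInRDD").setMaster("local[2]"))

    // Spark only sees lines of text; it has no idea there is structure inside.
    val lines = sc.textFile("sales.csv")
    val header = lines.first()

    // Schema handling is entirely manual: drop the header, split, pick columns by position.
    val sales = lines
      .filter(_ != header)
      .map(_.split(","))
      .map(cols => Sales(cols(0), cols(1), cols(2), cols(3).toDouble))

    println(sales.map(_.amountPaid).sum())
    sc.stop()
  }
}
```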
7. Challenges of structured data in M/R
● No uniform way of loading structured data; we just piggyback on the input format
● No automatic schema discovery
● Adding a new field or changing the field sequence is not easy
● Even Pig's JSON input format is limited to separating records
● No high-level representation of structured data, even in Spark, as there is always RDD[T]
● No way to query an RDD using SQL once we have constructed structured output
8. Spark SQL library
● Data source API
Universal API for loading/saving structured data
● DataFrame API
Higher-level representation for structured data
● SQL interpreter and optimizer
Express data transformations in SQL
● SQL service
Hive Thrift server
9. Spark SQL vs Apache Hive
● Spark SQL is a library; Hive is a framework
● Spark SQL: optional metastore; Hive: mandatory metastore
● Spark SQL: automatic schema inference; Hive: explicit schema declaration using DDL
● Spark SQL API: DataFrame DSL and SQL; Hive API: HQL
● Spark SQL supports both the Spark SQL and HQL dialects; Hive supports only HQL
● Both expose a Hive Thrift server
10. Loading structured data
● Why not input formats?
○ Input formats always need key/value pairs, which is not an efficient way to represent a schema
○ Schema discovery is not built in
○ No direct support for smart sources, aka server-side filtering
■ Only hacks using configuration passing
12. Data source API
● Universal API for loading/saving structured data
● Built-in support for Hive, Avro, JSON, JDBC, Parquet
● Third-party integration through spark-packages
● Support for smart sources
● Third parties already supporting
○ CSV
○ MongoDB
○ Cassandra (in the works)
etc.
13. Data source API examples
● SQLContext is the entry point for the data source APIs
● sqlContext.load is the way to load from a given source
● Examples
○ Loading a CSV file - CsvInDataSource.scala
○ Loading a JSON file - JsonInDataSource.scala
● Can we mix and match sources having the same schema?
○ Example: MixSources.scala (see the sketch below)
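The snippet below is a minimal sketch of these examples using the Spark 1.3-era sqlContext.load (later versions use sqlContext.read). It assumes a SQLContext named sqlContext, the third-party spark-csv package on the classpath, and illustrative file paths.

```scala
// JSON support is built in; the schema is inferred automatically.
val salesJson = sqlContext.load("sales.json", "json")
salesJson.printSchema()

// CSV comes from the third-party spark-csv package on spark-packages.
val salesCsv = sqlContext.load(
  "com.databricks.spark.csv",
  Map("path" -> "sales.csv", "header" -> "true", "inferSchema" -> "true"))

// Mix and match: sources loaded through the same API can be combined,
// provided they resolve to the same schema (the idea behind MixSources.scala).
val combined = salesJson.unionAll(salesCsv)
println(combined.count())
```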
14. DataFrame
● Single abstraction for representing structured data in Spark
● DataFrame = RDD + Schema (aka SchemaRDD)
● All data source APIs return DataFrames
● Introduced in 1.3
● Inspired by R and Python's pandas
● .rdd converts to the RDD representation, resulting in an RDD[Row] (see the sketch below)
● Support for a DataFrame DSL in Spark
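A short sketch of DataFrame = RDD + Schema and the .rdd escape hatch, assuming the same sqlContext and sales.json as in the previous sketch.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

val df = sqlContext.load("sales.json", "json")

// The schema was inferred from the data; no DDL was written anywhere.
df.printSchema()

// .rdd drops back to the untyped representation: an RDD[Row].
val rows: RDD[Row] = df.rdd
rows.take(5).foreach(println)
```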
15. Querying data frames using SQL
● Spark SQL has a built-in SQL interpreter and optimizer, similar to Hive
● Supports both the Spark SQL and Hive dialects
● Supports both temporary tables and the Hive metastore
● Hive concepts like UDFs, UDAFs and partitioning are supported
● Example
○ QueryCsv.scala (sketched below)
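A minimal sketch in the spirit of QueryCsv.scala, using the Spark 1.3-era registerTempTable API; the table and column names are illustrative assumptions.

```scala
// Load the CSV through the data source API (spark-csv), as before.
val sales = sqlContext.load(
  "com.databricks.spark.csv",
  Map("path" -> "sales.csv", "header" -> "true", "inferSchema" -> "true"))

// Register a temporary table so SQL can refer to the DataFrame by name.
sales.registerTempTable("sales")

// The SQL text goes through the same interpreter and optimizer as the DSL.
val topCustomers = sqlContext.sql(
  "SELECT customerId, SUM(amountPaid) AS total FROM sales GROUP BY customerId ORDER BY total DESC")

topCustomers.show()
```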
16. DataFrame DSL
● A DSL to express SQL transformations in a map/reduce-style API
● Geared towards data scientists coming from R/Python
● Both SQL and the DataFrame DSL use exactly the same interpreter and optimizer
● SQL or DataFrame: it is up to you to decide
● Ex: AggDataFrame (sketched below)
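The same aggregation expressed in the DataFrame DSL, in the spirit of AggDataFrame; it reuses the sales DataFrame from the SQL sketch above and compiles down to the same optimized plan.

```scala
import org.apache.spark.sql.functions.{desc, sum}

val topCustomersDsl = sales
  .groupBy("customerId")
  .agg(sum("amountPaid").as("total"))
  .orderBy(desc("total"))

topCustomersDsl.show()
```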
17. RDD transformations vs DataFrame transformations
● RDD: the actual transformation is shipped to the cluster; DataFrame: an optimized, generated transformation is shipped to the cluster
● RDD: no schema needs to be specified; DataFrame: a schema is mandatory
● RDD: no parser or optimizer; DataFrame: SQL parser and optimizer
● RDD: lowest-level API on the platform; DataFrame: API built on SQL, which is in turn built on RDDs
● RDD: does not use smart-source capabilities; DataFrame: makes effective use of smart sources
● RDD: different performance across the different language APIs; DataFrame: same performance across all languages
20. Smart sources
● The data source API integrates richly with data sources, allowing better performance from smart sources
● Smart data sources are the ones that support server-side filtering, column pruning, etc.
● Ex: Parquet, HBase, Cassandra, RDBMS
● Whenever the optimizer determines it needs only a few columns, it passes that information to the data source
● The data source can then optimize by reading only those columns (see the sketch below)
● More optimizations, like sharing the logical plan, are coming in the future
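To make this handshake concrete, here is a toy relation built on the Spark 1.3-era org.apache.spark.sql.sources API; a real smart source would translate requiredColumns and filters into reads against its storage engine. The class name and schema are illustrative.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, Filter, PrunedFilteredScan}
import org.apache.spark.sql.types._

class ToySmartRelation(override val sqlContext: SQLContext)
    extends BaseRelation with PrunedFilteredScan {

  override def schema: StructType = StructType(Seq(
    StructField("customerId", StringType),
    StructField("itemId", StringType),
    StructField("amountPaid", DoubleType)))

  // The optimizer hands over only the columns the query needs plus any pushable filters.
  override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
    println(s"columns requested: ${requiredColumns.mkString(",")}; pushed filters: ${filters.mkString(",")}")
    sqlContext.sparkContext.emptyRDD[Row] // a real source would read just these columns
  }
}

// Usage: wrap the relation in a DataFrame and query it; only customerId is requested.
val smart = sqlContext.baseRelationToDataFrame(new ToySmartRelation(sqlContext))
smart.select("customerId").show()
```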
21. Apache Parquet
● Apache Parquet is a columnar storage format for Hadoop
● Supported by M/R, Spark, Hive, Pig, etc.
● Supports compression out of the box
● Optimized for column-oriented processing, aka analytics
● Supported out of the box in Spark via the data source API
● One of the smart sources, supporting column pruning
● Ex: CsvToParquet.scala and AggParquet.scala (sketched below)
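A minimal sketch in the spirit of CsvToParquet.scala and AggParquet.scala, reusing the sales DataFrame and the sum import from the earlier sketches (Spark 1.3-era API); the Parquet path is illustrative.

```scala
// Write the CSV-backed DataFrame out as Parquet.
sales.saveAsParquetFile("sales.parquet")

// Read it back through the Parquet data source and aggregate; thanks to column
// pruning, only customerId and amountPaid need to be read from disk.
val salesParquet = sqlContext.parquetFile("sales.parquet")

salesParquet
  .groupBy("customerId")
  .agg(sum("amountPaid").as("total"))
  .show()
```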