This presentation explains the major differences between SQL and NoSQL databases in terms of scalability, flexibility and performance. It also covers MongoDB, a document-based NoSQL database, and explains the database structure for my mouse-human research classifier project.
Migrating Your Oracle Database to PostgreSQL - AWS Online Tech Talks | Amazon Web Services
Learning Objectives:
- Learn about the capabilities of the PostgreSQL database
- Learn about PostgreSQL offerings on AWS
- Learn how to migrate from Oracle to PostgreSQL with minimal disruption
In this era of ever-growing data, the need to analyze it for meaningful business insights becomes more and more significant. There are different big data processing alternatives such as Hadoop, Spark, and Storm. Spark, however, is unique in providing both batch and streaming capabilities, making it a preferred choice for lightning-fast big data analysis platforms.
by Darin Briskman, Technical Evangelist, AWS
Database Freedom means being able to use the database engine that’s right for you as your needs evolve. Being locked into a specific technology can prevent you from achieving your mission. Fortunately, AWS Database Migration Service makes it easy to switch between different database engines. We’ll look at how to use Schema Migration Tool with DMS to switch from a commercial database to open source. You’ll need a laptop with a Firefox or Chrome browser.
What is Apache Spark | Apache Spark Tutorial For Beginners | Apache Spark Tra... | Edureka!
This Edureka "What is Spark" tutorial will introduce you to the big data analytics framework Apache Spark. This tutorial is ideal for beginners as well as professionals who want to learn or brush up on their Apache Spark concepts. Below are the topics covered in this tutorial:
1) Big Data Analytics
2) What is Apache Spark?
3) Why Apache Spark?
4) Using Spark with Hadoop
5) Apache Spark Features
6) Apache Spark Architecture
7) Apache Spark Ecosystem - Spark Core, Spark Streaming, Spark MLlib, Spark SQL, GraphX
8) Demo: Analyze Flight Data Using Apache Spark
Watch this talk here: https://www.confluent.io/online-talks/apache-kafka-architecture-and-fundamentals-explained-on-demand
This session explains Apache Kafka’s internal design and architecture. Companies like LinkedIn are now sending more than 1 trillion messages per day to Apache Kafka. Learn about the underlying design in Kafka that leads to such high throughput.
This talk provides a comprehensive overview of Kafka architecture and internal functions, including:
-Topics, partitions and segments
-The commit log and streams
-Brokers and broker replication
-Producer basics
-Consumers, consumer groups and offsets
This session is part 2 of 4 in our Fundamentals for Apache Kafka series.
Apache Spark is an in-memory data processing solution that can work with existing data sources like HDFS and can make use of your existing computation infrastructure, such as YARN or Mesos. This talk covers a basic introduction to Apache Spark and its various components, such as MLlib, Shark, and GraphX, with a few examples.
Realtime Indexing for Fast Queries on Massive Semi-Structured Data | ScyllaDB
Rockset is a realtime indexing database that powers fast SQL over semi-structured data such as JSON, Parquet, or XML without requiring any schematization. All data loaded into Rockset is automatically indexed, and a fully featured SQL engine powers fast queries over semi-structured data without requiring any database tuning. Rockset exploits the hardware fluidity available in the cloud and automatically grows and shrinks the cluster footprint based on demand. Available as a serverless cloud service, Rockset is used by developers to build data-driven applications and microservices.
In this talk, we discuss some of the key design aspects of Rockset, such as Smart Schema and Converged Index. We describe Rockset's Aggregator Leaf Tailer (ALT) architecture, which provides low-latency queries on large datasets. Then we describe how you can combine lightweight transactions in ScyllaDB with realtime analytics on Rockset to power a user-facing application.
This presentation contains an introduction to NoSQL databases and their types with examples, a comparison with the 40-year-old relational database management system, their usage, and why we should use them.
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is a disruptive technology in the database space, bringing a new architectural model and distributed systems techniques to provide far higher performance, availability and durability than previously available using conventional monolithic database techniques. In this session, we will do a deep dive into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share early customer experience from the field.
This session will begin with an introduction to non-relational (NoSQL) databases and compare them with relational (SQL) databases. Learn the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service, and see the DynamoDB console first-hand. See a walk-through demo of building a serverless web application using this high-performance key-value and JSON document store.
Organizations need to perform increasingly complex analysis on data — streaming analytics, ad-hoc querying, and predictive analytics — in order to get better customer insights and actionable business intelligence. Apache Spark has recently emerged as the framework of choice to address many of these challenges. In this session, we show you how to use Apache Spark on AWS to implement and scale common big data use cases such as real-time data processing, interactive data science, predictive analytics, and more. We will talk about common architectures, best practices to quickly create Spark clusters using Amazon EMR, and ways to integrate Spark with other big data services in AWS.
Learning Objectives:
• Learn why Spark is great for ad-hoc interactive analysis and real-time stream processing.
• How to deploy and tune scalable clusters running Spark on Amazon EMR.
• How to use EMR File System (EMRFS) with Spark to query data directly in Amazon S3.
• Common architectures to leverage Spark with Amazon DynamoDB, Amazon Redshift, Amazon Kinesis, and more.
A Deep Dive into Spark SQL's Catalyst Optimizer with Yin Huai | Databricks
Catalyst is becoming one of the most important components of Apache Spark, as it underpins all the major new APIs in Spark 2.0 and later versions, from DataFrames and Datasets to Streaming. At its core, Catalyst is a general library for manipulating trees.
In this talk, Yin explores a modular compiler frontend for Spark based on this library that includes a query analyzer, optimizer, and an execution planner. Yin offers a deep dive into Spark SQL’s Catalyst optimizer, introducing the core concepts of Catalyst and demonstrating how developers can extend it. You’ll leave with a deeper understanding of how Spark analyzes, optimizes, and plans a user’s query.
NoSQL databases are currently used in several application scenarios, in contrast to relational databases. Several types of NoSQL databases exist. In this presentation we compare key-value, column-oriented, document-oriented, and graph databases. Using a simple case study, we evaluate the pros and cons of the NoSQL databases considered.
Anatomy of Data Frame API : A deep dive into Spark Data Frame API | datamantra
In this presentation, we discuss the internals of the Spark DataFrame API. All the code discussed in this presentation is available at https://github.com/phatak-dev/anatomy_of_spark_dataframe_api
Improving Mobile Payments With Real-time Spark | datamantra
A talk about a real-world Spark Streaming implementation for improving the mobile payments experience. Presented at the Target data meetup in Bangalore by Madhukara Phatak on 22/08/2015.
This slide deck is used as an introduction to the internals of Apache Spark, as part of the Distributed Systems and Cloud Computing course I hold at Eurecom.
Course website:
http://michiard.github.io/DISC-CLOUD-COURSE/
Sources available here:
https://github.com/michiard/DISC-CLOUD-COURSE
This is the presentation I made at JavaDay Kiev 2015 regarding the architecture of Apache Spark. It covers the memory model, the shuffle implementations, data frames and some other high-level stuff, and can be used as an introduction to Apache Spark.
An overview of the Mesos and Kubernetes ecosystem, including overview, architecture, customers and partners. For a beginner it will give good coverage of all the basics!
Presentation on predictive modeling in healthcare, San Jose, CA, 2015. This presentation discusses the healthcare industry in the US and provides stats and forecasts. It then covers a few use cases in healthcare and goes into detail on a Kaggle example.
Apache Spark on Hadoop YARN Resource Manager | haridasnss
How to configure Spark in an Apache Hadoop environment, and why we need that compared to the standalone cluster manager.
The slides also include a Docker-based demo to play with Hadoop and Spark on your laptop. See the demo code and other documentation here - https://github.com/haridas/hadoop-env
An engine to process big data in a faster (than MR), easy and extremely scalable way. An open source, parallel, in-memory, cluster computing framework. A solution for loading, processing and analyzing large-scale data end to end. Iterative and interactive: Scala, Java, Python, R, and a command-line interface.
Big Data Processing with Apache Spark 2014 | mahchiev
Apache Spark™ is a fast and general engine for large-scale data processing. It has gained enormous popularity recently with its speed and ease of use, and is currently replacing traditional Hadoop MapReduce. We'll talk about:
1. What is Big Data?
2. The Map-Reduce paradigm
3. What does Apache Spark do?
4. Finally, we'll make a quick demo
700 Updatable Queries Per Second: Spark as a Real-Time Web Service | Evan Chan
700 Updatable Queries Per Second: Spark as a Real-Time Web Service. Find out how to use Apache Spark with FiloDB for low-latency queries - something you never thought possible with Spark. Scale it down, not just up!
Making the big data ecosystem work together with Python & Apache Arrow, Apach... | Holden Karau
Slides from PyData London exploring how the big data ecosystem (currently) works together as well as how different parts of the ecosystem work with Python. Proof-of-concept examples are provided using nltk & spacy with Spark. Then we look to the future and how we can improve.
- A brief introduction to Spark Core
- Introduction to Spark Streaming
- A Demo of Streaming by evaluating top hashtags being used
- Introduction to Spark MLlib
- A Demo of MLlib by building a simple movie recommendation engine
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
2. ● Madhukara Phatak
● Technical Lead at Tellius
● Consultant and Trainer at datamantra.io
● Consults in Hadoop, Spark and Scala
● www.madhukaraphatak.com
3. Agenda
● Spark 1.0
● State of Big data
● Change in ecosystem
● Dawn of structured data
● Working with structured sources
● Dawn of custom memory management
● Evolution of Libraries
4. Spark 1.0
● Released in May 2014 [1]
● First production-ready, backward-compatible release
● Contains
○ Spark batch
○ Spark Streaming
○ Shark
○ MLlib and GraphX
● Developed over 4 years
● A better Hadoop
5. State of the Big Data Industry
● Map/Reduce was the way to do big data processing
● HDFS was the primary source of data
● Tools like Sqoop were developed for moving data into HDFS, which acted as the single source of data
● All data was assumed to be unstructured by default, and structure was laid on top of it
● Hive and Pig were popular ways to do structured and semi-structured data processing on top of Map/Reduce
6. Spark 1.0 Ideas
● The RDD abstraction supported Map/Reduce-style programming
● The primary source supported was HDFS, with memory as the speedup layer
● Spark Streaming was viewed as faster batch processing rather than as true streaming
● To support Hive, Shark was created to generate RDD code rather than Map/Reduce
7. Changes since 2014
● The big data industry has gone through many radical changes in thinking in the last two years
● Some of those changes started in Spark, and others were influenced by other frameworks
● These changes are important for understanding why the Spark 2.0 abstractions are radically different from Spark 1.0
● Many of these were already discussed in earlier meetups; links to the videos are in the references
9. Usage of Big Data in 2014
● Most people were using higher-level tools like Hive and Pig to process data, rather than Map/Reduce
● Most of the data resided in RDBMS databases, and users ETL'd data from MySQL to Hive in order to query it
● So a lot of use cases analysed structured data, contrary to the big data world's basic assumption of unstructured data
● Huge amounts of time were consumed by ETL and by unoptimized Hive workflows
10. Spark with Structured Data in 1.2
● Spark recognised the market's need for structured data support and started to evolve the platform for that use case
● The first attempt was a specialised RDD called SchemaRDD in Spark 1.2, which carried the schema
● But this approach was not clean
● Also, even though there was an InputFormat for reading structured data, there was no direct API to read it from Spark
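A minimal sketch of that era's API, assuming a local Spark 1.2-style setup (the input path is hypothetical): `jsonFile` produced a SchemaRDD, i.e. rows plus an inferred schema bolted onto the plain RDD abstraction.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("schemardd-sketch"))
val sqlContext = new SQLContext(sc)

// Spark 1.2: jsonFile returns a SchemaRDD, an RDD of rows with an
// inferred schema layered on top of the RDD abstraction
val people = sqlContext.jsonFile("people.json") // hypothetical input
people.registerTempTable("people")
sqlContext.sql("SELECT name FROM people WHERE age > 21").collect()
```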
11. DataSource API in Spark 1.3
● The first API to provide a unified way to read from structured and semi-structured sources
● Can read from RDBMSes and from NoSQL databases like MongoDB, Cassandra, etc.
● An advanced API, like InputFormat, which gives the source a lot of control to optimize data locality
● So in Spark 1.3, Spark addressed the need for structured data to be first class in the big data ecosystem
● For more info refer to the Anatomy of DataSource API talk [2]
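A sketch of the unified read path (paths are hypothetical, and `sc` is assumed to exist; Spark 1.3 itself exposed this via `sqlContext.load`, while the `read` builder shown here arrived in 1.4):

```scala
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)

// The same entry point regardless of the underlying source:
val json    = sqlContext.read.format("json").load("events.json")
val parquet = sqlContext.read.format("parquet").load("events.parquet")

// Third-party connectors (CSV, Cassandra, MongoDB, ...) plug into the
// same API through their format name, e.g. the spark-csv package:
val csv = sqlContext.read
  .format("com.databricks.spark.csv") // assumes the package is on the classpath
  .option("header", "true")
  .load("events.csv")
```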
12. DataFrame abstraction in Spark
● Spark understood that modifying the RDD abstraction was not good enough
● Many frameworks, like Hive and Pig, had tried and failed to map querying efficiently onto Map/Reduce
● So Spark came up with the DataFrame abstraction, which goes through a completely different, highly optimized pipeline from that of RDDs
● For more info refer to the Anatomy of DataFrame API talk [3]
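To make the contrast concrete, a small sketch of the same aggregation in both styles (the data is made up, and `sc`/`sqlContext` are assumed to exist):

```scala
import sqlContext.implicits._
import org.apache.spark.sql.functions.sum

case class Sale(product: String, amount: Double)
val sales = sc.parallelize(Seq(Sale("a", 10.0), Sale("a", 5.0), Sale("b", 20.0)))

// RDD style: opaque lambdas the engine cannot look inside
val totalsRdd = sales.map(s => (s.product, s.amount)).reduceByKey(_ + _)

// DataFrame style: declarative operations over named columns,
// planned and optimized by Catalyst before execution
val totalsDf = sales.toDF().groupBy("product").agg(sum("amount"))
```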
14. In memory in Spark 1.0
● Spark was the first open source big data framework to embrace in-memory computing
● Cheaper hardware and abstractions like RDD allowed Spark to exploit memory more efficiently than other Hadoop ecosystem projects
● The first implementation of in-memory computing followed the typical cache approach of keeping serialized Java bytes
● This proved to be challenging later
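The serialized-bytes approach is still visible in the RDD caching API; a minimal sketch, with hypothetical input files and an assumed `sc`:

```scala
import org.apache.spark.storage.StorageLevel

// Default caching keeps deserialized Java objects on the JVM heap:
val hot = sc.textFile("hot.txt").persist(StorageLevel.MEMORY_ONLY)

// The serialized variant stores compact byte arrays instead, trading
// CPU on access for a much smaller heap footprint
val big = sc.textFile("big.txt").persist(StorageLevel.MEMORY_ONLY_SER)
```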
15. Challenges of in-memory in Java
● As more and more big data frameworks started to exploit memory, they soon realised a few limitations of the Java memory model
● Java memory is tuned for short-lived objects, and complete control of memory is given to the JVM
● But as big data systems started using the JVM for long-term storage, the JVM memory model started to feel inadequate
● Also, as the Java heap grew in order to cache more data, GC pauses started to kill performance
16. Custom memory management
● Apache Flink was the first big data system to implement custom memory management in Java
● Flink follows a DataFrame-like API with a custom memory model
● The custom memory model, with its non-GC-based approach, proved to be highly successful
● Observing these trends in the community, Spark adopted the same approach in Spark 1.4
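The underlying JVM trick, sketched minimally with plain `java.nio` (not the frameworks' actual internals): allocate memory outside the garbage-collected heap and manage offsets yourself.

```scala
import java.nio.ByteBuffer

// 64 MB allocated outside the garbage-collected heap. Memory managed this
// way adds no GC pressure; the framework tracks and frees it itself, which
// is the core idea behind Flink's memory manager and Spark's Tungsten.
val offHeap: ByteBuffer = ByteBuffer.allocateDirect(64 * 1024 * 1024)
offHeap.putLong(0, 42L)             // write a value at an explicit offset
val back: Long = offHeap.getLong(0) // read it back without creating objects
```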
17. Tungsten in Spark 1.4
● Spark released the first version of custom memory management in the 1.4 release
● It only supported DataFrames, as they require the custom memory model
● Custom memory management greatly improved Spark's behaviour at larger VM sizes and reduced GC pauses
● It solved the OOM issues which plagued earlier versions of Spark
● For more info refer to the Anatomy of In-Memory Management in Spark talk [4]
19. RDD and Map/Reduce APIs
● Spark's RDD API follows a functional programming paradigm similar to Map/Reduce
● The RDD API passes around opaque function objects, which is great for programming but bad for system-level optimization
● Java's Map/Reduce API follows the same pattern, but is less elegant than the Scala one
● Both are hard to optimise compared to Pig/Hive
● So we saw a steady increase in custom DSLs in the Hadoop world
20. Need for DSLs in Hadoop
● DSLs like Pig and Hive are much easier to understand than the Java API
● They are less error-prone and help you be very specific
● They can be easily optimised, as a DSL only focuses on what to do, not how to do it
● As Java Map/Reduce mixes the what with the how, it is hard to optimize compared to Hive and Pig
● So more and more people preferred these DSLs over platform-level APIs
21. Challenges of DSLs in Hadoop
● The Hive and Pig DSLs do not integrate well with the Map/Reduce APIs
● DSLs often lack the flexibility of a complete programming language
● The Hive and Pig DSLs don't define a single shared abstraction, so you cannot mix them
● DSLs are powerful for optimization but soon become limited in terms of functionality
22. Scala as a language to host DSLs
● Scala is one of the first languages to embrace DSLs as first-class citizens
● Scala features like implicits, higher-order functions, structural types, etc. make it easy to build DSLs and integrate them with the language
● This allows any Scala library to define a DSL and harness the full power of the language
● Many libraries outside big data define their own DSLs, e.g. Slick, Akka HTTP, sbt
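A toy illustration of the mechanism (all names invented for the example): an implicit class enriches plain strings so that predicate expressions read like a mini-language while remaining ordinary, type-checked Scala. This is the same language feature Spark's column DSL builds on.

```scala
case class Predicate(column: String, op: String, value: Any)

// Enrich String with DSL methods via an implicit class
implicit class ColumnOps(name: String) {
  def gt(value: Any): Predicate  = Predicate(name, ">", value)
  def ===(value: Any): Predicate = Predicate(name, "=", value)
}

val byAge  = "age" gt 21        // reads like a query language...
val byCity = "city" === "Tokyo" // ...but is plain Scala the compiler checks
```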
23. DF DSL and Spark SQL DSL
● To harness the power of custom memory management and Hive-like optimizers, Spark encourages writing the DataFrame and Spark SQL DSLs instead of Spark RDD code
● Whenever we write in this DSL, all the features of the Scala language and its libraries are available, which makes it more powerful than Pig/Hive
● Other frameworks like Flink and Beam follow the same ideas on Scala, Java 8, etc.
● You can easily mix and match the DSL with the RDD API
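A small sketch of that mixing (the input path is hypothetical, and `sqlContext` is assumed to exist): the declarative part runs through Catalyst, and the same program can still drop down to the RDD API.

```scala
import sqlContext.implicits._

val users = sqlContext.read.json("users.json") // hypothetical input

// DataFrame DSL: declarative, optimized by Catalyst
val adults = users.filter($"age" > 21).select($"name")

// ...then drop to the RDD API in the same program, something a
// Pig or Hive script could never do with raw Map/Reduce code
val initials = adults.rdd.map(row => row.getString(0).take(1))
```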
24. Dataset DSL in Spark 1.6
● The DataFrame DSL was introduced in 1.4 and stabilised in 1.5
● As Spark observed the usability and performance benefits of DSL-based programming, it wanted to make the DSL an important pillar of Spark
● So in Spark 1.6, Spark released the Dataset DSL, which is poised to replace the RDD API in user land
● This indicates a big shift in thinking, as we move further and further away from the 1.0 Map/Reduce and unstructured mindset
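A minimal Spark 1.6-style sketch (data made up, `sqlContext` assumed): a Dataset is typed like an RDD but planned and encoded like a DataFrame.

```scala
import sqlContext.implicits._

case class Person(name: String, age: Int)

val people = Seq(Person("ann", 32), Person("bob", 17)).toDS()

// Lambdas keep compile-time types, yet execute over Tungsten's
// compact binary encoding rather than plain Java objects
val adultNames = people.filter(p => p.age >= 18).map(p => p.name)
```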
26. Evolution of libraries vs frameworks
● Spark is one of the first big data frameworks to build a platform rather than a collection of frameworks
● A single abstraction results in multiple libraries, not multiple frameworks
● All these libraries benefit from improvements in the runtime
● This allowed Spark to build a large ecosystem in very little time
● To understand the meaning of platform, refer to the Introduction to Flink talk [5]
27. Data exchange between libraries
● As more and more libraries were added to Spark, having a common way to exchange data became important
● Initially, libraries used RDD as the data exchange format, but soon discovered some limitations
● The limitations of RDD as a data exchange format are:
○ No defined schema; each library needs to come up with its own domain objects
○ Too low level
○ Custom serialization is hard to integrate
28. DataFrame as data exchange format
● Over the last few releases, Spark has been making DataFrame the new data exchange format of Spark
● A DataFrame has a schema and can easily be passed around between libraries
● DataFrame is a higher-level abstraction compared to RDD
● As DataFrames are serialized using platform-specific code generation, all libraries follow the same serialization
● Dataset will inherit the same advantages
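As an illustration of the idea (the `events` input, its path, and its "country" column are all hypothetical): a custom step and a spark.ml stage compose without any conversion glue, because both consume and produce DataFrames.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.ml.feature.StringIndexer

val events: DataFrame = sqlContext.read.json("events.json") // hypothetical input

// A custom enrichment step and an MLlib (spark.ml) stage compose
// directly, since both speak DataFrame
def withCountryIndex(df: DataFrame): DataFrame =
  new StringIndexer()
    .setInputCol("country")
    .setOutputCol("countryIndex")
    .fit(df)
    .transform(df)

val labelled = withCountryIndex(events)
```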
29. Learnings from Spark 1.x
● Structured/semi-structured data is the first-class citizen of big data processing systems
● Custom memory management and code-generated serialization give the best performance on the JVM
● DataFrame/Dataset are the new abstraction layers on which to build next-generation big data processing systems
● DSLs are the way forward over Map/Reduce-like APIs
● Having high-level structured abstractions lets libraries coexist happily on a platform