A Master Guide To Apache Spark Application And Versatile Uses.pdf, by DataSpace Academy
A leading name in big data handling, Apache Spark earns kudos for its ability to process vast amounts of data swiftly and efficiently. The tool also offers APIs in Java, Python, and R. The blog offers a master guide to all the key aspects of Apache Spark, including versatility, fault tolerance, real-time streaming, and more. It then explains the tool's operational procedure step by step, and wraps up with the tool's benefits and limitations.
This document provides an overview of Apache Spark, including its history, features, architecture and use cases. Spark started in 2009 at UC Berkeley and was later adopted by the Apache Foundation. It provides faster processing than Hadoop by keeping data in memory. Spark supports batch, streaming and interactive processing on large datasets using its core abstraction called resilient distributed datasets (RDDs).
In this one-day workshop, we will introduce Spark at a high level. Spark is fundamentally different from writing MapReduce jobs, so no prior Hadoop experience is needed. You will learn how to interact with Spark on the command line and conduct rapid in-memory data analyses. We will then work on writing Spark applications to perform large cluster-based analyses, including SQL-like aggregations, machine learning applications, and graph algorithms. The course will be conducted in Python using PySpark.
The document provides an overview of Apache Spark, including what it is, its ecosystem, features, and architecture. Some key points:
- Apache Spark is an open-source cluster computing framework for large-scale data processing. It is up to 100x faster than Hadoop for iterative/interactive algorithms.
- Spark features include its RDD abstraction, lazy evaluation, and use of DAGs to optimize performance. It supports Scala, Java, Python, and R.
- The Spark ecosystem includes tools like Spark SQL, MLlib, GraphX, and Spark Streaming. It can run on Hadoop YARN, Mesos, or in standalone mode.
- Spark's architecture includes the SparkContext, which coordinates executors on worker nodes through a cluster manager.
This slide deck introduces Spark in the Hadoop ecosystem.
It aims to give you a working picture of Spark's architecture, data flow, job scheduling, and programming model.
Not all technical details are included.
Spark is an open-source cluster computing framework that can run analytics applications much faster than Hadoop by keeping data in memory rather than on disk. While Spark can access Hadoop's HDFS storage system and is often used as a replacement for Hadoop's MapReduce, Hadoop remains useful for batch processing and Spark is not expected to fully replace it. Spark provides speed, ease of use, and integration of SQL, streaming, and machine learning through its APIs in multiple languages.
Apache Spark architecture (Big Data and Analytics), by Jyotasana Bharti
A slide presentation on Apache Spark architecture, covering its features, working, applications, and more.
Introduction
Features
Understanding Apache Spark Architecture
Working of Apache Spark Architecture
Applications
Conclusion
References
Spark can process data faster than Hadoop by keeping data in-memory as much as possible to avoid disk I/O. It supports streaming data, machine learning algorithms, graph processing, and SQL queries on structured data using its DataFrame API. Spark can integrate with Hadoop by running on YARN and accessing data from HDFS. The key capabilities discussed include low latency processing, streaming, machine learning, graph processing, DataFrames, and Hadoop integration.
Spark is a cluster computing framework designed to be fast, general-purpose, and able to handle a wide range of workloads including batch processing, iterative algorithms, interactive queries, and streaming. It is faster than Hadoop for interactive queries and complex applications by running computations in-memory when possible. Spark also simplifies combining different processing types through a single engine. It offers APIs in Java, Python, Scala and SQL and integrates closely with other big data tools like Hadoop. Spark is commonly used for interactive queries on large datasets, streaming data processing, and machine learning tasks.
In this era of ever-growing data, the need to analyze it for meaningful business insights becomes more and more significant. There are different big data processing alternatives like Hadoop, Spark, and Storm. Spark, however, is unique in providing batch as well as streaming capabilities, making it a preferred choice for lightning-fast big data analysis platforms.
Apache Spark is a lightning-fast cluster computing technology, designed for fast computation. It extends Hadoop's MapReduce model to efficiently support more types of computations, including interactive queries and stream processing.
Spark began as one of Hadoop's subprojects, developed in 2009 in UC Berkeley's AMPLab by Matei Zaharia. It was open-sourced in 2010 under a BSD license, donated to the Apache Software Foundation in 2013, and became a top-level Apache project in February 2014.
This document shares some basic knowledge about Apache Spark.
Transitioning Compute Models: Hadoop MapReduce to Spark, by Slim Baltagi
This presentation is an analysis of the observed trends in the transition from the Hadoop ecosystem to the Spark ecosystem. The related talk took place at the Chicago Hadoop User Group (CHUG) meetup held on February 12, 2015.
Apache Spark is a fast, general-purpose cluster computing system that allows processing of large datasets in parallel across clusters. It can be used for batch processing, streaming, and interactive queries. Spark improves on Hadoop MapReduce by using an in-memory computing model that is faster than disk-based approaches. It includes APIs for Java, Scala, Python and supports machine learning algorithms, SQL queries, streaming, and graph processing.
Spark is an open-source cluster computing framework that provides high performance for both batch and streaming data processing. It addresses limitations of other distributed processing systems like MapReduce by providing in-memory computing capabilities and supporting a more general programming model. Spark core provides basic functionalities and serves as the foundation for higher-level modules like Spark SQL, MLlib, GraphX, and Spark Streaming. RDDs are Spark's basic abstraction for distributed datasets, allowing immutable distributed collections to be operated on in parallel. Key benefits of Spark include speed through in-memory computing, ease of use through its APIs, and a unified engine supporting multiple workloads.
This document discusses 5 reasons why Apache Spark is in high demand: 1) Low latency processing by keeping data in memory, 2) Support for streaming data through resilient distributed datasets (RDDs), 3) Integration of machine learning and graph processing libraries, 4) DataFrame API for easier data analysis, and 5) Ability to integrate with Hadoop for large scale data processing. It provides details on Spark's architecture and benchmarks showing its faster performance compared to Hadoop for tasks like sorting large datasets.
This presentation is the first in a series of Apache Spark tutorials and covers the basics of the Spark framework. Subscribe to my YouTube channel for more updates: https://www.youtube.com/channel/UCNCbLAXe716V2B7TEsiWcoA
Teaching Apache Spark: Demonstrations on the Databricks Cloud Platform, by Yao Yao
Yao Yao Mooyoung Lee
https://github.com/yaowser/learn-spark/tree/master/Final%20project
https://www.youtube.com/watch?v=IVMbSDS4q3A
https://www.slideshare.net/YaoYao44/teaching-apache-spark-demonstrations-on-the-databricks-cloud-platform/
Apache Spark is a fast and general engine for big data analytics processing with libraries for SQL, streaming, and advanced analytics
Cloud Computing, Structured Streaming, Unified Analytics Integration, End-to-End Applications
This document provides an overview of Apache Spark, including:
- Spark allows for fast iterative processing by keeping data in memory across parallel jobs for faster sharing than MapReduce.
- The core of Spark is the resilient distributed dataset (RDD) which allows parallel operations on distributed data.
- Spark comes with libraries for SQL queries, streaming, machine learning, and graph processing.
Demi Ben-Ari is a senior software engineer at Windward Ltd. who has a BS in computer science. They previously worked as a software team leader and senior Java engineer developing missile defense and alert systems. The presentation discusses Spark, an open-source cluster computing framework, and how Windward uses Spark for data filtering, management, predictions and more through Java applications running on YARN clusters.
Spark is a fast and general cluster computing system that improves on MapReduce by keeping data in-memory between jobs. It was developed in 2009 at UC Berkeley and open sourced in 2010. Spark core provides in-memory computing capabilities and a programming model that allows users to write programs as transformations on distributed datasets.
Apache Spark Tutorial | Spark Tutorial for Beginners | Apache Spark Training ..., by Edureka!
This Edureka Spark tutorial will help you understand all the basics of Apache Spark. It is ideal for beginners as well as professionals who want to learn or brush up on Apache Spark concepts. Below are the topics covered in this tutorial:
1) Big Data Introduction
2) Batch vs Real Time Analytics
3) Why Apache Spark?
4) What is Apache Spark?
5) Using Spark with Hadoop
6) Apache Spark Features
7) Apache Spark Ecosystem
8) Demo: Earthquake Detection Using Apache Spark
Learn about the Spark tool and its components, by siddharth30121
This document discusses Apache Spark, a fast and general engine for large-scale data processing. It provides three key advantages over MapReduce: in-memory processing which is 10-100x faster, support for interactive queries, and integration of streaming, SQL, machine learning, and graph processing. The core abstraction in Spark is the Resilient Distributed Dataset (RDD), which allows data to be partitioned across clusters and cached in memory for faster shared access compared to MapReduce's disk-based approach.
This document provides an overview of Apache Spark's architectural components through the life of simple Spark jobs. It begins with a simple Spark application analyzing airline on-time arrival data, then covers Resilient Distributed Datasets (RDDs), the cluster architecture, job execution through Spark components like tasks and scheduling, and techniques for writing better Spark applications like optimizing partitioning and reducing shuffle size.
Cascading is a Java framework that allows users to define data processing workflows on Hadoop clusters more easily. The author discusses connecting Cascading to the Starfish profiler and optimizer to enable automated optimization of Cascading workflows. Key points are:
1) Cascading workflows are translated to DAGs of Hadoop jobs for profiling and optimization.
2) The Cascading API is modified to use the Hadoop New API to interface with Starfish.
3) Experiments show the Starfish optimizer providing speedups of up to 1.3x for several real-world Cascading workflows.
- Distributed computing framework: - Spark is built for in-memory parallel processing. Unlike many distributed systems that store intermediate computations on disk, Spark keeps them in memory.
- The Spark engine supports single- and multi-node deployments, meaning it can be installed on one machine or on many.
- MapReduce: - The Spark project was started by Matei Zaharia, who is now co-founder and CTO of Databricks. He started the project to replace Hadoop's MapReduce.
- Like Hadoop, Spark supports both single- and multi-node deployment. (Explain single- vs multi-node; see the sketch below.)
- Spark follows a master-slave architecture (a driver coordinating executors); next slide.
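To make the single- vs multi-node point concrete, here is a minimal PySpark sketch; the app name is made up for the demo and the commented-out master URLs are placeholders. The same application code runs unchanged whether the master is the local machine or a cluster:

```python
# Minimal sketch: the deployment mode is just a master URL.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("intro-demo")
    .master("local[*]")              # single-node: use all local cores
    # .master("spark://host:7077")   # multi-node: standalone cluster (placeholder host)
    # .master("yarn")                # multi-node: Hadoop YARN
    .getOrCreate()
)

# Parallel in-memory processing: the data is partitioned across cores/executors.
rdd = spark.sparkContext.parallelize(range(1_000_000), 8)
print(rdd.map(lambda x: x * x).sum())

spark.stop()
```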
- Why does Hadoop run only in batch mode?
- Fault tolerance
- How does YARN work?
- Machine learning (Mahout vs MLlib)
- Security features in Hadoop
- Fault tolerance: - The Resilient Distributed Dataset (RDD) is the fundamental data structure of Spark. RDDs are immutable distributed collections of objects of any type. As the name suggests, an RDD is a resilient (fault-tolerant) collection of records that resides on multiple nodes.
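A small illustrative sketch (toy data, hypothetical app name) of RDD immutability and the lineage that makes recomputation of lost partitions possible:

```python
# Transformations return new RDDs; the original is never modified, and any
# lost partition can be recomputed from the recorded lineage.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-demo").master("local[*]").getOrCreate()
sc = spark.sparkContext

base = sc.parallelize([1, 2, 3, 4, 5])
doubled = base.map(lambda x: x * 2)   # new RDD; `base` is unchanged

print(base.collect())                  # [1, 2, 3, 4, 5]
print(doubled.collect())               # [2, 4, 6, 8, 10]

# The lineage Spark would use to rebuild lost partitions
# (PySpark returns this debug string as bytes, hence decode):
print(doubled.toDebugString().decode())
spark.stop()
```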
- Lazy evaluation: - Lazy evaluation helps optimize processing by evaluating an expression only when its result is needed, avoiding unnecessary overhead. Spark records the requested transformations and evaluates them later, when an action is called.
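A minimal sketch of lazy evaluation, assuming a local session: the transformations below only record work, and nothing executes until the action at the end.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lazy-demo").master("local[*]").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize(range(10))
mapped = rdd.map(lambda x: x + 1)                # nothing executes yet
filtered = mapped.filter(lambda x: x % 2 == 0)   # still nothing

# Only this action triggers evaluation of the whole recorded chain:
print(filtered.count())  # 5
spark.stop()
```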
- Spark SQL: - Exposes Spark datasets over a JDBC API and allows running SQL-like queries.
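For illustration, a small sketch of querying a DataFrame with SQL through a temporary view (table and column names are made up for the example); the JDBC access mentioned above is served by Spark's separately started Thrift server, not shown here:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").master("local[*]").getOrCreate()

df = spark.createDataFrame(
    [("alice", 34), ("bob", 45), ("carol", 29)], ["name", "age"]
)
df.createOrReplaceTempView("people")   # register the DataFrame as a SQL table

spark.sql("SELECT name FROM people WHERE age > 30").show()
spark.stop()
```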
- Spark Streaming vs Structured Streaming: - Spark Streaming is based on the RDD API, with data divided into micro-batch chunks, whereas Structured Streaming is based on DataFrames and Datasets and uses the Spark SQL optimizer to speed up stream processing.
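A minimal Structured Streaming sketch using Spark's built-in rate source, chosen here only because it needs no external setup; names and the 10-second run are illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("stream-demo").master("local[*]").getOrCreate()

# The rate source generates (timestamp, value) rows continuously.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()
evens = stream.filter(col("value") % 2 == 0)

query = (
    evens.writeStream
    .format("console")       # print each micro-batch to stdout
    .outputMode("append")
    .start()
)
query.awaitTermination(timeout=10)   # run briefly for the demo
query.stop()
spark.stop()
```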
- MLlib: - A machine learning library that ships with Spark and provides an API for algorithms such as classification, regression, and clustering.
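A toy example using the DataFrame-based pyspark.ml API; the tiny training set is fabricated purely for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("mllib-demo").master("local[*]").getOrCreate()

# Fabricated two-feature training data with binary labels.
train = spark.createDataFrame(
    [
        (0.0, Vectors.dense([0.0, 1.1])),
        (1.0, Vectors.dense([2.0, 1.0])),
        (0.0, Vectors.dense([0.1, 1.2])),
        (1.0, Vectors.dense([1.9, 0.8])),
    ],
    ["label", "features"],
)

model = LogisticRegression(maxIter=10).fit(train)
model.transform(train).select("label", "prediction").show()
spark.stop()
```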
- GraphX: - GraphX is the Spark API for graphs and graph-parallel computation. It includes a growing collection of graph algorithms and builders to simplify graph analytics tasks.
Use cases of graph analysis include disaster detection systems (earthquakes, tsunamis), financial fraud detection, PageRank (finding a social media influencer in a network), and social media analysis (who follows whom and who liked whose comments).
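GraphX itself is a Scala/JVM API; from Python, graph analytics is commonly done with the separate GraphFrames package instead. A hedged PageRank sketch, assuming GraphFrames is installed (e.g. launched with the graphframes package on the classpath), on a made-up follower graph:

```python
from pyspark.sql import SparkSession
from graphframes import GraphFrame  # third-party package, not bundled with Spark

spark = SparkSession.builder.appName("graph-demo").master("local[*]").getOrCreate()

# Tiny illustrative follower graph: a -> b -> c -> a.
vertices = spark.createDataFrame(
    [("a", "Alice"), ("b", "Bob"), ("c", "Carol")], ["id", "name"]
)
edges = spark.createDataFrame(
    [("a", "b", "follows"), ("b", "c", "follows"), ("c", "a", "follows")],
    ["src", "dst", "relationship"],
)

g = GraphFrame(vertices, edges)
ranks = g.pageRank(resetProbability=0.15, maxIter=10)
ranks.vertices.select("id", "pagerank").show()
spark.stop()
```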
- DAG: - A Directed Acyclic Graph is a sequence of events.
Wake up → Leave the bed → Get fresh → Take breakfast → Get ready → Drive to office
These events can be seen as stages of the single action of "going to the office", yet nothing stops you from waking up, driving straight to the office, and having breakfast there: the end goal is reaching the office, and steps can be reordered or skipped, but never revisited in a loop. Similarly, a DAG is acyclic, meaning there are no cycles or loops in the graph. This property allows Spark to optimize and schedule the execution of operations effectively, as it can determine the dependencies and execute the stages in the most efficient order.
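One way to see the DAG Spark actually builds is toDebugString, which prints an RDD's lineage with indentation marking stage (shuffle) boundaries; a small sketch with illustrative data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dag-demo").master("local[*]").getOrCreate()
sc = spark.sparkContext

words = sc.parallelize(["spark", "dag", "spark", "rdd"])
counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)

# Two stages: the map side, then the shuffle/reduce side.
print(counts.toDebugString().decode())
spark.stop()
```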
- Physical execution plan: - Spark's DAG scheduler translates the logical DAG of transformations into a physical plan of stages, split at shuffle boundaries, with each stage made up of tasks that run on the executors.
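For the DataFrame API, explain() shows how a query becomes a physical execution plan; a short sketch on toy data printing the parsed, analyzed, optimized, and physical plans:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import count

spark = SparkSession.builder.appName("plan-demo").master("local[*]").getOrCreate()

df = spark.createDataFrame([("a", 1), ("b", 2), ("a", 3)], ["key", "value"])
agg = df.groupBy("key").agg(count("value").alias("n"))

# extended=True prints all plan levels; the physical plan shows the
# Exchange (shuffle) operator introduced by the groupBy.
agg.explain(True)
spark.stop()
```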