Apache Spark in Depth: Core Concepts, Architecture & Internals - Anton Kirillov
These slides cover core Apache Spark concepts such as the RDD, the DAG, the execution workflow, how stages of tasks are formed, and the shuffle implementation, and they also describe the architecture and main components of the Spark driver. The workshop part covers Spark execution modes and provides a link to a GitHub repo that contains example Spark applications and a dockerized Hadoop environment to experiment with.
This slide deck is used as an introduction to the internals of Apache Spark, as part of the Distributed Systems and Cloud Computing course I teach at Eurecom.
Course website:
http://michiard.github.io/DISC-CLOUD-COURSE/
Sources available here:
https://github.com/michiard/DISC-CLOUD-COURSE
3-4. WHO AM I
• Nan Zhu, PhD candidate in the School of Computer Science of McGill University
• Work on computer networks (Software-Defined Networks) and large-scale data processing
• Work with Prof. Wenbo He and Prof. Xue Liu
• The PhD is an awesome experience in my life
• Tackle real-world problems
• Keep thinking! Get insights!
• When will I graduate?
5. WHO AM I
• Do-it-all engineer at Faimdata (http://www.faimdata.com)
• Faimdata is a new startup located in Montreal
• Builds customer-centric analysis solutions based on Spark for retailers
• My responsibility: participate in everything related to data
• Akka, HBase, Hive, Kafka, Spark, etc.
6-8. WHO AM I
• My contributions to Spark
• 0.8.1, 0.9.0, 0.9.1, 1.0.0
• 1000+ lines of code, 30 patches
• Two examples:
• YARN-like architecture in Spark
• Introduced the Actor supervisor mechanism to the DAGScheduler
• I'm CodingCat@GitHub!
10. What is Spark?
• A distributed computing framework
• Organizes computation as concurrent tasks
• Schedules tasks onto multiple servers
• Handles fault tolerance, load balancing, etc., automatically (and transparently)
11. Advantages of Spark
• More Descriptive Computing Model
• Faster Processing Speed
• Unified Pipeline
14-21. More Descriptive Computing Model (1)
• WordCount in Hadoop (Map & Reduce)
• Map function: reads each line of the input file and transforms each word into a <word, 1> pair
• Reduce function: collects the <word, 1> pairs generated by the Map function and merges them by accumulation
• Driver code: configures the program
23-25. DESCRIPTIVE COMPUTING MODEL (2)
• Closer look at WordCount in Spark (Scala)
• Organize computation into multiple stages in a processing pipeline:
• transformations produce intermediate results with the expected schema
• an action produces the final output
• Computation is expressed with higher-level APIs, which simplify the logic of the original Map & Reduce and define the computation as a processing pipeline
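The slides show the code only as a screenshot; a minimal sketch of the WordCount pipeline they describe (using the Spark 1.x-era RDD API covered in this talk; the HDFS paths are placeholders) might look like this:

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WordCount"))

    val counts = sc.textFile("hdfs://...")      // load the input file as an RDD of lines
      .flatMap(_.split(" "))                    // transformation: split each line into words
      .map(word => (word, 1))                   // transformation: emit <word, 1> pairs
      .reduceByKey(_ + _)                       // transformation: merge counts by key

    counts.saveAsTextFile("hdfs://.../output")  // action: triggers execution, writes the output
    sc.stop()
  }
}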
26. MUCH BETTER PERFORMANCE
• PageRank algorithm performance comparison
• [Chart: time per iteration (s) for Hadoop, basic Spark, and Spark with controlled partitioning]
• Matei Zaharia, et al., Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing, NSDI 2012
31-33. Understand a distributed computing framework
• Data flow
• e.g. the Hadoop family utilizes HDFS to transfer data within a job and to share data across jobs/applications
• [Diagram: MapTasks exchanging data through HDFS daemons]
34. Understanding a distributed computing engine
• Task management
• How the computation is executed across multiple servers
• How the tasks are scheduled
• How the resources are allocated
36-40. Basic Structure of a Spark Program
• A Spark program:

val sc = new SparkContext(…)

val points = sc.textFile("hdfs://...")
  .map(_.split(" ").map(_.toDouble).splitAt(1))
  .map { case (Array(label), features) =>
    LabeledPoint(label, features)
  }

val model = Model.train(points)

• new SparkContext(…) includes the components driving the running of computing tasks (introduced later)
• sc.textFile(...) loads data from HDFS, forming an RDD (Resilient Distributed Dataset) object
• the map calls are transformations that generate RDDs with the expected element format
• All computations are organized around RDDs
41. Resilient Distributed Dataset
• An RDD is a distributed memory abstraction which is:
• a data collection
• immutable
• created either by loading from a stable storage system (e.g. HDFS) or through transformations on other RDD(s)
• partitioned and distributed
• [Diagram: sc.textFile(…) -> filter() -> map()]
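As a concrete illustration of the two creation paths (a sketch; the path and the predicates are placeholders):

// Created by loading from a stable storage system (HDFS):
val lines = sc.textFile("hdfs://...")           // RDD[String], partitioned across the cluster

// Created through transformations on another RDD; each step yields a
// new immutable RDD instead of modifying the existing one:
val errors = lines.filter(_.contains("ERROR"))  // RDD[String]
val lengths = errors.map(_.length)              // RDD[Int]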
44-47. From data to computation
• Lineage
• Where do I come from? (the dependencies)
• How am I computed? (the functions that calculate the partitions are saved)
• Computation is organized as a DAG (the lineage)
• Lost data can be recovered in parallel with the help of the lineage DAG
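The lineage Spark records can be inspected directly; toDebugString is part of the public RDD API (a sketch with a placeholder path):

val counts = sc.textFile("hdfs://...")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)

// Prints the lineage DAG; each indented group is a dependency, and lost
// partitions can be recomputed in parallel by replaying these functions.
println(counts.toDebugString)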
48. Cache
• Frequently accessed RDDs can be materialized and cached in memory
• Cached RDDs can also be replicated for fault tolerance (the Spark scheduler takes cached-data locality into account)
• The cache space is managed with an LRU algorithm
50-51. Benefits Brought by Cache
• Example (log mining)
• count is an action: the first time it runs, it has to compute from the start of the DAG (textFile)
• Because the data is cached, the second count does not trigger a "start-from-zero" computation; it works from "cachedMsgs" directly
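The slides show this example as a screenshot; a minimal sketch of the classic log-mining pipeline they describe (the name cachedMsgs follows the slides; the path and field layout are placeholder assumptions):

val lines = sc.textFile("hdfs://.../logs")
val errors = lines.filter(_.startsWith("ERROR"))
val cachedMsgs = errors.map(_.split('\t')(1)).cache()

// First action: computes from the start of the lineage (textFile onward),
// then materializes cachedMsgs in executor memory.
cachedMsgs.count()

// Second action: served from the cached partitions, with no recomputation.
cachedMsgs.filter(_.contains("timeout")).count()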
52. Summary
• Resilient Distributed Datasets (RDD)
• The distributed memory abstraction in Spark
• Keeps computation in memory on a best-effort basis
• Keeps track of the "lineage" of the data
• Organizes computation
• Supports fault tolerance
• Cache
53-57. RDD brings much better performance by simplifying the data flow
• Share data among applications
• [Diagram: a typical data processing pipeline, with read/write overhead at each hand-off between applications through stable storage]
58-63. RDD brings much better performance by simplifying the data flow
• Share data in iterative algorithms
• A certain amount of predictive/machine learning algorithms are iterative
• e.g. K-Means:
Step 1: Place random initial group centroids into the space.
Step 2: Assign each object to the group that has the closest centroid.
Step 3: Recalculate the positions of the centroids.
Step 4: If the positions of the centroids didn't change, go to the next step; else go to Step 2.
Step 5: End.
• [Diagram: each iteration assigns groups (Step 2) and recalculates centroids (Steps 3 & 4); in Hadoop every iteration reads from and writes back to HDFS, while RDDs keep the working set in memory]
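To make the data sharing concrete, here is a compact K-Means sketch in the spirit of the steps above (my own illustration, not code from the slides; the input path, k, and the fixed iteration count are placeholder assumptions):

import org.apache.spark.{SparkConf, SparkContext}

object KMeansSketch {
  // Squared Euclidean distance between two points.
  def distSq(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

  // Index of the centroid closest to point p.
  def closest(p: Array[Double], centroids: Array[Array[Double]]): Int =
    centroids.indices.minBy(i => distSq(p, centroids(i)))

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("KMeansSketch"))
    val k = 3

    // The points are read from HDFS once and cached; every iteration below
    // rescans the in-memory RDD instead of re-reading (and re-writing)
    // HDFS, which is where Hadoop-style iterations pay their overhead.
    val points = sc.textFile("hdfs://.../points")
      .map(_.split(" ").map(_.toDouble))
      .cache()

    // Step 1: random initial centroids.
    var centroids = points.takeSample(withReplacement = false, k)

    for (_ <- 1 to 10) {
      // Step 2: assign each point to its closest centroid.
      // Steps 3-4: recompute each centroid as the mean of its group.
      centroids = points
        .map(p => (closest(p, centroids), (p, 1)))
        .reduceByKey { case ((p1, n1), (p2, n2)) =>
          (p1.zip(p2).map(t => t._1 + t._2), n1 + n2)
        }
        .map { case (_, (sum, n)) => sum.map(_ / n) }
        .collect()
    }
    centroids.foreach(c => println(c.mkString(" ")))  // Step 5: output.
    sc.stop()
  }
}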
65-72. The Structure of a Spark Cluster
• [Diagram: Driver Program (SparkContext with DAGScheduler, TaskScheduler, ClusterScheduler) - Cluster Manager - Workers running Executors with Caches and Tasks]
• Each SparkContext creates a Spark application
• The application is submitted to the Cluster Manager
• The Cluster Manager can be the master of Spark's standalone mode, Mesos, or YARN
• Executors for the application are started on the Workers; the Executors register with the ClusterScheduler
• The driver program schedules the tasks for the application
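In code, the cluster manager is selected through the master URL handed to the SparkContext (a sketch; host names and ports are placeholders, and the yarn-client/yarn-cluster values are the Spark 1.x syntax):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("MyApp")
  // The master URL selects the cluster manager:
  //   "spark://host:7077"  - master of Spark's standalone mode
  //   "mesos://host:5050"  - Mesos
  //   "yarn-client" / "yarn-cluster" - YARN
  //   "local[4]"           - no cluster: run locally with 4 threads
  .setMaster("spark://master:7077")

// Creating the SparkContext creates the application and submits it to the
// cluster manager, which starts executors for it on the workers.
val sc = new SparkContext(conf)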
73. Scheduling Process
• [Diagram: RDDs A-G connected by map, union, join, and groupBy operations, split into Stage 1, Stage 2, and Stage 3]
• RDD objects are connected together with a DAG
• The DAGScheduler splits the DAG into stages and submits each stage as a TaskSet
• The TaskScheduler's TaskSetManagers monitor the progress of the tasks and handle failed stages (failed stages are resubmitted)
• The ClusterScheduler submits the tasks to the Executors
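A sketch of where the stage boundaries come from in user code: narrow transformations (map, filter) stay within a stage, while shuffle transformations such as reduceByKey or groupBy start a new one (placeholder paths; the stage split can also be inspected in the web UI):

val pairs = sc.textFile("hdfs://...")
  .map(line => (line.split(",")(0), 1))   // narrow dependency: same stage
val counts = pairs
  .reduceByKey(_ + _)                     // shuffle boundary: new stage
  .map { case (k, n) => s"$k,$n" }        // narrow: stays in the new stage

// The indented groups in the lineage printout correspond to the stages
// the DAGScheduler submits as TaskSets.
println(counts.toDebugString)
counts.saveAsTextFile("hdfs://.../out")   // action: triggers the job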