Spark is an open source cluster computing framework that allows processing of large datasets across clusters of computers using a simple programming model. It provides high-level APIs in Java, Scala, Python and R.
Typical machine learning workflows in Spark involve loading data, preprocessing, feature engineering, training models, evaluating performance, and tuning hyperparameters. Spark MLlib provides algorithms for common tasks like classification, regression, clustering and collaborative filtering.
The document provides an example of building a spam filtering application in Spark. It involves reading email data, extracting features using tokenization and hashing, training a logistic regression model, evaluating performance on test data, and tuning hyperparameters via cross validation.
Large Scale Machine learning with Spark
1. Large scale machine learning with Apache Spark
Md. Mahedi Kaysar (Research Master), Insight Centre for Data Analytics [DCU]
mahedi.kaysar@insight-centre.org
3. Spark Overview
• Open source, large-scale data processing engine
• Up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk
• Applications can be written in Java, Scala, Python and R
• Runs on Mesos, YARN or the standalone cluster manager
• Can access diverse data sources including HDFS, Cassandra, HBase and S3
4. Spark Overview
• MapReduce: distributed execution model
– Map tasks read data from disk, process it and write the results back to disk; before the shuffle operation, the map output is sent to the reducers
– Reduce tasks read data from disk, process it and write the results back to disk
7. Spark Overview
• RDD: Resilient Distributed Dataset
– We write programs in terms of operations on distributed datasets
– A partitioned collection of objects spread across the cluster, stored in memory or on disk
– RDDs are built and manipulated through a diverse set of parallel transformations (map, filter, join) and actions (save, count, collect)
– RDDs are automatically rebuilt on machine failure
8. Spark Overview
• RDD: Resilient Distributed Dataset
– RDDs are immutable, and the programmer specifies the number of partitions for an RDD
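A minimal sketch of the RDD concepts above (the input path, output path and partition count are assumptions for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddBasics {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("RddBasics").setMaster("local[2]"))

    // Build an RDD from a text file, asking for 4 partitions (hypothetical path)
    val lines = sc.textFile("data/emails.txt", 4)

    // Transformations are lazy: nothing runs until an action is called
    val words  = lines.flatMap(_.split("\\s+"))
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _)

    // Actions trigger execution
    println(counts.count())
    counts.saveAsTextFile("out/word-counts")

    sc.stop()
  }
}
```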
10. Spark Ecosystem (Apache Spark 2.0)
• Spark Core: the underlying general execution engine; it provides in-memory computing, and the higher-level APIs are built upon it
• Spark SQL
• Spark MLlib
• Spark GraphX
• Spark Streaming
11. Apache Spark 2.0
• Spark SQL
– Module for structured or tabular data processing
– It introduces a new data abstraction, originally called SchemaRDD (now the DataFrame)
– Internally it has more information about the structure of both the data and the computation being performed
– Two ways to interact with Spark SQL:
• SQL queries: “SELECT * FROM PEOPLE”
• Dataset/DataFrame: a domain-specific language
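A minimal sketch of the two interaction styles, assuming a hypothetical people.json file with name and age fields:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("SparkSqlDemo").master("local[*]").getOrCreate()

// Hypothetical JSON input with "name" and "age" fields
val people = spark.read.json("data/people.json")

// 1) SQL queries against a temporary view
people.createOrReplaceTempView("people")
spark.sql("SELECT name, age FROM people WHERE age > 21").show()

// 2) The DataFrame/Dataset domain-specific language
people.filter(people("age") > 21).select("name", "age").show()
```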
13. Apache Spark 2.0
• Spark MLlib
– Machine learning library
– ML algorithms: common learning algorithms such as classification, regression, clustering and collaborative filtering
• e.g. SVM, decision trees
– Featurization: feature extraction, transformation, dimensionality reduction and selection
• e.g. term frequency, document frequency
– Pipelines: tools for constructing, evaluating and tuning ML Pipelines
– Persistence: saving and loading algorithms, models and Pipelines
– Utilities: linear algebra, statistics, data handling, etc.
– The DataFrame-based API (spark.ml) is the primary API
14. Apache Spark 2.0
• Spark Streaming
– Provides a high-level abstraction called a discretized stream, or DStream, which represents a continuous stream of data
– A DStream is represented as a sequence of RDDs
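A minimal DStream sketch: each micro-batch is materialised as an RDD. The socket source, host and port are assumptions for illustration:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DStreamWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DStreamWordCount").setMaster("local[2]")
    // 5-second micro-batches; each batch arrives as an RDD
    val ssc = new StreamingContext(conf, Seconds(5))

    val lines  = ssc.socketTextStream("localhost", 9999) // hypothetical text source
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```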
15. Apache Spark 2.0
• Structured Streaming (experimental)
– A scalable and fault-tolerant stream processing engine built on the Spark SQL engine
– The Spark SQL engine takes care of running the query incrementally and continuously, updating the final result as streaming data continues to arrive
– Can be used with the Dataset or DataFrame APIs
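A minimal Structured Streaming sketch of the same word count expressed on the DataFrame/Dataset API; the socket source and console sink are assumptions for illustration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("StructuredWordCount").master("local[*]").getOrCreate()
import spark.implicits._

// Streaming DataFrame with a single "value" column, one line per record
val lines = spark.readStream.format("socket")
  .option("host", "localhost").option("port", 9999).load()

// The same query you would write on a static DataFrame; Spark runs it incrementally
val counts = lines.as[String].flatMap(_.split(" ")).groupBy("value").count()

val query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```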
16. Apache Spark 2.0
• GraphX
– Extends the Spark RDD by introducing a new Graph abstraction
– A directed multigraph with properties attached to each vertex and edge
– PageRank: measures the importance of a vertex
– Connected components
– Triangle count
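A minimal GraphX sketch of the three operations listed above; the edge-list file is a hypothetical "srcId dstId" pairs file:

```scala
import org.apache.spark.graphx.{GraphLoader, PartitionStrategy}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("GraphXDemo").master("local[*]").getOrCreate()
val sc = spark.sparkContext

// Hypothetical edge-list file; canonical orientation is required for triangle counting
val graph = GraphLoader.edgeListFile(sc, "data/followers.txt", canonicalOrientation = true)
  .partitionBy(PartitionStrategy.RandomVertexCut)

val ranks     = graph.pageRank(0.0001).vertices      // importance of each vertex
val cc        = graph.connectedComponents().vertices // component id per vertex
val triangles = graph.triangleCount().vertices       // triangles through each vertex

ranks.take(5).foreach(println)
```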
17. Apache Spark 2.0
• RDD vs. DataFrame vs. Dataset
– All are immutable, distributed datasets
– The RDD (resilient distributed dataset) is the main building block of Apache Spark; it processes data in memory for efficient reuse
– The DataFrame and Dataset are more abstract than the RDD; they are optimized and work well when you have structured data such as CSV, JSON, Hive tables and so on
– When you have raw data such as a text file, you can use an RDD and transform it into structured data with the help of the DataFrame and Dataset APIs
18. Apache Spark 2.0
• RDD
– Immutable, partitioned collections of objects
– Two main kinds of operations: transformations and actions
19. Apache Spark 2.0
• DataFrame
– A dataset organized into named columns
– Conceptually equivalent to a table in a relational database
– Can be constructed from a wide array of sources, such as structured data files, tables in Hive, external databases, or existing RDDs
20. Apache Spark 2.0
• Dataset
– A distributed collection of data
– Has the benefits of both RDDs and DataFrames, with more optimization
– You can convert between a Dataset and other forms of data
– It is the latest API for data collections
21. Apache Spark 2.0
• DataFrame/Dataset
– Reading JSON data into a Dataset
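A minimal spark-shell-style sketch of this step; the file path and the Person schema are assumptions:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("JsonDataset").master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical schema for the JSON records (in compiled code, define the case class at top level)
case class Person(name: String, age: Long)

val df = spark.read.json("data/people.json")   // DataFrame (Dataset[Row])
val ds = df.as[Person]                          // typed Dataset[Person]
ds.show()
```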
22. Apache Spark 2.0
• DataFrame/Dataset
– Connecting to Hive and querying it with HiveQL
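A minimal sketch of querying Hive through Spark SQL; the table name and columns are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

// enableHiveSupport() lets Spark SQL talk to the Hive metastore and read Hive tables
val spark = SparkSession.builder()
  .appName("HiveDemo")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive")
spark.sql("SELECT key, value FROM src WHERE key < 10").show()
```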
23. Apache Spark 2.0
• Dataset
– You can transform a Dataset to an RDD and an RDD to a Dataset
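A minimal sketch of converting in both directions, assuming a SparkSession named spark (as in the earlier sketches) is in scope:

```scala
import spark.implicits._

// Dataset -> RDD
val ds  = Seq("spam", "ham", "spam").toDS()
val rdd = ds.rdd

// RDD -> Dataset / DataFrame
val backToDs = spark.createDataset(rdd)   // or rdd.toDS()
val asDf     = rdd.toDF("label")
```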
24. Spark Cluster Overview
• Spark uses a master/slave architecture
• One central coordinator, called the driver, communicates with many distributed workers (executors)
• The driver and executors each run in their own Java process
• The driver is the process where the main method runs. It converts the user program into tasks and schedules the tasks on the executors with the help of the cluster manager
• The cluster manager launches the executors and manages the worker nodes
• Executors run the Spark tasks and send the results back to the driver program. They also provide in-memory storage for RDDs that are cached by the user program
• The workers are in charge of communicating their available resources to the cluster manager
25. Spark Cluster Overview
• Standalone: a simple cluster manager included with Spark that makes it easy to set up a cluster
• Example: a standalone cluster with 2 workers (each with 2 cores), on a local machine or on cloud EC2

conf/spark-env.sh:
export SPARK_WORKER_MEMORY=1g       # memory available to each worker
export SPARK_EXECUTOR_MEMORY=1g     # memory per executor
export SPARK_WORKER_INSTANCES=2     # number of worker instances per node
export SPARK_WORKER_CORES=2         # cores per worker
export SPARK_WORKER_DIR=/home/work/sparkdata

./sbin/start-master.sh              # start the master

conf/slaves:
Master node IP                      # nodes on which workers run (here, the master machine itself)

./sbin/start-slaves.sh              # start the workers listed in conf/slaves
28. Machine Learning with Spark
• Typical machine learning workflow:
– Load the sample data
– Parse the data into the input format for the algorithm
– Pre-process the data and handle the missing values
– Split the data into two sets: one for building the model (the training dataset) and one for testing the model (the validation dataset)
– Run the algorithm to build or train your ML model
29. Machine Learning with Spark
• Typical machine learning workflow (continued):
– Make predictions with the training data and observe the results
– Test and evaluate the model with the test data, or alternatively validate the model with some cross-validation technique using a third dataset, called the validation dataset
– Tune the model for better performance and accuracy
– Scale up the model so that it can handle massive datasets in the future
– Deploy the ML model to production
30. Machine Learning with Spark
• Pre-processing
– The three most common data pre-processing steps are:
• Formatting: the data may not be in a usable shape
• Cleaning: the data may have unwanted records, or records with missing entries; this cleaning step deals with removing or fixing missing data
• Sampling: useful when the available data is very large
– Data transformation
– Dataset, RDD and DataFrame
31. Machine Learning with Spark
• Feature Engineering
– Extraction: extracting features from “raw” data
– Transformation: scaling, converting, or modifying features
– Selection: selecting a subset from a larger set of features
32. Machine Learning with Spark
• ML Algorithms
– Classification
– Regression
– Tuning
33. Machine Learning with Spark
• ML Pipeline:
– A higher-level API built on top of DataFrames
– Can combine multiple algorithms together into a complete workflow
– For example, text analytics:
• Split the texts => words
• Convert words => numerical feature vectors
• Numerical feature vectors => labelling
• Build an ML model as a prediction model using the vectors and labels
34. Machine Learning with Spark
• ML Pipeline components:
– Transformers
• An abstraction that includes feature transformers and learned models
• An algorithm for transforming one Dataset/DataFrame into another Dataset/DataFrame
• e.g. HashingTF
– Estimators
• An algorithm which can be fit on a Dataset/DataFrame to produce a Transformer (a model), e.g. LogisticRegression
35. Machine Learning with Spark
• Spam detection or spam filtering:
– Given some e-mails in an inbox, the task is to identify those e-mails that are spam and those that are non-spam (often called ham) e-mail messages
36. Machine Learning with Spark
• Spam detection or spam filtering:
– Reading the dataset
– SparkSession is the single entry point for interacting with the underlying Spark functionality; it allows DataFrame and Dataset programming
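A minimal sketch of this step; the file path and input format (one message per line) are assumptions:

```scala
import org.apache.spark.sql.SparkSession

// SparkSession: the single entry point for DataFrame/Dataset programming
val spark = SparkSession.builder()
  .appName("SpamFilter")
  .master("local[*]")
  .getOrCreate()

// Hypothetical input: one labelled message per line, e.g. "spam<TAB>Win a prize now!"
val raw = spark.read.text("data/emails.txt")
raw.show(5, truncate = false)
```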
37. Machine Learning with Spark
• Spam detection or spam filtering:
– Pre-processing the dataset
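Continuing the sketch above, a hedged example of the pre-processing step. The tab delimiter, column names and split ratio are assumptions about the data, not the presenter's original code:

```scala
import org.apache.spark.sql.functions._

// Split each line into a numeric label (1.0 = spam, 0.0 = ham) and the message text
val data = raw
  .withColumn("label", when(split(col("value"), "\t").getItem(0) === "spam", 1.0).otherwise(0.0))
  .withColumn("text",  split(col("value"), "\t").getItem(1))
  .select("label", "text")

// Drop rows with missing values and split into training and test sets
val Array(training, test) = data.na.drop().randomSplit(Array(0.8, 0.2), seed = 42L)
```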
38. Machine Learning with Spark
• Spam detection or spam filtering:
– Feature extraction: make feature vectors
– TF (term frequency) is the number of times a term appears in a document
• A feature vectorization method
39. Machine Learning with Spark
• Spam detection or spam filtering:
– Tokenizer: a Transformer that tokenizes the text into words
– HashingTF: a Transformer for making feature vectors using the TF technique
• Takes a set of terms
• Converts it into a feature vector
• Uses the hashing trick for indexing terms
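A minimal sketch of these two Transformers, continuing the spam-filter example; the column names and the number of hash features are illustrative choices:

```scala
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

// Transformer: split the message text into words
val tokenizer = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("words")

// Transformer: hash the words into a fixed-length term-frequency feature vector
val hashingTF = new HashingTF()
  .setInputCol(tokenizer.getOutputCol)
  .setOutputCol("features")
  .setNumFeatures(1000)
```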
40. Machine Learning with Spark
• Spam detection or spam filtering:
– Train a model
– Define the classifier
– Fit the training set
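Continuing the sketch: define the classifier (an Estimator), chain everything into a Pipeline, fit it on the training set and score the test set. The hyper-parameter values are illustrative:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression

// Estimator: logistic regression on the hashed term-frequency features
val lr = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.01)

// Chain tokenizer -> hashingTF -> lr into a single Pipeline and fit it on the training set
val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
val model    = pipeline.fit(training)

// Score the held-out test set
val predictions = model.transform(test)
predictions.select("text", "label", "prediction").show(5)
```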
42. Machine Learning with Spark
• Tuning
– Model selection:
• Hyper-parameter tuning
• Find the best model or parameters for a given task
• Tuning can be done for an individual estimator (such as logistic regression) or for an entire pipeline
– Model selection via cross-validation
– Model selection via train-validation split
43. Machine Learning with Spark
• Tuning
– Model selection workflow
• Split the input data into separate training and test sets
• For each (training, test) pair, iterate through a set of ParamMaps:
– For each ParamMap, fit the estimator using those parameters
– Get the fitted model and evaluate its performance using the evaluator
• Select the model produced by the best-performing set of parameters
44. Machine Learning with Spark
• Tuning
– Model selection workflow
• The evaluator can be a RegressionEvaluator, a BinaryClassificationEvaluator, and so on
45. Machine Learning with Spark
• Tuning
– Model selection via cross-validation
• CrossValidator begins by splitting the dataset into a set of folds; k = 3 means creating 3 (training, test) dataset pairs
• Each pair uses 2/3 of the data for training and 1/3 for testing
• To evaluate a particular ParamMap, it computes the average evaluation metric over the three models fitted by the estimator
• It is a well-established method for choosing parameters, and more statistically sound than heuristic hand-tuning
– Model selection via train-validation split
• Only evaluates each combination of parameters once
• Less expensive, but will not produce as reliable results when the training dataset is not sufficiently large
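A minimal sketch of both options, continuing the spam-filter pipeline; the parameter grid values and fold count are illustrative:

```scala
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder, TrainValidationSplit}

// Grid of hyper-parameters (ParamMaps) to search over
val paramGrid = new ParamGridBuilder()
  .addGrid(hashingTF.numFeatures, Array(1000, 10000))
  .addGrid(lr.regParam, Array(0.1, 0.01))
  .build()

val evaluator = new BinaryClassificationEvaluator() // areaUnderROC by default

// k-fold cross-validation (k = 3): 3 (training, test) pairs, metric averaged per ParamMap
val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(evaluator)
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)
val cvModel = cv.fit(training)

// Cheaper alternative: a single train/validation split
val tvs = new TrainValidationSplit()
  .setEstimator(pipeline)
  .setEvaluator(evaluator)
  .setEstimatorParamMaps(paramGrid)
  .setTrainRatio(0.75)
val tvsModel = tvs.fit(training)
```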
46. Spam Filtering Application
• What have we done so far?
– Reading the dataset
– Cleaning
– Feature engineering
– Training
– Testing
– Tuning
– Deploying
– Persisting the model
– Reusing the existing model
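For the last two items, a minimal sketch of persisting the fitted pipeline and reusing it later; the save path is hypothetical:

```scala
import org.apache.spark.ml.PipelineModel

// Persist the fitted pipeline so it can be reused without retraining
model.write.overwrite().save("models/spam-filter")

// Later, in another job: load the existing model and reuse it
val reloaded = PipelineModel.load("models/spam-filter")
reloaded.transform(test).show(5)
```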
Spark extends the MapReduce programming model to better support iterative workloads such as machine learning and graph processing. The motivation for Spark's programming model comes from the limits of acyclic data-flow models, in which data flows from stable storage to stable storage. That design lets the runtime decide where to run tasks and recover automatically from failures, but it is inefficient for applications that repeatedly reuse a working set of data, such as machine learning and graph workloads, which otherwise have to reload data from persistent storage on each query.
Apache Spark addresses this with the resilient distributed dataset (RDD), which lets applications keep a working set in memory for efficient reuse, while retaining the attractive properties of MapReduce: fault tolerance, data locality and scalability.
Here "spam" is not a feature. You have to extract the features and the label from the raw data, and then transform them into feature vectors, which are numerical representations of the text.