Apache Spark is a next-generation processing engine optimized for speed, ease of use, and advanced analytics well beyond batch. The Spark framework supports streaming data and complex, iterative algorithms, enabling applications to run up to 100x faster than traditional MapReduce programs. With Spark, developers can write sophisticated parallel applications for faster business decisions and better user outcomes, across a wide variety of architectures and industries.
Learn what Apache Spark is and how it compares to Hadoop MapReduce; how to filter, map, reduce, and save Resilient Distributed Datasets (RDDs); who is best suited to attend the course and what prior knowledge you should have; and the benefits of building Spark applications as part of an enterprise data hub.
Apache Spark in Depth: Core Concepts, Architecture & Internals - Anton Kirillov
Slides cover core concepts of Apache Spark such as RDDs, the DAG, the execution workflow, how stages of tasks are formed, and the shuffle implementation, and also describe the architecture and main components of the Spark Driver. The workshop part covers Spark execution modes and provides a link to a GitHub repo that contains Spark application examples and a dockerized Hadoop environment to experiment with.
Apache Spark is an in-memory data processing solution that can work with existing data sources like HDFS and can make use of your existing computation infrastructure such as YARN or Mesos. This talk covers a basic introduction to Apache Spark and its various components, such as MLlib, Shark, and GraphX, with a few examples.
What is Apache Spark | Apache Spark Tutorial For Beginners | Apache Spark Tra... - Edureka!
This Edureka "What is Spark" tutorial will introduce you to the big data analytics framework Apache Spark. This tutorial is ideal for beginners as well as professionals who want to learn or brush up on their Apache Spark concepts. Below are the topics covered in this tutorial:
1) Big Data Analytics
2) What is Apache Spark?
3) Why Apache Spark?
4) Using Spark with Hadoop
5) Apache Spark Features
6) Apache Spark Architecture
7) Apache Spark Ecosystem - Spark Core, Spark Streaming, Spark MLlib, Spark SQL, GraphX
8) Demo: Analyze Flight Data Using Apache Spark
This session covers how to work with the PySpark interface to develop Spark applications, from loading and ingesting data to applying transformations on it. The session covers how to work with different data sources, apply transformations, and follow Python best practices when developing Spark apps. The demo covers integrating Apache Spark apps, in-memory processing capabilities, working with notebooks, and integrating analytics tools into Spark applications.
This presentation is an introduction to Apache Spark. It covers the basic API, some advanced features and describes how Spark physically executes its jobs.
We will see an overview of Spark in Big Data. We will start with an introduction to Apache Spark programming, then move on to Spark's history. Moreover, we will learn why Spark is needed. Afterward, we will cover all the fundamental Spark components. Furthermore, we will learn about Spark's core abstraction, the RDD. For more detailed insights, we will also cover Spark features, Spark limitations, and Spark use cases.
A Tale of Three Apache Spark APIs: RDDs, DataFrames, and Datasets with Jules ... - Databricks
Of all the developers' delights, none is more attractive than a set of APIs that make developers productive, that are easy to use, and that are intuitive and expressive. Apache Spark offers these APIs across components such as Spark SQL, Streaming, Machine Learning, and Graph Processing to operate on large data sets in languages such as Scala, Java, Python, and R for doing distributed big data processing at scale. In this talk, I will explore the evolution of three sets of APIs (RDDs, DataFrames, and Datasets) available in Apache Spark 2.x. In particular, I will emphasize three takeaways: 1) why and when you should use each set as a best practice; 2) its performance and optimization benefits; and 3) scenarios when to use DataFrames and Datasets instead of RDDs for your big data distributed processing. Through simple notebook demonstrations with API code examples, you'll learn how to process big data using RDDs, DataFrames, and Datasets and interoperate among them. (This will be a vocalization of the blog, along with the latest developments in Apache Spark 2.x DataFrame/Dataset and Spark SQL APIs: https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html)
Apache Spark Data Source V2 with Wenchen Fan and Gengliang Wang - Databricks
As a general computing engine, Spark can process data from various data management/storage systems, including HDFS, Hive, Cassandra and Kafka. For flexibility and high throughput, Spark defines the Data Source API, which is an abstraction of the storage layer. The Data Source API has two requirements.
1) Generality: support reading/writing most data management/storage systems.
2) Flexibility: customize and optimize the read and write paths for different systems based on their capabilities.
Data Source API V2 is one of the most important features coming with Spark 2.3. This talk will dive into the design and implementation of Data Source API V2, comparing it with Data Source API V1. We also demonstrate how to implement a file-based data source using the Data Source API V2 to show its generality and flexibility.
Apache Spark presentation at HasGeek FifthElephant
https://fifthelephant.talkfunnel.com/2015/15-processing-large-data-with-apache-spark
Covering Big Data Overview, Spark Overview, Spark Internals and its supported libraries
A Deep Dive into the Query Execution Engine of Spark SQL - Databricks
Spark SQL enables Spark to perform efficient and fault-tolerant relational query processing with analytics database technologies. Relational queries are compiled to executable physical plans consisting of transformations and actions on RDDs, with generated Java code. The code is compiled to Java bytecode, executed at runtime by the JVM, and optimized by the JIT compiler to native machine code. This talk will take a deep dive into the Spark SQL execution engine, covering pipelined execution, whole-stage code generation, UDF execution, memory management, vectorized readers, and lineage-based RDD transformations and actions.
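To see this compilation pipeline from PySpark, one option is to print a query's plans with explain(); the sketch below is illustrative rather than taken from the talk (the DataFrame and query are made up), and physical operators prefixed with '*' are the ones running inside whole-stage generated code.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ExplainDemo").getOrCreate()

df = spark.range(0, 1000000)                       # a simple numeric DataFrame
agg = df.filter(df.id % 2 == 0).groupBy((df.id % 10).alias("bucket")).count()

# extended explain prints the parsed, analyzed, optimized, and physical plans
agg.explain(True)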
This slide deck is used as an introduction to the internals of Apache Spark, as part of the Distributed Systems and Cloud Computing course I hold at Eurecom.
Course website:
http://michiard.github.io/DISC-CLOUD-COURSE/
Sources available here:
https://github.com/michiard/DISC-CLOUD-COURSE
Apache Spark Tutorial | Spark Tutorial for Beginners | Apache Spark Training ... - Edureka!
This Edureka Spark tutorial will help you understand all the basics of Apache Spark. This Spark tutorial is ideal for beginners as well as professionals who want to learn or brush up on Apache Spark concepts. Below are the topics covered in this tutorial:
1) Big Data Introduction
2) Batch vs Real Time Analytics
3) Why Apache Spark?
4) What is Apache Spark?
5) Using Spark with Hadoop
6) Apache Spark Features
7) Apache Spark Ecosystem
8) Demo: Earthquake Detection Using Apache Spark
Slides for the Data Syndrome one-hour course on PySpark. Introduces basic operations, Spark SQL, Spark MLlib, and exploratory data analysis with PySpark. Shows how to use pylab with Spark to create histograms.
This presentation covers the curriculum for the HDPCD Spark certification using Python as the programming language. HDPCD stands for Hortonworks Data Platform Certified Developer. This scenario-based examination is one of the well-recognized Big Data developer certifications.
This is an introductory tutorial to Apache Spark, given at the Lagos Scala Meetup II. We discussed the basics of the Spark processing engine and how it relates to Hadoop MapReduce, with a little hands-on at the end of the session.
In this era of ever-growing data, the need to analyze it for meaningful business insights becomes more and more significant. There are different Big Data processing alternatives such as Hadoop, Spark, and Storm. Spark, however, is unique in providing both batch and streaming capabilities, making it a preferred choice for lightning-fast Big Data analysis platforms.
In this one-day workshop, we will introduce Spark at a high level. Spark is fundamentally different from writing MapReduce jobs, so no prior Hadoop experience is needed. You will learn how to interact with Spark on the command line and conduct rapid in-memory data analyses. We will then work on writing Spark applications to perform large cluster-based analyses including SQL-like aggregations, machine learning applications, and graph algorithms. The course will be conducted in Python using PySpark.
Spark Summit EU 2015: Lessons from 300+ production users - Databricks
At Databricks, we have a unique view into over a hundred different companies trying out Spark for development and production use cases, from their support tickets and forum posts. Having seen so many different workflows and applications, some discernible patterns emerge when looking at common performance and scalability issues that our users run into. This talk will discuss some of these common issues from an engineering and operations perspective, describing solutions and clarifying misconceptions.
Building data pipelines for modern data warehouse with Apache® Spark™ and .NE... - Michael Rys
This presentation shows how you can build solutions that follow the modern data warehouse architecture and introduces the .NET for Apache Spark support (https://dot.net/spark, https://github.com/dotnet/spark)
Real-time Analytics with Apache Kafka and Apache Spark - Rahul Jain
A presentation cum workshop on real-time analytics with Apache Kafka and Apache Spark. Apache Kafka is a distributed publish-subscribe messaging system, while Spark Streaming brings Spark's language-integrated API to stream processing, allowing you to write streaming applications very quickly and easily. It supports both Java and Scala. In this workshop we explore Apache Kafka, ZooKeeper, and Spark with a web clickstream example using Spark Streaming. A clickstream is the recording of the parts of the screen a computer user clicks on while web browsing.
This presentation is the first in a series of Apache Spark tutorials and covers the basics of the Spark framework. Subscribe to my YouTube channel for more updates: https://www.youtube.com/channel/UCNCbLAXe716V2B7TEsiWcoA
Cloudera Data Impact Awards 2021 - Finalists - Cloudera, Inc.
This annual program recognizes organizations who are moving swiftly towards the future and building innovative solutions by making what was impossible yesterday, possible today.
The winning organizations' implementations demonstrate outstanding achievements in fulfilling their mission, technical advancement, and overall impact.
The 2021 Data Impact Awards recognize organizations' achievements with the Cloudera Data Platform in seven categories:
Data Lifecycle Connection
Data for Enterprise AI
Cloud Innovation
Security & Governance Leadership
People First
Data for Good
Industry Transformation
2020 Cloudera Data Impact Awards Finalists - Cloudera, Inc.
Cloudera is proud to present the 2020 Data Impact Awards Finalists. This annual program recognizes organizations running the Cloudera platform for the applications they've built and the impact their data projects have on their organizations, their industries, and the world. Nominations were evaluated by a panel of independent thought leaders and expert industry analysts, who then selected the finalists and winners. Winners exemplify the most cutting-edge data projects and represent innovation and leadership in their respective industries.
Machine Learning with Limited Labeled Data 4/3/19 - Cloudera, Inc.
Cloudera Fast Forward Labs’ latest research report and prototype explore learning with limited labeled data. This capability relaxes the stringent labeled data requirement in supervised machine learning and opens up new product possibilities. It is industry invariant, addresses the labeling pain point and enables applications to be built faster and more efficiently.
Data Driven With the Cloudera Modern Data Warehouse 3.19.19 - Cloudera, Inc.
In this session, we will cover how to move beyond structured, curated reports based on known questions on known data, to an ad-hoc exploration of all data to optimize business processes and into the unknown questions on unknown data, where machine learning and statistically motivated predictive analytics are shaping business strategy.
Introducing Cloudera DataFlow (CDF) 2.13.19 - Cloudera, Inc.
Watch this webinar to understand how Hortonworks DataFlow (HDF) has evolved into the new Cloudera DataFlow (CDF). Learn about key capabilities that CDF delivers such as -
-Powerful data ingestion powered by Apache NiFi
-Edge data collection by Apache MiNiFi
-IoT-scale streaming data processing with Apache Kafka
-Enterprise services to offer unified security and governance from edge-to-enterprise
Introducing Cloudera Data Science Workbench for HDP 2.12.19 - Cloudera, Inc.
Cloudera’s Data Science Workbench (CDSW) is available for Hortonworks Data Platform (HDP) clusters for secure, collaborative data science at scale. During this webinar, we provide an introductory tour of CDSW and a demonstration of a machine learning workflow using CDSW on HDP.
Shortening the Sales Cycle with a Modern Data Warehouse 1.30.19 - Cloudera, Inc.
Join Cloudera as we outline how we use Cloudera technology to strengthen sales engagement, minimize marketing waste, and empower line of business leaders to drive successful outcomes.
Leveraging the cloud for analytics and machine learning 1.29.19 - Cloudera, Inc.
Learn how organizations are deriving unique customer insights, improving product and services efficiency, and reducing business risk with a modern big data architecture powered by Cloudera on Azure. In this webinar, you see how fast and easy it is to deploy a modern data management platform—in your cloud, on your terms.
Modernizing the Legacy Data Warehouse – What, Why, and How 1.23.19 - Cloudera, Inc.
Join us to learn about the challenges of legacy data warehousing, the goals of modern data warehousing, and the design patterns and frameworks that help to accelerate modernization efforts.
Leveraging the Cloud for Big Data Analytics 12.11.18 - Cloudera, Inc.
Learn how organizations are deriving unique customer insights, improving product and services efficiency, and reducing business risk with a modern big data architecture powered by Cloudera on AWS. In this webinar, you see how fast and easy it is to deploy a modern data management platform—in your cloud, on your terms.
Explore new trends and use cases in data warehousing including exploration and discovery, self-service ad-hoc analysis, predictive analytics and more ways to get deeper business insight. Modern Data Warehousing Fundamentals will show how to modernize your data warehouse architecture and infrastructure for benefits to both traditional analytics practitioners and data scientists and engineers.
Extending Cloudera SDX beyond the Platform - Cloudera, Inc.
Cloudera SDX is by no means restricted to just the platform; it extends well beyond it. In this webinar, we show you how Bardess Group's Zero2Hero solution leverages the shared data experience to coordinate Cloudera, Trifacta, and Qlik to deliver complete customer insight.
Federated Learning: ML with Privacy on the Edge 11.15.18 - Cloudera, Inc.
Join Cloudera Fast Forward Labs Research Engineer, Mike Lee Williams, to hear about their latest research report and prototype on Federated Learning. Learn more about what it is, when it’s applicable, how it works, and the current landscape of tools and libraries.
Analyst Webinar: Doing a 180 on Customer 360 - Cloudera, Inc.
451 Research Analyst Sheryl Kingstone, and Cloudera’s Steve Totman recently discussed how a growing number of organizations are replacing legacy Customer 360 systems with Customer Insights Platforms.
Build a modern platform for anti-money laundering 9.19.18 - Cloudera, Inc.
In this webinar, you will learn how Cloudera and BAH riskCanvas can help you build a modern AML platform that reduces false positive rates, investigation costs, technology sprawl, and regulatory risk.
Introducing the data science sandbox as a service 8.30.18 - Cloudera, Inc.
How can companies integrate data science into their businesses more effectively? Watch this recorded webinar and demonstration to hear more about operationalizing data science with Cloudera Data Science Workbench on Cazena’s fully-managed cloud platform.
2. Agenda
Cloudera's Learning Path for Developers
Target Audience and Prerequisites
Course Outline
Short Presentation Based on Actual Course Material
Question and Answer Session
3. Learning Path: Developers
Create Powerful New Data Processing Tools
Learn to code and write MapReduce programs for production
Master advanced API topics required for real-world data analysis
Design schemas to minimize latency on massive data sets
Scale hundreds of thousands of operations per second
Implement recommenders and data experiments
Draw actionable insights from analysis of disparate data
Build converged applications using multiple processing engines
Develop enterprise solutions using components across the EDH
Combine batch and stream processing with interactive analytics
Optimize applications for speed, ease of use, and sophistication
Spark Training
Big Data Applications
HBase Training
Intro to Data Science
Developer Training
Aaron T. Myers
Software Engineer
4. Why Cloudera Training?
Aligned to Best Practices and the Pace of Change
1 Broadest Range of Courses - Developer, Admin, Analyst, HBase, Data Science
2 Most Experienced Instructors - More than 20,000 students trained since 2009
3 Leader in Certification - Over 8,000 accredited Cloudera professionals
4 Trusted Source for Training - 100,000+ people have attended online courses
5 State of the Art Curriculum - Courses updated as Hadoop evolves
6 Widest Geographic Coverage - Most classes offered: 50 cities worldwide plus online
7 Most Relevant Platform & Community - CDH deployed more than all other distributions combined
8 Depth of Training Material - Hands-on labs and VMs support live instruction
9 Ongoing Learning - Video tutorials and e-learning complement training
10 Commitment to Big Data Education - University partnerships to teach Hadoop in the classroom
6. Intended for people who write code, such as
–Software Engineers
–Data Engineers
–ETL Developers
Target Audience
7. No prior knowledge of Spark, Hadoop or distributed programming
concepts is required
Course Prerequisites
8. No prior knowledge of Spark, Hadoop or distributed programming
concepts is required
Requirements
–Basic familiarity with Linux or Unix
Course Prerequisites
$ mkdir /data
$ cd /data
$ rm /home/johndoe/salesreport.txt
9. No prior knowledge of Spark, Hadoop or distributed programming
concepts is required
Requirements
–Basic familiarity with Linux or Unix
–Intermediate-level programming skills in either Scala or Python
Course Prerequisites
$ mkdir /data
$ cd /data
$ rm /home/johndoe/salesreport.txt
10. Example of Required Scala Skill Level
Do you understand the following code? Could you write something
similar?
object Maps {
val colors = Map("red" -> 0xFF0000,
"turquoise" -> 0x00FFFF,
"black" -> 0x000000,
"orange" -> 0xFF8040,
"brown" -> 0x804000)
def main(args: Array[String]) {
for (name <- args) println(
colors.get(name) match {
case Some(code) =>
name + " has code: " + code
case None =>
"Unknown color: " + name
}
)
}
}
11. Example of Required Python Skill Level
Do you understand the following code? Could you write something
similar?
import sys
def parsePurchases(s):
return s.split(',')
if __name__ == "__main__":
if len(sys.argv) < 2:
print "Usage: SumPrices <products>"
exit(-1)
prices = {'apple': 0.40, 'banana': 0.50, 'orange': 0.10}
total = sum(prices[fruit]
for fruit in parsePurchases(sys.argv[1]))
print 'Total: $%.2f' % total
12. Getting started with Scala
–www.scala-lang.org
Practicing Scala or Python
13. Getting started with Scala
–www.scala-lang.org
Getting started with Python
–python.org
–developers.google.com/edu/python
–and many more
Practicing Scala or Python
18. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
Course Outline
19. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
6. Running Spark on a Cluster
Course Outline
20. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
6. Running Spark on a Cluster
7. Parallel Programming with Spark
Course Outline
21. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
6. Running Spark on a Cluster
7. Parallel Programming with Spark
8. Caching and Persistence
Course Outline
22. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
6. Running Spark on a Cluster
7. Parallel Programming with Spark
8. Caching and Persistence
9. Writing Spark Applications
Course Outline
23. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
6. Running Spark on a Cluster
7. Parallel Programming with Spark
8. Caching and Persistence
9. Writing Spark Applications
10. Spark Streaming
Course Outline
24. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
6. Running Spark on a Cluster
7. Parallel Programming with Spark
8. Caching and Persistence
9. Writing Spark Applications
10. Spark Streaming
11. Common Patterns in Spark
Programming
Course Outline
25. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
6. Running Spark on a Cluster
7. Parallel Programming with Spark
8. Caching and Persistence
9. Writing Spark Applications
10. Spark Streaming
11. Common Patterns in Spark
Programming
12. Improving Spark Performance
Course Outline
26. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
6. Running Spark on a Cluster
7. Parallel Programming with Spark
8. Caching and Persistence
9. Writing Spark Applications
10. Spark Streaming
11. Common Patterns in Spark
Programming
12. Improving Spark Performance
13. Spark, Hadoop and the Enterprise
Data Center
Course Outline
27. 1. Introduction
2. What is Spark?
3. Spark Basics
4. Working with RDDs
5. The Hadoop Distributed File
System
6. Running Spark on a Cluster
7. Parallel Programming with Spark
8. Caching and Persistence
9. Writing Spark Applications
10. Spark Streaming
11. Common Patterns in Spark
Programming
12. Improving Spark Performance
13. Spark, Hadoop and the Enterprise
Data Center
14. Conclusion
Course Outline
28. Based on
–Chapter 3: Spark Basics
–Chapter 4: Working with RDDs
Course Excerpt
29. Based on
–Chapter 3: Spark Basics
–Chapter 4: Working with RDDs
Topics
–What is Spark?
–The components of a distributed data processing system
–Intro to the Spark Shell
–Resilient Distributed Datasets
–RDD operations
–Example: WordCount
Course Excerpt
30. Apache Spark is a fast, general engine for large-scale data
processing and analysis
–Open source, developed at UC Berkeley
Written in Scala
–Functional programming language that runs in a JVM
What is Apache Spark?
31. Apache Spark is a fast, general engine for large-scale data
processing and analysis
–Open source, developed at UC Berkeley
Written in Scala
–Functional programming language that runs in a JVM
Key Concepts
–Avoid the data bottleneck by distributing data when it is
stored
–Bring the processing to the data
–Data stored in memory
What is Apache Spark?
33. Distributed Processing with the Spark Framework
API
Cluster Computing
Spark
• Spark Standalone
• YARN
• Mesos
34. Distributed Processing with the Spark Framework
API
Cluster Computing Storage
Spark
• Spark Standalone
• YARN
• Mesos
HDFS
(Hadoop Distributed File
System)
35. Spark Shell
–Interactive REPL – for learning or data exploration
–Python or Scala
Spark Applications
–For large scale data processing
–Python, Java or Scala
What is Apache Spark?
$ pyspark
Welcome to
____ __
/ __/__ ___ _____/ /__
_ / _ / _ `/ __/ '_/
/__ / .__/_,_/_/ /_/_ version 0.9.1
/_/
Using Python version 2.6.6 (r266:84292, Jan
22 2014 09:42:36)
Spark context available as sc.
>>>
$ spark-shell
Welcome to
____ __
/ __/__ ___ _____/ /__
_ / _ / _ `/ __/ '_/
/___/ .__/_,_/_/ /_/_ version 0.9.1
/_/
Using Scala version 2.10.3 (Java HotSpot(TM)
64-Bit Server VM, Java 1.7.0_51)
Created spark context..
Spark context available as sc.
scala>
Scala Shell
Python Shell
36. Every Spark application requires a Spark Context
–The main entry point to the Spark API
Spark Shell provides a preconfigured Spark Context called sc
Spark Context
>>> sc.appName
u'PySparkShell'
scala> sc.appName
res0: String = Spark shell
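Outside the shell there is no preconfigured sc, so a standalone application creates its own Spark Context. A minimal PySpark sketch, not part of the original slides (the application name and master URL are illustrative):
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("MyApp").setMaster("local[*]")
sc = SparkContext(conf=conf)    # the main entry point to the Spark API

print(sc.appName)               # MyApp

sc.stop()                       # release the context when the application is done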
37. RDD (Resilient Distributed Dataset)
–Resilient – if data in memory is lost, it can be
recreated
–Distributed – stored in memory across the cluster
–Dataset – initial data can come from a file or created
programmatically
RDDs are the fundamental unit of data in Spark
Most of Spark programming is performing operations on
RDDs
RDD (Resilient Distributed Dataset)
data
data
data
data…
RDD
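A quick illustration of creating an RDD programmatically in the shell (a minimal sketch, not from the original slides; the data is made up):
> nums = sc.parallelize([1, 2, 3, 4])  # distribute a local Python collection
> nums.count()                         # 4
> nums.collect()                       # [1, 2, 3, 4], gathered back to the driver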
38. I've never seen a purple cow.
I never hope to see one;
But I can tell you, anyhow,
I'd rather see than be one.
Example: A File-based RDD
I've never seen a purple
cow.
I never hope to see one;
But I can tell you, anyhow,
I'd rather see than be one.
File: purplecow.txt
RDD: mydata
> mydata = sc.textFile("purplecow.txt")
39. I've never seen a purple cow.
I never hope to see one;
But I can tell you, anyhow,
I'd rather see than be one.
Example: A File-based RDD
I've never seen a purple
cow.
I never hope to see one;
But I can tell you, anyhow,
I'd rather see than be one.
File: purplecow.txt
RDD: mydata
> mydata = sc.textFile("purplecow.txt")
> mydata.count()
4
40. Two types of RDD operations
–Actions – return values
–count
–take(n)
RDD Operations
value
RDD
41. Two types of RDD operations
–Actions – return values
–count
–take(n)
–Transformations – define new RDDs
based on the current one
–filter
–map
–reduce
RDD Operations
value
RDD
Base RDD → New RDD
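Putting the two kinds of operations together in the shell (a minimal sketch, not from the original slides, reusing purplecow.txt):
> mydata = sc.textFile("purplecow.txt")
> mydata.count()                       # action: number of lines (4)
> mydata.take(2)                       # action: the first two lines as a list
> upper = mydata.map(lambda line: line.upper())  # transformation: defines a new RDD
> upper.count()                        # transformations only run when an action needs them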
42. I've never seen a purple cow.
I never hope to see one;
But I can tell you, anyhow,
I'd rather see than be one.
Example: map and filter Transformations
43. I've never seen a purple cow.
I never hope to see one;
But I can tell you, anyhow,
I'd rather see than be one.
I'VE NEVER SEEN A PURPLE COW.
I NEVER HOPE TO SEE ONE;
BUT I CAN TELL YOU, ANYHOW,
I'D RATHER SEE THAN BE ONE.
Example: map and filter Transformations
map(lambda line: line.upper()) map(line => line.toUpperCase())
44. I've never seen a purple cow.
I never hope to see one;
But I can tell you, anyhow,
I'd rather see than be one.
I'VE NEVER SEEN A PURPLE COW.
I NEVER HOPE TO SEE ONE;
BUT I CAN TELL YOU, ANYHOW,
I'D RATHER SEE THAN BE ONE.
Example: map and filter Transformations
I'VE NEVER SEEN A PURPLE COW.
I NEVER HOPE TO SEE ONE;
I'D RATHER SEE THAN BE ONE.
filter(lambda line: line.startswith('I'))
map(lambda line: line.upper()) map(line => line.toUpperCase())
filter(line => line.startsWith('I'))
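The same transformations can be chained into a single statement and materialized with an action; a minimal sketch, not part of the original slides:
> result = (sc.textFile("purplecow.txt")
            .map(lambda line: line.upper())
            .filter(lambda line: line.startswith('I'))
            .collect())                # returns the three matching lines, upper-cased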
45. RDDs can hold any type of element
–Primitive types: integers, characters, booleans, strings, etc.
–Sequence types: lists, arrays, tuples, dicts, etc. (including nested)
–Scala/Java Objects (if serializable)
–Mixed types
RDDs
46. RDDs can hold any type of element
–Primitive types: integers, characters, booleans, strings, etc.
–Sequence types: lists, arrays, tuples, dicts, etc. (including nested)
–Scala/Java Objects (if serializable)
–Mixed types
Some types of RDDs have additional functionality
–Double RDDs – RDDs consisting of numeric data
–Pair RDDs – RDDs consisting of Key-Value pairs
RDDs
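For example, a Double RDD (an RDD of numbers) gains extra numeric operations; a small sketch, not from the original slides, with made-up data:
> nums = sc.parallelize([1.0, 2.0, 3.0, 4.0])
> nums.sum()     # 10.0
> nums.mean()    # 2.5
> nums.stats()   # count, mean, stdev, min, and max in a single pass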
47. Pair RDDs are a special form of RDD
–Each element must be a key-value pair (a two-
element tuple)
–Keys and values can be any type
Pair RDDs
(key1,value1)
(key2,value2)
(key3,value3)
…
Pair RDD
48. Pair RDDs are a special form of RDD
–Each element must be a key-value pair (a two-
element tuple)
–Keys and values can be any type
Why?
–Use with Map-Reduce algorithms
–Many additional functions are available for
common data processing needs
–E.g. sorting, joining, grouping, counting, etc.
Pair RDDs
(key1,value1)
(key2,value2)
(key3,value3)
…
Pair RDD
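A brief sketch of those pair RDD functions in practice (not from the original slides; the data is made up):
> sales = sc.parallelize([("apple", 2), ("banana", 1), ("apple", 3)])
> prices = sc.parallelize([("apple", 0.40), ("banana", 0.50)])
> totals = sales.reduceByKey(lambda v1, v2: v1 + v2)  # sum the values for each key
> totals.sortByKey().collect()           # [('apple', 5), ('banana', 1)]
> totals.join(prices).collect()          # [('apple', (5, 0.4)), ('banana', (1, 0.5))]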
49. MapReduce is a common programming model
–Two phases
–Map – process each element in a data set
–Reduce – aggregate or consolidate the data
–Easily applicable to distributed processing of large data sets
MapReduce
50. MapReduce is a common programming model
–Two phases
–Map – process each element in a data set
–Reduce – aggregate or consolidate the data
–Easily applicable to distributed processing of large data sets
Hadoop MapReduce is the major implementation
–Limited
–Each job has just one Map phase and one Reduce phase
–Job output saved to files
MapReduce
51. MapReduce is a common programming model
–Two phases
–Map – process each element in a data set
–Reduce – aggregate or consolidate the data
–Easily applicable to distributed processing of large data sets
Hadoop MapReduce is the major implementation
–Limited
–Each job has just one Map phase and one Reduce phase
–Job output saved to files
Spark implements MapReduce with much greater flexibility
–Map and Reduce functions can be interspersed
–Results stored in memory
–Operations can be chained easily
MapReduce
52. MapReduce Example: Word Count
the cat sat on the mat
the aardvark sat on the sofa
Input Data
Result
aardvark 1
cat 1
mat 1
on 2
sat 2
sofa 1
the 4
?
53. Example: Word Count
> counts = sc.textFile(file)
the cat sat on the
mat
the aardvark sat on
the sofa
54. Example: Word Count
> counts = sc.textFile(file)
.flatMap(lambda line: line.split())
the cat sat on the
mat
the aardvark sat on
the sofa
the
cat
sat
on
the
mat
the
aardvark
sat
…
55. Example: Word Count
> counts = sc.textFile(file)
.flatMap(lambda line: line.split())
.map(lambda word: (word,1))
the cat sat on the
mat
the aardvark sat on
the sofa
(the, 1)
(cat, 1)
(sat, 1)
(on, 1)
(the, 1)
(mat, 1)
(the, 1)
(aardvark, 1)
(sat, 1)
…
the
cat
sat
on
the
mat
the
aardvark
sat
…
Key-Value Pairs
56. Example: Word Count
> counts = sc.textFile(file)
.flatMap(lambda line: line.split())
.map(lambda word: (word,1))
.reduceByKey(lambda v1,v2: v1+v2)
(aardvark, 1)
(cat, 1)
(mat, 1)
(on, 2)
(sat, 2)
(sofa, 1)
(the, 4)
the cat sat on the
mat
the aardvark sat on
the sofa
(the, 1)
(cat, 1)
(sat, 1)
(on, 1)
(the, 1)
(mat, 1)
(the, 1)
(aardvark, 1)
(sat, 1)
…
the
cat
sat
on
the
mat
the
aardvark
sat
…
57. Example: Word Count
> counts = sc.textFile(file)
.flatMap(lambda line: line.split())
.map(lambda word: (word,1))
.reduceByKey(lambda v1,v2: v1+v2)
(aardvark, 1)
(cat, 1)
(mat, 1)
(on, 2)
(sat, 2)
(sofa, 1)
(the, 4)
the cat sat on the
mat
the aardvark sat on
the sofa
(the, 1)
(cat, 1)
(sat, 1)
(on, 1)
(the, 1)
(mat, 1)
(the, 1)
(aardvark, 1)
(sat, 1)
…
the
cat
sat
on
the
mat
the
aardvark
sat
…
58. Example: Word Count
> counts = sc.textFile(file)
.flatMap(lambda line: line.split())
.map(lambda word: (word,1))
.reduceByKey(lambda v1,v2: v1+v2)
(aardvark, 1)
(cat, 1)
(mat, 1)
(on, 2)
(sat, 2)
(sofa, 1)
(the, 4)
the cat sat on the
mat
the aardvark sat on
the sofa
(the, 1)
(cat, 1)
(sat, 1)
(on, 1)
(the, 1)
(mat, 1)
(the, 1)
(aardvark, 1)
(sat, 1)
…
the
cat
sat
on
the
mat
the
aardvark
sat
…
63. Spark takes the concepts of
MapReduce to the next level
–Higher level API = faster, easier
development
Spark v. Hadoop MapReduce
64. Spark takes the concepts of
MapReduce to the next level
–Higher level API = faster, easier
development
Spark v. Hadoop MapReduce
public class WordCount {
public static void main(String[] args) throws Exception {
Job job = new Job();
job.setJarByClass(WordCount.class);
job.setJobName("Word Count");
FileInputFormat.setInputPaths(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.setMapperClass(WordMapper.class);
job.setReducerClass(SumReducer.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
boolean success = job.waitForCompletion(true);
System.exit(success ? 0 : 1);
}
}
public class WordMapper extends Mapper<LongWritable, Text, Text,
IntWritable> {
public void map(LongWritable key, Text value,
Context context) throws IOException, InterruptedException {
String line = value.toString();
for (String word : line.split("\\W+")) {
if (word.length() > 0)
context.write(new Text(word), new IntWritable(1));
}
}
}
public class SumReducer extends Reducer<Text, IntWritable, Text,
IntWritable> {
public void reduce(Text key, Iterable<IntWritable>
values, Context context) throws IOException, InterruptedException {
int wordCount = 0;
for (IntWritable value : values) {
wordCount += value.get();
}
context.write(key, new IntWritable(wordCount));
}
}
> counts = sc.textFile(file)
.flatMap(lambda line: line.split())
.map(lambda word: (word,1))
.reduceByKey(lambda v1,v2: v1+v2)
> counts.saveAsTextFile(output)
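The same pipeline can also be packaged as a standalone application instead of being typed into the shell. A minimal sketch, not part of the original slides (file paths and the application name are illustrative); in recent Spark versions such a script is launched with spark-submit:
from pyspark import SparkConf, SparkContext

if __name__ == "__main__":
    sc = SparkContext(conf=SparkConf().setAppName("WordCount"))

    counts = (sc.textFile("input.txt")
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda v1, v2: v1 + v2))

    counts.saveAsTextFile("counts_out")  # writes one part file per partition
    sc.stop()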
65. Spark takes the concepts of
MapReduce to the next level
–Higher level API = faster, easier
development
–Low latency = near real-time
processing
Spark v. Hadoop MapReduce
66. Spark takes the concepts of
MapReduce to the next level
–Higher level API = faster, easier
development
–Low latency = near real-time
processing
–In-memory data storage = up to
100x performance improvement
Spark v. Hadoop MapReduce
Logistic Regression
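Iterative algorithms such as logistic regression are where the in-memory model pays off: the working data set is read many times, so caching it avoids re-reading and re-parsing it on every pass. A minimal sketch, not from the original deck (the file name and computation are illustrative):
points = (sc.textFile("points.txt")
            .map(lambda line: [float(x) for x in line.split(',')]))
points.cache()    # keep the parsed records in memory across iterations

for i in range(10):
    # each pass reuses the cached RDD instead of re-reading the file
    total = points.map(lambda p: sum(p)).reduce(lambda a, b: a + b)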
67.
68. Thank you for attending!
• Submit questions in the Q&A panel
• Follow Cloudera University @ClouderaU
• Follow Diana on GitHub:
https://github.com/dianacarroll
• Follow the Developer learning path:
http://university.cloudera.com/developers
• Learn about the enterprise data hub:
http://tinyurl.com/edh-webinar
• Join the Cloudera user community:
http://community.cloudera.com/
Register now for Cloudera training at
http://university.cloudera.com
Use discount code Spark_10 to save 10%
on new enrollments in Spark Developer
Training classes delivered by Cloudera
until October 3, 2014*
Use discount code 15off2 to save 15% on
enrollments in two or more training
classes delivered by Cloudera until
October 3, 2014*
* Excludes classes sold or delivered by Cloudera partners
Editor's Notes
As I said, Python is another option. Take a look at this simple program, which takes a list of products purchased from the command line, and calculates the total cost of the purchase.
Again, if this syntax doesn’t make sense to you, you will need to get more familiar with Python before you take the course. In the course, you need to be comfortable with defining functions, working with lists and arrays, parsing strings and so on.
If you don’t yet have the programming skills to take this course, a good place to start learning Scala is the official Scala site, scala-lang.org, which offers extensive documentation, including overviews and a series of tutorials geared toward Java developers. The site also has pointers to other resources, such as a Coursera course and several good books.
There’s an even richer set of resources for learning Python, including tutorials at python.org, as well as many other tutorial sites and online classes. One particularly useful resource for experienced programmers is Google’s Python class for developers. And of course, there are many Python books available from O’Reilly and other respected publishers.
Note that Spark uses Python 2.6 or 2.7, so if you are new to Python, focus your learning on Python 2 instead of 3.
Now let’s turn our attention to what you will actually learn in the class.
[CLICK]
After a brief introduction, [CLICK]
Chapter 2 is “What is Spark?” As I said, no experience with Spark or distributed processing is required, so we start at the beginning: what is Spark and why would you want to use it? What problems does it solve and what kind of use cases might you want to use it for?
[CLICK]
Then in Chapter 3 we move on to actually using Spark. We introduce the concept of Resilient Distributed Datasets, or RDDs, which is the core concept in Spark development, and briefly cover the principles of Functional Programming as used in Spark. In the hands-on exercises, you’ll learn how to start the Spark interactive shell and load data from a file into an RDD.
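For orientation, that first exercise looks roughly like this in the PySpark shell (the file name below is a placeholder):
# In the PySpark shell, a SparkContext is already available as sc
mydata = sc.textFile("somefile.txt")    # placeholder file name; creates an RDD of lines
mydata.count()                          # action: number of lines in the file
mydata.take(2)                          # action: the first two lines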
[CLICK]
In Chapter 4 we look more deeply at RDDs: how to perform operations to transform them and extract data from them. You will learn about Map-Reduce, a programming model for parallel processing of large data sets, and compare Spark’s MapReduce implementation with Hadoop’s. In the exercises, you will work with a set of Apache web server log files: loading them into an RDD, parsing and filtering the data, and aggregating, joining and reporting on the data.
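A sketch of the flavor of that log work, assuming simple space-delimited Apache log lines with the client IP in the first field (an assumption, not necessarily the course's exact format):
logs = sc.textFile("weblogs/")                               # placeholder path
jpg_requests = logs.filter(lambda line: ".jpg" in line)      # keep only requests for .jpg files
hits_by_ip = (logs.map(lambda line: (line.split()[0], 1))    # (IP address, 1) per request
                  .reduceByKey(lambda a, b: a + b))          # count hits per IP address
hits_by_ip.take(5)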
[CLICK]
In Chapter 5, we introduce the Hadoop Distributed File System, or HDFS, which provides the distributed storage layer Spark uses to read and save data in a cluster. The course virtual machines include a running HDFS cluster, so in the exercises you will have a chance to import and export data using both the command line and a Spark application.
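Reading and writing HDFS from Spark is simply a matter of using HDFS paths; the namenode host, port, and directories below are placeholders:
# Importing a local file into HDFS from the command line (outside Spark):
#   hdfs dfs -put weblogs.txt /user/training/weblogs.txt
logs = sc.textFile("hdfs://localhost:8020/user/training/weblogs.txt")
logs.filter(lambda line: " 404 " in line) \
    .saveAsTextFile("hdfs://localhost:8020/user/training/404s")   # export results back to HDFS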
[CLICK]
Chapter 6 gives an overview of how a Spark application distributes processing on a cluster using a supported clustering platform, such as YARN, Mesos, or the Spark Standalone framework included with Spark. You will learn about different deployment options for a Spark application, and in the exercises you will start a Spark Standalone cluster on your virtual machine, start the Spark Shell on the cluster, and use the Spark Standalone web UI to explore the cluster.
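The choice of clustering platform shows up mostly in the master URL the application is given; a brief sketch, with placeholder host names:
from pyspark import SparkConf, SparkContext

# Pick one master depending on where the job should run:
#   "spark://master:7077"  -- Spark Standalone cluster
#   "yarn-client"          -- Hadoop YARN (Spark 1.x syntax)
#   "local[*]"             -- all cores on the local machine (handy on the course VM)
conf = SparkConf().setAppName("DeploymentDemo").setMaster("spark://master:7077")
sc = SparkContext(conf=conf)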
[CLICK]
The next chapter goes deeper into clustered computing. We will cover how Spark partitions RDDs by storing data in memory on multiple nodes in the cluster…and how it distributes parallel tasks to process that data on the node where it is stored. In the exercises you will explore data partitioning, and use the Spark Application UI to better understand how Spark executes tasks in a cluster.
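A quick way to see partitioning from the shell (method names are from the PySpark RDD API; exact behavior depends on the Spark version used in the course):
logs = sc.textFile("weblogs/", minPartitions=4)   # ask for at least 4 partitions
logs.getNumPartitions()                           # how many partitions Spark actually created
bigger = logs.repartition(8)                      # redistribute the data (causes a shuffle)
bigger.getNumPartitions()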
[CLICK]
In Chapter 8, we cover one of Spark’s unique features – the ability to cache distributed data locally, either in memory or on disk, for great improvements in performance. You will also learn about what makes RDDs “resilient”: how Spark uses “lineage” to recreate the data as needed if a node is lost.
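Roughly what caching and lineage look like in practice, with a placeholder file:
words = sc.textFile("somefile.txt").flatMap(lambda line: line.split())
words.cache()                # keep the computed RDD in memory after the first action
words.count()                # first action: read from the file, then cached
words.count()                # second action: served from the in-memory cache
words.toDebugString()        # shows the lineage Spark would replay if a partition were lost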
[CLICK]
Chapter 9 teaches how to write and configure a Spark application from scratch. In the exercises, you will build a Spark application in either Scala or Python, configure different application properties, and submit the application to run on the cluster.
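In Python, the skeleton of such an application looks something like this (application name, paths, property values, and the submit command are placeholders):
# countjpgs.py -- outline of a self-contained Spark application
import sys
from pyspark import SparkConf, SparkContext

if __name__ == "__main__":
    conf = SparkConf().setAppName("CountJPGs").set("spark.ui.port", "4141")   # example property
    sc = SparkContext(conf=conf)
    count = sc.textFile(sys.argv[1]).filter(lambda line: ".jpg" in line).count()
    print("Number of JPG requests: %d" % count)
    sc.stop()

# Submitted to a cluster with something like:
#   spark-submit --master spark://master:7077 countjpgs.py /user/training/weblogs/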
[CLICK]
Chapter 10 introduces one of the most exciting parts of the Spark ecosystem, Spark Streaming, which allows you to use Spark to process streaming data in near real-time, from sources such as application logs and social media feeds. In the exercises, you will write a Spark Streaming application to process data from a stream of web server logs.
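For orientation, a word count over a socket stream in the Python streaming API looks roughly like this (the Python streaming API arrived after the earliest Spark releases, so the course exercises may use Scala instead; host and port are placeholders):
from pyspark.streaming import StreamingContext

ssc = StreamingContext(sc, 5)                         # 5-second batches
lines = ssc.socketTextStream("localhost", 9999)       # placeholder source
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()                                       # print a sample of each batch
ssc.start()
ssc.awaitTermination()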
[CLICK]
In the next chapter we discuss common patterns in Spark programming, with a particular focus on implementing iterative algorithms in Spark, which is one of Spark’s special strong points. We will explore page ranking as a common iterative task, as well as briefly introduce Spark’s machine learning and graph-processing add-ons: MLlib and GraphX. In the labs, you will use Spark to implement an iterative calculation of k-means on location data.
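The MLlib route to a k-means clustering like the one described would look something like this sketch (the input format, path, and settings are assumptions):
from pyspark.mllib.clustering import KMeans

# Assume each input line is "latitude,longitude"
points = sc.textFile("devicestatus/") \
           .map(lambda line: [float(x) for x in line.split(",")])
model = KMeans.train(points, k=5, maxIterations=10)   # 5 clusters, assumed settings
print(model.clusterCenters)                           # the learned cluster centers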
[CLICK]
In Chapter 12, you will learn how to diagnose and fix common performance issues in Spark applications using techniques such as shared variables, serialization and data partitioning.
In the exercises, you will practice using broadcast variables to avoid expensive join operations.
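The broadcast-variable pattern mentioned here, in sketch form (the lookup table and paths are illustrative):
# Small lookup table, shipped once to each executor instead of joined as an RDD
device_models = {"1": "Sorrento", "2": "Titanic", "3": "MeeToo"}   # illustrative data
bcast = sc.broadcast(device_models)

logs = sc.textFile("devicelogs/")                                  # placeholder path

def add_model(line):
    device_id = line.split()[0]
    # Map-side "join": look up the model in the broadcast dict, no shuffle needed
    return (device_id, bcast.value.get(device_id, "unknown"))

with_models = logs.map(add_model)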
[CLICK]
Finally, in Chapter 13 you will learn how to use Spark in the context of a production data center. We will discuss how Spark complements existing Hadoop MapReduce applications, and explore how Spark applications work with other components of the Hadoop ecosystem such as Sqoop, Flume, HBase and Impala. In the final exercises before the course conclusion, [CLICK]
you’ll practice extracting data from a relational database using Sqoop and using that data in Spark.
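The shape of that final exercise, with placeholder connection details, table names, and paths:
# 1) Import a table from a relational database into HDFS with Sqoop (run from a shell):
#    sqoop import --connect jdbc:mysql://localhost/mydb \
#        --username training --password training \
#        --table accounts --target-dir /user/training/accounts \
#        --fields-terminated-by '\t'
#
# 2) Use the imported files from Spark:
accounts = sc.textFile("/user/training/accounts") \
             .map(lambda line: line.split("\t"))
accounts_by_id = accounts.map(lambda fields: (fields[0], fields))   # key by account ID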