Have you recently started working with Spark and your jobs are taking forever to finish? This presentation is for you.
Himanshu Arora and Nitya Nand Yadav have gathered numerous best practices, optimizations and tweaks that they have applied over the years in production to make their jobs faster and less resource-hungry.
In this presentation, they teach us advanced Spark optimization techniques, data serialization formats, storage formats, hardware tuning, control over parallelism, resource manager settings, better data locality and GC tuning, among others.
They also show us the appropriate use of RDD, DataFrame and Dataset so as to fully benefit from Spark's internal optimizations.
These slides show how to reduce latency on websites and reduce bandwidth for an improved user experience.
Topics covered: network, compression, caching, ETags, application optimisation, sphinxsearch, memcache, and DB optimisation.
From common errors seen in running Spark applications (e.g., OutOfMemory, NoClassFound, disk I/O bottlenecks, History Server crashes, cluster under-utilization) to advanced settings used to resolve large-scale Spark SQL workloads (HDFS block size vs. Parquet block size, how best to run the HDFS Balancer to redistribute file blocks, etc.), you will get the full scoop in this information-packed presentation.
The Best and Worst of Cassandra-stress Tool (Christopher Batey, The Last Pickle) - DataStax
Making sure your data model will work on the production cluster after 6 months as well as it does on your laptop is an important skill. It's one that we use every day with our clients at The Last Pickle, and one that relies on tools like cassandra-stress. Knowing how the data model will perform under stress once it has been loaded with data can prevent expensive rewrites late in the project.
In this talk Christopher Batey, Consultant at The Last Pickle, will shed some light on how to use the cassandra-stress tool to test your own schema, graph the results, and even extend the tool for your own use cases. While this might be called premature optimisation for an RDBMS, a successful Cassandra project depends on its data model.
About the Speaker
Christopher Batey Consultant / Software Engineer, The Last Pickle
Christopher (@chbatey) is a part-time consultant at The Last Pickle, where he works with clients to help them succeed with Apache Cassandra, as well as a freelance software engineer working in London. Likes: Scala, Haskell, Java, the JVM, Akka, distributed databases, XP, TDD, Pairing. Hates: untested software, code ownership. You can check out his blog at: http://www.batey.info
Building a highly scalable website requires an understanding of the core building blocks of your application environment. In this talk we dive into Jahia's core components to understand how they interact, and how, by (1) respecting a few architectural practices and (2) fine-tuning Jahia components and the JVM, you will be able to build a highly scalable service.
This presentation provides an overview of the Dell PowerEdge R730xd server performance results with Red Hat Ceph Storage. It covers the advantages of using Red Hat Ceph Storage on Dell servers with their proven hardware components that provide high scalability, enhanced ROI cost benefits, and support of unstructured data.
What’s the Best PostgreSQL High Availability Framework? PAF vs. repmgr vs. Patroni - ScaleGrid.io
Compare top PostgreSQL high availability frameworks - PostgreSQL Automatic Failover (PAF), Replication Manager (repmgr) and Patroni to improve your app uptime. ScaleGrid blog - https://scalegrid.io/blog/whats-the-best-postgresql-high-availability-framework-paf-vs-repmgr-vs-patroni-infographic/
Red Hat Enterprise Linux OpenStack Platform on Inktank Ceph Enterprise - Red_Hat_Storage
This session describes how to get the most out of OpenStack Cinder volumes on Ceph.
We’ll discuss:
Performance configuration, tuning, and workloads.
Performance test results of Red Hat Enterprise Linux OpenStack Platform 5, Red Hat Enterprise Linux OpenStack Platform 6, Red Hat Ceph Storage 1.2.3, and Firefly.
Anticipated improvements in performance for Red Hat Ceph Storage 1.3.
In 40 minutes the audience will learn a variety of ways to make a PostgreSQL database suddenly run out of memory on a box with half a terabyte of RAM.
Developers' and DBAs' best practices for preventing this will also be discussed, along with a bit of Postgres and Linux memory-management internals.
Cassandra Community Webinar | In Case of Emergency Break Glass - DataStax
The design of Apache Cassandra allows applications to provide constant uptime. Peer-to-Peer technology ensures there are no single points of failure, and the Consistency guarantees allow applications to function correctly while some nodes are down. There is also a wealth of information provided by the JMX API and the system log. All of this means that when things go wrong you have the time, information and platform to resolve them without downtime. This presentation will cover some of the common, and not so common, performance issues, failures and management tasks observed in running clusters. Aaron will discuss how to gather information and how to act on it. Operators, Developers and Managers will all benefit from this exposition of Cassandra in the wild.
[Open Infrastructure & Cloud Native Days Korea 2019]
We share cases of building customer-facing services using the community versions of OpenStack and Ceph: an enterprise cloud service built for flexibility, and an exchange service with high security requirements that we built and operated. We also introduce the technology stack used in these projects, troubleshooting stories, and optimization approaches. For OpenStack, as always, it's Open Source Consulting.
#openstack #ceph #openinfraday #cloudnative #opensourceconsulting
Building Apache Cassandra clusters for massive scale - Alex Thompson
Covering theory and the operational aspects of bringing up Apache Cassandra clusters, this presentation can be used as a field reference. Presented by Alex Thompson at the Sydney Cassandra Meetup.
Accelerating Apache Spark by Several Orders of Magnitude with GPUs and RAPIDS... - Databricks
GPU acceleration has been at the heart of scientific computing and artificial intelligence for many years now. GPUs provide the computational power needed for the most demanding applications such as Deep Neural Networks, nuclear or weather simulation. Since the launch of RAPIDS in mid-2018, this vast computational resource has become available for Data Science workloads too. The RAPIDS toolkit, which is now available on the Databricks Unified Analytics Platform, is a GPU-accelerated drop-in replacement for utilities such as Pandas/NumPy/ScikitLearn/XGboost. Through its use of Dask wrappers the platform allows for true, large scale computation with minimal, if any, code changes.
The goal of this talk is to discuss RAPIDS, its functionality, architecture as well as the way it integrates with Spark providing on many occasions several orders of magnitude acceleration versus its CPU-only counterparts.
There are many common workloads in R that are "embarrassingly parallel": group-by analyses, simulations, and cross-validation of models are just a few examples. In this talk I'll describe several techniques available in R to speed up workloads like these, by running multiple iterations simultaneously, in parallel.
Many of these techniques require the use of a cluster of machines running R, and I'll provide examples of using cloud-based services to provision clusters for parallel computations. In particular, I will describe how you can use the SparklyR package to distribute data manipulations using the dplyr syntax, on a cluster of servers provisioned in the Azure cloud.
Presented by David Smith at Data Day Texas in Austin, January 27 2018.
SDK – Software Development Kit
Boost data processing at the edge for general applications and processor intensive algorithms
AI – Artificial Intelligence
High performance deep neural network (DNN) engine to deploy any AI or machine learning module at the edge
ROS2 Executor plugin
Executor for ROS2 able to process up to 10 times more data with up to 50% reduction in CPU consumption.
GPU (Graphic Processing Unit)
High parallelisation of GPU to increase the processing data rate and GPU utilization
16 artifacts to capture when there is a production problem - Tier1 app
Production problems are tricky to troubleshoot if proper diagnostic information isn’t captured. In this session, 16 important artifacts that you need to capture and the effective tools that you can use to analyze those artifacts are discussed.
In-Memory Logical Data Warehouse for accelerating Machine Learning Pipelines... - Gianmario Spacagna
Abstract:
Legacy enterprise architectures still rely on relational data warehouses and require moving and syncing with the so-called "Data Lake", where raw data is stored and periodically ingested into a distributed file system such as HDFS.
Moreover, there are a number of use cases where you might want to avoid storing data on the development cluster disks, such as for regulations or reducing latency, in which case Alluxio (previously known as Tachyon) can make this data available in-memory and shared among multiple applications.
We propose an Agile workflow by combining Spark, Scala, DataFrame (and the recent DataSet API), JDBC, Parquet, Kryo and Alluxio to create a scalable, in-memory, reactive stack to explore data directly from source and develop high quality machine learning pipelines that can then be deployed straight into production.
In this talk we will:
* Present how to load raw data from an RDBMS and use Spark to make it available as a DataSet
* Explain the iterative exploratory process and advantages of adopting functional programming
* Critically analyse the issues faced with the existing methodology
* Show how to deploy Alluxio and how it greatly improved the existing workflow by providing the desired in-memory solution and by decreasing the loading time from hours to seconds
* Discuss some future improvements to the overall architecture
Bio:
Gianmario is a Senior Data Scientist at Pirelli Tyre, processing telemetry data for smart manufacturing and connected vehicles applications.
His main expertise is on building production-oriented machine learning systems.
Co-author of the Professional Manifesto for Data Science (datasciencemanifesto.com), founder of the Data Science Milan Meetup group, and currently writing the "Python Deep Learning" book (to be published soon).
He loves evangelising his passion for best practices and effective methodologies amongst the community.
Prior to Pirelli, he worked in Financial Services (Barclays), Cyber Security (Cisco) and Predictive Marketing (AgilOne).
Build Large-Scale Data Analytics and AI Pipeline Using RayDP - Databricks
A large-scale end-to-end data analytics and AI pipeline usually involves data processing frameworks such as Apache Spark for massive data preprocessing, and ML/DL frameworks for distributed training on the preprocessed data. A conventional approach is to use two separate clusters and glue multiple jobs together. Other solutions include running deep learning frameworks in an Apache Spark cluster, or using workflow orchestrators like Kubeflow to stitch distributed programs together. All these options have their own limitations. We introduce Ray as a single substrate for distributed data processing and machine learning. We also introduce RayDP, which allows you to start an Apache Spark job on Ray in your Python program and utilize Ray's in-memory object store to efficiently exchange data between Apache Spark and other libraries. We will demonstrate how this makes building an end-to-end data analytics and AI pipeline simpler and more efficient.
DevoxxUK: Optimizing Application Performance on Kubernetes - Dinakar Guniguntala
Now that you have your apps running on K8s, are you wondering how to get the response times you need? Tuning a polyglot set of microservices for performance can be challenging in Kubernetes. The key to overcoming this is observability. Luckily there are a number of tools, such as Prometheus, that can provide all the metrics you need, but here is the catch: there is so much data that it is difficult to make sense of it all. This is where hyperparameter tuning can come to the rescue and help build the right models.
This talk covers best practices that will help attendees:
1. Understand and avoid common performance-related problems.
2. Learn about observability tools and how they can help identify perf issues.
3. Take a closer look at Kruize Autotune, an open-source autonomous performance tuning tool for Kubernetes, and where it can help.
Scalable Monitoring Using Prometheus with Apache Spark Clusters with Diane F... - Databricks
As Apache Spark applications move to a containerized environment, there are many questions about how to best configure server systems in the container world. In this talk we will demonstrate a set of tools to better monitor performance and identify optimal configuration settings. We will demonstrate how Prometheus, a project that is now part of the Cloud Native Computing Foundation (CNCF: https://www.cncf.io/projects/), can be applied to monitor and archive system performance data in a containerized spark environment.
In our examples, we will gather spark metric output through Prometheus and present the data with Grafana dashboards. We will use our examples to demonstrate how performance can be enhanced through different tuned configuration settings. Our demo will show how to configure settings across the cluster as well as within each node.
‘16 artifacts’ to capture when there is a production problem - Tier1 app
In this session we will share with you 16 important artifacts which you can consider capturing when a production problem surfaces in your cloud/containers/K8s. We will also discuss effective tools, techniques and tips that you can use to analyze these artifacts.
Erik Skytthe - Monitoring Mesos, Docker, Containers with Zabbix | ZabConf2016 - Zabbix
At DBC we are running docker and other container types in a mesos/marathon cluster environment. I will demonstrate how we collect statistics, logs etc. and monitor this environment, showing configuration examples, data flows and templates.
Some of the covered topics:
- Mesos master and agents
- Marathon Framework
- Docker engine
- Containers
- Zookeeper
- Elasticsearch/ELK
These slides were presented on a Software Craftsmanship meetup @ EPAM Hungary on 26 January, 2017.
During the talk we went through the evolution of structured data analytics in Spark. We compared the RDD, the SparkSQL (DataFrame) and the DataSet APIs. We used the very latest and greatest Spark 2.1, released on December 28, went through code samples and dove deep into Spark optimizations. The code samples can be downloaded from here: https://github.com/symat/spark-api-comparison
A bit of history, frustration-driven development, and why and how we started looking into Puppet at Opera Software. What we're doing, successes, pain points and what we're going to do with Puppet and Config Management next.
Similar to 10 things i wish i'd known before using spark in production (20)
Delta Lake OSS: Create reliable and performant Data Lake by Quentin Ambard - Paris Data Engineers !
Delta Lake is an open-source framework living on top of Parquet in your data lake to provide reliability and performance. It was open-sourced by Databricks this year and is gaining traction to become the de facto data lake format.
We’ll see all the good Delta Lake can do for your data: ACID transactions, DDL operations, schema enforcement, batch and stream support, and more!
At DataxDay I taught you how to industrialize a neural network's predictions using DL4J and Spark. Now the data scientists would like to test different variants of neural networks. Meanwhile, your model in production keeps serving predictions, and you will need to track it over time to verify that its performance does not degrade.
Using MLflow for the Machine Learning project lifecycle - Paris Data Engineers !
MLflow is an open-source project for managing the lifecycle of machine learning projects (from experimentation to deployment) so as to better integrate them into the surrounding ecosystem.
During this presentation we will walk through the different components of MLflow and demonstrate its use both on a Databricks platform and in a local IDE.
"Apache Pulsar, encore un système de messages pub/sub", me direz-vous ? C'est pas faux... Néanmoins, regardons de plus près... Pulsar est devenu un Top Level Project de la fondation Apache au mois de septembre 2018 et il se targue de vouloir unifier les modèles de messages traditionnels et le streaming, tout en fournissant un système extrêmement performant. Alors partons à la découverte de ce nouveau pulsar pour voir de quoi il retourne !
Collecting data into a DataLake without impacting operational systems is a challenge for many companies.
At the Paris Data Engineers meetup of March 26, 2019, Dimitri Capitaine presented Data Collector, a Change Data Capture (CDC) tool developed in-house at OVH. Data Collector can provide reliable, high-performance replication of databases all the way to the DataLake.
Hugo Larcher then presented a use case around exploiting aeronautical data, with a touch of IoT and DataViz.
Building highly reliable data pipeline @datadog by Quentin François - Paris Data Engineers !
Some features at the core of Datadog's product rely on data pipelines built with Spark that process trillions of points every day. In this presentation, we will look at the main principles we apply at Datadog to keep our pipelines reliable despite exponential data-volume growth, hardware failures, corrupted data, and human error.
Paris Data Eng' Meetup of February 26, 2019 @Datadog
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
20240605 QFM017 Machine Intelligence Reading List May 2024
10 things i wish i'd known before using spark in production
1. “10 things I wish I'd known before using Spark in production!”
2. Himanshu Arora
Lead Data Engineer, NeoLynk France
h.arora@neolynk.fr
@him_aro
Nitya Nand Yadav
Data Engineer, NeoLynk France
n.yadav@neolynk.fr
@nityany
5. What we are going to cover...
1. RDD vs DataFrame vs DataSet
2. Data Serialisation Formats
3. Storage formats
4. Broadcast join
5. Hardware tuning
6. Level of parallelism
7. GC tuning
8. Common errors
9. Data skew
10. Data locality
7. ● RDD - Resilient Distributed Dataset
➔ The main abstraction of Spark.
➔ Low-level transformations, actions, and control at the partition level.
➔ Suited to unstructured data such as media streams or text streams.
➔ Manipulate data with functional programming constructs.
➔ No built-in optimization.
8. ● DataFrame
➔ High-level abstraction, rich semantics.
➔ Like a big distributed SQL table.
➔ High-level expressions (aggregation, average, sum, SQL queries).
➔ Performance and optimizations (predicate pushdown, QBO, CBO...).
➔ No compile-time type checks; errors surface at runtime.
9. ● DataSet
➔ A collection of strongly-typed JVM objects, dictated by a case class you define in Scala or a class in Java.
➔ DataFrame = Dataset[Row].
➔ Performance and optimisations.
➔ Type safety at compile time.
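To make the comparison concrete, here is a minimal sketch of the same aggregation in all three APIs; the Sale case class, the toy data, and the local master are illustrative assumptions, not from the slides:

import org.apache.spark.sql.SparkSession

case class Sale(shop: String, amount: Double)

val spark = SparkSession.builder().appName("api-comparison").master("local[*]").getOrCreate()
import spark.implicits._

// RDD: functional constructs and full control, but no Catalyst optimization
val totalsRdd = spark.sparkContext
  .parallelize(Seq(Sale("a", 10.0), Sale("b", 5.0), Sale("a", 2.5)))
  .map(s => (s.shop, s.amount))
  .reduceByKey(_ + _)

// DataFrame (= Dataset[Row]): optimized by Catalyst/Tungsten,
// but a typo in a column name only fails at runtime
val totalsDf = Seq(Sale("a", 10.0), Sale("b", 5.0), Sale("a", 2.5)).toDF()
  .groupBy("shop").sum("amount")

// Dataset: the same optimizations, and the types are checked at compile time
val totalsDs = Seq(Sale("a", 10.0), Sale("b", 5.0), Sale("a", 2.5)).toDS()
  .groupByKey(_.shop).mapValues(_.amount).reduceGroups(_ + _)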
10. 2/10 - Data Serialisation Format
➔ Data is shuffled between executors in serialized form.
➔ RDDs cached & persisted to disk are serialized too.
➔ Spark's default serialization format: Java serialization (slow & large).
➔ Better: use Kryo serialisation.
➔ Kryo: faster and more compact (up to 10x).
➔ DataFrames/Datasets use Tungsten serialization (even better than Kryo).
11. import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val sparkConf: SparkConf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

// register your own custom classes with Kryo before building the session
sparkConf.registerKryoClasses(Array(classOf[MyCustomClass]))

val sparkSession: SparkSession = SparkSession
  .builder()
  .config(sparkConf)
  .getOrCreate()
2/10 - Data Serialisation Format
13. ➔ Avoid text, JSON, CSV, etc. if possible.
➔ Use compressed binary formats instead.
➔ Popular choices: Apache Parquet, Apache Avro, ORC, etc.
➔ The use case dictates the choice.
3/10 - Storage Formats
14. ➔ Binary formats.
➔ Splittable.
➔ Parquet: columnar; Avro: row-based.
➔ Parquet: higher compression rates than row-based formats.
➔ Parquet: read-heavy workloads; Avro: write-heavy workloads.
➔ Schema is preserved in the files themselves.
➔ Avro: better support for schema evolution.
3/10 - Storage Formats: Apache Parquet & Avro
15. // Parquet with Snappy compression
val sparkConf: SparkConf = new SparkConf()
  .set("spark.sql.parquet.compression.codec", "snappy")
val dataframe = sparkSession.read.parquet("s3a://....")
dataframe.write.parquet("s3a://....")

// Avro with Snappy compression (requires the spark-avro package;
// import com.databricks.spark.avro._ for the .avro shorthand)
val avroConf: SparkConf = new SparkConf()
  .set("spark.sql.avro.compression.codec", "snappy")
val avroDataframe = sparkSession.read.avro("s3a://....")
avroDataframe.write.avro("s3a://....")
3/10 - Storage Formats
24. val rdd = sc.textFile("demo.zip").repartition(100)
6/10 - Level of parallelism/partitions
➔ The maximum size of a partition is limited by the available memory of an executor.
➔ Increasing the partition count makes each partition hold less data.
➔ Spark cannot split compressed files (e.g. zip); it creates only 1 partition, so repartition yourself.
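Beyond repartition(), the default number of partitions is governed by two settings, which must be set before the session is created; a minimal sketch, with 200 as an illustrative value rather than a recommendation:

// default partition count for RDD shuffle operations
sparkConf.set("spark.default.parallelism", "200")
// partition count for DataFrame/Dataset shuffles
sparkConf.set("spark.sql.shuffle.partitions", "200")
// a commonly quoted rule of thumb is 2-3 tasks per executor core,
// i.e. partitions ≈ numExecutors * coresPerExecutor * 2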
25. ➔ Quick wins when using a large JVM heap, to avoid long GC pauses:
spark.executor.extraJavaOptions: -XX:+UseG1GC -XX:+AlwaysPreTouch -XX:+UseLargePages -XX:+UseTLAB -XX:+ResizeTLAB
// if creating too many objects in the driver (e.g. via collect()),
// which is not a very good idea anyway:
spark.driver.extraJavaOptions: -XX:+UseG1GC -XX:+AlwaysPreTouch -XX:+UseLargePages -XX:+UseTLAB -XX:+ResizeTLAB
7/10 - GC Tuning
26. Container killed by YARN for exceeding memory limits. 10.4 GB of 10.4 GB physical memory used.
8/10 - Knock knock… Who’s there?… An error :(
27. Likely causes:
➔ Not enough executor memory.
➔ Too many executor cores (implies too much parallelism).
➔ Not enough Spark partitions.
➔ Data skew (let’s talk about that later…).
Possible fixes:
➔ Increase executor memory.
➔ Reduce the number of executor cores.
➔ Increase the number of Spark partitions.
➔ Persist in memory and disk (or just disk) with serialization.
➔ Use off-heap memory for caching (both sketched below).
8/10 - Knock knock… Who’s there?… An error :(
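A minimal sketch of those last two fixes; "dataframe" stands in for your own DataFrame and the off-heap size is illustrative:

import org.apache.spark.storage.StorageLevel

// persist serialized, spilling to disk instead of blowing up the heap
dataframe.persist(StorageLevel.MEMORY_AND_DISK_SER)

// give Spark off-heap room for caching, outside the GC-managed heap
sparkConf
  .set("spark.memory.offHeap.enabled", "true")
  .set("spark.memory.offHeap.size", "2g")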
28. 8/10 - Knock knock… Who’s there?… An error :(
19/01/31 21:03:13 INFO DAGScheduler: Host lost: ip-172-29-149-243.eu-west-1.compute.internal (epoch 16)
19/01/31 21:03:13 INFO BlockManagerMasterEndpoint: Trying to remove executors on host ip-172-29-149-243.eu-west-1.compute.internal from BlockManagerMaster.
19/01/31 21:03:13 INFO BlockManagerMaster: Removed executors on host ip-172-29-149-243.eu-west-1.compute.internal successfully.
32. ➔ A condition where data is not uniformly distributed across partitions.
➔ Shows up during joins, aggregations, etc.
➔ E.g. joining on a column containing lots of nulls.
➔ Might cause java.lang.OutOfMemoryError: Java heap space.
9/10 - Data Skew
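A quick way to spot skew is to count rows per join key and look for a few dominant values (or null); a minimal sketch where "df" and the column name "key" are placeholders:

import org.apache.spark.sql.functions.desc

// a handful of keys with disproportionate counts indicates skew
df.groupBy("key")
  .count()
  .orderBy(desc("count"))
  .show(10)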
34. ➔ Repartition your data by key (RDD) or column (DataFrame), which will evenly distribute the data.
➔ Use non-skewed column(s) for the join.
➔ Replace null values of the join column with NULL_X (X being a random number); see the sketch below.
➔ Salting.
9/10 - Data Skew: possible solutions
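Here is a sketch of that NULL_X trick, assuming a string join column named "join_key" (the name and N are illustrative). The synthetic keys spread formerly co-located null rows across partitions while matching nothing on the other side of the join:

import org.apache.spark.sql.functions._

val n = 100 // level of distribution; illustrative
val deskewed = df.withColumn("join_key",
  when(col("join_key").isNull,
    concat(lit("NULL_"), (rand() * n + 1).cast("int").cast("string")))
    .otherwise(col("join_key")))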
37. 9/10 - Impossible to find a repartitioning key for even data distribution?
Salting key = actual partition key + random fake key
(where the fake key takes a value between 1 and N, N being the level of distribution/partitions)
38. ➔ Joining DFs: create a salt column on the bigger DF and broadcast the smaller one (with an additional column containing 1 to N); see the sketch below.
➔ If both are too big to broadcast: salt one and iteratively broadcast the other.
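A minimal sketch of that salted broadcast join, assuming both DataFrames join on a column named "key" (all names and the value of N are illustrative):

import org.apache.spark.sql.functions._

val n = 10 // number of salt buckets; illustrative

// big side: assign each row a random salt in 1..n
val saltedBig = bigDf.withColumn("salt", (rand() * n + 1).cast("int"))

// small side: duplicate every row once per salt value, then broadcast it
val saltedSmall = smallDf.withColumn("salt", explode(array((1 to n).map(lit): _*)))

// the join key is now (key, salt), so a hot key is spread over n partitions
val joined = saltedBig.join(broadcast(saltedSmall), Seq("key", "salt"))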
39. ➔ Why is it important?
10/10 - Data Locality
40. val sparkSession = SparkSession
  .builder()
  .appName("spark-app")
  .config("spark.locality.wait", "60s") // default: 3s
  .config("spark.locality.wait.node", "0") // set to 0 to skip node-local
  .config("spark.locality.wait.process", "10s")
  .config("spark.locality.wait.rack", "30s")
  .getOrCreate()
10/10 - Data Locality