In this workshop, taught by one of the developers of Optimus, you will learn how to clean and prepare data using Optimus together with Apache Spark and Python (PySpark).
By Favio Vázquez
What we've learned from running a PostgreSQL managed service on Kubernetes (DoKC)
In this talk, I will share some of our learnings from running a managed PostgreSQL/TimescaleDB service on Kubernetes on AWS for a little more than a year. I'll start with the motivation for running managed PostgreSQL on Kubernetes, and its benefits and drawbacks. I'll describe the architecture of the managed PostgreSQL cloud on Kubernetes, and zoom in on how we solved some of the Kubernetes-specific issues within our cloud, such as upgrading extensions without downtime, taming the dreaded OOM killer, and doing regular maintenance and PostgreSQL major upgrades. I'll share how open-source tools from the PostgreSQL ecosystem help us run the service, and explain how we use them in a slightly non-trivial way.
This talk was given by Oleksii Kliukin for DoK Day Europe @ KubeCon 2022.
In recent years, many companies have adopted service-oriented architectures by deploying tens to hundreds of small microservices. But with the increasing number of independent services, do you still know what’s going on in your infrastructure?
Traditional monitoring solutions were mostly focused on machines and fell short in keeping track of infrastructures where service deployments happen multiple times per day and instances get dynamically allocated on a multitude of nodes. Prometheus is a relatively new monitoring system which has gained a lot of popularity in the last two years, as it was explicitly designed for today's service-monitoring and container-infrastructure needs.
In this session, you'll learn how to instrument a service with a Prometheus client library to provide information about its current health and state. To get automatically notified when the service becomes unhealthy, you'll see how to configure alerts and notifications. Along the way, I'll discuss a few key metrics paramount to successfully monitoring a microservice.
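As a small, hedged illustration (not code from the session), instrumenting a Python service with the official prometheus_client library might look like this; the metric names and port are made up:

```python
# Minimal service instrumentation with prometheus_client; names illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

@LATENCY.time()              # record how long each call takes
def handle_request():
    REQUESTS.inc()           # count every request
    time.sleep(random.random() / 10)

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for Prometheus to scrape
    while True:
        handle_request()
```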
An in-depth overview of Kubernetes and its various components.
NOTE: This is a fixed version of a previous presentation (a draft was uploaded with some errors)
Running PostgreSQL in Kubernetes: from day 0 to day 2 with CloudNativePG (DoKC)
Link: https://youtu.be/cegd3Exg05w
https://go.dok.community/slack
https://dok.community/
Gabriele Bartolini - Vice President/CTO of Cloud Native and Kubernetes, EDB
ABSTRACT OF THE TALK
Imagine this: you have a virtual infrastructure based on Kubernetes, made up of virtual data centers, possibly spread across multiple Kubernetes clusters and regions. Your infrastructure could even be hosted on premises or on different cloud service providers. Infrastructure as Code is a requirement. You've been tasked with running Postgres databases alongside your applications.
The good news is that you can leverage a fully open source stack with Kubernetes, PostgreSQL and the CloudNativePG operator, and deploy your Postgres database in the same way you deploy applications.
Join me in this webinar to discover the key role you play in making this succeed, from day 0 through day 2 operations.
I’ll share some examples and best practices for running Postgres databases in Kubernetes, before peeking at the new features we are developing for the months to come.
Zeus: Uber's Highly Scalable and Distributed Shuffle as a Service (Databricks)
Zeus is an efficient, highly scalable, distributed shuffle-as-a-service that powers all data processing (Spark and Hive) at Uber. Uber runs one of the largest Spark and Hive clusters on top of YARN in the industry, which leads to many issues such as hardware failures (burned-out disks) and reliability and scalability challenges.
Evening out the uneven: dealing with skew in Flink (Flink Forward)
Flink Forward San Francisco 2022.
When running Flink jobs, skew is a common problem that results in wasted resources and limited scalability. In the past years, we have helped our customers and users solve various skew-related issues in their Flink jobs or clusters. In this talk, we will present the different types of skew that users often run into: data skew, key skew, event time skew, state skew, and scheduling skew, and discuss solutions for each of them. We hope this will serve as a guideline to help you reduce skew in your Flink environment.
by Jun Qin & Karl Friedrich
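The talk itself targets Flink (Java), but one standard remedy for key skew is easy to sketch in a language-neutral way. Below is key "salting" in PySpark, the language of the other sketches in this collection: fan a hot key out over sub-keys, pre-aggregate, then combine. This is an illustration of the general technique, not code from the talk:

```python
# Key salting: spread a skewed key across partitions, then re-aggregate.
from pyspark.sql import SparkSession
from pyspark.sql.functions import rand
from pyspark.sql.functions import sum as ssum

spark = SparkSession.builder.appName("salting-sketch").getOrCreate()
events = spark.createDataFrame(
    [("hot", 1)] * 1000 + [("cold", 1)] * 10, ["key", "value"]
)

SALTS = 8  # fan each key out over up to 8 sub-keys
salted = events.withColumn("salt", (rand() * SALTS).cast("int"))

partial = salted.groupBy("key", "salt").agg(ssum("value").alias("part"))
final = partial.groupBy("key").agg(ssum("part").alias("total"))
final.show()
```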
Improving Data Locality for Spark Jobs on Kubernetes Using Alluxio (Alluxio, Inc.)
Alluxio Community Office Hour
Dec 17, 2019
Speakers:
Bin Fan & Jiacheng Liu, Alluxio
While adoption of the Cloud & Kubernetes has made it exceptionally easy to scale compute, the increasing spread of data across different systems and clouds has created new challenges for data engineers. Effectively accessing data from AWS S3 or on-premises HDFS becomes harder, and data locality is also lost: how do you move data to compute workers efficiently, how do you unify data across multiple or remote clouds, and more. The open source project Alluxio approaches this problem in a new way. It helps elastic compute workloads, such as Apache Spark, realize the true benefits of the cloud while bringing data locality and data accessibility to workloads orchestrated by Kubernetes.
One important performance optimization in Apache Spark is to schedule tasks on nodes where HDFS data nodes locally serve the task's input data. However, more users are running Apache Spark natively on Kubernetes, where HDFS is not an option. This office hour describes the concept and dataflow of using the Spark/Alluxio stack in Kubernetes with enhanced data locality, even when the storage service is outside the cluster or remote.
In this Office Hour, we will go over the following (a short PySpark sketch follows the list):
- Why Spark is able to make a locality-aware schedule when working with Alluxio in a K8s environment using the host network
- Why a pod running Alluxio can share data efficiently with a pod running Spark on the same host using domain socket and host path volume
- The roadmap to improve this Spark / Alluxio stack in the context of K8s
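As a rough sketch of what this looks like from the Spark side (not code from the office hour): a job reads through the alluxio:// scheme, and Alluxio serves blocks from workers co-located with the executors. The master address and path are illustrative, and the Alluxio client jar must be on Spark's classpath:

```python
# Reading data cached in Alluxio from PySpark; path and master illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("alluxio-sketch").getOrCreate()

# 19998 is Alluxio's default master RPC port.
df = spark.read.parquet("alluxio://alluxio-master:19998/datasets/events")
df.groupBy("event_type").count().show()
```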
Netflix's Big Data Platform team manages a data warehouse in Amazon S3 with over 60 petabytes of data and writes hundreds of terabytes of data every day. At this scale, output committers that create extra copies or can't handle task failures are no longer practical. This talk will explain the problems caused by the available committers when writing to S3, and show how Netflix solved the committer problem (a brief configuration sketch follows the list below).
In this session, you’ll learn:
– Some background about Spark at Netflix
– About output committers, and how both Spark and Hadoop handle failures
– How HDFS and S3 differ, and why HDFS committers don’t work well
– A new output committer that uses the S3 multi-part upload API
– How you can use this new committer in your Spark applications to avoid duplicating data
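Netflix's committer itself is not shown here; as a hedged stand-in, the snippet below enables the Hadoop S3A "magic" committer, which likewise commits through the S3 multipart upload API, from PySpark. It assumes the spark-hadoop-cloud and hadoop-aws modules are on the classpath, and the bucket name is made up:

```python
# Enable an S3 multipart-upload-based committer; NOT the Netflix committer.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("s3a-committer-sketch")
    .config("spark.hadoop.fs.s3a.committer.name", "magic")
    .config("spark.sql.sources.commitProtocolClass",
            "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
    .config("spark.sql.parquet.output.committer.class",
            "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
    .getOrCreate()
)

# Writes now commit via multipart uploads instead of copy-and-rename,
# avoiding duplicated data and slow renames on S3.
spark.range(1000).write.mode("overwrite").parquet("s3a://my-bucket/out/")
```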
Using the New Apache Flink Kubernetes Operator in a Production Deployment (Flink Forward)
Flink Forward San Francisco 2022.
Running natively on Kubernetes, the new Apache Flink Kubernetes Operator is a great way to deploy and manage Flink application and session deployments. In this presentation, we provide:
- A brief overview of Kubernetes operators and their benefits
- An introduction to the five levels of the operator maturity model
- An introduction to the newly released Apache Flink Kubernetes Operator and FlinkDeployment CRs
- Dockerfile modifications you can make to swap out the UBI images and Java of the underlying Flink Operator container
- Enhancements we're making in versioning/upgradeability/stability and security
- A demo of the Apache Flink Operator in action, with a technical preview of an upcoming product using the Flink Kubernetes Operator
- Lessons learned
- Q&A
by James Busche & Ted Chang
This talk is from Distributed Data Summit SF 2018 - http://distributeddatasummit.com/2018-sf/sessions#chella
Audit logging is one of the most critical features in an enterprise-ready database in terms of security compliance. Furthermore, live traffic troubleshooting is critical for operators to troubleshoot production issues quickly. While past versions lacked these critical features, the Cassandra team understood the need for better solutions, and in the upcoming release of Cassandra both features come out of the box, which makes Cassandra even more awesome to work with. Cassandra now supports audit logging and query logging as part of C* itself. In this talk, the audience will learn how to enable, configure, and tune audit logging for their C* clusters, and how to log live traffic/queries for several needs, including troubleshooting or even live traffic replay.
Real-Time Processing of Spatial Data Using Kafka Streams, Ian Feeney & Roman Kolesnev | Current 2022 (HostedbyConfluent)
Kafka Streams applications can process fast-moving, unbounded streams of data. This gives us the capability to process and react to events from many sources in near real time as they converge in Kafka. However, if the events in these data streams have a spatial component and their spatial relationships with each other determine how they should be processed or reacted to, this raises some fundamental challenges. Determining that, for example, a person is within an area or that routes are intersecting requires access to geospatial operations which are not readily available in Kafka Streams.
In this talk, we will first set the scene with a geospatial 101. Then, using a simplified taxi-hailing use case, we will look at two approaches for processing spatial data with Kafka Streams. The first is a naive approach which uses the Kafka Streams DSL, geohashing, and the Java Spatial4j library. The second is a prototype which replaces the RocksDB state store with Apache Lucene (an embedded storage engine with powerful indexing, search, and geospatial capabilities) and implements a stateful spatial join with the Transformer API.
This talk will give you an appreciation of geospatial use cases and how Kafka Streams could enable them. You will see the role the state store plays in stateful processing and the implications for geospatial processing. It will also show you what is involved in integrating a custom state store with Kafka Streams. Overall, this talk will give you an understanding of how you might go about building custom processing capabilities on top of Kafka Streams for your own use cases.
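The talk's implementation relies on Java libraries (Spatial4j, Lucene); purely to illustrate the geohashing idea it builds on (nearby points share hash prefixes, so a geohash makes a workable record or partition key), here is a minimal encoder in Python. The coordinates are made up:

```python
# Minimal geohash encoder: bisect the globe, interleaving lon/lat bits.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # standard geohash alphabet

def geohash(lat: float, lon: float, precision: int = 7) -> str:
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    chars, bits, ch, even = [], 0, 0, True
    while len(chars) < precision:
        if even:  # even bits refine longitude
            mid = (lon_lo + lon_hi) / 2
            ch, lon_lo, lon_hi = (
                (ch << 1 | 1, mid, lon_hi) if lon >= mid else (ch << 1, lon_lo, mid)
            )
        else:     # odd bits refine latitude
            mid = (lat_lo + lat_hi) / 2
            ch, lat_lo, lat_hi = (
                (ch << 1 | 1, mid, lat_hi) if lat >= mid else (ch << 1, lat_lo, mid)
            )
        even, bits = not even, bits + 1
        if bits == 5:             # every 5 bits become one base32 character
            chars.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(chars)

# Two nearby taxis hash to strings sharing a common prefix.
print(geohash(51.5007, -0.1246), geohash(51.5014, -0.1419))
```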
How to Automate Performance Tuning for Apache Spark (Databricks)
Spark has made writing big data pipelines much easier than before. But a lot of effort is required to maintain performant and stable data pipelines in production over time. Did I choose the right type of infrastructure for my application? Did I set the Spark configurations correctly? Can my application keep running smoothly as the volume of ingested data grows over time? How do I make sure that my pipeline always finishes on time and meets its SLA?
These questions are not easy to answer even for a handful of jobs, and this maintenance work can become a real burden as you scale to dozens, hundreds, or thousands of jobs. This talk will review what we found to be the most useful pieces of information and parameters to look at for manual tuning, and the different options available to engineers who want to automate this work, from open-source tools to managed services provided by the data platform or third parties like the Data Mechanics platform.
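As a hedged illustration of what "setting the Spark configurations" involves, these are some of the knobs manual tuning typically touches; the values are placeholders, not recommendations:

```python
# Common Spark tuning knobs; values are illustrative placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("tuning-sketch")
    .config("spark.executor.memory", "8g")          # size to the workload
    .config("spark.executor.cores", "4")
    .config("spark.sql.shuffle.partitions", "400")  # scale with shuffle volume
    .config("spark.sql.adaptive.enabled", "true")   # let AQE re-tune at runtime
    .getOrCreate()
)
```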
Dok Talks #111 - Scheduled Scaling with Dask and Argo Workflows (DoKC)
https://go.dok.community/slack
https://dok.community/
ABSTRACT OF THE TALK
Complex computational workloads in Python are a common sight these days, especially in the context of processing large and complex datasets. Battle-hardened modules such as NumPy, Pandas, and Scikit-Learn can perform low-level tasks, while tools like Dask make it easy to parallelize these workloads across distributed computational environments. Meanwhile, Argo Workflows offers a Kubernetes-native solution for provisioning cloud resources in Kubernetes and triggering workflows on a regular schedule. Being Kubernetes-native, Argo Workflows also meshes nicely with other Kubernetes tools. This talk discusses the combination of these two worlds by showcasing a set-up for Argo-managed workflows which schedule and automatically scale out Dask-powered data pipelines in Python.
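A minimal sketch (not the speaker's setup) of the Dask half of this combination: an adaptive cluster that grows and shrinks its worker pool with the workload. In the architecture described above, a script like this would run as one step of an Argo Workflows CronWorkflow; on Kubernetes, the LocalCluster stand-in would be replaced by a dask-kubernetes cluster:

```python
# Adaptive Dask cluster sketch; LocalCluster stands in for a K8s cluster.
import dask.dataframe as dd
import pandas as pd
from dask.distributed import Client, LocalCluster

cluster = LocalCluster()
cluster.adapt(minimum=1, maximum=8)  # scale workers with the workload
client = Client(cluster)

df = dd.from_pandas(
    pd.DataFrame({"sensor": [1, 2] * 500, "v": range(1000)}), npartitions=8
)
print(df.groupby("sensor").v.mean().compute())
```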
BIO
Former academic in the field of renewable energy simulation and energy systems analysis. Currently responsible for architecting and maintaining the cloud and data strategy at ACCURE Battery Intelligence.
KEY TAKE-AWAYS FROM THE TALK
Argo Workflows + Dask is a nice combination for data-processing pipelines. There are a few gotchas to look out for, but it is nevertheless a generally applicable and powerful combination.
https://github.com/sevberg
Database Performance at Scale Masterclass: Workload Characteristics by Felipe Cardeneti Mendes (ScyllaDB)
Felipe Cardeneti Mendes, Solutions Architect at ScyllaDB
Navigating workload-specific performance challenges and tradeoffs.
Felipe Mendes covers how to navigate the top performance challenges and tradeoffs that you’re likely to face with your project’s specific workload characteristics and technical/business requirements.
A microservices architecture involves many services distributed over the network, resulting in many more ways to fail. This session covers the available tools that can help you when designing and building such distributed systems in Go.
Deep Dive: Memory Management in Apache Spark (Databricks)
Memory management is at the heart of any data-intensive system. Spark, in particular, must arbitrate memory allocation between two main use cases: buffering intermediate data for processing (execution) and caching user data (storage). This talk will take a deep dive through the memory management designs adopted in Spark since its inception and discuss their performance and usability implications for the end user.
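To make the two use cases concrete: Spark's unified memory manager exposes the execution/storage split through two settings. The sketch below shows their documented defaults, not tuning advice:

```python
# Unified memory knobs: spark.memory.fraction splits the heap between Spark
# and user code; spark.memory.storageFraction protects cached (storage)
# data from eviction by execution. 0.6 and 0.5 are the defaults.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("memory-sketch")
    .config("spark.memory.fraction", "0.6")
    .config("spark.memory.storageFraction", "0.5")
    .getOrCreate()
)
```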
June 24, 2014. At Velocity 2014, Fastly engineer Vladimir Vuksan gave an intro to Ganglia concepts (grid, clusters, hosts) as well as an installation of a sample monitoring grid. He also goes through the following commonly used visualization tools and how they may aid in detecting issues, identifying causes, and taking corrective action:
- Cluster/Grid Views
- Aggregate graphs
- Compare Hosts
- Custom graph functionality
- Views
- Interactive graphs
- Trending
- Nagios/Alerting system integration
- How to add metrics to Ganglia
- Different export formats such as JSON, CSV, and XML
YugabyteDB - Distributed SQL Database on Kubernetes (DoKC)
ABSTRACT OF THE TALK
Kubernetes has hit a home run for stateless workloads, but can it do the same for stateful services such as distributed databases? Before we can answer that question, we need to understand the challenges of running stateful workloads on, well, anything. In this talk, we will first look at which stateful workloads, specifically databases, are ideal for running inside Kubernetes. Secondly, we will explore the various concerns around running databases in Kubernetes for production environments, such as:
- The production-readiness of Kubernetes for stateful workloads in general
- The pros and cons of the various deployment architectures
- The failure characteristics of a distributed database inside containers
In this session, we will demonstrate what Kubernetes brings to the table for stateful workloads and what database servers must provide to fit the Kubernetes model. This talk will also highlight some of the modern databases that take full advantage of Kubernetes and offer a peek into what's possible if stateful services can meet Kubernetes halfway. We will go into the details of deployment choices and how the different cloud-vendor managed container offerings differ in what they offer, as well as compare the performance and failure characteristics of a Kubernetes-based deployment with an equivalent VM-based deployment.
BIO
Amey is VP of Data Engineering at Yugabyte with a deep passion for data analytics and cloud-native technologies. In his current role, he collaborates with Fortune 500 enterprises to architect their business applications with scalable microservices and a geo-distributed, fault-tolerant data backend using YugabyteDB. Prior to joining Yugabyte, he spent 5 years at Pivotal as Platform Data Architect, helping enterprise customers across multiple industry verticals extend their analytical capabilities using Pivotal & OSS Big Data platforms. He is originally from Mumbai, India, and has a Master's degree in Computer Science from the University of Pennsylvania (UPenn), Philadelphia. Twitter: @ameybanarse LinkedIn: linkedin.com/in/ameybanarse/
gRPC can help minimize the barrier of cross-system communication by providing language-agnostic API definitions, backward- and forward-compatible versioning with protocol buffers, and pluggable load balancing and tracing. You will see how to quickly get up and running with the gRPC framework using Node.js, from creating a protocol definition to creating meaningful health checks and securing the endpoint. Additionally, this session will go over best practices and how to take full advantage of what gRPC has to offer.
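The session uses Node.js; for consistency with the other sketches in this collection, here is the analogous health-check flow in Python with grpcio and the standard gRPC health-checking service (from the grpcio-health-checking package). The port is illustrative:

```python
# Serve the standard gRPC health-checking service from Python.
from concurrent import futures

import grpc
from grpc_health.v1 import health, health_pb2, health_pb2_grpc

server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))

# Register the standard health service so probes and load balancers can
# ask "is this endpoint serving?" in a language-agnostic way.
health_servicer = health.HealthServicer()
health_pb2_grpc.add_HealthServicer_to_server(health_servicer, server)
health_servicer.set("", health_pb2.HealthCheckResponse.SERVING)

server.add_insecure_port("localhost:50051")  # use TLS credentials in production
server.start()
server.wait_for_termination()
```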
Observability in Java: Getting Started with OpenTelemetry (DevOps.com)
Our software is more complex than ever: applications must be reliable, predictable, and easy to use to meet modern expectations. As developers, this means our responsibilities have grown while the things we can control have stayed the same. In order to better understand our systems and create truly modern software, we need observability.
This workshop will walk through what observability means for Java developers and how to achieve it in our systems with the least amount of work using the open source observability project OpenTelemetry.
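The workshop targets Java; as a compact stand-in in Python (the language of the other sketches here), the OpenTelemetry SDK wiring looks like this: a tracer provider, an exporter, and a manual span. The span and attribute names are made up:

```python
# Minimal OpenTelemetry tracing setup with the Python SDK.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    ConsoleSpanExporter,
    SimpleSpanProcessor,
)

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("demo.instrumentation")
with tracer.start_as_current_span("handle-order") as span:
    span.set_attribute("order.id", 42)  # attach context for later analysis
```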
Are you a developer and an entrepreneur? This ebook compiles three in-depth analyses of the best and most popular tools among data scientists. More information at http://bbva.info/2t1NEv7
This research sets out to determine whether it is better to use the tools offered by Apache Hadoop or to choose what many consider its rival: Apache Spark.
“Apache Spark is the fastest and most general engine for large-scale data processing.”
...Or at least that is what the official site claims, but is it true? In this Big Data era, many solutions and technologies appear that enrich an environment largely dominated by Apache Hadoop; yet in the age of metadata, Spark shines with a different light and is starting to overshadow Hadoop in the Big Data business.
Apache Spark is a framework for processing large volumes of data. It was designed as the successor to Hadoop and can be up to 100 times faster by processing in memory. It provides programming interfaces for Scala, Python, and Java.
In this webinar we will explore the capabilities of Spark and its ecosystem for solving Big Data problems, as well as its interoperability with different data sources such as HDFS, HBase, and Apache Mesos. We will explore the main libraries of the ecosystem: Spark Streaming, SparkSQL, GraphX, and MLlib.
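As a small taste (not from the webinar) of the SparkSQL library mentioned above, a DataFrame can be registered as a view and queried with plain SQL:

```python
# Register a DataFrame as a temp view and query it with SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-sketch").getOrCreate()
spark.createDataFrame(
    [("ana", 34), ("luis", 28)], ["name", "age"]
).createOrReplaceTempView("people")

spark.sql("SELECT name FROM people WHERE age > 30").show()
```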
First steps with Apache Spark - Madrid Meetup (dhiguero)
First steps with Spark, presented at the Madrid Apache Spark Meetup group (http://www.meetup.com/Madrid-Apache-Spark-meetup/events/198362002/)
Contents:
- Introduction
- Basic concepts
- The Spark ecosystem
- Setting up the environment
- Common mistakes
An introduction (originally in Spanish) to Apache Spark, the distributed in-memory processing framework. Covers Spark basics, including RDDs, with a demo of the SparkSQL and Spark Streaming libraries.
Presented at www.nardoz.com
http://arjon.es/2014/08/14/introduccion-a-apache-spark-en-espanol/
Advanced data structures: real-world use cases (Software Guru)
Using the right data structure for each problem greatly reduces response times and the amount of computation performed.
By Nelson González
Onboarding new members into an engineering team is not easy on anyone. In a short period of time, the new team member is required to be able to bring professional…
By Victoriya Kalmanovich
The secret to becoming a Senior developer (Software Guru)
In this talk we will discuss the “secret” and the path to becoming a Senior developer: experience, advice, and recommendations gathered over these 8 years…
By René Sandoval
Apache Airflow is a platform on which we can build data flows programmatically, schedule them, and monitor them centrally.
By Yesi Díaz
How thick data can improve big data analysis for business (Software Guru)
In this presentation I will talk about how thick data analysis, specifically anthropological and semiotic analysis, can help improve Big Data results.
By Martin Cuitzeo
CoDi® is the new way to make digital payments developed by Banco de México. With CoDi you can send and receive payments from your phone, using an account at a bank or other financial institution, with no fees.
By Cristian Jaramillo
Managing team happiness with Management 3.0 (Software Guru)
In agile methodologies we talk about collaborative, self-managed, happy teams, and about servant leaders. Management 3.0 helps us cultivate the right mindset, the fertile ground where agility can flourish.
By Andrea Vélez Cárdenas
Workshop: Building reusable Web Components with StencilJS (Software Guru)
Today, user experiences can be enriched through Web Components, a W3C standard supported by most modern web browsers.
By Alex Arriaga
How we release Spotify's apps without stress (Software Guru)
At Spotify we have 1600+ engineers working in 280+ squads. Even at this scale, we have managed to adopt practices that allow us to accelerate how we develop our product. Presented by Erick Camacho at SG Virtual Conference 2020.
Achieving Your Goals: 5 Tips to successfully achieve your goals (Software Guru)
The measure of the executive, Peter F. Drucker reminds us, is the ability to “get the right things done.” This involves having clarity on what the right things are, as well as avoiding what is unproductive. Intelligence, creativity, and knowledge may all be wasted if not put to work on the things that matter.
Presented by Cristina Nistor at SG Virtual Conference 2020.
Tech community actions in the time of COVID-19 (Software Guru)
Tech Community Actions in the Time of COVID-19 is a talk about the actions some technology communities in Mexico are taking to fight the spread of COVID-19: data analysis, visualizations, contagion simulations, and more.
Presented by Juana Martínez, Adriana Vallejo, and Eduardo Ramírez at SG Virtual Conference 2020.
From the operational to the strategic: a design management model (Software Guru)
The talk presents a clear model, created by the speaker, for addressing every level from the operational to the strategic.
Presented by Gabriela Salinas at SG Virtual Conference.
In this document we analyze several concepts related to worksheets 1 and 2, and we conclude by explaining why it is important to develop our thinking skills.
Sara Sofia Bedoya Montezuma.
9-1.
BTV safes catalog, Amado Salvador, official distributor (AMADO SALVADOR)
Explore the complete catalog of BTV safes, available through Amado Salvador, official BTV distributor. The catalog presents a wide variety of safes, each designed to the highest quality to provide maximum security and meet our customers' diverse protection needs.
At Amado Salvador, as an official BTV distributor, we offer products that stand out for their innovation, durability, and robustness. BTV safes are recognized for their effective protection against theft, fire, and other risks, which makes them an ideal choice for both home and business use.
Amado Salvador ensures that every product meets the strictest quality and safety standards. When buying a safe through Amado Salvador, customers can rest assured that they are getting a reliable, durable solution for protecting their belongings.
This catalog includes the technical details, features, and customization options of each BTV safe model. From built-in safes to high-security models, Amado Salvador has the right solution for any security need. Discover all the benefits and features of BTV safes and protect what you value most with the quality and security that only BTV and Amado Salvador can offer.
Workshop: Data cleaning and preparation with Optimus and Apache Spark
1. Data Science with Apache Spark and Optimus
Favio Vázquez
Data Scientist
@faviovaz
https://github.com/faviovazquez
https://www.linkedin.com/in/faviovazquez/
March 15th, 2018
2. Outline
1. Introduction to Apache Spark and Optimus
Spark fundamentals
Optimus fundamentals
2. Data Science with Spark and Optimus
Data cleaning, exploration, and preparation with Spark and Optimus
Introduction to machine learning with Spark and Optimus
5. 2004 – Google
MapReduce: Simplified Data Processing on Large Clusters
Jeffrey Dean and Sanjay Ghemawat
2006 – Apache
Hadoop, originating from the Nutch Project
Doug Cutting
2008 – Yahoo
web scale search indexing
Hadoop Summit, HUG, etc.
2009 – Amazon AWS
Elastic MapReduce
Hadoop modified for EC2/S3, plus support for Hive, Pig, Cascading, etc.
research.google.com/archive/mapreduce.html
research.yahoo.com/files/cutting.pdf
developer.yahoo.com/hadoop/
aws.amazon.com/elasticmapreduce/
6. What is hard about Big Data?
The complex combination of execution threads, storage systems, and modes of work.
ETL, aggregations, machine learning, streaming, etc.
Very hard to achieve both productivity and performance.
7. What is it?
A very fast, general-purpose engine for large-scale parallel data processing.
8. Unified engine
High-level APIs with room for optimization.
Express the whole workflow with a single API.
Connect existing libraries and storage systems.
11. SparkContext
A Spark program first creates a SparkContext object, which tells Spark how and where to access a cluster.
The PySpark shell and Optimus create the sc variable automatically.
IPython and standalone programs must use a constructor to create a new SparkContext.
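To make that last point concrete, a minimal sketch of creating a SparkContext by hand in a standalone script; the app name and local master URL are illustrative:

```python
# Standalone SparkContext setup (illustrative app name and master URL).
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("optimus-workshop").setMaster("local[*]")
sc = SparkContext(conf=conf)

print(sc.version)  # confirm the context is up
```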
12. RDDs
Resilient Distributed Datasets (RDDs) are a distributed collection of immutable JVM objects that enable very fast computation, and they are the backbone of Apache Spark.
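A tiny illustrative example of the RDD API (not from the slides), reusing the context created above:

```python
# Transformations (map, filter) are lazy; the action (sum) runs the job.
from pyspark import SparkContext

sc = SparkContext.getOrCreate()
rdd = sc.parallelize(range(1, 11))        # distribute a local collection
squares = rdd.map(lambda x: x * x)        # lazy transformation
even_total = squares.filter(lambda x: x % 2 == 0).sum()  # action
print(even_total)  # 4 + 16 + 36 + 64 + 100 = 220
```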
25. Current solutions for cleaning data
Problems:
Hard to use and understand.
Most of them have their own language.
Not suitable for Big Data.
SampleClean is discontinued and far from easy to use.
Poor integration with machine learning.
Stuck at the cleaning stage.
Poorly documented.
A bit ugly.
26. Optimus
Optimus is the missing framework for cleaning and pre-processing data in a distributed fashion. It uses all the power of Apache Spark (optimized via Catalyst) to do so. It implements several handy tools for data wrangling and munging that will make your life much easier. The first obvious advantage over any other public data cleaning library or framework is that it will work on your laptop or your big cluster, and second, it is amazingly easy to install, use and understand.
https://github.com/ironmussa/Optimus
https://hioptimus.com
27. Optimus
Features:
Easy to install, use, and deploy.
Uses Python and PySpark.
Works on your laptop or on a cluster.
DFTransformer, with great methods for data cleaning and wrangling.
DFAnalyzer, with great methods for visualizing and understanding data.
Feature engineering methods (really easy to use).
Machine learning with Spark (easy and just as fast). Work in progress.
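The slides do not include code, so rather than guess at the Optimus 1.x API, here is a plain-PySpark sketch of the kind of column cleaning the DFTransformer described above automates; the column names and data are made up:

```python
# Plain PySpark stand-in for typical cleaning steps; NOT the Optimus API.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lower, regexp_replace, trim

spark = SparkSession.builder.appName("cleaning-sketch").getOrCreate()
df = spark.createDataFrame(
    [("  Ana María ", "MX"), ("JOSÉ!!", "mx")], ["name", "country"]
)

cleaned = (
    df.withColumn("name", trim(col("name")))                      # strip whitespace
      .withColumn("name", lower(col("name")))                     # normalize case
      .withColumn("name", regexp_replace("name", r"[!?#]+", ""))  # drop noise chars
      .withColumn("country", lower(col("country")))               # unify codes
)
cleaned.show()
```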