This document summarizes a presentation about Presslabs migrating their MySQL database from cloud-managed to Kubernetes-managed using their own MySQL Operator. Some key points:
- Presslabs built a MySQL Operator for Kubernetes to manage MySQL clusters with replication and provide high availability, automated backups, and other features.
- Migrating to Kubernetes-managed MySQL addressed needs for ease of operations, elasticity, availability, data safety, and observability.
- Challenges in the migration included integrating with their Orchestrator, cleaning up persistent volume claims, upgrading the operator, and handling MySQL upgrades.
- Future plans include adding more features to the operator and increasing community contributions and support.
Building Scalable Real-Time Data Pipelines with the Couchbase Kafka Connector... | HostedbyConfluent
Many organizations use Apache Kafka to facilitate the flow of data between multiple applications or data sources. Thanks to Kafka’s distributed architecture, it is easy to set up a scalable and reliable broker, but doing the same with producers or consumers is quite often a fine art. This session provides a quick overview of Couchbase, describes the Couchbase Kafka Connector, and showcases a demo of how it can be used as both a source and a sink for building real-time data processing pipelines for mission-critical applications.
Mainframe Integration, Offloading and Replacement with Apache Kafka | Kai Wae... | HostedbyConfluent
Legacy migration is a journey. Mainframes cannot be replaced in a single project. A big bang will fail. This has to be planned long-term.
Mainframe offloading and replacement with Apache Kafka and its ecosystem can be used to keep a more modern data store in real-time sync with the mainframe, while at the same time persisting the event data on the bus to enable microservices and delivering the data to other systems such as data warehouses and search indexes.
This session walks through the different steps some companies have already gone through. Technical options such as Change Data Capture (CDC), MQ, and third-party tools for mainframe integration, offloading, and replacement are explored.
Superior Streaming and CDN Solutions: Cloud Storage Revolutionizes Digital Media | Scality
Yannick Guillerm – Director Technical Marketing
Learn more:
http://www.scality.com/solutions-industries/media-entertainment-storage/
May 26, 2017
Presented at NAB 2017
eBay is one of the largest OpenStack-based clouds in the world. As eBay evolves into the world of containers and microservices, Kubernetes is quickly becoming a key platform. This talk is about how we applied our learnings from OpenStack to build a framework for managing the lifecycle of Kubernetes at scale.
Multi-Cloud Orchestration for Kubernetes with Cloudify - Webinar Presentation | Cloudify Community
Watch the webinar at:
http://cloudify.co/webinars/multi-cloud-orchestration-kubernetes
Tune in as we unveil the new capabilities for maximizing use of Kubernetes with the new Cloudify Kubernetes Plugin and the new Cloudify Kubernetes provider. Using Kubernetes with Cloudify has never been easier or more powerful: you can now easily provision workloads on both cloud-based VMs and containers, or have total control and flexibility by using Cloudify as a Kubernetes IaaS.
Kafka Excellence at Scale – Cloud, Kubernetes, Infrastructure as Code (Vik Wa... | HostedbyConfluent
Cloud is changing the world; Kubernetes is changing the world; real-time event streaming is changing the world. In this talk we explore some of the best practices for synergistically combining the power of these paradigm shifts to achieve a much greater return on your Kafka investments. From declarative deployments, zero-downtime upgrades, and elastic scaling to self-healing and automated governance, learn how you can bring the next level of speed, agility, resilience, and security to your Kafka implementations.
Episode 4: Operating Kubernetes at Scale with DC/OS | Mesosphere Inc.
You’ve installed your Kubernetes cluster on DC/OS — now what? Operating Kubernetes efficiently can be challenging. In the final episode of our Kubernetes series, we will share best practices for operating your DC/OS Kubernetes cluster and maintaining performance. During this presentation, Joerg Schad and Chris Gaun show you how to successfully operate Kubernetes at scale in your environment.
During this session, we discuss:
1. How to upgrade DC/OS and Kubernetes with no downtime
2. How DC/OS guards against failure and enables fault domains that are resistant to outages within racks, availability zones, or cloud environments
3. How the monitoring and metrics capabilities on DC/OS improve operational analytics and help you get the most from your cluster
4. How cloud bursting extends your on-prem environment with resources from the cloud to handle spikes in your workload
Operating Kubernetes at Scale (Australia Presentation) | Mesosphere Inc.
Kubernetes is an amazing technology, but getting it up and running in your data center or VMs is challenging. In this technical webinar, you will learn how best to deploy, operate, and scale Kubernetes clusters from one to hundreds of nodes using DC/OS.
Jörg Schad and Adrian Smolski from Mesosphere show how to run Kubernetes on DC/OS, as well as how to integrate and run Kubernetes alongside traditional applications and fast data services of your choice (e.g. Apache Cassandra, Apache Kafka, Apache Spark, TensorFlow, and more) on any infrastructure.
You will learn how to:
1. Deploy Kubernetes in a secure, highly available, and fault-tolerant manner on DC/OS
2. Solve the operational challenges of running large or multiple Kubernetes clusters
3. One-click deploy stateful and stateless big data services alongside a Kubernetes cluster
Jörg is a Technical Lead for Community Projects at Mesosphere in San Francisco. His speaking experience includes various Meetups, international conferences, and lecture halls.
Adrian Smolski is the local Field CTO based out of Sydney, Australia. His background is big data, data science and distributed systems.
There is a transformation brewing for DevOps in the age of Kubernetes. The tools of the trade, configuration management solutions, have been superseded in agility and preference by development teams who want the declarative choreography of containerized applications. The new preference for mixing development and operations is the site reliability engineering (SRE) model championed by Google. In this new structure, the need to automate doesn't stop at the containerized application, and DevOps professionals should seek to automate the Kubernetes service itself.
Discover how to accelerate the modernization of your Java Enterprise applications with no refactoring. Without re-architecting or re-writing, we will show you how to modernize painlessly to achieve faster time-to-market, simplified deployment and scaling, improved security, painless patching, and save money on infrastructure resources and licensing cost.
A Look into the Mirror: Patterns and Best Practices for MirrorMaker2 | Cliff ... | HostedbyConfluent
From migrations between Apache Kafka clusters to multi-region deployments across datacenters, the introduction of MirrorMaker2 has expanded the possibilities for Apache Kafka deployments and use cases. In this session you will learn about patterns, best practices, and learnings compiled from running MirrorMaker2 in production at every scale.
If you implement a microservice architecture correctly, you will end up with a proliferation of different microservices, with multiple instances of each one for redundancy. Find out how to get microservices to automatically discover each other and share a configuration with real-time updates. See how to eliminate server management altogether with "serverless" microservice frameworks.
How Confluent Completes the Event Streaming Platform (Addison Huddy & Dan Ros... | HostedbyConfluent
Apache Kafka fundamentally changes how organizations build and deploy a universal data pipeline that is scalable, reliable, and durable enough to meet the needs of digital-first organizations. However, as powerful as Kafka is today, it’s not an event-streaming platform - and getting it there on your own is a long, complicated, and expensive process. Earlier this year Confluent announced Project Metamorphosis - our plan to bring the best characteristics of cloud native systems to Apache Kafka. Since May we’ve begun transforming Confluent Cloud and Confluent Platform to do just that.
Join two of our Product Managers, Dan Rosanova and Addison Huddy, to: Learn how we've evolved Confluent Cloud with the first phase of Project Metamorphosis releases
See how Confluent Platform 6.0 brings these transformational, cloud-like qualities to self-managed Kafka
Get a sneak peek of our next Metamorphosis theme and how it impacts your Kafka and event-streaming strategy.
Best Practices for Managing Kubernetes and Stateful Services: Mesosphere & Sy... | Mesosphere Inc.
Gain a complete understanding of how to quickly and easily implement a Kubernetes cluster, scale it out post implementation based on consumption, and conduct Day 2 activities with minimal operational impact. Also, learn how to include deep data on containers for monitoring and security.
By using a modern platform like DC/OS, you will be able to quickly add additional services like portability to public clouds, real-time analytics, or machine learning. Learn how customers have reduced hardware costs by improving the density of these applications and, in many instances, improved scalability and resiliency.
Dok Talks #111 - Scheduled Scaling with Dask and Argo Workflows | DoKC
https://go.dok.community/slack
https://dok.community/
ABSTRACT OF THE TALK
Complex computational workloads in Python are a common sight these days, especially in the context of processing large and complex datasets. Battle-hardened modules such as NumPy, Pandas, and Scikit-Learn can perform low-level tasks, while tools like Dask make it easy to parallelize these workloads across distributed computational environments. Meanwhile, Argo Workflows offers a Kubernetes-native solution for provisioning cloud resources in Kubernetes and triggering workflows on a regular schedule. Being Kubernetes-native, Argo Workflows also meshes nicely with other Kubernetes tools. This talk discusses the combination of these two worlds by showcasing a set-up for Argo-managed workflows which schedule and automatically scale out Dask-powered data pipelines in Python.
BIO
Former academic in the field of renewable energy simulation and energy systems analysis. Currently responsible for architecting and maintaining the cloud and data strategy at ACCURE Battery Intelligence.
KEY TAKE-AWAYS FROM THE TALK
Argo Workflows + Dask is a nice combination for data-processing pipelines. There are a few "gotchas" to be on the lookout for, but nevertheless this is still a generally applicable and powerful combination.
https://github.com/sevberg
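The fan-out/fan-in shape of the Dask pipelines the talk describes can be sketched in plain Python. This is a minimal stand-in: `ThreadPoolExecutor` plays the role of a Dask cluster, and the names `clean` and `run_pipeline` are illustrative inventions, not code from the talk; in the talk's setup, an Argo CronWorkflow would trigger such a pipeline on a schedule.

```python
from concurrent.futures import ThreadPoolExecutor

def clean(chunk):
    # per-chunk transform, e.g. dropping invalid sensor readings
    return [x for x in chunk if x >= 0]

def run_pipeline(data, n_workers=4):
    # fan-out: split the dataset and process chunks in parallel,
    # the role dask.delayed / client.map plays on a real cluster
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        cleaned = pool.map(clean, chunks)
    # fan-in: aggregate the partial results
    return sum(len(c) for c in cleaned)
```

On a real Dask deployment the same structure is expressed with delayed tasks, and the scale-out part comes from Argo provisioning worker pods before the run.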
Navigating the observability storm with Kafka | Jose Manuel Cristobal, Adidas | HostedbyConfluent
When all your stores are closed, e-commerce becomes your biggest store… and the most challenging. That means a myriad of systems orchestrated to make it happen, all of them scaling out accordingly and implementing observability and SRE practices to support this growth while preserving stability and reliability.
How can we detect problems and root causes, and react? How can we predict those problems?
HOLMES is the adidas solution to accelerate problem detection, giving a holistic view of technical systems through metrics and logs democratisation.
In this talk, we'll show how the Kafka technology stack allows adidas to support the ingestion and offloading of all the company's logs and metrics: a platform whose adoption skyrocketed during 2020, supporting 100 billion messages per day.
The main takeaway will be the explanation of a cutting-edge solution based on the Kafka technology stack (Kafka, Kafka Streams, and Kafka Connect) for a demanding-throughput ecosystem.
Complex Analytics with NoSQL Data Store in Real Time | Nati Shalom
NoSQL data stores are often limited in the type of queries they can support due to the distributed nature of the data. In this session we will learn patterns for overcoming this limitation and combining multiple query semantics with NoSQL-based engines.
We will specifically demonstrate a combination of key/value, SQL-like, document-model, and graph-based queries, as well as more advanced topics such as handling partial updates and querying through projection. We will also demonstrate how we can create a mashup between those APIs, i.e. write fast through the key/value API and execute complex queries on that same data through SQL.
See more at: http://nosql2014.dataversity.net/sessionPop.cfm?confid=81&proposalid=6335
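The write-through-key/value, query-through-SQL mashup the abstract describes can be illustrated with a toy store; here sqlite3 stands in for the NoSQL engine, and the `MashupStore` API is a hypothetical sketch, not the product shown in the session.

```python
import sqlite3

class MashupStore:
    """Toy store exposing a fast key/value write path and a SQL
    read path over the same records."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE items (k TEXT PRIMARY KEY, category TEXT, price REAL)")

    def put(self, key, category, price):
        # key/value-style upsert: the fast write path
        self.db.execute(
            "INSERT OR REPLACE INTO items VALUES (?, ?, ?)", (key, category, price))

    def get(self, key):
        # key/value-style point read
        return self.db.execute(
            "SELECT category, price FROM items WHERE k = ?", (key,)).fetchone()

    def query(self, sql, params=()):
        # complex read path: arbitrary SQL over the same data
        return self.db.execute(sql, params).fetchall()
```

Writes go through `put` without composing SQL at the call site, while analytical reads reuse the full SQL engine on the same rows, which is the shape of the pattern the session generalizes to distributed NoSQL engines.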
The Kubernetes cloud native landscape is vast. Delivering a solution requires managing a puzzling array of required tooling, monitoring, disaster recovery, and other solutions that lie outside the realm of the central cluster. The governing body of Kubernetes, the Cloud Native Computing Foundation, has developed guidance for organizations interested in this topic by publishing the Cloud Native Landscape, but while a list of options is helpful it does not give operations and DevOps professionals the knowledge they need to execute.
Learn best practices for setting up and managing the tools needed around Kubernetes. This presentation covers popular open source options (to avoid lock-in) and how one can implement and manage these tools on an ongoing basis. Learn from, and do not repeat, the mistakes of previous centralized platforms.
In this session, attendees will learn:
1. Cloud Native Landscape 101 - Prometheus, Sysdig, NGINX, and more. Where do they all fit in a Kubernetes solution?
2. Avoiding the OpenStack sprawl of managing a multiverse of required tooling in the Kubernetes world.
3. Leverage technology like Kubernetes, now available on DC/OS, to provide part of the infrastructure framework that helps manage cloud native application patterns.
An Introduction to Confluent Cloud: Apache Kafka as a Service | confluent
Business breakout during Confluent’s streaming event in Munich, presented by Hans Jespersen, VP WW Systems Engineering at Confluent. This three-day hands-on course focused on how to build, manage, and monitor clusters using industry best-practices developed by the world’s foremost Apache Kafka™ experts. The sessions focused on how Kafka and the Confluent Platform work, how their main subsystems interact, and how to set up, manage, monitor, and tune your cluster.
The Future of Enterprise Applications is Serverless | Eficode
Jarkko Hirvonen, Manager, Solutions Architecture, AWS Nordics
In 2014 AWS introduced serverless computing with AWS Lambda. Since then, serverless has become one of the hottest topics in the industry. What is serverless, and what are the key trends and architecture patterns you should be aware of? Witness how AWS does it.
Live Event Debugging With ksqlDB at Reddit | Hannah Hagen and Paul Kiernan, R... | HostedbyConfluent
Convincing developers to write tests for new code is hard; convincing developers to write tests for new event data is even harder. At Reddit, engineers have often deployed new app versions, only to find out later that the event wasn’t firing at all, or it was missing critical fields. So this begs the question, “How can engineers at Reddit be confident that the events they instrument are accurate and complete?”
In this session, we will learn about an internal tool developed at Reddit to QA events in real-time. This KSQL-powered web app streams events from our pipeline, allowing developers to filter events they care about using criteria like User ID, Device ID or the type of user interaction. With a backbone of KSQL and Kafka Streams, engineers can get real-time feedback on how accurate (or how erroneous) their event data is.
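The kind of predicate such a KSQL-backed QA tool pushes onto the stream can be sketched in plain Python. The field names and the `REQUIRED_FIELDS` schema below are hypothetical illustrations, not Reddit's actual event schema.

```python
REQUIRED_FIELDS = {"user_id", "device_id", "action"}  # illustrative schema

def qa_filter(events, user_id=None, action=None):
    """Yield (event, missing_fields) pairs for events matching the
    developer's criteria -- the same filtering a KSQL query such as
    SELECT * FROM events WHERE user_id = '...' would express."""
    for event in events:
        if user_id is not None and event.get("user_id") != user_id:
            continue
        if action is not None and event.get("action") != action:
            continue
        # surface incomplete events so engineers see missing fields live
        yield event, sorted(REQUIRED_FIELDS - event.keys())
```

In the real tool the filtering runs server-side in KSQL/Kafka Streams, so the web app only receives the already-matched events.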
Leader in Cloud and Object Storage for Service Providers | Scality
Cloud-based services are growing as they become real opportunities for service providers. Discover more about Scality RING Software-Defined Object Storage. Learn more at www.scality.com.
Don't Cross the Streams! (or do, we got you) | Caito Scherr
Ghostbusters better get ready, because it's time to cross (ok, join) some streams! This talk will include easy-to-follow steps to set up and maximize a powerful, streaming data pipeline with the newest features from Apache Flink. This talk is for anyone using (or interested in) stream processing who wants to minimize their development overhead, and particularly for those who want to do so while leveraging available Open Source tools.
Keeping Analytics Data Fresh in a Streaming Architecture | John Neal, QlikHostedbyConfluent
Qlik is an industry leader across its solution stack, both on the Data Integration side of things with Qlik Replicate (real-time CDC) and Qlik Compose (data warehouse and data lake automation), and on the Analytics side with Qlik Sense. These two “sides” of Qlik are coming together more frequently these days as the need for “always fresh” data increases across organizations.
When real-time streaming applications are the topic du jour, those companies are looking to Apache Kafka to provide the architectural backbone those applications require. Those same companies turn to Qlik Replicate to put the data from their enterprise database systems into motion at scale, whether that data resides in “legacy” mainframe databases; traditional relational databases such as Oracle, MySQL, or SQL Server; or applications such as SAP and SalesForce.
In this session we will look in depth at how Qlik Replicate can be used to continuously stream changes from a source database into Apache Kafka. From there, we will explore how a purpose-built consumer can be used to provide the bridge between Apache Kafka and an analytics application such as Qlik Sense.
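The bridge consumer's core job, turning a change stream into an always-fresh table for the analytics side, can be sketched as follows. The `{'op', 'key', 'row'}` event shape is a simplification invented for illustration, not Qlik Replicate's actual payload format.

```python
def apply_change(table, event):
    # apply one CDC-style change event to a local materialized view
    op = event["op"]
    if op in ("insert", "update"):
        table[event["key"]] = event["row"]
    elif op == "delete":
        table.pop(event["key"], None)
    return table

def materialize(events):
    # a purpose-built Kafka consumer would run this loop per topic,
    # handing the resulting table to the analytics application
    table = {}
    for event in events:
        apply_change(table, event)
    return table
```

Replaying the change events in order reproduces the current state of the source table, which is what keeps an application like Qlik Sense "always fresh" without re-extracting the full dataset.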
Kubera is a SaaS platform (also available on-premises) that simplifies the use of Kubernetes as a data plane and is free for individual usage.
Core capabilities include:
Visualization of a Kubernetes environment, including stateful workloads and the resources serving them
Data resilience capabilities, such as cross-availability-zone configuration, crash-consistent backups, pre-staged disaster recovery, chaos test integration, and more
Off-cluster logging and alerting
Autoconfiguration and management of OpenEBS Enterprise Edition
Integrated support services from MayaData
Cloud Migration Paths: Kubernetes, IaaS, or DBaaS | EDB
Moving to the cloud is hard, and moving Postgres databases to the cloud is even harder. Public cloud or private cloud? Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Kubernetes for the application, or for the database and the application? This talk will juxtapose self-managed Kubernetes and container-based database solutions, Postgres deployments on IaaS, and Postgres DBaaS solutions of which EDB’s DBaaS BigAnimal is the latest example.
Operating Kubernetes at Scale (Australia Presentation)Mesosphere Inc.
Kubernetes is an amazing technology, but getting it up and running in your data center or VMs is challenging. In this technical webinar, you will learn how best to deploy, operate, and scale Kubernetes clusters from one to hundreds of nodes using DC/OS.
Jörg Schad and Adrian Smolski from Mesosphere show how to run Kubernetes on DC/OS, as well as how to integrate and run Kubernetes alongside traditional applications and fast data services of your choice (e.g. Apache Cassandra, Apache Kafka, Apache Spark, TensorFlow, and more) on any infrastructure.
You will learn how to:
1. Deploy Kubernetes in a secure, highly available, and fault-tolerant manner on DC/OS
2. Solve operational challenges of running a large/multiple Kubernetes cluster(s)
3. One-click deploy big data stateful and stateless services alongside a Kubernetes cluster
Jörg is a Technical Lead for Community Projects at Mesosphere in San Francisco. His speaking experience includes various Meetups, international conferences, and lecture halls.
Adrian Smolski is the local Field CTO based out of Sydney, Australia. His background is big data, data science and distributed systems.
There is a transformation brewing for DevOps in age of Kubernetes. The tools of the trade, configuration management solutions, have been superseded in agility and preference by development teams who want the declarative choreography of containerized applications. The new preference for mixing developer and operations is the site reliability engineering (SRE) model championed by Google. In this new structure, the need to automate doesn’t stop at the containerized application and DevOps professionals should seek to automate the Kubernetes service itself.
Discover how to accelerate the modernization of your Java Enterprise applications with no refactoring. Without re-architecting or re-writing, we will show you how to modernize painlessly to achieve faster time-to-market, simplified deployment and scaling, improved security, painless patching, and save money on infrastructure resources and licensing cost.
A Look into the Mirror: Patterns and Best Practices for MirrorMaker2 | Cliff ...HostedbyConfluent
From migrations between Apache Kafka clusters to multi-region deployments across datacenters, the introduction of MirrorMaker2 has expanded the possibilities for Apache Kafka deployments and use cases. In this session you will learn about patterns, best practices, and learnings compiled from running MirrorMaker2 in production at every scale.
If you implement a microservice architecture correctly, you will end up with a proliferation of different microservices; with multiple instances of each one for redundancy. Find out how you to get microservices to automatically discover each other, share a configuration with real-time updates. See how to eliminate server management altogether with "serverless" microservice frameworks.
How Confluent Completes the Event Streaming Platform (Addison Huddy & Dan Ros...HostedbyConfluent
Apache Kafka fundamentally changes how organizations build and deploy a universal data pipeline that is scalable, reliable, and durable enough to meet the needs of digital-first organizations. However, as powerful as Kafka is today, it’s not an event-streaming platform - and getting it there on your own is a long, complicated, and expensive process. Earlier this year Confluent announced Project Metamorphosis - our plan to bring the best characteristics of cloud native systems to Apache Kafka. Since May we’ve begun transforming Confluent Cloud and Confluent Platform to do just that.
Join two of our Product Managers, Dan Rosanova and Addison Huddy to: Learn how we’ve evolved Confluent Cloud with the first phase of Project Metamorphosis releases
See how Confluent Platform 6.0 brings these transformational, cloud-like qualities to self-managed Kafka
Get a sneak peak of our next Metamorphosis theme and how it impacts your Kafka and event-streaming strategy.
Best Practices for Managing Kubernetes and Stateful Services: Mesosphere & Sy...Mesosphere Inc.
Gain a complete understanding of how to quickly and easily implement a Kubernetes cluster, scale it out post implementation based on consumption, and conduct Day 2 activities with minimal operational impact. Also, learn how to include deep data on containers for monitoring and security.
By using a modern platform like DC/OS, you will be able to quickly add additional services like portability to public clouds, real time analytics or machine learning. Learn how customers have reduced HW costs by improving density of these applications and in many instances improve scalability and resiliency.
Dok Talks #111 - Scheduled Scaling with Dask and Argo WorkflowsDoKC
https://go.dok.community/slack
https://dok.community/
ABSTRACT OF THE TALK
Complex computational workloads in Python are a common sight these days, especially in the context of processing large and complex datasets. Battle-hardened modules such as Numpy, Pandas, and Scikit-Learn can perform low-level tasks, while tools like Dask makes it easy to parallelize these workloads across distributed computational environments. Meanwhile, Argo Workflows offers a Kubernetes-native solution to provisioning cloud resources in Kubernetes and triggering workflows on a regular schedule. Being Kubernetes-native, Argo Workflows also meshes nicely with other Kubernetes tools. This talk discusses the combination of these two worlds by showcasing a set-up for Argo-managed workflows which schedule and automatically scale-out Dask-powered data pipelines in Python.
BIO
Former academic in the field of renewable energy simulation and energy systems analysis. Currently responsible for architecting and maintaining the cloud- and data strategy at ACCURE Battery Intelligence
KEY TAKE-AWAYS FROM THE TALK
Argo Workflows + Dask is a nice combination for data-processing pipelines. There are a a few "gotchyas" to be on the look-out for, but in nevertheless this is still a generally-applicable and powerful combination.
https://github.com/sevberg
Navigating the obdervability storm with Kafka | Jose Manuel Cristobal, AdidasHostedbyConfluent
When all your stores are closed, e-commerce becomes your bigger store… and the most challenging. That means a myriad of systems orchestrated to make it happen, all of them scaling out accordingly and implementing Observability and SRE practices to support this growth, preserving stability and reliability.
How can we detect problems, root causes and react? How can we predict those problems?
HOLMES is the adidas solution to accelerate problem detection, giving a holistic view of technical systems through metrics and logs democratisation.
In this talk, we'll show how Kafka technology stack allows adidas to support the ingestion and offload of all logs and metrics of the company. A platform which adoption has skyrocketed during 2020, supporting 100 Billion messages per day.
The main takeaway will be the explanation of a cutting-edge solution based on kafka technology stack (kafka, Kafka Streams and Kafka Connect) for demanding throughput ecosystem.
Complex Analytics with NoSQL Data Store in Real TimeNati Shalom
NOSQL are often limited in the type of queries that they can support due to the distributed nature of the data. In this session we would learn patterns on how we can overcome this limitation and combine multiple query semantics with NoSQL based engines.
We will demonstrate specifically a combination of key/value, SQL like, Document model and Graph based queries as well as more advanced topic such as handling partial update and query through projection. We will also demonstrate how we can create a meshaup between those API's i.e. write fast through Key/Value API and execute complex queries on that same data through SQL query.
- See more at: http://nosql2014.dataversity.net/sessionPop.cfm?confid=81&proposalid=6335#sthash.PNSZi5TJ.dpuf
The Kubernetes cloud native landscape is vast. Delivering a solution requires managing a puzzling array of required tooling, monitoring, disaster recovery, and other solutions that lie outside the realm of the central cluster. The governing body of Kubernetes, the Cloud Native Computing Foundation, has developed guidance for organizations interested in this topic by publishing the Cloud Native Landscape, but while a list of options is helpful it does not give operations and DevOps professionals the knowledge they need to execute.
Learn best practices of setting up and managing the tools needed around Kubernetes. This presentation covers popular open source options (to avoid lock in) and how one can implement and manage these tools on an ongoing basis. Learn from, and do not repeat, the mistakes of previous centralized platforms.
In this session, attendees will learn:
1. Cloud Native Landscape 101 - Prometheus, Sysdig, NGINX, and more. Where do they all fit in a Kubernetes solution?
2. Avoiding the OpenStack sprawl of managing a multiverse of required tooling in the Kubernetes world.
3. Leverage technology like Kubernetes, now available on DC/OS, to provide part of the infrastructure framework that helps manage cloud native application patterns.
An Introduction to Confluent Cloud: Apache Kafka as a Serviceconfluent
Business breakout during Confluent’s streaming event in Munich, presented by Hans Jespersen, VP WW Systems Engineering at Confluent. This three-day hands-on course focused on how to build, manage, and monitor clusters using industry best-practices developed by the world’s foremost Apache Kafka™ experts. The sessions focused on how Kafka and the Confluent Platform work, how their main subsystems interact, and how to set up, manage, monitor, and tune your cluster.
The Future of Enterprise Applications is ServerlessEficode
The Future of Enterprise Applications is Serverless
Jarkko Hirvonen, Manager, Solutions Architecture, AWS Nordics
In 2014 AWS introduced serverless computing with AWS Lambda. Since then, serverless has become one of the hottest topics in the industry. What is serverless, and what are the key trends and architecture patterns you should be aware of? Witness how AWS does it.
Live Event Debugging With ksqlDB at Reddit | Hannah Hagen and Paul Kiernan, R...HostedbyConfluent
Convincing developers to write tests for new code is hard; convincing developers to write tests for new event data is even harder. At Reddit, engineers have often deployed new app versions, only to find out later that the event wasn’t firing at all, or it was missing critical fields. So this begs the question, “How can engineers at Reddit be confident that the events they instrument are accurate and complete?”
In this session, we will learn about an internal tool developed at Reddit to QA events in real-time. This KSQL-powered web app streams events from our pipeline, allowing developers to filter events they care about using criteria like User ID, Device ID or the type of user interaction. With a backbone of KSQL and Kafka Streams, engineers can get real-time feedback on how accurate (or how erroneous) their event data is.
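The filtering idea behind such a tool can be sketched in a few lines (the event shape and helper names are invented for illustration; the real app does this server-side with KSQL over Kafka topics):

```python
# Sketch: filter a stream of event dicts by developer-chosen criteria such as
# user ID, device ID, or interaction type.
def match(event, **criteria):
    """True if the event carries every requested field with the requested value."""
    return all(event.get(k) == v for k, v in criteria.items())

def filter_events(stream, **criteria):
    """Lazily yield only the events the developer cares about."""
    for event in stream:
        if match(event, **criteria):
            yield event

events = [
    {"user_id": "u1", "device_id": "d9", "action": "click"},
    {"user_id": "u2", "device_id": "d9", "action": "view"},
    {"user_id": "u1", "device_id": "d7", "action": "view"},
]
mine = list(filter_events(events, user_id="u1", action="view"))
print(mine)  # → [{'user_id': 'u1', 'device_id': 'd7', 'action': 'view'}]
```

A missing field simply never matches, which is exactly how an engineer spots an event that "wasn't firing at all, or was missing critical fields."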
Leader in Cloud and Object Storage for Service ProvidersScality
Cloud-based services are growing as they become real opportunities for service providers. Discover more about Scality RING Software-Defined Object Storage. Learn more at www.scality.com.
Don't Cross the Streams! (or do, we got you)Caito Scherr
Ghostbusters better get ready, because it's time to cross (ok, join) some streams! This talk will include easy-to-follow steps to set up and maximize a powerful, streaming data pipeline with the newest features from Apache Flink. This talk is for anyone using (or interested in) stream processing who wants to minimize their development overhead, and particularly for those who want to do so while leveraging available Open Source tools.
Keeping Analytics Data Fresh in a Streaming Architecture | John Neal, QlikHostedbyConfluent
Qlik is an industry leader across its solution stack, both on the Data Integration side of things with Qlik Replicate (real-time CDC) and Qlik Compose (data warehouse and data lake automation), and on the Analytics side with Qlik Sense. These two “sides” of Qlik are coming together more frequently these days as the need for “always fresh” data increases across organizations.
When real-time streaming applications are the topic du jour, those companies are looking to Apache Kafka to provide the architectural backbone those applications require. Those same companies turn to Qlik Replicate to put the data from their enterprise database systems into motion at scale, whether that data resides in “legacy” mainframe databases; traditional relational databases such as Oracle, MySQL, or SQL Server; or applications such as SAP and SalesForce.
In this session we will look in depth at how Qlik Replicate can be used to continuously stream changes from a source database into Apache Kafka. From there, we will explore how a purpose-built consumer can be used to provide the bridge between Apache Kafka and an analytics application such as Qlik Sense.
Kubera is a SaaS platform - also available on-premise - that simplifies the use of Kubernetes as a data plane and that is free for individual usage.
Core capabilities include:
- Visualization of a Kubernetes environment, including stateful workloads and the resources serving them
- Data resilience capabilities, such as cross-availability-zone configuration, crash-consistent backups, pre-staged disaster recovery, chaos test integration, and more
- Off-cluster logging and alerting
- Autoconfiguration and management of OpenEBS Enterprise Edition
- Integrated support services from MayaData
Cloud Migration Paths: Kubernetes, IaaS, or DBaaSEDB
Moving to the cloud is hard, and moving Postgres databases to the cloud is even harder. Public cloud or private cloud? Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Kubernetes for the application, or for the database and the application? This talk will juxtapose self-managed Kubernetes and container-based database solutions, Postgres deployments on IaaS, and Postgres DBaaS solutions of which EDB’s DBaaS BigAnimal is the latest example.
OpenEBS is a container-native open source containerized storage project for containers – tightly integrated into Kubernetes.
You can find the full presentation here: https://www.facebook.com/VMTNcommunity/videos/2008142932762386/
About the Talk:
The cloud native ecosystem is changing how DevOps is practised in cloud native organisations. Developers and operators use tools and platforms like Kubernetes to achieve the agility promised by DevOps and microservices. The tools and best practices for stateless applications are well established, and the results can be seen in the agility of teams running them. Stateful applications, however, pose new challenges for DevOps teams, as best practices around persistent storage management are still emerging. In this talk we first discuss the challenges DevOps teams face with persistent storage in stateful applications, and then cover the open source tools and best practices for achieving data agility in cloud native applications.
Multi-Cloud Orchestration for Kubernetes with CloudifyCloudify Community
This presentation details Cloudify's Kubernetes plugin as well as Kubernetes Provider, offering complete integration with K8s and delivering multi-cloud container-based orchestration.
Are you actively using or moving to Office 365, G-Suite, or other popular cloud applications? If so, how confident are you that you can keep all of that critical data protected? Attend this session to learn how Veritas can help protect data across all of your different cloud applications--using the same solution you use to protect your existing non-cloud applications. Don't miss this opportunity to explore the advantages of using one unified solution to protect all of your data--across all of your physical, virtual, and cloud environments.
Pivotal Container Service: The New Solution for Managing Kubernetes in the EnterpriseVMware Tanzu
Modern applications are deployed in hours rather than days or weeks, enabling companies to accelerate time-to-value and deliver a better experience to their end customers. One of the fastest ways to go from idea to production is to have a consistent, reliable container management platform that helps developers ship software faster and helps IT simplify operations.
VMware and Pivotal bring their combined expertise together to offer a complete container management solution with Pivotal Container Service (PKS).
Join your peers at this free one-hour event to learn how companies can deploy containers on vSphere with PKS, simplifying the management of a Kubernetes environment from installation (day 1) through infrastructure upgrades and evolution (day 2).
Webinar agenda:
- Kubernetes and container orchestration
- Managing containers and Kubernetes in production environments with VMware and Pivotal Container Service (PKS)
- Modernizing applications with PKS
- Demo of Pivotal Container Service and its integrations with VMware infrastructure
- Webinar wrap-up and Q&A
Presenters:
Fabio Chiodini, Advisory Platform Architect EMEA, Pivotal; Ruggero Citterio, Senior Systems Engineer, VMware
The combination of StackPointCloud with NetApp creates NetApp Kubernetes Service, the industry’s first complete Kubernetes platform for multi-cloud deployments and a complete cloud-based stack for Azure, Google Cloud, AWS, and NetApp HCI. Further, Trident is a fully supported open source project maintained by NetApp, designed from the ground up to help meet the sophisticated persistence demands of containerized applications.
One And Done Multi-Cloud Load Balancing Done Right.pptxAvi Networks
Did you know that on average, it takes organizations more than three months using legacy load balancers to scale their load balancing capacity? That includes tedious policy management, expensive over-provisioning (or even more expensive under-provisioning), and the risk of supply-chain delays.
Join us for an eye-opening discussion of application delivery done right. By following the guiding principles of a cloud operating model, your team can get operational simplicity, multi-cloud consistency, pervasive analytics, holistic security and full life-cycle automation. This means less time spent on manual, repetitive tasks and troubleshooting, freeing up more time to proactively manage and automate your load balancers.
Watch a replay of the webinar: https://www.youtube.com/watch?v=BtzPgLBy56w
451 Research and NuoDB outline the key database criteria for cloud applications. Explore how applications deployed in the cloud require a combination of standard functionality, such as ANSI SQL, and new capabilities specifically required to take full advantage of cloud economics, such as elastic scalability and continuous availability.
This presentation was given at the Fachhochschule Bern. The course was part of the Master's program, and we covered the topics of Cloud Native & Docker.
In 2018's user conference keynote MariaDB CEO, Michael Howard, announced an initiative to build a MariaDB DBaaS platform. In this session, the DBaaS team shares how MariaDB is approaching DBaaS, then discusses the role of containers and Kubernetes, the need for infrastructure-agnostic provisioning, support for day-two operations and enterprise requirements for large-scale DBaaS deployments.
Driving Digital Transformation With Containers And Kubernetes Complete DeckSlideTeam
Introducing Kubernetes Concepts And Architecture PowerPoint Presentation Slides. This readily available open-source architecture PPT infographic explains the concept of containers. You can also depict the architecture of containers and microservices with the help of a visually appealing PPT slideshow. Our content-ready containers PPT slideshow allows you to showcase an organization's reasons for opting for Kubernetes. Depict the roadmap for installing Kubernetes in the organization in a presentable manner by using this slide design. The major advantages of Kubernetes, such as stable application runs, improved productivity, and many more, can be presented in this slide deck. Cover a 30-60-90-day plan to implement Kubernetes in the organization with these thoroughly researched PowerPoint templates. Discuss the key components of Kubernetes with a diagram using these modern cluster architecture PowerPoint layouts, and describe each element's functionality using these PowerPoint visuals. Manage clusters efficiently by downloading the Kubernetes architecture PPT slides. https://bit.ly/3p6xEoS
Use GitLab with Chaos Engineering to Harden your Applications + OpenEBS 1.3 ...MayaData Inc
If you were not at the GitLab Commit conferences in New York and London, here’s an opportunity to attend our popular talk on using chaos engineering in Gitlab pipelines for faster hardening. As cloud native applications are coming to life faster than anyone could have imagined, the explosion of microservices empowers developers while also making it increasingly difficult to build pipelines that validate changes outside of their (or their SREs') control.
Chaos engineering has emerged as a way to introduce faults into systems to increase their resiliency and Litmus, part of OpenEBS Enterprise Platform, can shake out a lot of bugs.
We are also glad to announce that OpenEBS 1.3 has been released and we will review the new features added.
Webinar: Data Protection for KubernetesMayaData Inc
In this webinar, we will back-up many live workloads to the Cloudian Hyperstore from a Kubernetes environment running on a particular cloud. We will demonstrate the value of Cloudian’s WORM capabilities to show how workloads and their data can be protected from ransomware attacks. Later, we will recover workloads from the Cloudian HyperStore to another cloud vendor. We will also demonstrate streaming back-ups for use in cloud and hardware switch overs and other use cases.
Kubera from MayaData is the first solution to extend the per workload management of data offered by Container Attached Storage to back-ups and disaster recovery. Kubera is often used by small teams to establish and manage back-up policies whereby data is backed up to S3 compatible object storage. Kubera can also be used to provide a comprehensive view across all workloads of back-up and retention policies and to enable back-ground cloud migration and disaster recovery.
A deep dive into running data analytic workloads in the cloudCloudera, Inc.
Aishwarya Venkataraman, Jason Wang, Mala Ramakrishnan, Stefan Salandy, and Vinithra Varadharajan lead a deep dive into running data analytic workloads in a managed service capacity in the public cloud and highlight cloud infrastructure best practices.
PartnerSkillUp_Enable a Streaming CDC SolutionTimothy Spann
PartnerSkillUp_Enable a Streaming CDC Solution
Tim Spann
Principal Developer Advocate in Data In Motion for Cloudera, Global
https://attend.cloudera.com/skillupseriesseptember14
Streaming Change Data Capture (CDC), Two Unique Ways
In this session, learn how to use Debezium with Flink, Kafka, and NiFi for Change Data Capture using two different mechanisms: Kafka Connect and Flink SQL.
With the virtual nature of today's world, streaming data is more critical than ever. Join Cloudera Chief Data-In-Motion Principal, Tim Spann, and Partner Solution Engineer, Salvador Alamazan, as they look closely at key CDC use cases, discuss why Debezium is the best option for handling CDC, and use examples to demonstrate its value.
This is a must-attend experience!
Zoom Webinar
September 14, 2023
10:00am–11:00am EDT
FLaNK Stack
Apache NiFi
Apache Flink
Apache Kafka
Kafka Connect
Flink SQL
Cloudera DataFlow
Cloudera SQL Stream Builder
Cloudera Streams Messages Manager
Debezium
Postgresql
IBM DB2
Oracle DB
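As a sketch of the Kafka Connect mechanism, a Debezium source connector is registered with a JSON configuration like the following (host, credentials, and names are placeholders; property names follow recent Debezium releases, which use `topic.prefix` where older releases used `database.server.name`):

```json
{
  "name": "inventory-cdc",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "***",
    "database.dbname": "inventory",
    "topic.prefix": "inventory"
  }
}
```

Posted to the Kafka Connect REST API, this starts streaming row-level changes from the database's log into Kafka topics, where Flink SQL or NiFi can pick them up.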
Similar to KubeCon: Democratizing MySQL, Cloud Managed to K8s Managed (20)
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptxrickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge to organize and improve your code review process.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Large Language Models and the End of ProgrammingMatt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but the extensions reached 63K downloads (powering possibly tens of thousands of websites).
AI Pilot Review: The World’s First Virtual Assistant Marketing SuiteGoogle
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
How Recreation Management Software Can Streamline Your Operations.pptxwottaspaceseo
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
2. Democratizing MySQL: Cloud Managed to K8S Managed @presslabs
Sachin Manpathak, Technical Lead @Platform9
Flavius Mecea, Project Lead @Presslabs
Case study: migrating from cloud-managed SQL to K8s-managed, and the story of building the Presslabs Operator for MySQL.
00. Who we are
01.1 Who is Presslabs
➔ WordPress development agency
➔ Managed WordPress hosting
➔ Record: 2.2 BN pageviews in a month
➔ Top tier in enterprise hosting
➔ Open-source stack
(timeline: 2007, 2011, 2013, 2015, 2018)
01.3 Presslabs objectives
➔ MySQL Operator for WordPress hosting
➔ Open infrastructure using Kubernetes to run and operate WordPress
01.4 Why Kubernetes?
➔ Runs everywhere
➔ Open-source
➔ We had experience with containers before they were cool
➔ Our core services have been running on Kubernetes since version 1.7
➔ Support for a lot of integrations
03.2 MySQL Operator
A Kubernetes Operator for managing MySQL clusters with asynchronous or semi-synchronous replication:
✓ Self-healing clusters
✓ Highly available reads
✓ Virtually highly available writes
✓ Replication lag detection and mitigation
✓ Resource abuse control
✓ Automated backups and restores
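A cluster is declared through a custom resource; a minimal manifest of the shape the operator accepts looks roughly like this (a sketch based on the project's published examples; the name and secret are placeholders, and field names may differ across operator versions):

```yaml
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2                 # one master plus replicas, managed by the operator
  secretName: my-cluster-secret  # Secret holding the MySQL root credentials
```

Applying this with `kubectl apply` is all a user does; the operator reconciles the StatefulSet, replication topology, and backups from it.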
03.3 Architecture overview
➔ Control plane
  ◆ Operator
  ◆ Orchestrator
➔ Data plane
  ◆ MySQL deployment
➔ Monitoring
  ◆ Prometheus
03.5 MySQL Node
➔ Init container:
  ◆ MySQL configuration
➔ Main container:
  ◆ Percona Server for MySQL
➔ Sidecar:
  ◆ Lag detection and monitoring
  ◆ Resource abuse control
  ◆ Backups and initializations
04.1 Orchestrator integration
➔ Orchestrator is a MySQL high availability and replication management tool
➔ State reconciliation between Orchestrator and Kubernetes
04.2 PVC clean-up
➔ MySQL nodes keep data in Persistent Volumes
➔ Scaling down does not delete PVCs
➔ Scaling up may then be an issue because of obsolete data
➔ Solution: delete the PVC at scale-down
➔ Special case for node 0
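The scale-down rule above can be sketched as a small decision function (an invented helper, not the operator's code; the PVC names follow the usual StatefulSet `<claim>-<set>-<ordinal>` convention and are illustrative):

```python
# Sketch: on scale-down from `old_replicas` to `new_replicas`, the PVCs of the
# removed StatefulSet members hold obsolete data and should be deleted --
# except node 0's, which gets special treatment.
def pvcs_to_delete(cluster, old_replicas, new_replicas):
    stale = []
    for ordinal in range(new_replicas, old_replicas):
        if ordinal == 0:  # special case: node 0's volume is kept
            continue
        stale.append(f"data-{cluster}-mysql-{ordinal}")
    return stale

print(pvcs_to_delete("prod", 3, 1))  # → ['data-prod-mysql-1', 'data-prod-mysql-2']
```

Deleting at scale-down, rather than lazily at the next scale-up, is what prevents a new replica from ever attaching a volume with stale data.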
04.3 Operator upgrades / deployment
➔ Helm is the standard for packaging apps in Kubernetes
➔ CRD management is painful
➔ No CRD validation
➔ helm.sh/crd-install
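The crd-install workaround referenced above was a Helm 2 hook annotation placed on the CRD manifest, so the CRD was created before the chart's other resources were validated (an illustrative sketch; Helm 3 later replaced this hook with a dedicated `crds/` directory):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mysqlclusters.mysql.presslabs.org   # illustrative CRD name
  annotations:
    "helm.sh/hook": crd-install             # Helm 2: install this before anything else
```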
04.4 MySQL Upgrade
➔ Default policy: Rolling Updates
➔ Not gentle for MySQL
➔ The master should be the last one standing
✘ Pod finalizer
✘ Container lifecycle hooks
➔ Recommended policy: On Delete
30. Democratizing MySQL: Cloud Managed to K8S Managed @Platform9Sys
● Customer base growth == substantial increase in public cloud costs
● At ~300 cloud regions, the RDS bill alone amounted to tens of thousands of dollars
Problem of Scale
Infra evolution
2014-2016: AWS for compute + RDS
2017-2018: Private cloud for compute + RDS
2019-2020: Private cloud for compute + DB
● Automation built around the public self-service API
● Reliance on RDS snapshots, performance charts and alerting
● No in-house MySQL expertise
● Required comparable performance
● Needed a drop-in replacement for MySQL to minimize impact
Hurdles to DBaaS
Requirement                              | MySQL Operator feature set
Simple, self-service, open API           | K8s CRD implementation
Drop-in replacement for MySQL            | Percona: 100% compatible
Automated backups, API-driven recovery   | Scheduled backups to S3
High availability & failover             | Replica support with automated failover via Orchestrator
Open source                              | Yes
Built-in monitoring                      | Yes: Prometheus metrics
Searching for DBaaS
The Rollout
Current State
● ~10 accounts using MySQL managed on K8s by the operator
● 3 managed multi-master K8s clusters: Dev, Stage and Prod
● Automated failover with 3-AZ deployment
Plan:
● 100% deployments managed with MySQL operator
● Standardize on Operator Paradigm: Prometheus Monitoring, Log collection, etc.
Hello everyone, I’m happy to be here with you. Thank you for joining our session on Democratizing MySQL Cloud Managed to K8S Managed.
My name is Flavius Mecea and I will talk about...
This is a joint session with Platform9's Sachin Manpathak. Unfortunately he is not present here today because of visa issues, but he prepared a video for you, and his colleague Daniel is here to answer your questions.
He will talk about a case study on migrating from cloud managed to k8s managed
Sa-chin Man-Pa-tak
First of all, I want to start by presenting the MySQL Operator.
I will go through these five main topics:
Context in which the operator was born
The needs that we had in mind when we first started building
What we have achieved, the operator overview
Also, some challenges that we encountered during development
And the project status and future plans
First, let me introduce the company that I work for, Presslabs and the context in which the operator was born.
We are a managed WordPress hosting company, doing business for more than 10 years
We started as a WP dev agency, then we pivoted towards the hosting business.
After serving both publishers and Enterprise clients for several years, we came to realize that all the companies (including us) in the global top Enterprise tier were doing the same thing.
That’s when we started thinking about the Stack; an open-source infrastructure that could become the standard in WP hosting.
It’s not just about the stack. It’s our commitment to our mission to democratize WP hosting infrastructure, to share our accumulated knowledge.
As part of our mission, we have 2 key objectives:
We are currently building an open infrastructure using Kubernetes to run and operate WordPress, named: Presslabs Stack
The other one is building the MySQL Operator;
Because half of WordPress hosting is about MySQL
Runs everywhere (dev’s laptop, public clouds, private data centers)
Open-source
We had experience with containers before they were cool
Our core services have run on K8s since version 1.7
It already offers support for a lot of integrations: cert manager, nginx(Ingress), Prometheus(monitoring)
As part of the Stack we needed a way to automate certain operations such as: deploying, scaling, maintaining and backing-up MySQL
For that:
We’ve identified some key requirements (for the infrastructure), to focus on:
First we wanted something that is easy to operate, that doesn’t get in our way
Second, we needed an elastic service, to help us scale with the demand.
You all know that in hosting, service uptime is paramount, so we have to maximize the availability of our service
Also, no one wants to lose data, especially when it comes to someone else’s data
And in order to reliably operate the service(system) we need a method to observe what’s happening from top, down to the request level.
With this in mind we checked some of the available solutions and concluded that they were not suitable for us.
For example, both the Oracle and Percona operators use group replication, which requires more nodes to operate (at least 3) and thus increases costs, so they were not suitable for us.
As great engineers do, we’ve ended up building a solution ourselves.
In the past 10 years, we've identified several must-have features, which have been integrated into the Operator to fulfill our basic needs, such as:
Self-healing clusters - without this feature the operator doesn't make sense; it has to continuously reconcile and solve replication issues
Highly available reads - when more nodes are available
Virtually highly available writes - minimal downtime thanks to fast failovers
Replication lag detection and mitigation - takes lagging nodes out of rotation when lag is above a set threshold, or in case of unhealthy nodes
Resource abuse control - useful to limit noisy queries that may slow down the cluster
Automated backups and restores - this one speaks for itself.
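The lag-mitigation rule above can be sketched as a simple filter. This is an illustrative Python sketch, not the operator's actual code (the operator itself is written in Go); the `Node` type, the `healthy_nodes` function and the 30-second threshold are all assumptions for the example:

```python
from dataclasses import dataclass
from typing import List, Optional

LAG_THRESHOLD_SECONDS = 30  # assumed threshold; configurable per cluster in practice

@dataclass
class Node:
    name: str
    replication_lag: Optional[float]  # None when lag cannot be measured (unhealthy node)

def healthy_nodes(nodes: List[Node]) -> List[Node]:
    """Keep only nodes whose lag is measurable and below the threshold."""
    return [
        n for n in nodes
        if n.replication_lag is not None
        and n.replication_lag <= LAG_THRESHOLD_SECONDS
    ]

nodes = [
    Node("mysql-0", 0.5),
    Node("mysql-1", 120.0),  # lagging: taken out of rotation
    Node("mysql-2", None),   # unhealthy: lag unknown
]
print([n.name for n in healthy_nodes(nodes)])  # → ['mysql-0']
```

In the real operator this decision feeds the healthy-nodes Service selector, so lagging replicas simply stop receiving read traffic.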
All of these features have proven to be very helpful compared with our old setup.
Now let’s move on to the practical aspects of building the operator.
This figure presents the entire system overview.
The architecture is split into 3 main parts: control plane, data plane and monitoring
The control plane consists of the operator and its components, which are deployed using helm, usually in a dedicated namespace. Here we have:
The controller itself
The Orchestrator, a MySQL high availability and replication management tool (I will come back to it later)
The data plane represents a MySQL deployment, made of basic k8s resources (like pods, services, etc) which can be spread across multiple namespaces.
And last but not least we have monitoring which is performed by Prometheus, the standard k8s monitoring system.
Going deeper into the dataplane, we can see that the MySQL cluster has multiple components:
Statefulset - that represents the main resource, which provisions the pods and the PVs for each MySQL node.
Also there are 2 services for each cluster:
Master service - that always points to the master MySQL node
Healthy nodes service - that points to all the pods that are considered healthy by the operator
The selections are made based on K8S labels, which are set by the operator based on information gathered from Orchestrator
Your application interacts with those two services for writes and reads, and it's the application's responsibility to split them, either with app-specific logic or with dedicated software like ProxySQL.
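As a sketch of that read/write split, an application could route statements between the two services like this; the service naming scheme shown here is illustrative, not necessarily the operator's exact convention:

```python
def service_for(statement: str, cluster: str = "my-cluster", ns: str = "default") -> str:
    """Pick the master service for writes, the healthy-nodes service for reads."""
    master_svc = f"{cluster}-mysql-master.{ns}.svc"  # always points to the current master
    healthy_svc = f"{cluster}-mysql.{ns}.svc"        # points to all healthy nodes
    is_read = statement.lstrip().lower().startswith(("select", "show"))
    return healthy_svc if is_read else master_svc

print(service_for("SELECT * FROM wp_posts"))     # reads go to the healthy-nodes service
print(service_for("UPDATE wp_options SET a=1"))  # writes go to the master service
```

ProxySQL does essentially this routing for you, which is why the operator plans to integrate it.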
Internally, a node consists of several components:
Init containers: for MySQL initialization and configuration
A main container: which is the Percona Server for MySQL. We chose Percona because it’s battle tested in enterprise environments and a MySQL drop-in replacement.
Sidecar containers:
Some of them are based on the Percona Toolkit and are responsible for several actions: lag detection, MySQL monitoring and resource limit policy enforcement.
There is an extra container that provides an endpoint for node initialization or for backups.
I want to mention some specific challenges we’ve had during the implementation of this operator such as:
Orchestrator integration - how do we integrate Orchestrator, a third-party tool, so we don’t have to reinvent the wheel?
PV clean-up - we have to manage PVs ourselves because the way K8S manages PVs is not suitable for MySQL (later we'll see why)
Operator upgrades - a common problem for operators, because Helm provides very modest CRD support
MySQL upgrades - a specific problem, because it's usually done by humans and is difficult to automate, especially on k8s
Let me start by presenting the Orchestrator integration.
Orchestrator is a subcomponent of the entire operator and it’s the tool that handles MySQL topology and failovers ... but it’s not meant to be stateless, as operators usually are.
K8s keeps a state and Orchestrator keeps a state, so the operator doesn't know which one to listen to: an information flow conflict.
To fix this, we chose to implement a reconciliation loop between Orchestrator and K8S which, every few seconds, reconciles the state between the two.
On one hand, Orchestrator is responsible for updating the replication topology (in emergency situations) and for observing the current status of the MySQL cluster.
On the other hand, the Operator reconciles the desired replication topology into Orchestrator and provides service discovery. Even if the Orchestrator data is lost, the operator is able to restore all of it to Orchestrator.
In conclusion, the operator takes decisions based only on the information found in k8s, which is up to date thanks to the reconciliation loop.
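One pass of that reconciliation loop can be sketched as follows. This is a toy model with hypothetical names, not the operator's real code: Orchestrator reports observed roles, the operator writes them back to Kubernetes as pod labels (which the Services select on), and re-registers any instances Orchestrator has lost:

```python
def reconcile(k8s_pods, orchestrator_state):
    """One reconciliation pass.

    k8s_pods: pod names that Kubernetes says belong to the topology.
    orchestrator_state: {pod: role} as currently observed by Orchestrator.
    Returns (labels to apply to pods, pods to re-register with Orchestrator).
    """
    # Service discovery: re-register pods Orchestrator doesn't know about,
    # e.g. after Orchestrator lost its data.
    to_register = [p for p in k8s_pods if p not in orchestrator_state]
    # Propagate observed roles into pod labels; the master and healthy-nodes
    # Services select pods based on these labels.
    labels = {p: {"role": orchestrator_state.get(p, "unknown")} for p in k8s_pods}
    return labels, to_register

labels, to_register = reconcile(["mysql-0", "mysql-1"], {"mysql-0": "master"})
print(to_register)        # → ['mysql-1']
print(labels["mysql-0"])  # → {'role': 'master'}
```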
Another challenge was how k8s manages PVs.
The MySQL data is being stored in PVs, managed by the statefulset.
But this implies that when a cluster is scaled down, the volume is not deleted, so after a while the data might become obsolete, and when the statefulset is scaled up again the replication can fail.
To fix this, we've implemented a cleaner that deletes the PVC when the cluster is scaled down, except for node 0, which is a special case: its data should be kept as long as the cluster exists, to avoid losing cluster data.
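The clean-up rule can be sketched like this, assuming the usual StatefulSet PVC naming convention (`data-<cluster>-mysql-<ordinal>`); the function name and convention are assumptions for illustration:

```python
def pvcs_to_delete(cluster: str, old_replicas: int, new_replicas: int):
    """PVC names to delete after a scale-down, sparing node 0's claim."""
    return [
        f"data-{cluster}-mysql-{i}"
        for i in range(new_replicas, old_replicas)
        if i != 0  # node 0 is special-cased: its data is kept for the cluster's lifetime
    ]

print(pvcs_to_delete("prod", old_replicas=3, new_replicas=1))
# → ['data-prod-mysql-1', 'data-prod-mysql-2']
print(pvcs_to_delete("prod", old_replicas=2, new_replicas=0))
# → ['data-prod-mysql-1']  (node 0's PVC survives)
```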
A common problem in the world of operators is CRD management.
Currently the de facto standard for packaging applications is Helm. If you are a Helm user you probably know that CRD management is still very painful, because Helm does not provide an upgrade path for CRDs.
What is more, the MySQL Operator is still in development and the CRD specifications are still subject to change.
This made us install CRDs without validation, to minimize user intervention during upgrades.
However we hope that this is a temporary solution until Helm improves its support for managing CRDs.
A specific challenge for this operator is how MySQL upgrades are performed.
K8S already provides some upgrade policies, like:
Rolling Updates (the default update policy) - not exactly gentle with MySQL, as it can choose to upgrade the master first, which forces a failover to the replica. Then, when the replica is updated, it triggers another failover, which is unnecessary and can be avoided if the master is the last one to be updated.
That’s why the master should be the last one standing to avoid failover flip-flop, or downtime
A contributor came up with the idea of using the On Delete policy, which fits our needs better because the operator can choose which pod to update, so we can control the order in which the pods are upgraded.
We tried to use other techniques, as well, like pod finalizers, to block pod deletion until the failover is done.
But we hit a dead end because we misunderstood how k8s finalizers work.
Using container lifecycle hooks to trigger a failover proved to be too complicated.
So we chose to implement the On Delete policy, which is still work in progress.
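Under the On Delete policy the ordering logic reduces to putting the master last, so at most one failover happens, at the very end of the rollout. A minimal sketch with hypothetical names:

```python
def update_order(pods, master):
    """Order in which the operator deletes (and K8s recreates) pods:
    replicas first, the master as the last one standing."""
    replicas = [p for p in pods if p != master]
    return replicas + [master]

print(update_order(["mysql-0", "mysql-1", "mysql-2"], master="mysql-0"))
# → ['mysql-1', 'mysql-2', 'mysql-0']
```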
Now, I want to share with you the current status of the project and future plans.
Integration with marketplaces - like Google Cloud Marketplace, OperatorHub, AWS Marketplace - to make it easier for end users to install it
We would like to finish what we’ve started, so we would add CRD validation and webhooks
Multiple backup policies - for granular control over backups
To make it easy for your application to connect to the cluster, we want to integrate ProxySQL: instead of using those two services, the app can connect only to ProxySQL, which will do the routing for you.
The Operator is still in alpha version and we’re really close to beta.
We have good feedback from the community, and some major platforms actively use and contribute to the Operator.
I would like to invite you to visit the project page on GitHub and, for any questions, to join the #mysql-operator Slack channel.
Coming up next, my co-presenter Daniel will continue with the second part of the presentation. Let's encourage him with a round of applause. Thank you for your time.
Customer base growth == Substantial increase in public cloud costs
At ~300 cloud regions, just RDS bill amounted to 10s of thousands
Prototype: MySQL as a backend on managed on-prem Kubernetes & storage
Lessons:
Multi-master Kubernetes is essential
Storage story is still developing (as of v1.10), better (as of v1.13)
MySQL backups save lives!