Modernization patterns to refactor a legacy application into event driven mic... (Bilgin Ibryam)
A use-case-driven introduction to the most common design patterns for modernizing monolithic legacy applications to microservices using Apache Kafka, Debezium, and Kubernetes.
Kubernetes: The evolution of distributed systems | DevNation Tech Talk (Red Hat Developers)
Kubernetes has evolved to provide capabilities for managing the lifecycle of distributed applications such as deployment, scaling, configuration, and isolation of resources. It addresses needs such as service discovery, networking, bindings to APIs, and state management through controllers, custom resources, and extensions like operators, service meshes, and serverless platforms like Knative. Emerging technologies are exploring hybrid deployments, edge computing, improved state abstractions, and integration across runtimes and clouds.
This document provides an agenda and overview of Kafka on Kubernetes. It begins with an introduction to Kafka fundamentals and messaging systems. It then discusses key ideas behind Kafka's architecture like data parallelism and batching. The rest of the document explains various Kafka concepts in detail like topics, partitions, producers, consumers, and replication. It also introduces Kubernetes concepts relevant for running Kafka like StatefulSets, StorageClasses and the operator pattern. The goal is to help understand how to build event-driven systems using Kafka and deploy it on Kubernetes.
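The partition mechanics summarized above can be illustrated with a tiny sketch: Kafka's default partitioner hashes a record's key (with murmur2) and takes it modulo the partition count, so the same key always lands on the same partition. The md5 hash below is a stand-in chosen purely for illustration, not Kafka's actual algorithm.

```python
import hashlib

def assign_partition(key: bytes, num_partitions: int) -> int:
    # Same key -> same partition: hash the key and take it modulo the
    # partition count. (Kafka's real default partitioner uses murmur2;
    # md5 here is just a convenient stand-in.)
    digest = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return digest % num_partitions
```

This per-key stickiness is what lets consumers rely on ordering within a partition while still scaling out across partitions.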
Build your operator with the right tool (Rafał Leszko)
The document discusses different tools that can be used to build Kubernetes operators, including the Operator SDK, Helm, Ansible, Go, and operator frameworks like KOPF. It provides an overview of how each tool can be used to generate the scaffolding and implement the logic for a sample Hazelcast operator.
The document discusses Kubernetes and cloud native application design. It begins by defining cloud native as structuring teams and technology around automation and microservices packaged as containers orchestrated by platforms like Kubernetes. It then covers common Kubernetes resources like pods, services, deployments and Kubernetes design patterns like sidecars, init containers and immutable configuration. The document advocates principles for container-based applications including single concern, self-containment and image immutability. It also recommends techniques like using volumes for persistent data and logging to standard output/error.
Serverless Workflow: New approach to Kubernetes service orchestration | DevNa... (Red Hat Developers)
With the rise of Serverless Architectures, Workflows have gained a renewed interest and usefulness. Typically thought of as centralized and monolithic, they now play a key role in service orchestration and coordination as well as modular processing. With many different architecture approaches already in place, the Cloud Native Computing Foundation (CNCF) has started an initiative to specify serverless workflows to ensure portability and vendor neutrality. In this talk, we introduce the CNCF Serverless Workflow specification and provide examples and demos on top of Kogito, Red Hat's business automation toolkit. You will learn: (1) the what, why, and how of the CNCF Serverless Workflow specification; (2) why using the Serverless Workflow specification and orchestration can improve your serverless architecture; and (3) when to use CNCF Serverless Workflow and Kogito together, and the benefits derived.
Kubernetes is awesome! But what does it take for a Java developer to design, implement, and run Cloud Native applications? In this session, we will look at Kubernetes from a user point of view and demonstrate how to consume it effectively. We will discover which concerns Kubernetes addresses and how it helps to develop highly scalable and resilient Java applications.
FOSDEM TALK: https://fosdem.org/2017/schedule/event/cnjavadev/
Kafka at the Edge: an IoT scenario with OpenShift Streams for Apache Kafka | ... (Red Hat Developers)
This document discusses Apache Kafka and Red Hat OpenShift Streams for Apache Kafka. It begins with an overview of what Apache Kafka is and its common use cases. It then demonstrates how Red Hat OpenShift Streams provides a managed Apache Kafka cluster as a service, including a dedicated cluster, configuration management, metrics, monitoring and other features to provide a streamlined developer experience. It concludes with information on trying OpenShift Streams for Apache Kafka and additional resources.
Give Your Confluent Platform Superpowers! (Sandeep Togrika, Intel and Bert Ha...) (HostedbyConfluent)
Whether you are a die-hard DC comic enthusiast, mad for Marvel, or completely clueless when it comes to comic books, at the end of the day each of us would love to possess the superpower to transform data in seconds versus minutes or days. But architects and developers are challenged with designing and managing platforms that scale elastically and combine event streams with stored data to enable more contextually rich data analytics. This is made even more complex by data coming from hundreds of sources, and in hundreds of terabytes, or even petabytes, per day.
Now, with Apache Kafka and Intel hardware technology advances, organizations can turn massive volumes of disparate data into actionable insights with the ability to filter, enrich, join and process data instream. Let's consider Information Security. IT leaders need to ensure all company data and IP is secured against threats and vulnerabilities. A combination of real-time event streaming with Confluent Platform and Intel Architecture has enabled threat detection efforts that once took hours to be completed in seconds, while simultaneously reducing technical debt and data processing and storage costs.
In this session, Confluent and Intel architects will share detailed performance benchmarking results and a new joint reference architecture. We'll detail ways to remove Kafka performance bottlenecks, improve platform resiliency, and ensure high availability using Confluent Control Center and Multi-Region Clusters. And we'll offer tips for addressing challenges that you may be facing in your own super-heroic efforts to design, deploy, and manage your organization's data platforms.
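The kind of in-stream filter-and-enrich step this abstract alludes to can be sketched generically. The event fields, blocklist, and severity threshold below are invented for illustration; they are not part of Confluent's or Intel's actual architecture.

```python
def filter_and_enrich(events, blocked_sources):
    # Drop events from blocked sources, then enrich the survivors with a
    # derived severity field -- a toy stand-in for the instream
    # filter/enrich/join processing described above.
    for event in events:
        if event["src"] in blocked_sources:
            continue
        yield {**event, "severity": "high" if event["score"] > 7 else "low"}
```

Because the pipeline is a generator, events are processed as they arrive rather than batched into a store first, which is the essence of the "seconds versus minutes" claim.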
The Evolution of Distributed Systems on Kubernetes (Bilgin Ibryam)
Cloud native applications of the future will consist of hybrid workloads: stateful applications, batch jobs, stateless microservices, functions, (and maybe something else too) wrapped as Linux containers and deployed via Kubernetes on any cloud. Functions and the so-called serverless computing model is the latest evolution of what started as SOA years ago. But is it the last step of the application architecture evolution and is it here to stay? During this talk, we will take you on a journey exploring distributed application needs and how they evolved with Kubernetes, Istio, Knative, Dapr, and other projects. By the end of the session, you will know what is coming after microservices.
This document summarizes a meetup presentation about deploying Kong API gateway with Mesosphere DC/OS. The presentation was given by Shashi Ranjan and Cooper Marcus of Kong and covered how Kong can help manage microservices and act as a central API gateway. It discussed how Kong provides functionality like authentication, security, logging, and load balancing through plugins. The document also provides an overview of Kong editions, plugins, and common enterprise installations.
Architectural patterns for high performance microservices in Kubernetes (Rafał Leszko)
The document discusses various architectural patterns for distributed in-memory caching in Kubernetes microservices including embedded, embedded distributed, client-server, cloud, sidecar, reverse proxy, and reverse proxy sidecar patterns. It provides examples of implementing each pattern using the Hazelcast in-memory data grid and summaries of the pros and cons of each approach.
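The simplest of those patterns, the embedded cache, can be sketched in a few lines. This is a plain in-process dictionary; Hazelcast's embedded mode layers distribution, eviction, and replication on top of the same basic idea.

```python
import functools

def embedded_cache(fn):
    # Embedded pattern: the cache lives inside the application process
    # itself, so reads are local and fast -- at the cost of per-instance
    # memory and cold caches after every restart.
    store = {}

    @functools.wraps(fn)
    def wrapper(key):
        if key not in store:
            store[key] = fn(key)
        return store[key]

    return wrapper
```

The client-server and sidecar variants discussed in the talk move `store` out of the application process, trading local-read latency for shared state and independently scalable cache capacity.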
Better Kafka Performance Without Changing Any Code | Simon Ritter, Azul (HostedbyConfluent)
Apache Kafka is the most popular open-source stream-processing software for collecting, processing, storing, and analyzing data at scale. Most known for its excellent performance, low latency, fault tolerance, and high throughput, it's capable of handling thousands of messages per second. For mission-critical applications, how do you ensure that the performance delivered is the performance required? This is especially important as Kafka is written in Java and Scala and runs on the JVM. The JVM is a fantastic platform that delivers at internet scale.
In this session, we'll explore how making changes to the JVM design can eliminate the problems of garbage collection pauses and raise the throughput of applications. For cloud-based Kafka applications, this can deliver both lower latency and reduced infrastructure costs. All without changing a line of code!
From Postgres to Event-Driven: using docker-compose to build CDC pipelines in... (confluent)
Mark Teehan, Principal Solutions Engineer, Confluent
Use the Debezium CDC connector to capture database changes from a Postgres database (or MySQL or Oracle), streaming into Kafka topics and onwards to an external data store. Examine how to set up this pipeline using Docker Compose and Confluent Cloud, and how to use various payload formats, such as Avro, Protobuf, and JSON Schema.
https://www.meetup.com/Singapore-Kafka-Meetup/events/276822852/
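Whichever serialization format is chosen, the change events themselves loosely follow Debezium's before/after envelope. A minimal JSON sketch (field names simplified for illustration; the real connector emits a much richer schema and source block):

```python
import json

def change_event(before, after, table, op):
    # op codes mirror Debezium's convention: "c"=create, "u"=update,
    # "d"=delete. This envelope is a simplified illustration, not the
    # exact connector output.
    return json.dumps({
        "before": before,
        "after": after,
        "source": {"table": table},
        "op": op,
    })
```

Having both the before and after images in each event is what lets downstream consumers rebuild state or compute diffs without querying the source database.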
During this talk, Bilgin will take you on a journey exploring distributed application needs and how they evolved with Kubernetes, Istio, Knative, Dapr, and other projects. By the end of the session, you will know what is coming after microservices.
Have you ever tried Java on AWS Lambda but found that the cold-start latency and memory usage were far too high? In this session, we will show how we optimized Java for serverless applications by leveraging GraalVM with Quarkus to provide both supersonic startup speed and a subatomic memory footprint.
Managing Stateful Services with the Operator Pattern in Kubernetes - Kubernet... (Jakob Karalus)
While it's easy to deploy stateless applications with Kubernetes, it's harder for stateful software. Since applications often require custom functionality that Kubernetes can't provide, developers want to add more specialized behavior like automatic backups, failover, or rebalancing to their Kubernetes deployments. In this talk, we will look at the Operator Pattern and other possibilities for extending the functionality of Kubernetes, and how to use them to operate stateful applications.
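At its core, an operator is a reconcile loop: compare the desired state declared in a custom resource with the observed state of the cluster, and emit whatever actions converge them. A deliberately simplified sketch, with the Kubernetes API machinery stripped away and an invented action vocabulary:

```python
def reconcile(desired, observed):
    # Compare desired replica counts against observed ones and emit the
    # scaling actions needed to converge -- the essence of an operator's
    # control loop. Real operators would issue API calls instead of
    # returning strings, and also handle backups, failover, etc.
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(f"scale-up {name} by {want - have}")
        elif have > want:
            actions.append(f"scale-down {name} by {have - want}")
    return actions
```

Running this comparison repeatedly (level-triggered, not event-triggered) is what makes operators robust to missed events and restarts.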
Machine Learning Exchange (MLX) is a catalog and execution engine for AI assets including pipelines, models, datasets and notebooks. It allows users to upload, register, execute and deploy these assets. MLX generates sample pipeline code and uses Kubeflow Pipelines powered by Tekton as its pipelines engine. It integrates with services like KFServing for model serving, Dataset Lifecycle Framework for data management, and MAX/DAX for pre-registered datasets and models. MLX provides APIs, UI and SDK to interact with these AI assets.
The document discusses monitoring an OpenShift cluster with Prometheus. It describes what components need monitoring, including nodes, services, and pods. Prometheus is well-integrated for Kubernetes monitoring. The architecture proposed uses Prometheus to scrape metrics from targets like nodes and services, with alerting configured and dashboards built. It references existing Prometheus mixins for Kubernetes and OpenShift monitoring best practices. Special design choices like using remote write and a Blackbox exporter are highlighted.
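For context, the text format Prometheus scrapes from its targets is deliberately simple. A toy renderer for the bare metric lines (omitting the HELP/TYPE metadata and labels a real exporter would include; metric names below are invented):

```python
def render_metrics(samples):
    # Emit one "name value" line per sample, as in the Prometheus text
    # exposition format. Sorted for deterministic output; real exporters
    # also emit # HELP / # TYPE comments and optional {label="..."} sets.
    return "".join(f"{name} {value}\n" for name, value in sorted(samples.items()))
```

Anything that can serve this format over HTTP becomes a scrape target, which is why nodes, services, and pods can all be monitored uniformly.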
This document provides an overview of container management and Kubernetes concepts. It discusses delivery and deployment methods like classic deployment, containers, virtualization, and container orchestration. It then covers Kubernetes components like etcd, the control plane, and nodes. It outlines cluster administration tasks and best practices for cluster usage. Finally, it provides examples of Kubernetes resource types like pods, replica sets, and deployments.
How Confluent Completes the Event Streaming Platform (Addison Huddy & Dan Ros...) (HostedbyConfluent)
Confluent Platform 6.0 and Project Metamorphosis complete the event streaming platform by providing elastic scalability, infinite storage, global access, and transforming Kafka. Key features include self-balancing clusters and dynamic scaling on Confluent Cloud, tiered storage and infinite retention on the platform, and cluster linking to simplify hybrid and multi-cloud deployments. These new capabilities help remove limitations on scale, storage, and deployment that traditionally challenged Kafka applications.
Serverless stream processing of Debezium data change events with Knative | De... (Red Hat Developers)
Come and join us for an (almost) no-slides session around the terrific trio of Debezium, Apache Kafka Streams, and Knative Eventing! Leveraging Apache Kafka as the de facto standard for event-driven data pipelines, these open-source technologies allow you to ingest data changes from relational and NoSQL databases, process and enrich them, and consume them serverless-style. In a live demo, you'll see how Debezium, Apache Kafka, Quarkus, and Knative are the dream team for building serverless, cloud-native stream processing pipelines. You will learn: how to stream change events out of your database using Debezium; how to use the Quarkus extension for Kafka Streams to build cloud-native stream processing applications, running either on the JVM or GraalVM; and how to consume and distribute Kafka messages with Knative Eventing, allowing you to manage modern serverless workloads on Kubernetes.
This three-day course teaches developers how to build applications that can publish and subscribe to data from an Apache Kafka cluster. Students will learn Kafka concepts and components, how to use Kafka and Confluent APIs, and how to develop Kafka producers, consumers, and streams applications. The hands-on course covers using Kafka tools, writing producers and consumers, ingesting data with Kafka Connect, and more. It is designed for developers who need to interact with Kafka as a data source or destination.
OSDC 2018 | Three years running containers with Kubernetes in Production by T... (NETWAYS)
The talk gives a state-of-the-art update on experiences with deploying applications in Kubernetes at scale. Whether in the cloud or on premises, Kubernetes has taken over the leading role as a container operating system. The central paradigm of stateless containers connected to storage and services is the core of Kubernetes. However, it can be extended to distributed databases, machine learning, and Windows VMs in Kubernetes. All of these applications were considered edge cases a few years ago but are becoming more and more mainstream today.
Model Driven SDLC using Docker #gopaddle #dockermeetup (Vinothini Raju)
The document discusses model driven software development lifecycle (SDLC) using Docker. It describes using models for requirements, design, testing, and composition. Models are used to define services, dependencies, build processes and deployment configuration. The SDLC can be implemented from the models using forward or reverse engineering to generate Dockerfiles, images and docker-compose files to build, test and deploy applications as containers.
Securing Kafka At Zendesk (Joy Nag, Zendesk) Kafka Summit 2020 (confluent)
Kafka is one of the most important foundation services at Zendesk. It became even more crucial with the introduction of the Global Event Bus, which my team built to propagate events between Kafka clusters hosted in different parts of the world and between different products. As part of its rollout, we had to add mTLS support in all of our Kafka clusters (we have quite a few of them) to make propagation of events between clusters in different parts of the world secure. It was quite a journey, but we eventually built a solution that is working well for us.
Things I will be sharing as part of the talk:
1. Establishing the use case/problem we were trying to solve (why we needed mTLS)
2. Building a Certificate Authority with open source tools (with self-signed Root CA)
3. Building helper components to generate certificates automatically and regenerate them before they expire (this allows a shorter TTL (Time To Live), which is good security practice) for both Kafka clients and brokers
4. Hot reloading regenerated certificates on Kafka brokers without downtime
5. What we built to rotate the self-signed root CA without downtime as well across the board
6. Monitoring and alerts on TTL of certificates
7. Performance impact of using TLS (along with why TLS affects Kafka's performance)
8. What we are doing to drive adoption of mTLS for existing Kafka clients using PLAINTEXT protocol by making onboarding easier
9. How this will become a base for other features we want, e.g. ACLs and rate limiting (by using the principal from the TLS certificate as the identity of clients)
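The renew-before-expiry logic in item 3 can be sketched as a simple threshold check. The 80% renewal point and function names here are illustrative assumptions, not Zendesk's actual implementation:

```python
from datetime import datetime, timedelta, timezone

def should_renew(not_after, ttl, renew_fraction=0.8):
    # Renew once a configurable fraction of the certificate's TTL has
    # elapsed, so short-lived certs are replaced well before expiry.
    # not_after: certificate expiry time; ttl: its total lifetime.
    issued_at = not_after - ttl
    elapsed = datetime.now(timezone.utc) - issued_at
    return elapsed >= ttl * renew_fraction
```

Renewing early like this is what makes the short TTLs mentioned in the talk practical: clients always hold a certificate with comfortable validity remaining, and hot reloading (item 4) picks up the replacement without downtime.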
Meetup 12-12-2017 - Application Isolation on Kubernetes (dtoledo67)
Here are the slides I presented on 12-12-2017 at the Bay Area Microservices Meeting, covering some of the best practices for achieving application isolation on Kubernetes.
Container technologies use namespaces and cgroups to provide isolation between processes and limit resource usage. Docker builds on these technologies using a client-server model and additional features like images, containers, and volumes to package and run applications reliably and at scale. Kubernetes builds on Docker to provide a platform for automating deployment, scaling, and operations of containerized applications across clusters of hosts. It uses labels and pods to group related containers together and services to provide discovery and load balancing for pods.
Kafka at the Edge: an IoT scenario with OpenShift Streams for Apache Kafka | ...Red Hat Developers
This document discusses Apache Kafka and Red Hat OpenShift Streams for Apache Kafka. It begins with an overview of what Apache Kafka is and its common use cases. It then demonstrates how Red Hat OpenShift Streams provides a managed Apache Kafka cluster as a service, including a dedicated cluster, configuration management, metrics, monitoring and other features to provide a streamlined developer experience. It concludes with information on trying OpenShift Streams for Apache Kafka and additional resources.
Give Your Confluent Platform Superpowers! (Sandeep Togrika, Intel and Bert Ha...HostedbyConfluent
Whether you are a die-hard DC comic enthusiast, mad for Marvel, or completely clueless when it comes to comic books, at the end of the day each of us would love to possess the superpower to transform data in seconds versus minutes or days. But architects and developers are challenged with designing and managing platforms that scale elastically and combine event streams with stored data, to enable more contextually rich data analytics. This made even more complex with data coming from hundreds of sources, and in hundreds of terabytes, or even petabytes, per day.
Now, with Apache Kafka and Intel hardware technology advances, organizations can turn massive volumes of disparate data into actionable insights with the ability to filter, enrich, join and process data instream. Let's consider Information Security. IT leaders need to ensure all company data and IP is secured against threats and vulnerabilities. A combination of real-time event streaming with Confluent Platform and Intel Architecture has enabled threat detection efforts that once took hours to be completed in seconds, while simultaneously reducing technical debt and data processing and storage costs.
In this session, Confluent and Intel architects will share detailed performance benchmarking results and new joint reference architecture. We’ll detail ways to remove Kafka performance bottlenecks, and improve platform resiliency and ensure high availability using Confluent Control Center and Multi-Region Clusters. And we’ll offer up tips for addressing challenges that you may be facing in your own super heroic efforts to design, deploy, and manage your organization’s data platforms.
The Evolution of Distributed Systems on KubernetesBilgin Ibryam
Cloud native applications of the future will consist of hybrid workloads: stateful applications, batch jobs, stateless microservices, functions, (and maybe something else too) wrapped as Linux containers and deployed via Kubernetes on any cloud. Functions and the so-called serverless computing model is the latest evolution of what started as SOA years ago. But is it the last step of the application architecture evolution and is it here to stay? During this talk, we will take you on a journey exploring distributed application needs and how they evolved with Kubernetes, Istio, Knative, Dapr, and other projects. By the end of the session, you will know what is coming after microservices.
This document summarizes a meetup presentation about deploying Kong API gateway with Mesosphere DC/OS. The presentation was given by Shashi Ranjan and Cooper Marcus of Kong and covered how Kong can help manage microservices and act as a central API gateway. It discussed how Kong provides functionality like authentication, security, logging and load balancing through plugins. The document also provided an overview of Kong editions, plugins, and common enterprise installations.
Architectural patterns for high performance microservices in kubernetesRafał Leszko
The document discusses various architectural patterns for distributed in-memory caching in Kubernetes microservices including embedded, embedded distributed, client-server, cloud, sidecar, reverse proxy, and reverse proxy sidecar patterns. It provides examples of implementing each pattern using the Hazelcast in-memory data grid and summaries of the pros and cons of each approach.
Better Kafka Performance Without Changing Any Code | Simon Ritter, AzulHostedbyConfluent
Apache Kafka is the most popular open-source stream-processing software for collecting, processing, storing, and analyzing data at scale. Most known for its excellent performance, low latency, fault tolerance, and high throughput, it's capable of handling thousands of messages per second. For mission-critical applications, how do you ensure that the performance delivered is the performance required? This is especially important as Kafka is written in Java and Scala and runs on the JVM. The JVM is a fantastic platform that delivers on an internet scale.
In this session, we'll explore how making changes to the JVM design can eliminate the problems of garbage collection pauses and raise the throughput of applications. For cloud-based Kafka applications, this can deliver both lower latency and reduced infrastructure costs. All without changing a line of code!
From Postgres to Event-Driven: using docker-compose to build CDC pipelines in...confluent
Mark Teehan, Principal Solutions Engineer, Confluent
Use the Debezium CDC connector to capture database changes from a Postgres database - or MySQL or Oracle; streaming into Kafka topics and onwards to an external data store. Examine how to setup this pipeline using Docker Compose and Confluent Cloud; and how to use various payload formats, such as avro, protobuf and json-schema.
https://www.meetup.com/Singapore-Kafka-Meetup/events/276822852/
During this talk, Bilgin will take you on a journey exploring distributed application needs and how they evolved with Kubernetes, Istio, Knative, Dapr, and other projects. By the end of the session, you will know what is coming after microservices
Have you ever tried Java on AWS Lambda but found that the cold-start latency and memory usage were far too high? In this session, we will show how we optimized Java for serverless applications by leveraging GraalVM with Quarkus to provide both supersonic startup speed and a subatomic memory footprint.
Managing Stateful Services with the Operator Pattern in Kubernetes - Kubernet...Jakob Karalus
While it's easy to deploy stateless application with Kubernetes, it's harder for stateful software. Since applications often require custom functionality that Kubernetes can't provide, developers want to add more specialized patterns like automatic backups, failover or rebalancing to their Kubernetes deployments. In this talk, we will look at the Operator Pattern and other possibilities to extend the functionality of Kubernetes and how to use them to operate stateful applications.
Machine Learning Exchange (MLX) is a catalog and execution engine for AI assets including pipelines, models, datasets and notebooks. It allows users to upload, register, execute and deploy these assets. MLX generates sample pipeline code and uses Kubeflow Pipelines powered by Tekton as its pipelines engine. It integrates with services like KFServing for model serving, Dataset Lifecycle Framework for data management, and MAX/DAX for pre-registered datasets and models. MLX provides APIs, UI and SDK to interact with these AI assets.
The document discusses monitoring an OpenShift cluster with Prometheus. It describes what components need monitoring, including nodes, services, and pods. Prometheus is well-integrated for Kubernetes monitoring. The architecture proposed uses Prometheus to scrape metrics from targets like nodes and services, with alerting configured and dashboards built. It references existing Prometheus mixins for Kubernetes and OpenShift monitoring best practices. Special design choices like using remote write and a Blackbox exporter are highlighted.
This document provides an overview of container management and Kubernetes concepts. It discusses delivery and deployment methods like classic deployment, containers, virtualization, and container orchestration. It then covers Kubernetes components like etcd, the control plane, and nodes. It outlines cluster administration tasks and best practices for cluster usage. Finally, it provides examples of Kubernetes resource types like pods, replica sets, and deployments.
How Confluent Completes the Event Streaming Platform (Addison Huddy & Dan Ros...HostedbyConfluent
Confluent Platform 6.0 and Project Metamorphosis complete the event streaming platform by providing elastic scalability, infinite storage, global access, and transforming Kafka. Key features include self-balancing clusters and dynamic scaling on Confluent Cloud, tiered storage and infinite retention on the platform, and cluster linking to simplify hybrid and multi-cloud deployments. These new capabilities help remove limitations on scale, storage, and deployment that traditionally challenged Kafka applications.
Serverless stream processing of Debezium data change events with Knative | De...Red Hat Developers
Come and join us for an (almost) no-slides session around the terrific trio of Debezium, Apache Kafka Streams, and Knative Eventing! Leveraging Apache Kafka as the de-facto standard for event-driven data pipelines, these open-source technologies allow you to ingest data changes from relational and NoSQL databases, process and enrich them, and consume them serverless-style. In a live demo, you’ll see how Debezium, Apache Kafka, Quarkus, and Knative are the dream-team for building serverless, cloud-native stream processing pipelines. You will learn: How to stream change events out of your database using Debezium How to use the Quarkus extension for Kafka Streams to build cloud-native stream processing applications, running either on the JVM or GraalVM How to consume and distribute Kafka messages with Knative Eventing, allowing you to manage modern serverless workloads on Kubernetes.
This three-day course teaches developers how to build applications that can publish and subscribe to data from an Apache Kafka cluster. Students will learn Kafka concepts and components, how to use Kafka and Confluent APIs, and how to develop Kafka producers, consumers, and streams applications. The hands-on course covers using Kafka tools, writing producers and consumers, ingesting data with Kafka Connect, and more. It is designed for developers who need to interact with Kafka as a data source or destination.
OSDC 2018 | Three years running containers with Kubernetes in Production by T... - NETWAYS
The talk gives a state-of-the-art update on experiences with deploying applications in Kubernetes at scale. Whether in clouds or on premises, Kubernetes has taken over the leading role as a container operating system. The central paradigm of stateless containers connected to storage and services is the core of Kubernetes. However, it can be extended to distributed databases, machine learning, and even Windows VMs in Kubernetes. All of these applications were considered edge cases a few years ago, yet they are going more and more mainstream today.
Model Driven SDLC using Docker #gopaddle #dockermeetup - Vinothini Raju
The document discusses model driven software development lifecycle (SDLC) using Docker. It describes using models for requirements, design, testing, and composition. Models are used to define services, dependencies, build processes and deployment configuration. The SDLC can be implemented from the models using forward or reverse engineering to generate Dockerfiles, images and docker-compose files to build, test and deploy applications as containers.
Securing Kafka At Zendesk (Joy Nag, Zendesk) Kafka Summit 2020 - confluent
Kafka is one of the most important foundation services at Zendesk. It became even more crucial with the introduction of Global Event Bus, which my team built to propagate events between Kafka clusters hosted in different parts of the world and between different products. As part of its rollout, we had to add mTLS support in all of our Kafka clusters (we have quite a few of them) to make propagation of events between clusters secure. It was quite a journey, but we eventually built a solution that is working well for us.
Things I will be sharing as part of the talk:
1. Establishing the use case/problem we were trying to solve (why we needed mTLS)
2. Building a Certificate Authority with open source tools (with self-signed Root CA)
3. Building helper components to generate certificates automatically and regenerate them before they expire (this allows a shorter TTL (Time To Live), which is good security practice) for both Kafka clients and brokers
4. Hot reloading regenerated certificates on Kafka brokers without downtime
5. What we built to rotate the self-signed root CA without downtime as well across the board
6. Monitoring and alerts on TTL of certificates
7. Performance impact of using TLS (along with why TLS affects Kafka’s performance)
8. What we are doing to drive adoption of mTLS for existing Kafka clients using the PLAINTEXT protocol by making onboarding easier
9. How this will become a base for other features we want, e.g. ACLs and rate limiting (by using the principal from the TLS certificate as the identity of clients)
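The TTL monitoring described in points 3 and 6 above boils down to a simple check: parse a certificate's expiry timestamp and flag it for regeneration well before it lapses. A minimal sketch using Python's standard `ssl` helper; the seven-day threshold is an illustrative assumption, not Zendesk's actual policy.

```python
import ssl

def days_until_expiry(not_after: str, now: float) -> float:
    """Days remaining before a cert's notAfter (OpenSSL text format) passes."""
    expiry = ssl.cert_time_to_seconds(not_after)  # e.g. "Jun  1 00:00:00 2030 GMT"
    return (expiry - now) / 86400

def needs_rotation(not_after: str, now: float, threshold_days: float = 7) -> bool:
    """Flag certs for regeneration before expiry (supports a short-TTL practice)."""
    return days_until_expiry(not_after, now) < threshold_days
```

A helper component would run such a check on a schedule and trigger reissuance, rather than alerting only when the certificate has already expired.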
Meetup 12-12-2017 - Application Isolation on Kubernetes - dtoledo67
Here are the slides I presented on 12-12-2017 at the Bay Area Microservices Meeting, covering some of the best practices for achieving application isolation on Kubernetes.
Container technologies use namespaces and cgroups to provide isolation between processes and limit resource usage. Docker builds on these technologies using a client-server model and additional features like images, containers, and volumes to package and run applications reliably and at scale. Kubernetes builds on Docker to provide a platform for automating deployment, scaling, and operations of containerized applications across clusters of hosts. It uses labels and pods to group related containers together and services to provide discovery and load balancing for pods.
Learn from the dozens of large-scale deployments how to get the most out of your Kubernetes environment:
- Container images optimization
- Organizing namespaces
- Readiness and Liveness probes
- Resource requests and limits
- Failing with grace
- Mapping external services
- Upgrading clusters with zero downtime
Openstack days sv building highly available services using kubernetes (preso) - Allan Naim
This document discusses Google Cloud Platform's Kubernetes and how it can be used to build highly available services. It provides an overview of Kubernetes concepts like pods, labels, replica sets, volumes, and services. It then describes how Kubernetes Cluster Federation allows deploying applications across multiple Kubernetes clusters for high availability, geographic scaling, and other benefits. It outlines how to create clusters, configure the federated control plane, add clusters to the federation, deploy federated services and backends, and perform cross-cluster service discovery.
CN Asturias - Stateful application for kubernetes - Cédrick Lunven
The document discusses running Apache Cassandra on Kubernetes with K8ssandra. K8ssandra combines Kubernetes and Cassandra to provide a scalable data store with an API layer and administration tools. It addresses challenges of running stateful applications in containers by providing scaling, consistency and resilience. K8ssandra allows Cassandra to be deployed in a cloud-native way on Kubernetes and provides easy and secure data access.
An Introduction to Kubernetes and Continuous Delivery Fundamentals - All Things Open
Presented at All Things Open RTP Meetup
Presented by Brad Topol
Title: An Introduction to Kubernetes and Continuous Delivery Fundamentals
Abstract: Kubernetes is a cloud infrastructure that has emerged as the de facto standard platform for managing, orchestrating, and provisioning container-based cloud native computing applications. Cloud native computing applications are built from a collection of smaller services and take advantage of the speed of development and scalability cloud computing environments provide. In this talk, we provide an overview of the fundamentals of Kubernetes. We begin with a short introduction to the concept of containers and describe the Kubernetes architecture. We then present several core features provided by Kubernetes such as Pods, ReplicaSets, Deployments, Service objects, and autoscaling capabilities. We conclude with a discussion of Kubernetes continuous delivery fundamentals and tools, including how to do small batch changes, source control, and developer access to production-like environments.
Introduction to Container Storage Interface (CSI) - Idan Atias
Among the cool stuff we do at Silk, my colleagues and I develop the Silk CSI Plugin for customers who use our system as the storage layer for their Kubernetes workloads.
Before deep diving into the code and as part of my ramp-up on this subject I prepared some slides that cover some basic and important information on this topic.
These slides start by recapping some basic storage principles in containers and Kubernetes, continue with some more advanced use cases (including an "offline demo" of persisting Redis data on EBS volumes), and end with detailed information on the CSI solution itself.
IMHO, reviewing these slides can improve your understanding of this matter and get you started implementing your own CSI plugin.
The main sources of information I used for preparing these slides are:
* Official CSI docs
* Kubernetes Storage Lingo 101 - Saad Ali, Google
* Container Storage Interface: Present and Future - Jie Yu, Mesosphere, Inc.
Kubernetes provides logical abstractions for deploying and managing containerized applications across a cluster. The main concepts include pods (groups of containers), controllers that ensure desired pod states are maintained, services for exposing pods, and deployments for updating replicated pods. Kubernetes allows defining pod specifications that include containers, volumes, probes, restart policies, and more. Controllers like replica sets ensure the desired number of pod replicas are running. Services provide discovery of pods through labels and load balancing. Deployments are used to declaratively define and rollout updates to replicated applications.
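The "discovery of pods through labels" mechanism described above can be sketched in a few lines: a Service's equality-based selector matches exactly those pods whose labels contain every selector pair. The pod names and labels below are illustrative, not from any real cluster.

```python
def matches(selector: dict, labels: dict) -> bool:
    """Equality-based selector: every selector pair must appear in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db"}},
]

# A Service selecting app=web load-balances across the matching pods only.
service_selector = {"app": "web"}
endpoints = [p["name"] for p in pods if matches(service_selector, p["labels"])]
```

Because the binding is by label rather than by name, pods can come and go (e.g. during a Deployment rollout) while the Service endpoint list updates automatically.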
Kubernetes @ Squarespace (SRE Portland Meetup October 2017) - Kevin Lynch
In this presentation I talk about our motivation to converting our microservices to run on Kubernetes. I discuss many of the technical challenges we encountered along the way, including networking issues, Java issues, monitoring and alerting, and managing all of our resources!
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery called Pods. ReplicaSets ensure that a specified number of pod replicas are running at any given time. Key components include Pods, Services for enabling network access to applications, and Deployments to update Pods and manage releases.
Istio is an open-source service mesh that provides traffic management, telemetry and security for microservices. It works by injecting Envoy sidecar proxies into applications. The document provides an overview of Istio architecture, setup, and how it can be used for traffic management features like canary releases and advanced load balancing.
The OpenEBS Hangout #4 was held on 22nd December 2017 at 11:00 AM (IST and PST), where a live demo of cMotion was shown. Storage policies of OpenEBS 0.5 were also explained.
This document introduces CoreOS, an open source operating system focused on automation, security, and scalability. It provides automatic updates, uses Docker containers, and includes tools like Etcd for service discovery and configuration. CoreOS is based on Gentoo Linux and uses systemd. It focuses on immutable infrastructure with atomic updates and rollbacks. The document describes CoreOS tools like Etcd, Locksmith, Cloud Config, Flannel and Fleet for cluster management.
Hands-On Introduction to Kubernetes at LISA17 - Ryan Jarvinen
This document provides an agenda and instructions for a hands-on introduction to Kubernetes tutorial. The tutorial will cover Kubernetes basics like pods, services, deployments and replica sets. It includes steps for setting up a local Kubernetes environment using Minikube and demonstrates features like rolling updates, rollbacks and self-healing. Attendees will learn how to develop container-based applications locally with Kubernetes and deploy changes to preview them before promoting to production.
Cloud Native Night, April 2018, Mainz: Workshop led by Jörg Schad (@joerg_schad, Technical Community Lead / Developer at Mesosphere)
Join our Meetup: https://www.meetup.com/de-DE/Cloud-Native-Night/
PLEASE NOTE:
During this workshop, Jörg showed many demos and the audience could participate on their laptops. Unfortunately, we can't provide these demos. Nevertheless, Jörg's slides give a deep dive into the topic.
DETAILS ABOUT THE WORKSHOP:
Kubernetes was one of the hot topics of 2017 and will probably remain so in 2018. In this hands-on technical workshop you will learn how best to deploy, operate, and scale Kubernetes clusters from one to hundreds of nodes using DC/OS. You will learn how to integrate and run Kubernetes alongside traditional applications and fast data services of your choice (e.g. Apache Cassandra, Apache Kafka, Apache Spark, TensorFlow, and more) on any infrastructure.
This workshop best suits operators focused on keeping their apps and services up and running in production and developers focused on quickly delivering internal and customer-facing apps into production.
In this workshop you will:
- Get an introduction to Kubernetes and DC/OS (including the differences between the two)
- Deploy Kubernetes on DC/OS in a secure, highly available, and fault-tolerant manner
- Solve the operational challenges of running one large or multiple Kubernetes clusters
- Deploy big data stateful and stateless services alongside a Kubernetes cluster with one click
This document discusses using GlusterFS storage in Kubernetes. It begins with an overview of GlusterFS as a scale-out distributed file system and its interfaces. It then covers Kubernetes storage concepts like StorageClasses, PersistentVolumeClaims (PVCs), and PersistentVolumes (PVs). It explains that a StorageClass defines the storage on offer, a PVC requests storage and triggers creation of a PV, and the PV provides the actual mounted storage. It also demonstrates these concepts and shows the workflow of dynamically provisioning GlusterFS volumes in Kubernetes.
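The StorageClass, PVC, PV workflow described above can be modeled as a toy dynamic provisioner: a claim names a StorageClass and a size, the provisioner creates a matching volume, and the two are bound. All names and sizes below are made up for illustration; real provisioners such as the GlusterFS one do this through the Kubernetes API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PersistentVolume:
    name: str
    storage_class: str
    capacity_gi: int
    claim: Optional[str] = None  # name of the bound PVC, if any

def provision(pvc_name: str, storage_class: str, request_gi: int,
              volumes: List[PersistentVolume]) -> PersistentVolume:
    """Dynamic provisioning: create a PV satisfying the claim, then bind them."""
    pv = PersistentVolume(name=f"pvc-{pvc_name}", storage_class=storage_class,
                          capacity_gi=request_gi, claim=pvc_name)
    volumes.append(pv)
    return pv
```

The key point the model captures is direction: the pod author only writes a claim; the volume itself is created and bound on demand, not pre-allocated by hand.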
Kubernetes has become the de facto standard platform for container orchestration. Its ease of extension and many integrations have paved the way for a wide variety of data science and research tooling to be built on top of it.
From all-encompassing tools like Kubeflow, which make it easy for researchers to build end-to-end machine learning pipelines, to specific orchestration of analytics engines such as Spark, Kubernetes has made the deployment and management of these things easy. This presentation will showcase some of the larger research tools in the ecosystem and go into how Kubernetes has enabled this easy form of application management.
An introductory look at Kubernetes and how it leverages AWS IaaS features to provide its own virtual clustering, and demonstration of some of the behaviour inside the cluster that makes Kubernetes a popular choice for microservice deployments.
This document provides an overview of a workshop on running Kubernetes on AWS. It outlines the prerequisites including installing Git, AWS CLI, kubectl, and cloning a GitHub repository. The workshop will cover basic Kubernetes concepts like pods, labels, replication controllers, deployments and services. It will demonstrate how to build a Kubernetes cluster on AWS using CloudFormation for infrastructure as code. Hands-on portions will include deploying containers, creating services, and observing the cluster architecture and networking. Additional topics are cluster add-ons like Kubernetes Dashboard and DNS, deploying applications, and cleaning up resources.
This document discusses testing Kubernetes and OpenShift at scale. It describes installing large clusters of 1000+ nodes, using scalability test tools like the Kubernetes performance test repo and OpenShift SVT repo to load clusters and generate traffic. Sample results show loading clusters with thousands of pods and projects, and peaks in master node resource usage when loading and deleting hundreds of pods simultaneously.
4. What is it?
k8s: open-source system for automating deployment, scaling, and management of containerized applications
cf: code-centric platform that runs code in any language or framework in the cloud and manages its lifecycle
8. Kubernetes Abstractions 101
● Container
● Pod - group of one or more containers with shared storage/network
● Replication Controller - ensures that a specified number of pod replicas are running at any one time
● Deployment - provides declarative updates for Pods and Replica Sets.
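The Replication Controller bullet above is, at heart, a reconcile loop: observe how many replicas exist, compare with the desired count, and create or delete pods to close the gap. A minimal sketch of that idea (not the actual controller code, and with invented pod names):

```python
def reconcile(desired: int, running: list) -> list:
    """One reconcile pass: return the pod list adjusted to the desired count."""
    pods = list(running)
    while len(pods) < desired:        # too few replicas: create more
        pods.append(f"pod-{len(pods)}")
    while len(pods) > desired:        # too many replicas: delete extras
        pods.pop()
    return pods
```

The real controller runs this loop continuously, which is why a pod that dies "at any one time" is replaced without operator intervention.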
9. Kubernetes Abstractions 101 (continued)
● Service - defines a logical set of Pods and a policy by which to access them
● Volume
● ConfigMap - configuration key/value pairs
● Secret - sensitive data
● Label & Label selector
● And more...
15. Running your application
cf:
● Blocks until app is started
● Gives you logs
k8s:
● Eventually starts your containers
● You need to take care of what’s happening
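Because k8s only "eventually" starts your containers, deploy scripts typically poll for readiness themselves (this is what `kubectl rollout status` does for you). A generic sketch of that wait loop with a pluggable status check; the timeout and interval defaults are arbitrary:

```python
import time

def wait_until_ready(check, timeout_s: float = 60, interval_s: float = 0.01) -> bool:
    """Poll `check()` until it reports ready or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():       # e.g. "readyReplicas == desired replicas"
            return True
        time.sleep(interval_s)
    return False
```

In contrast, `cf push` blocks and streams logs until the app is up, so no such loop is needed on Cloud Foundry.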
36. Application Monitoring using PULL
k8s:
● Out of the box support
● 3rd party components integration (e.g. Prometheus)
cf:
● Different URL for each app instance (hack)
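In the pull model contrasted above, each instance exposes its metrics over HTTP and a scraper such as Prometheus fetches them on a schedule. A sketch of rendering counters in the Prometheus text exposition format; the metric name below is invented for the example:

```python
def render_metrics(counters: dict) -> str:
    """Render counter metrics in the Prometheus text exposition format."""
    lines = []
    for name, value in sorted(counters.items()):
        lines.append(f"# TYPE {name} counter")  # metadata line for the scraper
        lines.append(f"{name} {value}")         # sample line: name value
    return "\n".join(lines) + "\n"
```

On k8s, scrape targets are discovered via the API (e.g. pod annotations), which is why pull works "out of the box" there, whereas on cf each instance needs its own reachable URL.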
44. Credits
Special thanks to all the people who made and released these awesome resources for free:
✘ Presentation template by SlidesCarnival
✘ Photographs by Unsplash