Enforcing Application SLA with Congress and Monasca | Fabio Giannetti
This document discusses using Congress and Monasca to enforce service level agreements (SLAs) for applications and infrastructure. It proposes using Congress policies to define SLA thresholds that trigger Monasca alarms, which then notify relevant parties. Examples are given for notifying operators if servers are underutilized, and evacuating critical business applications from unhealthy hosts. The current state integrates Monasca metrics with Congress, with future work including developing a Monasca alarm datasource and converting policies to alarms.
Ceilometer is an OpenStack telemetry service that collects measurements of OpenStack cloud usage. Its central and compute agents gather metrics and event data from OpenStack services and push them to collectors, which store the data in a database and make it available via the Ceilometer APIs for metrics, alarms, and events.
Weave Cortex: Multi-tenant, horizontally scalable Prometheus as a Service | Weaveworks
This document describes Cortex, a multi-tenant horizontally scalable Prometheus as a service. It retrieves metrics from applications using Prometheus scrapers, distributes the metrics across ingesters using consistent hashing, stores metrics in DynamoDB with indexes and chunks in S3, and provides a Prometheus compatible query API. The goal is to build a proof of concept quickly to monitor tens of thousands of users sending tens of millions of samples per second in a cost effective and scalable way, reusing Prometheus where possible. There is still work to be done on features like recording rules, alerting, reliability, performance, and code cleanup before it is production ready.
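Consistent hashing, which Cortex uses to spread metric series across ingesters, can be sketched in a few lines. This is a simplified illustration, not Cortex's actual ring code; the ingester names, vnode count, and md5 hash are all invented for the example:

```python
import hashlib
from bisect import bisect


class HashRing:
    """Minimal consistent-hash ring mapping metric series to ingesters."""

    def __init__(self, ingesters, vnodes=100):
        # Each ingester owns many "virtual nodes" so load spreads evenly.
        self.ring = []  # sorted list of (hash, ingester)
        for ing in ingesters:
            for v in range(vnodes):
                self.ring.append((self._hash(f"{ing}-{v}"), ing))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, series_key):
        """Return the ingester responsible for a metric series:
        the first vnode clockwise from the series' hash."""
        idx = bisect(self.ring, (self._hash(series_key),)) % len(self.ring)
        return self.ring[idx][1]


ring = HashRing(["ingester-1", "ingester-2", "ingester-3"])
print(ring.owner('http_requests_total{job="api"}'))
```

Adding or removing an ingester only remaps the series adjacent to its vnodes, which is what makes scale-out and graceful failure handling cheap.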
Better Kafka Performance Without Changing Any Code | Simon Ritter, Azul | HostedbyConfluent
Apache Kafka is the most popular open-source stream-processing software for collecting, processing, storing, and analyzing data at scale. Best known for its excellent performance, low latency, fault tolerance, and high throughput, it's capable of handling thousands of messages per second. For mission-critical applications, how do you ensure that the performance delivered is the performance required? This is especially important as Kafka is written in Java and Scala and runs on the JVM. The JVM is a fantastic platform that delivers at internet scale.
In this session, we'll explore how making changes to the JVM design can eliminate the problems of garbage collection pauses and raise the throughput of applications. For cloud-based Kafka applications, this can deliver both lower latency and reduced infrastructure costs. All without changing a line of code!
Event-driven Applications with Kafka, Micronaut, and AWS Lambda | Dave Klein, ... | HostedbyConfluent
One of the great things about running applications in the cloud is that you only pay for the resources that you use. But that also makes it more important than ever for our applications to be resource-efficient. This becomes even more critical when we use serverless functions.
Micronaut is an application framework that provides dependency injection, developer productivity features, and excellent support for Apache Kafka. By performing dependency injection, AOP, and other productivity-enhancing magic at compile time, Micronaut allows us to build smaller, more efficient microservices and serverless functions.
In this session, we'll explore the ways that Apache Kafka and Micronaut work together to enable us to build fast, efficient, event-driven applications. Then we'll see it in action, using the AWS Lambda Sink Connector for Confluent Cloud.
Keeping Analytics Data Fresh in a Streaming Architecture | John Neal, Qlik | HostedbyConfluent
Qlik is an industry leader across its solution stack, both on the Data Integration side of things with Qlik Replicate (real-time CDC) and Qlik Compose (data warehouse and data lake automation), and on the Analytics side with Qlik Sense. These two “sides” of Qlik are coming together more frequently these days as the need for “always fresh” data increases across organizations.
When real-time streaming applications are the topic du jour, those companies look to Apache Kafka to provide the architectural backbone those applications require. Those same companies turn to Qlik Replicate to put the data from their enterprise database systems into motion at scale, whether that data resides in “legacy” mainframe databases; traditional relational databases such as Oracle, MySQL, or SQL Server; or applications such as SAP and Salesforce.
In this session we will look in depth at how Qlik Replicate can be used to continuously stream changes from a source database into Apache Kafka. From there, we will explore how a purpose-built consumer can be used to provide the bridge between Apache Kafka and an analytics application such as Qlik Sense.
Putting Kafka Together with the Best of Google Cloud Platform | confluent
(Kir Titievsky, Google) Kafka Summit SF 2018
In this talk we will share some stories and patterns from customers who have built streaming pipelines and event-driven systems using Confluent Cloud in combination with Google Cloud Platform-native analytics tools, such as BigQuery and Dataflow. We will discuss what Confluent Cloud enables for hybrid deployments and how and why to mix and match platform-native and platform-neutral tools.
Strategies and techniques to optimize Kafka brokers and producers to minimize data loss under huge traffic volumes, limited configuration options, and a less-than-ideal, constantly changing environment, balanced against cost.
Project Frankenstein: A multitenant, horizontally scalable Prometheus as a se... | Weaveworks
In this talk we'll present a prototype solution for multitenant, horizontally scalable Prometheus as a Service, code name "Project Frankenstein".
Frankenstein turns Prometheus architectural assumptions on their head, by marrying the PromQL query engine with a storage layer based on DynamoDB and S3. We have disaggregated the Prometheus binary into a microservices-style architecture, with separate services for distribution, ingest, alerting rules and storage. By designing all these services as fungible replicas, this solution can be scaled out with ease and failure of any individual replica can be dealt with gracefully.
This multitenant, scale-out Prometheus service forms a core component of Weave Cloud, a hosted management, monitoring and visualisation platform for cloud native applications. This platform is built from 100% open source components, and we're working with the Prometheus community to contribute all the changes we've made back to Prometheus. Project Frankenstein is open source and can be found at https://github.com/weaveworks/frankenstein
Bulletproof Kafka with Fault Tree Analysis (Andrey Falko, Lyft) Kafka Summit ... | confluent
We recently learned about “Fault Tree Analysis” and decided to apply the technique to bulletproof our Apache Kafka deployments. In this talk, learn about fault tree analysis and what you should focus on to make your Apache Kafka clusters resilient.
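As a flavor of the technique, the probability of a fault tree's top event can be computed from independent basic events combined through AND and OR gates. This is a generic textbook sketch, not Lyft's actual model, and the probabilities below are invented:

```python
def and_gate(*probs):
    """All child events must occur: multiply probabilities (independence assumed)."""
    p = 1.0
    for x in probs:
        p *= x
    return p


def or_gate(*probs):
    """Any child event suffices: 1 minus the product of the complements."""
    q = 1.0
    for x in probs:
        q *= 1.0 - x
    return 1.0 - q


# Hypothetical tree: acked data is lost if all three replicas' disks fail,
# OR if a destructive operator error occurs (probabilities invented).
p_disk, p_oops = 0.01, 0.001
p_data_loss = or_gate(and_gate(p_disk, p_disk, p_disk), p_oops)
print(round(p_data_loss, 6))  # the operator-error branch dominates
```

The value of the exercise is less the number itself than seeing which branch dominates the top event, which tells you where hardening effort pays off.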
This talk should provide a framework for answering the following common questions a Kafka operator or user might have:
What guarantees can I promise my users?
What should my replication factor be?
What should the ISR setting be?
Should I use RAID or not?
Should I use external storage such as EBS or local disks?
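A back-of-the-envelope way to reason about the replication and ISR questions above, as a simplification that ignores unclean leader election and correlated broker failures:

```python
def tolerances(replication_factor, min_insync_replicas):
    """Rule-of-thumb durability/availability trade-off for a Kafka topic
    written with acks=all.

    Returns a pair:
      - how many broker failures an acknowledged write survives
        (min.insync.replicas copies existed at ack time), and
      - how many broker failures producers can keep writing through
        (writes block once the ISR shrinks below min.insync.replicas).
    """
    durability = min_insync_replicas - 1
    availability = replication_factor - min_insync_replicas
    return durability, availability


# The common production setting RF=3, min.insync.replicas=2 tolerates one
# broker failure without losing acked data and without blocking producers.
print(tolerances(3, 2))  # (1, 1)
```

Raising min.insync.replicas buys durability at the cost of write availability, which is exactly the trade-off a fault tree makes explicit.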
5 lessons learned for successful migration to Confluent cloud | Natan Silinit... | HostedbyConfluent
Confluent Cloud makes DevOps engineers' lives a lot easier.
Yet moving 1,500 microservices, 10K topics, and 100K partitions to a multi-cluster Confluent Cloud can be a challenge.
In this talk you will hear about 5 lessons that Wix has learned in order to successfully meet this challenge.
These lessons include:
1. Automation, Automation, Automation - every step of the process must be fully automated at this scale
2. Prefer a gradual approach - e.g., migrate topics in small chunks rather than all at once; this reduces risk if things go wrong
3. Cleanup first - avoid migrating unused topics or topics with too many unnecessary partitions
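The gradual, chunked approach of lesson 2 can be sketched as a generic batching helper; the topic names and batch size below are invented:

```python
def chunks(items, size):
    """Yield successive fixed-size batches so a failed batch can be rolled
    back without touching the topics already migrated."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


# Hypothetical topic list; a real migration would read this from the cluster.
topics = [f"topic-{n}" for n in range(10)]
for batch in chunks(topics, 4):
    print(batch)  # migrate this batch, verify, then move on to the next
```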
stackconf 2021 | How we finally migrated an eCommerce-Platform to GCP | NETWAYS
As Squad Architect Platform, I supported the platform team in migrating a complete e-commerce environment to Google Cloud Platform. By sketching out the various migration steps, technical concepts, and tooling, I will explain why we did the migration exactly this way.
One Click Streaming Data Pipelines & Flows | Leveraging Kafka & Spark | Ido F... | HostedbyConfluent
The Apache Kafka ecosystem is very rich with components and pieces that make for designing and implementing secure, efficient, fault-tolerant and scalable event stream processing (ESP) systems. Using real-world examples, this talk covers why Apache Kafka is an excellent choice for cloud-native and hybrid architectures, how to go about designing, implementing and maintaining ESP systems, best practices and patterns for migrating to the cloud or hybrid configurations, when to go with PaaS or IaaS, what options are available for running Kafka in cloud or hybrid environments and what you need to build and maintain successful ESP systems that are secure, performant, reliable, highly-available and scalable.
Creating a Kafka Topic. Super easy? | Andrew Stevenson and Marios Andreopoulo... | HostedbyConfluent
Making developers productive on Kafka requires giving self-service access. But even something as seemingly straightforward as Topic creation is not so easy, and in some cases can lead to catastrophe.
In this talk, we’ll share and demonstrate different approaches for developers to safely create Kafka Topics whilst sharing a few war stories of what can go wrong along the way.
From a Million to a Trillion Events Per Day: Stream Processing in Ludicrous M... | confluent
In this talk we’ll describe the evolution of stream processing at Tesla and the challenges that are specific to our needs, such as large skews in message-processing latencies. We’ll describe how we built a reliable and performant ingestion platform that allows us to take an idea from a whiteboard to production in just a matter of hours. We’ll also discuss the design principles, tools, and incident response processes that have enabled a small team to support Kafka and downstream services in highly-available and multi-tenant environments at scale.
Taming a massive fleet of Python-based Kafka apps at Robinhood | Chandra Kuch... | HostedbyConfluent
Robinhood uses Kafka in every line of its business, from stock and crypto trading to clearing and data analytics. One interesting aspect of our architecture is that many of our microservices leveraging Kafka are written in Python. When you combine Python's relatively slow performance, its reliance on process-based parallelism, and Robinhood's scale, the result is a massive fleet of application processes producing to and consuming from our Kafka clusters. This fleet generates an atypical workload on Kafka that warrants a deeper investment in scalability and reliability.
This talk discusses our investments in Kafka infrastructure for a large-scale Python-based environment:
kafkahood: our librdkafka-based client library wrapper that codifies best practices, sane defaults and deep client-side observability.
kafkaproxy: a Rust-based sidecar proxy that reduces connection fan-in from Python gunicorn worker pools to our Kafka clusters.
We'll also present challenges we encountered along the way and share our learnings with the audience.
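The connection-count arithmetic behind a sidecar proxy like kafkaproxy is simple to sketch; the fleet sizes below are invented for illustration, not Robinhood's actual numbers:

```python
def connections_direct(workers, brokers):
    """Each worker process holds its own Kafka client, typically with one
    connection per broker, so fan-in grows multiplicatively."""
    return workers * brokers


def connections_via_proxy(workers, brokers, proxies):
    """Workers connect once to a local sidecar; only the proxy layer
    connects to the brokers."""
    return workers + proxies * brokers


# Hypothetical fleet: 5,000 gunicorn workers, 30 brokers, 50 proxy instances.
print(connections_direct(5000, 30))         # 150000 broker-facing connections
print(connections_via_proxy(5000, 30, 50))  # 6500 total connections
```

Process-based parallelism makes `workers` large for Python services, which is why the multiplicative term dominates and a proxy layer pays off.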
Distributed architecture in a cloud native microservices ecosystem | Zhenzhong Xu
This document summarizes key aspects of distributed architecture in a cloud native microservices ecosystem. It discusses Netflix's transition to microservices running in the cloud, key characteristics of microservices and cloud computing like scalability and availability, challenges of operating in the cloud like unpredictable failures and latency, Netflix's open source tools for discovery, circuit breaking, resilience, continuous delivery, and more. It also provides an overview of how to develop, integrate, operate, and optimize microservices in terms of embracing failures, caching, operations, and using a data-driven approach.
In this talk I will present a technique for deploying machine learning models to provide real-time predictions using Apache Pulsar Functions. In order to provide a prediction in real-time, the model usually receives a single data point from the caller, and is expected to provide an accurate prediction within a few milliseconds.
Throughout this talk, I will demonstrate the steps required to deploy a fully-trained ML model that predicts the delivery time for a food delivery service based upon real-time traffic information, the customer's location, and the restaurant that will be fulfilling the order.
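The scoring step of such a function can be sketched in plain Python. This is a hypothetical linear model with invented coefficients; the Pulsar Functions wiring is omitted so the sketch runs standalone:

```python
import json

# Hypothetical pre-trained linear-model coefficients (illustrative only).
MODEL = {"base_minutes": 12.0, "per_km": 2.5, "per_traffic_index": 1.8}


def predict_delivery_time(event_json):
    """Score a single event synchronously: the shape of work a Pulsar
    Function's per-message callback performs, receiving one data point
    and returning a prediction within milliseconds."""
    e = json.loads(event_json)
    minutes = (MODEL["base_minutes"]
               + MODEL["per_km"] * e["distance_km"]
               + MODEL["per_traffic_index"] * e["traffic_index"])
    return round(minutes, 1)


event = json.dumps({"distance_km": 4.0, "traffic_index": 3.0})
print(predict_delivery_time(event))
```

In a real deployment this function body would be packaged and submitted to the Pulsar Functions runtime, which handles the topic subscription and result publishing around it.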
The Road Most Traveled: A Kafka Story | Heikki Nousiainen, Aiven | HostedbyConfluent
When moving to a cloud native architecture, Moogsoft knew they needed more scale than Rabbit could provide. Moogsoft moved to Kafka, which is known for fast writes and driving heavy event-driven workloads, on top of niceties such as replayability. Choosing the tool was easy; finding a vendor that ticked all their boxes was not. They needed to ensure scalability, upgradability, builds via existing IAC pipelines, and observability via existing tools. When Moogsoft found Aiven, they were impressed with their offering and ability to scale on demand. During this presentation we will explore how Moogsoft used Aiven for Kafka to manage and scale their data in the cloud.
This document summarizes a meetup presentation about deploying Kong API gateway with Mesosphere DC/OS. The presentation was given by Shashi Ranjan and Cooper Marcus of Kong and covered how Kong can help manage microservices and act as a central API gateway. It discussed how Kong provides functionality like authentication, security, logging and load balancing through plugins. The document also provided an overview of Kong editions, plugins, and common enterprise installations.
Netflix viewing data architecture evolution - EBJUG Nov 2014 | Philip Fisher-Ogden
Netflix's architecture for viewing data has evolved as streaming usage has grown. Each generation was designed for the next order of magnitude, and was informed by learnings from the previous. From SQL to NoSQL, from data center to cloud, from proprietary to open source, look inside to learn how this system has evolved. (slides from a talk given at the East Bay Java Users Group MeetUp in Nov 2014)
Exposing and Controlling Kafka Event Streaming with Kong Konnect Enterprise | ... | HostedbyConfluent
Event streaming allows companies to build more scalable and loosely coupled real-time applications supporting massive concurrency demands and simplifying the construction of services.
At the same time, API management provides capabilities to securely control the upstream services consumption, including the event processing infrastructure.
This session shows how Kong Konnect Enterprise can complement Kafka Event Streaming, exposing it to new and external consumers while applying specific and critical policies to control its consumption, including API key, OAuth/OIDC and others for authentication, rate limiting, caching, log processing, etc.
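Rate limiting of the kind mentioned above is commonly implemented as a token bucket; a minimal sketch follows (illustrative only, not Kong's plugin implementation):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter of the kind an API gateway
    applies per consumer or per API key."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        """Refill proportionally to elapsed time, then spend one token
        per request; deny when the bucket is empty."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])  # [True, True, False, True]
```

The burst capacity and refill rate map directly onto the kind of per-consumer policy a gateway enforces in front of an event streaming API.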
Data & Analytics Forum: Moving Telcos to Real Time | SingleStore
MemSQL is a real-time database that allows users to simultaneously ingest, serve, and analyze streaming data and transactions. It is an in-memory distributed relational database that supports SQL, key-value, documents, and geospatial queries. MemSQL provides real-time analytics capabilities through Streamliner, which allows one-click deployment of Apache Spark for real-time data pipelines and analytics without batch processing. It is available in free community and paid enterprise editions with support and additional features.
Stream Processing with Apache Kafka and .NET | confluent
Presentation from South Bay.NET meetup on 3/30.
Speaker: Matt Howlett, Software Engineer at Confluent
Apache Kafka is a scalable streaming platform that forms a key part of the infrastructure at many companies including Uber, Netflix, Walmart, Airbnb, Goldman Sachs and LinkedIn. In this talk Matt will give a technical overview of Kafka, discuss some typical use cases (from surge pricing to fraud detection to web analytics) and show you how to use Kafka from within your C#/.NET applications.
Container Orchestration with Traefik on Docker Swarm | Jakub Hajek
The presentation contains details of how to set up a fully-fledged environment based on Docker Swarm and Traefik. You will see a multi-tier application stack consisting of an edge router running in the first layer, then the frontend application and a NodeJS backend. This is a quite common setup used in a microservices architecture. If you are building a highly available environment without a single point of failure, this may be interesting for you.
The source code used in the presentation: https://github.com/jakubhajek/traefik-consul-swarm
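A minimal Compose stack for Traefik on Swarm might look like the following sketch; the image tags, hostname, and port are placeholders, and the linked repository holds the actual configuration used in the talk:

```yaml
version: "3.8"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker.swarmMode=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints: [node.role == manager]
  frontend:
    image: example/frontend:latest   # hypothetical image
    deploy:
      labels:   # in Swarm mode, Traefik reads labels from deploy.labels
        - traefik.http.routers.frontend.rule=Host(`app.example.com`)
        - traefik.http.services.frontend.loadbalancer.server.port=8080
```

Traefik watches the Swarm API via the mounted Docker socket and picks up routing rules from service labels, so new services become routable without touching the edge router's own configuration.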
This document discusses Contentful Engineering's migration from using AWS alone to using Kubernetes on AWS. Some key points:
1) Contentful migrated to take advantage of Kubernetes' focus on application delivery and open source development model over their previous Chef-based deployment platform.
2) They use Kops to manage Kubernetes clusters on AWS, deploying clusters in the same VPC and using kubenet networking and kube2iam to integrate with AWS services.
3) The migration process involved moving services to Kubernetes deployments and exposing them via LoadBalancer services, and updating service discovery in Route53.
4) Lessons learned include staying up to date with Kubernetes and Kops releases, customizing Kops outputs
Apache Kafka is a distributed streaming platform. It provides a high-throughput distributed messaging system with publish-subscribe capabilities. The document discusses Kafka producers and consumers, Kafka clients in different programming languages, and important configuration settings for Kafka brokers and topics. It also demonstrates sending messages to Kafka topics from a Java producer and consuming messages from the console consumer.
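Keyed partition assignment, which the overview above touches on, can be shown in sketch form. Kafka's default partitioner actually uses murmur2; md5 stands in here purely for illustration:

```python
import hashlib


def partition_for(key, num_partitions):
    """Sketch of keyed partition assignment: messages with the same key
    always land in the same partition, which preserves per-key ordering
    while spreading distinct keys across the topic's partitions."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return digest % num_partitions


# The same key maps to the same partition on every send.
p1 = partition_for("user-42", 6)
p2 = partition_for("user-42", 6)
print(p1 == p2)  # True: stable assignment per key
```

This is why choosing a good message key matters: it fixes both the ordering guarantee and the load balance across consumers.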
BigDataFest_ Building Modern Data Streaming Apps | ssuser73434e
https://sessionize.com/big-data-fest-by-softserve/
The Big Data Fest 2023 is a two-day online event that brings together experts, enthusiasts, and community members to discuss the latest developments, trending technologies, and tools, and to make an impact on the future of Big Data and Data Engineering.
Attendees will have the opportunity to hear from keynote speakers, attend panel discussions and live Q&As, and participate in hands-on workshops.
The event will also feature a charity component aimed at raising money for Open Eyes Fund to buy ambulances for the hottest spots in Ukraine. We invite everyone to support this event and help make a difference in saving lives.
Participation in the event is free, but we encourage attendees to make donations to support this important initiative.
The conference will include a variety of activities divided into cloud streams, such as:
Keynote speeches from leading experts in the field of Big Data
Live Q&As
Panel discussions on the future of Data Engineering
Hands-on workshops on data management and analytics
Networking opportunities with top professionals and leading experts in the field.
Our main goal is to influence the future shape of Data Engineering and promote the use of Big Data for the greater good.
In my session, I will show you some best practices I have discovered over the last 7 years in building data streaming applications including IoT, CDC, Logs, and more.
In my modern approach, we utilize several open-source frameworks to maximize the best features of all. We often start with Apache NiFi as the orchestrator of streams flowing into Apache Pulsar and/or Apache Kafka. From there we build streaming ETL with Apache Spark and enhance events with serverless functions for ML and enrichment. We build continuous queries against our topics with Flink SQL. We will stream data into Iceberg and other data stores.
We use the best streaming tools for the current applications with FLiPN and FLaNK. https://www.datainmotion.dev/
https://www.youtube.com/watch?v=qW9CP8Xngk4&ab_channel=SoftServeCareer
Apache NiFi
Apache Flink
Apache Kafka
Apache Iceberg
Streams Messaging Manager
SQL Stream Builder
Cloudera DataFlow Designer
NiFi Registry
Cloudera Schema Registry
Project Frankenstein: A multitenant, horizontally scalable Prometheus as a se...Weaveworks
In this talk we'll present a prototype solution for multitenant, horizontally scalable Prometheus as a Service, code name "Project Frankenstein".
Frankenstein turns Prometheus architectural assumptions on their head, by marrying the PromQL query engine with a storage layer based on DynamoDB and S3. We have disaggregated the Prometheus binary into a microservices-style architecture, with separate services for distribution, ingest, alerting rules and storage. By designing all these services as fungible replicas, this solution can be scaled out with ease and failure of any individual replica can be dealt with gracefully.
This multitenant, scale-out Prometheus service forms a core component of Weave Cloud, a hosted management, monitoring and visualisation platform for cloud native applications. This platform is built from 100% open source components, and we're working with the Prometheus community to contribute all the changes we've made back to Prometheus. Project Frankenstein is open source and can be found at https://github.com/weaveworks/frankenstein
Event-driven Applications with Kafka, Micronaut, and AWS Lambda | Dave Klein,...HostedbyConfluent
One of the great things about running applications in the cloud is that you only pay for the resources that you use. But that also makes it more important than ever for our applications to be resource-efficient. This becomes even more critical when we use serverless functions.
Micronaut is an application framework that provides dependency injection, developer productivity features, and excellent support for Apache Kafka. By performing dependency injection, AOP, and other productivity-enhancing magic at compile time, Micronaut allows us to build smaller, more efficient microservices and serverless functions.
In this session, we'll explore the ways that Apache Kafka and Micronaut work together to enable us to build fast, efficient, event-driven applications. Then we'll see it in action, using the AWS Lambda Sink Connector for Confluent Cloud.
Bulletproof Kafka with Fault Tree Analysis (Andrey Falko, Lyft) Kafka Summit ...confluent
We recently learned about “Fault Tree Analysis” and decided to apply the technique to bulletproof our Apache Kafka deployments. In this talk, learn about fault tree analysis and what you should focus on to make your Apache Kafka clusters resilient.
This talk should provide a framework for answers the following common questions a Kafka operator or user might have:
What guarantees can I promise my users?
What should my replication factor?
What should the ISR setting be?
Should I use RAID or not?
Should I use external storage such as EBS or local disks?
5 lessons learned for successful migration to Confluent cloud | Natan Silinit...HostedbyConfluent
Confluent Cloud makes Devops engineers lives a lot more easier.
Yet moving 1500 microservices, 10K topics and 100K partitions to a multi-cluster Confluent cloud can be a challenge.
In this talk you will hear about 5 lessons that Wix has learned in order to successfully meet this challenge.
These lessons include:
1. Automation, Automation, Automation - all the process has to be completely automated at such scale
2. Prefer a gradual approach - E.g. migrate topics in small chunks and not all at once. Reduces risks if things go bad
3. Cleanup first - avoid migrating unused topics or topics with too many unnecessary partitions
stackconf 2021 | How we finally migrated an eCommerce-Platform to GCPNETWAYS
As Squad Architect Platform I supported the platform-team to migrate a complete ecommerce-environment to Google Cloud Platform. By sketching out various migration-steps, technical concepts and tooling I will explain we did the migration exactly this way.
One Click Streaming Data Pipelines & Flows | Leveraging Kafka & Spark | Ido F...HostedbyConfluent
The Apache Kafka ecosystem is very rich with components and pieces that make for designing and implementing secure, efficient, fault-tolerant and scalable event stream processing (ESP) systems. Using real-world examples, this talk covers why Apache Kafka is an excellent choice for cloud-native and hybrid architectures, how to go about designing, implementing and maintaining ESP systems, best practices and patterns for migrating to the cloud or hybrid configurations, when to go with PaaS or IaaS, what options are available for running Kafka in cloud or hybrid environments and what you need to build and maintain successful ESP systems that are secure, performant, reliable, highly-available and scalable.
Creating a Kafka Topic. Super easy? | Andrew Stevenson and Marios Andreopoulo...HostedbyConfluent
Making developers productive on Kafka requires giving self-service access. But even something as seemingly straightforward as Topic creation is not so easy, and in some cases can lead to catastrophe.
In this talk, we’ll share and demonstrate different approaches for developers to safely create Kafka Topics whilst sharing a few war stories of what can go wrong along the way.
From a Million to a Trillion Events Per Day: Stream Processing in Ludicrous M...confluent
In this talk we’ll describe the evolution of stream processing at Tesla and the challenges that are specific to our needs, such as large skews in message-processing latencies. We’ll describe how we built a reliable and performant ingestion platform that allows us to take an idea from a whiteboard to production in just a matter of hours. We’ll also discuss the design principles, tools, and incident response processes that have enabled a small team to support Kafka and downstream services in highly-available and multi-tenant environments at scale.
Taming a massive fleet of Python-based Kafka apps at Robinhood | Chandra Kuch...HostedbyConfluent
Robinhood uses Kafka in every line of its business, from stock and crypto trading to clearing and data analytics. One interesting aspect of our architecture is that many of our microservices leveraging Kafka are written in Python. When you combine Python's relatively slow performance, its reliance on process-based parallelism, and Robinhood's scale, the result is a massive fleet of application processes producing to and consuming from our Kafka clusters. This fleet generates an atypical workload on Kafka that warrants a deeper investment in scalability and reliability.
This talk discusses our investments in Kafka infrastructure for a large-scale Python-based environment:
kafkahood: our librdkafka-based client library wrapper that codifies best practices, sane defaults and deep client-side observability.
kafkaproxy: a Rust-based sidecar proxy that reduces connection fan-in from Python gunicorn worker pools to our Kafka clusters.
We'll also present challenges we encountered along the way and share our learnings with the audience.
Distributed architecture in a cloud native microservices ecosystemZhenzhong Xu
This document summarizes key aspects of distributed architecture in a cloud native microservices ecosystem. It discusses Netflix's transition to microservices running in the cloud, key characteristics of microservices and cloud computing like scalability and availability, challenges of operating in the cloud like unpredictable failures and latency, Netflix's open source tools for discovery, circuit breaking, resilience, continuous delivery, and more. It also provides an overview of how to develop, integrate, operate, and optimize microservices in terms of embracing failures, caching, operations, and using a data-driven approach.
In this talk I will present a technique for deploying machine learning models to provide real-time predictions using Apache Pulsar Functions. In order to provide a prediction in real-time, the model usually receives a single data point from the caller, and is expected to provide an accurate prediction within a few milliseconds.
Throughout this talk, I will demonstrate the steps required to deploy a fully-trained ML model that predicts the delivery time for a food delivery service based upon real-time traffic information, the customer's location, and the restaurant that will be fulfilling the order.
The Road Most Traveled: A Kafka Story | Heikki Nousiainen, AivenHostedbyConfluent
When moving to a cloud native architecture, Moogsoft knew they needed more scale than RabbitMQ could provide. Moogsoft moved to Kafka, which is known for fast writes and driving heavy event-driven workloads, on top of niceties such as replayability. Choosing the tool was easy; finding a vendor that ticked all their boxes was not. They needed to ensure scalability, upgradability, builds via existing IaC pipelines, and observability via existing tools. When Moogsoft found Aiven, they were impressed with their offering and ability to scale on demand. During this presentation we will explore how Moogsoft used Aiven for Kafka to manage and scale their data in the cloud.
This document summarizes a meetup presentation about deploying Kong API gateway with Mesosphere DC/OS. The presentation was given by Shashi Ranjan and Cooper Marcus of Kong and covered how Kong can help manage microservices and act as a central API gateway. It discussed how Kong provides functionality like authentication, security, logging and load balancing through plugins. The document also provided an overview of Kong editions, plugins, and common enterprise installations.
Netflix viewing data architecture evolution - EBJUG Nov 2014Philip Fisher-Ogden
Netflix's architecture for viewing data has evolved as streaming usage has grown. Each generation was designed for the next order of magnitude, and was informed by learnings from the previous. From SQL to NoSQL, from data center to cloud, from proprietary to open source, look inside to learn how this system has evolved. (slides from a talk given at the East Bay Java Users Group MeetUp in Nov 2014)
Exposing and Controlling Kafka Event Streaming with Kong Konnect Enterprise |...HostedbyConfluent
Event streaming allows companies to build more scalable and loosely coupled real-time applications supporting massive concurrency demands and simplifying the construction of services.
At the same time, API management provides capabilities to securely control the upstream services consumption, including the event processing infrastructure.
This session shows how Kong Konnect Enterprise can complement Kafka Event Streaming, exposing it to new and external consumers while applying specific and critical policies to control its consumption, including API key, OAuth/OIDC and others for authentication, rate limiting, caching, log processing, etc.
Data & Analytics Forum: Moving Telcos to Real TimeSingleStore
MemSQL is a real-time database that allows users to simultaneously ingest, serve, and analyze streaming data and transactions. It is an in-memory distributed relational database that supports SQL, key-value, documents, and geospatial queries. MemSQL provides real-time analytics capabilities through Streamliner, which allows one-click deployment of Apache Spark for real-time data pipelines and analytics without batch processing. It is available in free community and paid enterprise editions with support and additional features.
Stream Processing with Apache Kafka and .NETconfluent
Presentation from South Bay.NET meetup on 3/30.
Speaker: Matt Howlett, Software Engineer at Confluent
Apache Kafka is a scalable streaming platform that forms a key part of the infrastructure at many companies including Uber, Netflix, Walmart, Airbnb, Goldman Sachs and LinkedIn. In this talk Matt will give a technical overview of Kafka, discuss some typical use cases (from surge pricing to fraud detection to web analytics) and show you how to use Kafka from within your C#/.NET applications.
Container Orchestration with Traefik on Docker SwarmJakub Hajek
The presentation contains details of how to set up a fully-fledged environment based on Docker Swarm and Traefik. You will see a multi-tier application stack consisting of an edge router running in the first layer, then the frontend application and a NodeJS backend. This is a quite common setup used in a microservices architecture. If you are building a highly available environment without a single point of failure, this can be interesting for you.
The source code used in the presentation: https://github.com/jakubhajek/traefik-consul-swarm
This document discusses Contentful Engineering's migration from using AWS alone to using Kubernetes on AWS. Some key points:
1) Contentful migrated to take advantage of Kubernetes' focus on application delivery and open source development model over their previous Chef-based deployment platform.
2) They use Kops to manage Kubernetes clusters on AWS, deploying clusters in the same VPC and using kubenet networking and kube2iam to integrate with AWS services.
3) The migration process involved moving services to Kubernetes deployments and exposing them via LoadBalancer services, and updating service discovery in Route53.
4) Lessons learned include staying up to date with Kubernetes and Kops releases, customizing Kops outputs
Apache Kafka is a distributed streaming platform. It provides a high-throughput distributed messaging system with publish-subscribe capabilities. The document discusses Kafka producers and consumers, Kafka clients in different programming languages, and important configuration settings for Kafka brokers and topics. It also demonstrates sending messages to Kafka topics from a Java producer and consuming messages from the console consumer.
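One idea behind Kafka producers worth illustrating: messages with the same key map deterministically to the same partition, which preserves per-key ordering. The toy hash below is purely illustrative (real Kafka clients use a murmur2-based partitioner, not a byte sum):

```python
# Illustrative sketch of keyed partition assignment. Not Kafka's actual
# partitioner: real clients hash the key with murmur2. The point is the
# invariant - same key, same partition, hence per-key ordering.

def assign_partition(key: str, num_partitions: int) -> int:
    """Map a message key to a partition deterministically."""
    h = sum(key.encode("utf-8"))  # toy hash, for illustration only
    return h % num_partitions

# Same key always lands on the same partition.
p1 = assign_partition("order-42", 6)
p2 = assign_partition("order-42", 6)
assert p1 == p2
assert 0 <= p1 < 6
```

Because consumers within a group each own a disjoint set of partitions, this invariant is what lets Kafka scale consumption while still processing each key's messages in order.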
BigDataFest_ Building Modern Data Streaming Appsssuser73434e
https://sessionize.com/big-data-fest-by-softserve/
The Big Data Fest 2023 is a two-day online event that brings together experts, enthusiasts, and members of the community to discuss the latest developments, trending technologies, and tools, and make an impact on the future of Big Data and Data Engineering.
Attendees will have the opportunity to hear from keynote speakers, attend panel discussions and live Q&As, and participate in hands-on workshops.
The event will also feature a charity component aimed at raising money for Open Eyes Fund to buy ambulances for the hottest spots in Ukraine. We invite everyone to support this event and help make a difference in saving lives.
Participation in the event is free, but we encourage attendees to make donations to support this important initiative.
The conference will include a variety of activities divided into cloud streams, such as:
Keynote speeches from leading experts in the field of Big Data
Live Q&As
Panel discussions on the future of Data Engineering
Hands-on workshops on data management and analytics
Networking opportunities with top professionals and leading experts in the field.
Our main goal is to influence the future shape of Data Engineering and promote the use of Big Data for the greater good.
In my session, I will show you some best practices I have discovered over the last 7 years in building data streaming applications including IoT, CDC, Logs, and more.
In my modern approach, we utilize several open-source frameworks to maximize the best features of all. We often start with Apache NiFi as the orchestrator of streams flowing into Apache Pulsar and/or Apache Kafka. From there we build streaming ETL with Apache Spark and enhance events with serverless functions for ML and enrichment. We build continuous queries against our topics with Flink SQL. We will stream data into Iceberg and other data stores.
We use the best streaming tools for the current applications with FLiPN and FLaNK. https://www.datainmotion.dev/
https://www.youtube.com/watch?v=qW9CP8Xngk4&ab_channel=SoftServeCareer
Apache NiFi
Apache Flink
Apache Kafka
Apache Iceberg
Streams Messaging Manager
SQL Stream Builder
Cloudera DataFlow Designer
NiFi Registry
Cloudera Schema Registry
big data fest building modern data streaming appsTimothy Spann
big data fest building modern data streaming apps
25 May 2023
softserve
flank stack
apache nifi
apache flink
apache kafka
minifi
java
apache iceberg
cloudera
tim spann
BigDataFest Building Modern Data Streaming Appsssuser73434e
BigDataFest: Building Modern Data Streaming Apps
2023
https://app.softserveinc.com/apply/big_data_fest/
CONFERENCE FOR
•DATA ENGINEERS•DATA SCIENTISTS•DATA ARCHITECTS
•DATA AND BUSINESS ANALYSTS•SOFTWARE DEVELOPERS
•ANYONE INTERESTED IN LEARNING MORE ABOUT DATA
Description
In my session, I will show you some best practices I have discovered over the last 7 years in building data streaming applications including IoT, CDC, Logs, and more.
In my modern approach, we utilize several open-source frameworks to maximize the best features of all. We often start with Apache NiFi as the orchestrator of streams flowing into Apache Pulsar and/or Apache Kafka. From there we build streaming ETL with Apache Spark and enhance events with serverless functions for ML and enrichment. We build continuous queries against our topics with Flink SQL. We will stream data into Iceberg and other data stores.
We use the best streaming tools for the current applications with FLiPN and FLaNK. https://www.datainmotion.dev/
Tim Spann is a Principal Developer Advocate at Cloudera where he works with Apache Pulsar, Apache Flink, Apache NiFi, Apache MXNet, TensorFlow, Apache Spark, big data, the IoT, machine learning, and deep learning. Tim has over a decade of experience with the IoT, big data, distributed computing, streaming technologies, and Java programming. Previously, he was a Principal Field Engineer at Cloudera, a Senior Solutions Architect at AirisData and a senior field engineer at Pivotal. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton on big data, the IoT, deep learning, streaming, NiFi, the blockchain, and Spark. Tim is a frequent speaker at conferences such as IoT Fusion, Strata, ApacheCon, Data Works Summit Berlin, DataWorks Summit Sydney, and Oracle Code NYC. He holds a BS and MS in computer science.
https://www.datainmotion.dev/p/about-me.html
https://dzone.com/users/297029/bunkertor.html
https://conferences.oreilly.com/strata/strata-ny-2018/public/schedule/speaker/185963
Netflix keystone streaming data pipeline @scale in the cloud-dbtb-2016Monal Daxini
Keystone processes over 700 billion events per day (1 petabyte) with at-least-once processing semantics in the cloud. We will explore in detail how we leverage Kafka, Samza, Docker, and Linux at scale to implement a multi-tenant pipeline in the AWS cloud within a year. We will also share our plans for offering Stream Processing as a Service for all of Netflix.
Thanks to tools like kubeadm, Terraform or Ansible, setting up a Kubernetes cluster in a dedicated environment is getting within reach, but what about setting up a bunch of clusters across multiple clouds in an automatic way? This is still a challenge, and the same goes for your own datacenter. In this talk we will take a look at an approach to orchestrating and managing a whole set of Kubernetes clusters with the Cluster API project of Kubernetes (a sub-project of sig-cluster-lifecycle). The main idea behind it is to use the Kubernetes API itself to manage multiple clusters, with their master and worker nodes, the same way you would manage your Pods - define the needed resources and the responsible controller will take care of providing them.
After an overview of the concepts of Cluster API, I will show what's needed to implement a Cluster API-conformant machine class/deployment. There we will see that adding your own provider isn't as hard as you may expect. At the end of the day it just requires a simple interface to implement. The corresponding Kubermatic machine-controller we implemented at Loodse is available as open source, so it's possible to play around with it. A live demo will show how easy it is to spin up and maintain multiple Kubernetes clusters at different public and on-premise cloud providers from one management cluster. A final wrap-up will summarize the current state of the Cluster API project and the advantages of managing clusters with CRDs and controllers instead of stateful scripts.
Running Cloud Foundry for 12 months - An experience report | anyninesanynines GmbH
anynines ran a public PaaS located in a German datacenter based on Cloud Foundry. In more than 12 months of running a Cloud Foundry PaaS, many lessons about security, high availability, OpenStack and many other exciting topics have been learned. See how BOSH can be used and how it shouldn't be used. Learn how to perform Cloud Foundry upgrades and read how to harden Cloud Foundry by adding more fault tolerance with Pacemaker.
Komei Shimamura, Timothy Okwii, Johnu George, and Marc Solanas Tarre presented Surge, a service for deploying and managing real-time data processing pipelines on OpenStack. Surge supports Apache Kafka for distributed messaging and Apache Storm for real-time data processing. It allows 1-click deployment and scaling of pipelines and manages the processes through a web UI, making complex systems easier for users. The goal is to simplify running machine learning and analytics jobs on streaming data hosted on OpenStack.
Apache Kafka - A modern Stream Processing PlatformGuido Schmutz
After a quick overview and introduction of Apache Kafka, this session covers two components which extend the core of Apache Kafka: Kafka Connect and Kafka Streams/KSQL.
Kafka Connect's role is to access data from the outside world and make it available inside Kafka by publishing it into a Kafka topic. Kafka Connect is also responsible for transporting information from inside Kafka to the outside world, which could be a database or a file system. Many connectors for different source and target systems are available out of the box, provided either by the community, by Confluent or by other vendors. You simply configure these connectors and off you go.
Kafka Streams is a lightweight component which extends Kafka with stream processing functionality. With it, Kafka can not only reliably and scalably transport events and messages through the Kafka broker but also analyse and process these events in real time. Interestingly, Kafka Streams does not provide its own cluster infrastructure, and it is also not meant to run on a Kafka cluster. The idea is to run Kafka Streams where it makes sense, which can be inside a "normal" Java application, inside a web container or on a more modern containerized (cloud) infrastructure such as Mesos, Kubernetes or Docker. Kafka Streams has a lot of interesting features, such as reliable state handling, queryable state and much more. KSQL is a streaming engine for Apache Kafka, providing a simple and completely interactive SQL interface for processing data in Kafka.
http://www.oreilly.com/pub/e/3764
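What a Kafka Streams/KSQL-style aggregation computes can be illustrated conceptually in plain Python. Kafka Streams itself is a Java library; this sketch only mimics its core idea - a stateful count, updated per record over an unbounded stream:

```python
# Conceptual sketch of a stateful streaming aggregation, in the spirit
# of a Kafka Streams count() on a grouped stream: the state table is
# updated on every incoming record, not in batches.
from collections import Counter

def stream_count(events, key_fn):
    """Yield a snapshot of the running count table after each event,
    roughly like a continuously-updated KTable."""
    counts = Counter()
    for event in events:
        counts[key_fn(event)] += 1
        yield dict(counts)

clicks = [{"page": "/home"}, {"page": "/docs"}, {"page": "/home"}]
tables = list(stream_count(clicks, key_fn=lambda e: e["page"]))
assert tables[-1] == {"/home": 2, "/docs": 1}
```

In real Kafka Streams the state lives in a fault-tolerant store backed by a changelog topic, so it survives restarts; the per-record update model is the same.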
Keystone processes over 700 billion events per day (1 petabyte) with at-least-once processing semantics in the cloud. Monal Daxini details how they used Kafka, Samza, Docker, and Linux at scale to implement a multi-tenant pipeline in the AWS cloud within a year. He'll also share plans for offering Stream Processing as a Service for all of Netflix.
Episode 4: Operating Kubernetes at Scale with DC/OSMesosphere Inc.
You’ve installed your Kubernetes cluster on DC/OS — now what? Operating Kubernetes efficiently can be challenging. In the final episode of our Kubernetes series, we will share best practices for operating your DC/OS Kubernetes cluster and maintaining performance. During this presentation, Joerg Schad and Chris Gaun show you how to successfully operate Kubernetes at scale in your environment.
During this session, we discuss:
1. How to upgrade DC/OS and Kubernetes with no downtime
2. How DC/OS guards against failure and enables fault domains that are resistant to outages within racks, availability zones, or cloud environments
3. How the monitoring and metrics capabilities on DC/OS improve operational analytics and help you get the most from your cluster
4. How cloud bursting extends your on-prem environment with resources from the cloud to handle spikes in your workload
This document discusses integrating Kata Containers with the ACRN hypervisor. It begins with an overview of the Kata architecture and how it fits into the container ecosystem. It then describes adaptations made to Kata to support ACRN, including adding ACRN as a supported hypervisor and implementing sandbox management APIs. Features added to ACRN-DM to support Kata are socket backend support and planned PCI device hotplug support. Next steps include completing hotplug support, VM shutdown handling, validation with Kubernetes and Docker, and upstreaming the changes. The goal is for performance to match or exceed QEMU and Firecracker.
Kubernetes is great for deploying stateless containers, but what about the big data ecosystem? Episode 3 of our Kubernetes series covers how DC/OS enables you to connect your Kubernetes-based applications to co-located big data services.
Slides cover:
1. Why persistence is challenging in distributed architectures
2. How DC/OS helps you take advantage of the services available in the big data ecosystem
3. How to connect Kubernetes to your data services through networking
4. How Apache Flink and Apache Spark work with Kubernetes to enable real-time data processing on DC/OS
James Watters Kafka Summit NYC 2019 KeynoteJames Watters
The document discusses how Spring Boot and Kafka can form the basis of a new enterprise application platform that enables continuous delivery and efficient scaling through microservices and event-driven architecture. It provides examples of companies like Netflix and T-Mobile that have successfully adopted this approach. The document advocates an "event-first" design and argues this platform approach allows for arbitrary scaling, multi-cloud deployment, and increased developer autonomy and agility.
This document summarizes Jean-Frederic Clere's presentation on moving a Tomcat cluster to the cloud. It discusses session replication in Tomcat clusters and challenges in the cloud like lack of multicast. It introduces solutions like KUBEPing and DNSPing that enable peer discovery through the Kubernetes API and DNS lookups. The presentation demonstrates these solutions in Katacoda tutorials and shows an operator that automates deployment. It aims to make Tomcat highly available in cloud environments like Kubernetes.
Altinity Cluster Manager: ClickHouse Management for Kubernetes and CloudAltinity Ltd
Webinar. August 21, 2019
By Robert Hodges and Altinity Engineering Team
Simplified management is a prerequisite for running any data warehouse at scale. Altinity is developing a new web-based console for ClickHouse called the Altinity Cluster Manager. It's now in beta and offers simplified operation of ClickHouse installations for users. In this webinar we introduce the ACM and demonstrate use on Kubernetes as well as Amazon Web Services. Attendees are welcome to sign up as beta testers and provide feedback. Please join us to see the future of ClickHouse management!
How to use Kafka for storing intermediate data and as a pub/sub model, covering each of the Producer/Consumer/Topic configs in depth and the internal workings of it.
IBM MQ V9 provides a new optional delivery model with two streams: a long-term support stream for stability and a rapid function delivery stream. It includes features like central provisioning of client configuration, a new quality of service for Advanced Message Security called Confidentiality, and LDAP authorization support for Windows clients. Activity trace information can now be subscribed to via publish/subscribe without additional configuration.
Utilizing messaging systems grants us the capability to decouple services from each other: downtime of a service consuming messages does not impact the functionality of the service sending messages and vice-versa.
In this talk we will discuss how to setup and use messaging systems. As practical examples, we use AMQP backed by ArtemisMQ, as well as kafka to send and receive messages, automatically as well as programmatically.
Speaker: Marco Bungart, Senior Software Engineer at ConSol Düsseldorf
Similar to Blueprint: Kafka Publisher of Ceilometer (20)
This presentation by Juraj Čorba, Chair of OECD Working Party on Artificial Intelligence Governance (AIGO), was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
This presentation by Tim Capel, Director of the UK Information Commissioner’s Office Legal Service, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
This presentation by OECD, OECD Secretariat, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
This presentation by OECD, OECD Secretariat, was made during the discussion “Pro-competitive Industrial Policy” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/pcip.
This presentation was uploaded with the author’s consent.
This presentation by OECD, OECD Secretariat, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
The importance of sustainable and efficient computational practices in artificial intelligence (AI) and deep learning has become increasingly critical. This webinar focuses on the intersection of sustainability and AI, highlighting the significance of energy-efficient deep learning, innovative randomization techniques in neural networks, the potential of reservoir computing, and the cutting-edge realm of neuromorphic computing. This webinar aims to connect theoretical knowledge with practical applications and provide insights into how these innovative approaches can lead to more robust, efficient, and environmentally conscious AI systems.
Webinar Speaker: Prof. Claudio Gallicchio, Assistant Professor, University of Pisa
Claudio Gallicchio is an Assistant Professor at the Department of Computer Science of the University of Pisa, Italy. His research involves merging concepts from Deep Learning, Dynamical Systems, and Randomized Neural Systems, and he has co-authored over 100 scientific publications on the subject. He is the founder of the IEEE CIS Task Force on Reservoir Computing, and the co-founder and chair of the IEEE Task Force on Randomization-based Neural Networks and Learning Systems. He is an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS).
This presentation by Thibault Schrepel, Associate Professor of Law at Vrije Universiteit Amsterdam University, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
This presentation by Nathaniel Lane, Associate Professor in Economics at Oxford University, was made during the discussion “Pro-competitive Industrial Policy” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/pcip.
This presentation was uploaded with the author’s consent.
This presentation by Professor Alex Robson, Deputy Chair of Australia’s Productivity Commission, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author’s consent.
Carrer goals.pptx and their importance in real lifeartemacademy2
Career goals serve as a roadmap for individuals, guiding them toward achieving long-term professional aspirations and personal fulfillment. Establishing clear career goals enables professionals to focus their efforts on developing specific skills, gaining relevant experience, and making strategic decisions that align with their desired career trajectory. By setting both short-term and long-term objectives, individuals can systematically track their progress, make necessary adjustments, and stay motivated. Short-term goals often include acquiring new qualifications, mastering particular competencies, or securing a specific role, while long-term goals might encompass reaching executive positions, becoming industry experts, or launching entrepreneurial ventures.
Moreover, having well-defined career goals fosters a sense of purpose and direction, enhancing job satisfaction and overall productivity. It encourages continuous learning and adaptation, as professionals remain attuned to industry trends and evolving job market demands. Career goals also facilitate better time management and resource allocation, as individuals prioritize tasks and opportunities that advance their professional growth. In addition, articulating career goals can aid in networking and mentorship, as it allows individuals to communicate their aspirations clearly to potential mentors, colleagues, and employers, thereby opening doors to valuable guidance and support. Ultimately, career goals are integral to personal and professional development, driving individuals toward sustained success and fulfillment in their chosen fields.
XP 2024 presentation: A New Look to Leadershipsamililja
Presentation slides from XP2024 conference, Bolzano IT. The slides describe a new view to leadership and combines it with anthro-complexity (aka cynefin).
This presentation by Yong Lim, Professor of Economic Law at Seoul National University School of Law, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
This presentation by OECD, OECD Secretariat, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author’s consent.
This presentation by Katharine Kemp, Associate Professor at the Faculty of Law & Justice at UNSW Sydney, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
Suzanne Lagerweij - Influence Without Power - Why Empathy is Your Best Friend...Suzanne Lagerweij
This is a workshop about communication and collaboration. We will experience how we can analyze the reasons for resistance to change (exercise 1) and practice how to improve our conversation style and be more in control and effective in the way we communicate (exercise 2).
This session will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
Abstract:
Let’s talk about powerful conversations! We all know how to lead a constructive conversation, right? Then why is it so difficult to have those conversations with people at work, especially those in powerful positions that show resistance to change?
Learning to control and direct conversations takes understanding and practice.
We can combine our innate empathy with our analytical skills to gain a deeper understanding of complex situations at work. Join this session to learn how to prepare for difficult conversations and how to improve our agile conversations in order to be more influential without power. We will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
In the session you will experience how preparing and reflecting on your conversation can help you be more influential at work. You will learn how to communicate more effectively with the people needed to achieve positive change. You will leave with a self-revised version of a difficult conversation and a practical model to use when you get back to work.
Come learn more on how to become a real influencer!
Blueprint: Kafka Publisher of Ceilometer
1. Kafka Publisher of Ceilometer
OpenStack Summit @ Paris
Komei Shimamura, Yathiraj Udupi, Debo Dutta
[kshimamu|yudupi|dedutta]@cisco.com
2. Kafka: Distributed Messaging System
‣ Publish-Subscribe Architecture
‣ Components: Publisher, Broker, and Consumer
‣ Supports Publishing Messages in Real Time
[Diagram: publishers (PR) send messages to brokers (BR); consumers (CN) fetch them. BR: broker, CN: consumer, PR: publisher]
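The publish/fetch flow in the diagram above can be sketched with a minimal in-process stand-in for the broker. This illustrates only the publish-subscribe pattern, not the actual Kafka API; a real deployment would use a Kafka client library against a running broker.

```python
from collections import defaultdict, deque

class Broker:
    """Toy in-process broker (BR): buffers messages per topic for consumers to fetch."""
    def __init__(self):
        self.topics = defaultdict(deque)

    def publish(self, topic, message):
        # Publisher (PR) side: append the message to the topic's queue.
        self.topics[topic].append(message)

    def fetch(self, topic):
        # Consumer (CN) side: pull the oldest message, or None if the topic is empty.
        queue = self.topics[topic]
        return queue.popleft() if queue else None

broker = Broker()
broker.publish("ceilometer", {"meter": "cpu_util", "value": 42.0})  # PR publishes
sample = broker.fetch("ceilometer")                                 # CN fetches
print(sample)  # {'meter': 'cpu_util', 'value': 42.0}
```

In real Kafka, the broker additionally persists messages, partitions topics, and lets many consumer groups read the same stream independently; the toy queue above drops a message once fetched.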
3. Real-Time Data Use Cases
‣ Visualisation
‣ Kafka - ElasticSearch - kibana
‣ Machine Learning / Fault Detection
‣ Kafka - Storm
‣ e.g. predicting that this VM will terminate unexpectedly in 10 min
‣ Machine Learning / Recommendation
‣ Kafka - Storm - Jubatus
‣ e.g. recommending which host the next VM should be launched on
OpenStack can now cooperate with External OSS!
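As a stand-in for the Storm-based fault-detection pipeline mentioned above, a simple rule-based check over a batch of samples illustrates the idea. The sample fields and the cpu_util threshold below are illustrative assumptions, not the Ceilometer message schema or anything from the slides.

```python
# Hypothetical samples as they might arrive on a Kafka topic from Ceilometer;
# the field names are illustrative only.
samples = [
    {"resource_id": "vm-1", "meter": "cpu_util", "value": 35.0},
    {"resource_id": "vm-2", "meter": "cpu_util", "value": 97.5},
    {"resource_id": "vm-3", "meter": "cpu_util", "value": 12.0},
]

CPU_ALERT_THRESHOLD = 90.0  # illustrative threshold, not from the slides

def detect_faults(stream, threshold=CPU_ALERT_THRESHOLD):
    """Flag resources whose CPU utilisation exceeds the threshold."""
    return [s["resource_id"] for s in stream
            if s["meter"] == "cpu_util" and s["value"] > threshold]

print(detect_faults(samples))  # ['vm-2']
```

A real pipeline would replace the static list with a Kafka consumer loop and the threshold rule with a trained model running in Storm.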
4. How to Publish to Kafka from Ceilometer
‣ Just configure pipeline.yaml
‣ Ceilometer already ships with several publishers
‣ Add the new Kafka publisher as an option
‣ Specify the Kafka broker
‣ Users can name the published data as a topic
Ceilometer can publish its data easily!
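A pipeline.yaml sketch of the configuration described on this slide; the broker address and topic name are placeholders, and the kafka:// publisher URL syntax is an assumption here rather than something shown in the transcript.

```yaml
sources:
    - name: meter_source
      interval: 600
      meters:
          - "*"
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - notifier://
          # New Kafka publisher: broker address and topic are placeholders.
          - kafka://192.0.2.10:9092?topic=ceilometer
```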
5. Summary
‣ Kafka is a Broker of Real-time Messages
‣ Minimal configuration is needed
‣ External OSS can take advantage of Ceilometer
[Diagram: Ceilometer acts as the publisher (PR), Kafka as the broker (BR), and many external OSS tools as the consumers (CN)]
Thank you!