Learning How to Build Event Streaming Applications with Pac-Man, Ricardo Ferreira, Developer Advocate, Confluent
https://www.meetup.com/Raleigh-Apache-Kafka-Meetup-by-Confluent/events/269215507/
The Incredible World Of Voice Search In Less Than 15 MinutesJohn Lincoln
John Lincoln presented on the topic of voice search. He discussed key stats on voice search usage, the differences between voice actions and voice searches, popular digital assistants and how they function, and how voice search differs from traditional text-based search. Lincoln also covered optimizing for voice search through local SEO, content, and structured data. He emphasized starting to optimize now for the growing importance of voice interactions.
PubCon Last Vegas 2015 - Editing AdWords ScriptsChristi Olson
A non-coders guide and introduction to AdWords Scripts. Presented at PubCon Vegas 2015 a step by step guide to implement AdWords scripts along with an example of how to edit and tweak a pre-written script to do what you want it to do.
Crossing the Streams: Rethinking Stream Processing with KStreams and KSQL confluent
(Viktor Gamov, Confluent) Kafka Summit SF 2018
All things change constantly! And dealing with constantly changing data at low latency is pretty hard. It doesn’t need to be that way. Apache Kafka, the de facto standard open source distributed stream processing system. Many of us know Kafka’s architectural and pub/sub API particulars. But that doesn’t mean we’re equipped to build the kind of real-time streaming data systems that the next generation of business requirements are going to demand. We need to get on board with streams!
Viktor Gamov will introduce Kafka Streams and KSQL—an important recent addition to the Confluent Open Source platform that lets us build sophisticated stream processing systems with little to no code at all! They will talk about how to deploy stream processing applications and look at the actual working code that will bring your thinking about streaming data systems from the ancient history of batch processing into the current era of streaming data!
P.S. No prior knowledge of Kafka Streams, KSQL or Ghostbusters needed!
Optimising Content For Voice Search & Virtual AssistantsKaizen
In the age of mobile devices and virtual assistants, the future of SEO requires optimizing content for voice search. New devices such as Apple's Siri, Amazon Echo and Google Home are further accelerating the trend toward voice search. Marketers and merchants must prepare now for a future in which voice and virtual assistants play a much larger role in content discovery and conversions. This session explores how to optimise your content and user experience for a future in which half or more of all queries will be voice-driven.
Alexa, the voice service that powers Amazon Echo, Echo Dot, Amazon Tap and Amazon Fire TV provides a set of built-in abilities, or skills, that enable customers to interact with devices in a more intuitive way using voice. Examples of these skills include the ability to play music, answer general questions, set an alarm or timer and more. Customers can then access these new skills simply by asking Alexa a question or making a command. This session will be a walkthrough of the latest Alexa Skills Kit (ASK) and will teach you how to build your own skills for Alexa enabled devices. You will also learn how to monitor your new skill using AWS CloudWatch and how to test your skill using AWS Lambda Unit Tests and the Alexa Voice and Service Simulators.
The document discusses ways to build performant web apps. It recommends sending less JavaScript to mobile phones since JavaScript can slow down interactivity. It suggests using progressive web apps, web components, and tools to build performant apps without relying on frameworks. The document promotes laziness and procrastination in coding approaches.
From learning how to code (2 weeks) then jumping in to EmberJS. A look into what it's like to be a beginning developer and how easy EmberJS is to use with very little knowledge.
This talk provides light insight on Ember-CLI, components, add-ons, and troubleshooting code.
Video: vimeo.com/144527585
What can be done with an API is limited only by imagination. However, what should be done using your API may have a more definable answer. Whether you are planning to leverage your API to extend your business model into new channels or to capture new revenue, it is The Business of APIs.
The Incredible World Of Voice Search In Less Than 15 MinutesJohn Lincoln
John Lincoln presented on the topic of voice search. He discussed key stats on voice search usage, the differences between voice actions and voice searches, popular digital assistants and how they function, and how voice search differs from traditional text-based search. Lincoln also covered optimizing for voice search through local SEO, content, and structured data. He emphasized starting to optimize now for the growing importance of voice interactions.
PubCon Last Vegas 2015 - Editing AdWords ScriptsChristi Olson
A non-coders guide and introduction to AdWords Scripts. Presented at PubCon Vegas 2015 a step by step guide to implement AdWords scripts along with an example of how to edit and tweak a pre-written script to do what you want it to do.
Crossing the Streams: Rethinking Stream Processing with KStreams and KSQL confluent
(Viktor Gamov, Confluent) Kafka Summit SF 2018
All things change constantly! And dealing with constantly changing data at low latency is pretty hard. It doesn’t need to be that way. Apache Kafka, the de facto standard open source distributed stream processing system. Many of us know Kafka’s architectural and pub/sub API particulars. But that doesn’t mean we’re equipped to build the kind of real-time streaming data systems that the next generation of business requirements are going to demand. We need to get on board with streams!
Viktor Gamov will introduce Kafka Streams and KSQL—an important recent addition to the Confluent Open Source platform that lets us build sophisticated stream processing systems with little to no code at all! They will talk about how to deploy stream processing applications and look at the actual working code that will bring your thinking about streaming data systems from the ancient history of batch processing into the current era of streaming data!
P.S. No prior knowledge of Kafka Streams, KSQL or Ghostbusters needed!
Optimising Content For Voice Search & Virtual AssistantsKaizen
In the age of mobile devices and virtual assistants, the future of SEO requires optimizing content for voice search. New devices such as Apple's Siri, Amazon Echo and Google Home are further accelerating the trend toward voice search. Marketers and merchants must prepare now for a future in which voice and virtual assistants play a much larger role in content discovery and conversions. This session explores how to optimise your content and user experience for a future in which half or more of all queries will be voice-driven.
Alexa, the voice service that powers Amazon Echo, Echo Dot, Amazon Tap and Amazon Fire TV provides a set of built-in abilities, or skills, that enable customers to interact with devices in a more intuitive way using voice. Examples of these skills include the ability to play music, answer general questions, set an alarm or timer and more. Customers can then access these new skills simply by asking Alexa a question or making a command. This session will be a walkthrough of the latest Alexa Skills Kit (ASK) and will teach you how to build your own skills for Alexa enabled devices. You will also learn how to monitor your new skill using AWS CloudWatch and how to test your skill using AWS Lambda Unit Tests and the Alexa Voice and Service Simulators.
The document discusses ways to build performant web apps. It recommends sending less JavaScript to mobile phones since JavaScript can slow down interactivity. It suggests using progressive web apps, web components, and tools to build performant apps without relying on frameworks. The document promotes laziness and procrastination in coding approaches.
From learning how to code (2 weeks) then jumping in to EmberJS. A look into what it's like to be a beginning developer and how easy EmberJS is to use with very little knowledge.
This talk provides light insight on Ember-CLI, components, add-ons, and troubleshooting code.
Video: vimeo.com/144527585
What can be done with an API is limited only by imagination. However, what should be done using your API may have a more definable answer. Whether you are planning to leverage your API to extend your business model into new channels or to capture new revenue, it is The Business of APIs.
Rediscovering the Value of Apache Kafka® in Modern Data Architectureconfluent
This document discusses the origins and value of Apache Kafka in modern data architectures. It describes how Kafka was created to handle continuous flows of data, addressing limitations in databases and messaging systems. Kafka provides a unified solution for messaging, data storage, and stream processing. It originated from the ideas of treating the log as a first-class citizen and combining messaging, durable storage, and stream processing capabilities into a streaming platform. The document demonstrates how Kafka can be used to build a game scoring application using streams and tables. It recommends ways to learn more about Kafka including trying Confluent Cloud, tutorials, books, and attending Kafka Summit.
Being an Apache Kafka Developer Hero in the World of Cloud (Ricardo Ferreira,...confluent
"Apache Kafka is an amazing piece of technology, that has been furiously adopted by companies all around the world to implement event-driven architectures. While its adoption continues to increase; the reality is that most developers often complain about the complexity of managing the clusters by themselves, which seriously decreases their ability to be agile. This talk will introduce Confluent Cloud, a service that offers Apache Kafka and the Confluent Platform so developers can focus on what they do best: the coding part.
Through interactive demos, it will be shown how to quickly reuse code written for standard Kafka APIs to connect to Confluent Cloud and doing some interesting stuff with it. This is a zero-experience-needed type of session, where the focus is on providing the first steps to beginners."
Introduction to Progressive Web Apps / Meet Magento PL 2018Filip Rakowski
This document is a presentation on progressive web apps (PWAs) given by Filip Rakowski. It discusses key aspects of PWAs including service workers, caching strategies, payment APIs, push notifications, background synchronization, web workers, and web app manifests. The presentation emphasizes how these technologies allow PWAs to provide native app-like experiences through features like offline support, push notifications, and one-click installation.
Filip Rakowski "Web Performance in modern JavaScript world"Fwdays
In mobile-first era where network connectivity is not always stable and low-end devices are widely used it’s extremely important to keep your web applications smooth and optimized. During the talk we’ll take a look at the performance challenges we are facing every day and how modern JavaScript technologies such as PWA and AMP can help solving them. We will investigate how to optimize our app loading time, make JavaScript parsing faster, how to deliver reliable waiting experience to our users and much more.
Magento 2 Performance: Every Second CountsJoshua Warren
On the web, every second counts. Studies have shown that a 1 second delay in load time can cost a mid-sized eCommerce company $2.5 million per year in lost revenue. Let’s look at what Magento 2 has done to improve performance and how we can take things a step further to ensure the Magento 2 sites we build and maintain are well designed, well written and very, very fast.
Presented at php[world] 2016.
1. The document discusses various options for implementing disaster recovery and high availability with Kafka across multiple data centers, including MirrorMaker, MirrorMaker 2, and stretch clusters.
2. MirrorMaker provides basic asynchronous replication between data centers but has limitations around failover and latency. MirrorMaker 2 and replicators support active-active production in both DCs but with more complexity.
3. Stretch clusters treat the multiple DCs as a single Kafka cluster with synchronous replication and no producer latency, but require more resources and rely on low WAN latency. The best option depends on requirements for developer ease, latency, consistency, and budget.
This document discusses the importance of monitoring in DevOps. It provides examples of metrics to monitor like response times, errors, user behavior etc. and how to collect and visualize this data. Open source tools like CollectD, Graphite, Logstash, Elasticsearch, Kibana, InfluxDB and Grafana are recommended for collection, storage and visualization of monitoring data. The document emphasizes making decisions based on facts obtained from monitoring and continuous improvement.
Performance optimization of vue.js apps with modern jsFilip Rakowski
This document discusses various techniques for optimizing the performance of Vue.js apps, including code splitting, lazy loading components and libraries, minimizing initial bundle size, prefetching resources, and using service workers to improve caching. Some key recommendations are to split code by route, lazily load off-screen components, defer non-critical libraries, and prefetch lazily loaded resources to improve performance and user experience. Measuring tools like the coverage tool, bundle analyzer, and import cost plugin can help identify optimization opportunities.
The document is a transcript from an API 101 workshop that provides an introduction to APIs. In the workshop, two presenters discuss what APIs are, the business benefits of APIs, REST architecture, and tips for API design and developer success. They cover topics such as API history, how APIs enable applications and services, examples of companies that built platforms using APIs, REST principles like HTTP verbs and response formats, and best practices for marketing and supporting developers. The workshop includes presentations, examples, and opportunities for audience Q&A.
The document is a transcript from an API 101 workshop. It provides an introduction to APIs and discusses what they are, their history, examples of how APIs work, and best practices for designing, marketing, and supporting APIs. The workshop consisted of presentations and discussions from multiple speakers on topics including the business benefits of APIs, REST architecture, and strategies for API and developer success.
The document discusses common failures libraries experience when trying to implement new technologies and services. It identifies failures such as assuming what works for one library will work for another, focusing on new technologies without tying them to strategic goals, and taking on projects without allocating adequate staff time and resources. The document then provides recommendations for building an innovative culture, such as questioning assumptions, encouraging staff learning and risk-taking, and involving staff from all levels in planning.
This document discusses using Facebook for mobile app distribution and promotion. It covers how the Facebook platform can help developers build great apps, distribute them through organic sharing on Facebook, and promote apps using Facebook ads. It provides information on the Facebook SDKs for iOS and Android, how to integrate sharing and login, best practices for permissions, and how to drive installs through mobile app install ads and sponsored stories on Facebook.
Collaborative technology in libraries allows people to work together on projects and documents over local and remote networks. The document discusses how social networks, private social networks, conversation tools, multimedia, meetings, sharing files, and collaborative workspaces can utilize collaborative technologies. It provides examples of using tools like social networks, wikis, comments, and mashups to engage users and bring people together to work on common tasks.
8 Lessons Learned from Using Kafka in 1500 microservices - confluent streamin...Natan Silnitsky
Kafka is the bedrock of Wix's distributed microservices system. For the last 5 years we have learned a lot about how to successfully scale our event-driven architecture to roughly 1500 microservices.
We’ve managed to achieve higher decoupling and independence for our various services and dev teams that have very different use-cases while maintaining a single uniform infrastructure in place.
In these slides you will learn about 8 key decisions and steps you can take in order to safely scale-up your Kafka-based system. These include:
* How to increase dev velocity of event driven style code.
* How to optimize working with Kafka in polyglot setting
* How to support growing amount of traffic and developers.
What is Kafka & why is it Important? (UKOUG Tech17, Birmingham, UK - December...Lucas Jellema
Fast data arrives in real time and potentially high volume. Rapid processing, filtering and aggregation is required to ensure timely reaction and actual information in user interfaces. Doing so is a challenge, make this happen in a scalable and reliable fashion is even more interesting. This session introduces Apache Kafka as the scalable event bus that takes care of the events as they flow in and Kafka Streams and KSQL for the streaming analytics. Both Java and Node applications are demonstrated that interact with Kafka and leverage Server Sent Events and WebSocket channels to update the Web UI in real time. User activity performed by the audience in the Web UI is processed by the Kafka powered back end and results in live updates on all clients.
This presentation includes a demonstration of remote database synchronization through Twitter.
The document discusses building chatbots using Google Cloud Functions and API.AI. It covers the design, development and deployment process. For design, it discusses creating a persona, style guide and sample dialogs. For development, it explains how conversations work with speech to text, natural language processing and text to speech. Cloud Functions is presented as a serverless platform to build event-based microservices for chatbots. API.AI is demonstrated for natural language understanding. Integrations with Actions on Google and other platforms are also covered. The document concludes with resources for conversational design guidelines.
This document discusses improving the developer experience for open source projects by focusing on documentation, outdated documentation being a common problem, and defining processes for code style, linting, contributions and releases. It recommends using containers to reduce prerequisites and mimic production environments. Defining processes is described as important for others to understand what to do. Laravel is presented as a case study of a project that has well-defined processes.
I Don’t Always Test My Streams, But When I Do, I Do it in Production (Viktor ...confluent
Testing stream processing applications (Kafka Streams and ksqlDB) isn’t always straightforward. You could run a simple topology manually and observe the results. But how about repeatable tests that you can run anytime, as part of a build without a Kafka cluster or Zookeeper? Luckily, Kafka Streams includes the TopologyTestDriver module (and ksqlDB includes test-runner) that allows you to do precisely that. After learning this, no doubt, your test coverage is sky-high! However, how will your stream processing application perform once deployed to production? You might depend on external resources such as databases, web services, and connectors. Viktor will start this talk covering the basics of unit testing of Kafka Streams applications using TopologyTestDriver. Viktor will also look at some popular open-source libraries for testing streams applications. Viktor demonstrates TestContainers, a Java library that provides lightweight, disposable instances of shared databases, Kafka clusters, and anything else that can run in a Docker container and how to use it for integration testing of processing applications! And lastly, Viktor will show ksqlDB’s test-runner to unit test your KSQL applications.
Ember.js - Harnessing Convention Over ConfigurationTracy Lee
The document appears to be a presentation about Ember and Ember-CLI. It discusses the power of Ember's conventions over configuration approach and the Ember-CLI tool. It highlights features like app structure, Babel compilation, live reload, testing support, and deployment pipelines. The presentation demonstrates how to get started with Ember by installing Ember-CLI and generating a new app, and covers utilizing the addon ecosystem with examples like Ember Data. Resources are provided for learning more about building basic Ember apps.
Building API data products on top of your real-time data infrastructureconfluent
This talk and live demonstration will examine how Confluent and Gravitee.io integrate to unlock value from streaming data through API products.
You will learn how data owners and API providers can document, secure data products on top of Confluent brokers, including schema validation, topic routing and message filtering.
You will also see how data and API consumers can discover and subscribe to products in a developer portal, as well as how they can integrate with Confluent topics through protocols like REST, Websockets, Server-sent Events and Webhooks.
Whether you want to monetize your real-time data, enable new integrations with partners, or provide self-service access to topics through various protocols, this webinar is for you!
Rediscovering the Value of Apache Kafka® in Modern Data Architectureconfluent
This document discusses the origins and value of Apache Kafka in modern data architectures. It describes how Kafka was created to handle continuous flows of data, addressing limitations in databases and messaging systems. Kafka provides a unified solution for messaging, data storage, and stream processing. It originated from the ideas of treating the log as a first-class citizen and combining messaging, durable storage, and stream processing capabilities into a streaming platform. The document demonstrates how Kafka can be used to build a game scoring application using streams and tables. It recommends ways to learn more about Kafka including trying Confluent Cloud, tutorials, books, and attending Kafka Summit.
Being an Apache Kafka Developer Hero in the World of Cloud (Ricardo Ferreira,...confluent
"Apache Kafka is an amazing piece of technology, that has been furiously adopted by companies all around the world to implement event-driven architectures. While its adoption continues to increase; the reality is that most developers often complain about the complexity of managing the clusters by themselves, which seriously decreases their ability to be agile. This talk will introduce Confluent Cloud, a service that offers Apache Kafka and the Confluent Platform so developers can focus on what they do best: the coding part.
Through interactive demos, it will be shown how to quickly reuse code written for standard Kafka APIs to connect to Confluent Cloud and doing some interesting stuff with it. This is a zero-experience-needed type of session, where the focus is on providing the first steps to beginners."
Introduction to Progressive Web Apps / Meet Magento PL 2018Filip Rakowski
This document is a presentation on progressive web apps (PWAs) given by Filip Rakowski. It discusses key aspects of PWAs including service workers, caching strategies, payment APIs, push notifications, background synchronization, web workers, and web app manifests. The presentation emphasizes how these technologies allow PWAs to provide native app-like experiences through features like offline support, push notifications, and one-click installation.
Filip Rakowski "Web Performance in modern JavaScript world"Fwdays
In mobile-first era where network connectivity is not always stable and low-end devices are widely used it’s extremely important to keep your web applications smooth and optimized. During the talk we’ll take a look at the performance challenges we are facing every day and how modern JavaScript technologies such as PWA and AMP can help solving them. We will investigate how to optimize our app loading time, make JavaScript parsing faster, how to deliver reliable waiting experience to our users and much more.
Magento 2 Performance: Every Second CountsJoshua Warren
On the web, every second counts. Studies have shown that a 1 second delay in load time can cost a mid-sized eCommerce company $2.5 million per year in lost revenue. Let’s look at what Magento 2 has done to improve performance and how we can take things a step further to ensure the Magento 2 sites we build and maintain are well designed, well written and very, very fast.
Presented at php[world] 2016.
1. The document discusses various options for implementing disaster recovery and high availability with Kafka across multiple data centers, including MirrorMaker, MirrorMaker 2, and stretch clusters.
2. MirrorMaker provides basic asynchronous replication between data centers but has limitations around failover and latency. MirrorMaker 2 and replicators support active-active production in both DCs but with more complexity.
3. Stretch clusters treat the multiple DCs as a single Kafka cluster with synchronous replication and no producer latency, but require more resources and rely on low WAN latency. The best option depends on requirements for developer ease, latency, consistency, and budget.
This document discusses the importance of monitoring in DevOps. It provides examples of metrics to monitor like response times, errors, user behavior etc. and how to collect and visualize this data. Open source tools like CollectD, Graphite, Logstash, Elasticsearch, Kibana, InfluxDB and Grafana are recommended for collection, storage and visualization of monitoring data. The document emphasizes making decisions based on facts obtained from monitoring and continuous improvement.
Performance optimization of vue.js apps with modern jsFilip Rakowski
This document discusses various techniques for optimizing the performance of Vue.js apps, including code splitting, lazy loading components and libraries, minimizing initial bundle size, prefetching resources, and using service workers to improve caching. Some key recommendations are to split code by route, lazily load off-screen components, defer non-critical libraries, and prefetch lazily loaded resources to improve performance and user experience. Measuring tools like the coverage tool, bundle analyzer, and import cost plugin can help identify optimization opportunities.
The document is a transcript from an API 101 workshop that provides an introduction to APIs. In the workshop, two presenters discuss what APIs are, the business benefits of APIs, REST architecture, and tips for API design and developer success. They cover topics such as API history, how APIs enable applications and services, examples of companies that built platforms using APIs, REST principles like HTTP verbs and response formats, and best practices for marketing and supporting developers. The workshop includes presentations, examples, and opportunities for audience Q&A.
The document is a transcript from an API 101 workshop. It provides an introduction to APIs and discusses what they are, their history, examples of how APIs work, and best practices for designing, marketing, and supporting APIs. The workshop consisted of presentations and discussions from multiple speakers on topics including the business benefits of APIs, REST architecture, and strategies for API and developer success.
The document discusses common failures libraries experience when trying to implement new technologies and services. It identifies failures such as assuming what works for one library will work for another, focusing on new technologies without tying them to strategic goals, and taking on projects without allocating adequate staff time and resources. The document then provides recommendations for building an innovative culture, such as questioning assumptions, encouraging staff learning and risk-taking, and involving staff from all levels in planning.
This document discusses using Facebook for mobile app distribution and promotion. It covers how the Facebook platform can help developers build great apps, distribute them through organic sharing on Facebook, and promote apps using Facebook ads. It provides information on the Facebook SDKs for iOS and Android, how to integrate sharing and login, best practices for permissions, and how to drive installs through mobile app install ads and sponsored stories on Facebook.
Collaborative technology in libraries allows people to work together on projects and documents over local and remote networks. The document discusses how social networks, private social networks, conversation tools, multimedia, meetings, sharing files, and collaborative workspaces can utilize collaborative technologies. It provides examples of using tools like social networks, wikis, comments, and mashups to engage users and bring people together to work on common tasks.
8 Lessons Learned from Using Kafka in 1500 microservices - confluent streamin...Natan Silnitsky
Kafka is the bedrock of Wix's distributed microservices system. For the last 5 years we have learned a lot about how to successfully scale our event-driven architecture to roughly 1500 microservices.
We’ve managed to achieve higher decoupling and independence for our various services and dev teams that have very different use-cases while maintaining a single uniform infrastructure in place.
In these slides you will learn about 8 key decisions and steps you can take in order to safely scale-up your Kafka-based system. These include:
* How to increase dev velocity of event driven style code.
* How to optimize working with Kafka in polyglot setting
* How to support growing amount of traffic and developers.
What is Kafka & why is it Important? (UKOUG Tech17, Birmingham, UK - December...Lucas Jellema
Fast data arrives in real time and potentially high volume. Rapid processing, filtering and aggregation is required to ensure timely reaction and actual information in user interfaces. Doing so is a challenge, make this happen in a scalable and reliable fashion is even more interesting. This session introduces Apache Kafka as the scalable event bus that takes care of the events as they flow in and Kafka Streams and KSQL for the streaming analytics. Both Java and Node applications are demonstrated that interact with Kafka and leverage Server Sent Events and WebSocket channels to update the Web UI in real time. User activity performed by the audience in the Web UI is processed by the Kafka powered back end and results in live updates on all clients.
This presentation includes a demonstration of remote database synchronization through Twitter.
The document discusses building chatbots using Google Cloud Functions and API.AI. It covers the design, development and deployment process. For design, it discusses creating a persona, style guide and sample dialogs. For development, it explains how conversations work with speech to text, natural language processing and text to speech. Cloud Functions is presented as a serverless platform to build event-based microservices for chatbots. API.AI is demonstrated for natural language understanding. Integrations with Actions on Google and other platforms are also covered. The document concludes with resources for conversational design guidelines.
This document discusses improving the developer experience for open source projects by focusing on documentation, outdated documentation being a common problem, and defining processes for code style, linting, contributions and releases. It recommends using containers to reduce prerequisites and mimic production environments. Defining processes is described as important for others to understand what to do. Laravel is presented as a case study of a project that has well-defined processes.
I Don’t Always Test My Streams, But When I Do, I Do it in Production (Viktor ...confluent
Testing stream processing applications (Kafka Streams and ksqlDB) isn’t always straightforward. You could run a simple topology manually and observe the results. But how about repeatable tests that you can run anytime, as part of a build without a Kafka cluster or Zookeeper? Luckily, Kafka Streams includes the TopologyTestDriver module (and ksqlDB includes test-runner) that allows you to do precisely that. After learning this, no doubt, your test coverage is sky-high! However, how will your stream processing application perform once deployed to production? You might depend on external resources such as databases, web services, and connectors. Viktor will start this talk covering the basics of unit testing of Kafka Streams applications using TopologyTestDriver. Viktor will also look at some popular open-source libraries for testing streams applications. Viktor demonstrates TestContainers, a Java library that provides lightweight, disposable instances of shared databases, Kafka clusters, and anything else that can run in a Docker container and how to use it for integration testing of processing applications! And lastly, Viktor will show ksqlDB’s test-runner to unit test your KSQL applications.
Ember.js - Harnessing Convention Over ConfigurationTracy Lee
The document appears to be a presentation about Ember and Ember-CLI. It discusses the power of Ember's conventions over configuration approach and the Ember-CLI tool. It highlights features like app structure, Babel compilation, live reload, testing support, and deployment pipelines. The presentation demonstrates how to get started with Ember by installing Ember-CLI and generating a new app, and covers utilizing the addon ecosystem with examples like Ember Data. Resources are provided for learning more about building basic Ember apps.
Similar to Learning How to Build Event Streaming Applications with Pac-Man (20)
Building API data products on top of your real-time data infrastructureconfluent
This talk and live demonstration will examine how Confluent and Gravitee.io integrate to unlock value from streaming data through API products.
You will learn how data owners and API providers can document, secure data products on top of Confluent brokers, including schema validation, topic routing and message filtering.
You will also see how data and API consumers can discover and subscribe to products in a developer portal, as well as how they can integrate with Confluent topics through protocols like REST, Websockets, Server-sent Events and Webhooks.
Whether you want to monetize your real-time data, enable new integrations with partners, or provide self-service access to topics through various protocols, this webinar is for you!
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...confluent
In our exclusive webinar, you'll learn why event-driven architecture is the key to unlocking cost efficiency, operational effectiveness, and profitability. Gain insights on how this approach differs from API-driven methods and why it's essential for your organization's success.
Santander Stream Processing with Apache Flinkconfluent
Flink is becoming the de facto standard for stream processing due to its scalability, performance, fault tolerance, and language flexibility. It supports stream processing, batch processing, and analytics through one unified system. Developers choose Flink for its robust feature set and ability to handle stream processing workloads at large scales efficiently.
Unlocking the Power of IoT: A comprehensive approach to real-time insightsconfluent
In today's data-driven world, the Internet of Things (IoT) is revolutionizing industries and unlocking new possibilities. Join Data Reply, Confluent, and Imply as we unveil a comprehensive solution for IoT that harnesses the power of real-time insights.
Workshop híbrido: Stream Processing con Flinkconfluent
El Stream processing es un requisito previo de la pila de data streaming, que impulsa aplicaciones y pipelines en tiempo real.
Permite una mayor portabilidad de datos, una utilización optimizada de recursos y una mejor experiencia del cliente al procesar flujos de datos en tiempo real.
En nuestro taller práctico híbrido, aprenderás cómo filtrar, unir y enriquecer fácilmente datos en tiempo real dentro de Confluent Cloud utilizando nuestro servicio Flink sin servidor.
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark...confluent
Our talk will explore the transformative impact of integrating Confluent, HiveMQ, and SparkPlug in Industry 4.0, emphasizing the creation of a Unified Namespace.
In addition to the creation of a Unified Namespace, our webinar will also delve into Stream Governance and Scaling, highlighting how these aspects are crucial for managing complex data flows and ensuring robust, scalable IIoT-Platforms.
You will learn how to ensure data accuracy and reliability, expand your data processing capabilities, and optimize your data management processes.
Don't miss out on this opportunity to learn from industry experts and take your business to the next level.
La arquitectura impulsada por eventos (EDA) será el corazón del ecosistema de MAPFRE. Para seguir siendo competitivas, las empresas de hoy dependen cada vez más del análisis de datos en tiempo real, lo que les permite obtener información y tiempos de respuesta más rápidos. Los negocios con datos en tiempo real consisten en tomar conciencia de la situación, detectar y responder a lo que está sucediendo en el mundo ahora.
Eventos y Microservicios - Santander TechTalkconfluent
Durante esta sesión examinaremos cómo el mundo de los eventos y los microservicios se complementan y mejoran explorando cómo los patrones basados en eventos nos permiten descomponer monolitos de manera escalable, resiliente y desacoplada.
Q&A with Confluent Experts: Navigating Networking in Confluent Cloudconfluent
This document discusses networking options and best practices for Confluent Cloud. It provides an overview of public endpoints, private link, and peering options. It then discusses best practices for private networking architectures on Azure using hub-and-spoke and private link designs. Finally, it addresses networking considerations and challenges for Kafka Connect managed connectors, as well as planned enhancements for DNS peering and outbound private link support.
Purpose of the session is to have a dive into Apache, Kafka, Data Streaming and Kafka in the cloud
- Dive into Apache Kafka
- Data Streaming
- Kafka in the cloud
Build real-time streaming data pipelines to AWS with Confluentconfluent
Traditional data pipelines often face scalability issues and challenges related to cost, their monolithic design, and reliance on batch data processing. They also typically operate under the premise that all data needs to be stored in a single centralized data source before it's put to practical use. Confluent Cloud on Amazon Web Services (AWS) provides a fully managed cloud-native platform that helps you simplify the way you build real-time data flows using streaming data pipelines and Apache Kafka.
Q&A with Confluent Professional Services: Confluent Service Meshconfluent
No matter whether you are migrating your Kafka cluster to Confluent Cloud, running a cloud-hybrid environment or are in a different situation where data protection and encryption of sensitive information is required, Confluent Service Mesh allows you to transparently encrypt your data without the need to make code changes to you existing applications.
Citi Tech Talk: Event Driven Kafka Microservicesconfluent
Microservices have become a dominant architectural paradigm for building systems in the enterprise, but they are not without their tradeoffs. Learn how to build event-driven microservices with Apache Kafka
Confluent & GSI Webinars series - Session 3confluent
An in depth look at how Confluent is being used in the financial services industry. Gain an understanding of how organisations are utilising data in motion to solve common problems and gain benefits from their real time data capabilities.
It will look more deeply into some specific use cases and show how Confluent technology is used to manage costs and mitigate risks.
This session is aimed at Solutions Architects, Sales Engineers and Pre Sales, and also the more technically minded business aligned people. Whilst this is not a deeply technical session, a level of knowledge around Kafka would be helpful.
This document discusses moving to an event-driven architecture using Confluent. It begins by outlining some of the limitations of traditional messaging middleware approaches. Confluent provides benefits like stream processing, persistence, scalability and reliability while avoiding issues like lack of structure, slow consumers, and technical debt. The document then discusses how Confluent can help modernize architectures, enable new real-time use cases, and reduce costs through migration. It provides examples of how companies like Advance Auto Parts and Nord/LB have benefitted from implementing Confluent platforms.
This session will show why the old paradigm does not work and that a new approach to the data strategy needs to be taken. It aims to show how a Data Streaming Platform is integral to the evolution of a company’s data strategy and how Confluent is not just an integration layer but the central nervous system for an organisation
Vous apprendrez également à :
• Créer plus rapidement des produits et fonctionnalités à l’aide d’une suite complète de connecteurs et d’outils de gestion des flux, et à connecter vos environnements à des pipelines de données
• Protéger vos données et charges de travail les plus critiques grâce à des garanties intégrées en matière de sécurité, de gouvernance et de résilience
• Déployer Kafka à grande échelle en quelques minutes tout en réduisant les coûts et la charge opérationnelle associés
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Introduction of Cybersecurity with OSS at Code Europe 2024
Learning How to Build Event Streaming Applications with Pac-Man
1. The talk will start shortly - before then, have a chat with your fellow attendees!
Questions at the end
This is unless the speaker says
otherwise. But generally,
because of internet delays and
lags it can be a bit disruptive
during an Online talk.
Online Meetup Etiquette
Mute during talk
But remember to unmute when
you want to ask a question at
the end (or chat with fellow
attendees before/After)
Use chat to engage
Use Zoom chat and/or create a
thread in the #events channel
on our Community Slack space
so that your discussion can
continue afterwards with
fellow Kafkateers.
Be on Camera &
React!
Speakers during these events
are talking to blank screens.
And it can be exhausting! Use
Zoom reactions at the bottom of
the page to give them something
to work off of!
1.
2.
Continue learning and collaborating
WELCOME!
Thank you for joining us in these unique circumstances. We hope you’re safe and well.
3. If you haven’t already joined, get on the our slack
workspace and continue the conversation with
other Kafkateers. If you’re already a member,
start a thread for this event in #events.
A single online source of everything
you’ll need to learn Kafka. Plus it’s
totally free and ungated…
Confluent Developer
developer.confluent.iohttps://cnfl.io/slack
Confluent Community Slack
You are being recorded and this footage may be added to public channels
2. About me
@riferrei | #kafkameetup | @CONFLUENTINC
• RICARDO FERREIRA
• Works for confluent
• Developer advocate
• Ricardo@confluent.iO
• HTTPS://RIFERREI.NET
6. @riferrei | #kafkameetup | @CONFLUENTINC
I don’t know
“my app only understands sql”
“the source only understands sql”
“I can only process atomic data”
“that is what current books says”
“it just feels right doing like this”
7. @riferrei | #kafkameetup | @CONFLUENTINC
But what if…
database
1000x more volume
Non-transactional events
Transactional events
LOG