IIoT / Industry 4.0 with Apache Kafka, Connect, KSQL, Apache PLC4X (Kai Wähner)
Data integration and processing are a huge challenge in Industrial IoT (IIoT, aka Industry 4.0 or the automation industry) due to monolithic systems and proprietary protocols. Apache Kafka, its ecosystem (Kafka Connect, KSQL) and Apache PLC4X are a great open source choice to implement this integration end to end in a scalable, reliable and flexible way.
This blog post gives a high-level overview of the challenges and a good, flexible architecture. At the end, I share a video recording and the corresponding slide deck, which provide many more details and insights.
Apache Kafka is the de-facto standard for real-time event streaming. It provides:
Open Source (Apache 2.0 License)
Global-scale
Real-time
Persistent Storage
Stream Processing
PLC4X enables vertical integration and lets you write software independent of the underlying PLC, using JDBC-like adapters for protocols such as Siemens S7, Modbus, Allen-Bradley, Beckhoff ADS, OPC UA, Emerson, Profinet, BACnet and Ethernet.
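To make the pattern concrete, here is a minimal sketch of reading one value from a PLC with the PLC4X Java API. Treat it as a sketch only: the connection URL, the S7 address string and the field name are illustrative placeholders, and the builder methods follow the PLC4X 0.x API, which may differ in newer releases.

```java
import org.apache.plc4x.java.PlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;
import org.apache.plc4x.java.api.messages.PlcReadRequest;
import org.apache.plc4x.java.api.messages.PlcReadResponse;

public class S7ReadSketch {
    public static void main(String[] args) throws Exception {
        // The driver is selected from the URL scheme; "s7" could be swapped for
        // "modbus-tcp" or "opcua" to target another protocol with the same code.
        try (PlcConnection connection = new PlcDriverManager()
                .getConnection("s7://192.168.0.1")) {
            // Address syntax is driver-specific; this reads a REAL from data block 1.
            PlcReadRequest request = connection.readRequestBuilder()
                    .addItem("temperature", "%DB1.DBD0:REAL")
                    .build();
            PlcReadResponse response = request.execute().get();
            System.out.println("temperature = " + response.getFloat("temperature"));
        }
    }
}
```

Swapping the URL scheme retargets the same code at a different protocol, which is exactly the vertical-integration point the abstract describes.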
Github example: https://github.com/kaiwaehner/iiot-integration-apache-plc4x-kafka-connect-ksql-opc-ua-modbus-siemens-s7
More details: http://www.kai-waehner.de/blog/2019/09/02/iiot-data-integr…and-apache-plc4x/
Video Recording: https://youtu.be/RWKggid25ds
Processing IoT Data from End to End with MQTT and Apache Kafka (confluent)
(Kai Waehner, Confluent) Kafka Summit SF 2018
This session discusses end-to-end use cases such as connected cars, smart home or healthcare sensors, where you integrate Internet of Things (IoT) devices with enterprise IT using open source technologies and standards. MQTT is a lightweight messaging protocol for IoT. However, MQTT is not built for high scalability, long-term storage or easy integration with legacy systems. Apache Kafka is a highly scalable distributed streaming platform that ingests, stores, processes and forwards high volumes of data from thousands of IoT devices.
This session discusses the Apache Kafka open source ecosystem as a streaming platform to process IoT data. See a live demo of how MQTT brokers like Mosquitto or RabbitMQ integrate with Kafka, and how you can even integrate MQTT clients with Kafka without an MQTT broker. Learn how to analyze the IoT data either natively on Kafka with Kafka Streams/KSQL or on an external big data cluster like Spark, Flink or Elasticsearch, leveraging Kafka Connect.
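As a rough sketch of the "MQTT clients into Kafka" idea, the following bridges MQTT messages into a Kafka topic with Eclipse Paho and the Kafka producer API. Broker URLs and topic names are placeholders; in production you would more likely use a Kafka Connect MQTT source connector or an MQTT proxy, as discussed in the session.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.eclipse.paho.client.mqttv3.MqttClient;

public class MqttToKafkaBridge {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Subscribe to the MQTT broker and forward each message to a Kafka topic,
        // keeping the MQTT topic as the Kafka record key.
        MqttClient mqtt = new MqttClient("tcp://localhost:1883", "kafka-bridge");
        mqtt.connect();
        mqtt.subscribe("sensors/#", (topic, message) ->
                producer.send(new ProducerRecord<>("iot-sensors", topic,
                        new String(message.getPayload()))));
    }
}
```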
Using the FLaNK Stack for Edge AI (Apache MXNet, Apache Flink, Apache NiFi, Apache Kafka, Apache Kudu) (Timothy Spann)
Demos and guidance on building applications at scale with real-time events flowing from Apache NiFi to Apache Kafka to Apache Flink, then stored in Apache Kudu and Apache HDFS. The easy button.
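As a sketch of the Kafka-to-Flink leg of that pipeline, assuming the Flink KafkaSource API (Flink 1.14 and later) and placeholder topic and broker names:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaToFlinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("events")
                .setGroupId("flank-demo")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // A real FLaNK pipeline would transform here and sink to Kudu or HDFS.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events")
           .print();

        env.execute("flank-demo");
    }
}
```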
Confluent Operator as Cloud-Native Kafka Operator for Kubernetes (Kai Wähner)
Agenda:
- Cloud Native vs. SaaS / Serverless Kafka
- The Emergence of Kubernetes
- Kafka on K8s Deployment Challenges
- Confluent Operator as Kafka Operator
- Q&A
Confluent Operator enables you to:
Provision, manage and operate Confluent Platform (including ZooKeeper, Apache Kafka, Kafka Connect, KSQL, Schema Registry, REST Proxy and Control Center)
Deploy on any Kubernetes platform (vanilla K8s, OpenShift, Rancher, Mesosphere, Cloud Foundry, Amazon EKS, Azure AKS, Google GKE, etc.)
Automate provisioning of Kafka pods in minutes
Monitor SLAs through Confluent Control Center or Prometheus
Scale Kafka elastically, handle fail-over and automate rolling updates
Automate security configuration
Built on our first-hand knowledge of running Confluent at scale
Fully supported for production usage
Why Cloud-Native Kafka Matters: 4 Reasons to Stop Managing it Yourself (DATAVERSITY)
With your most talented teams bogged down managing a massive Kafka deployment, it can be challenging to move the dial on projects that drive real value for your business. For example, launching your next major feature, fueling more best-in-breed services like AI/ML on your cloud provider platform, or developing your first use cases for real-time data movement across clouds. By shifting to a fully managed, cloud-native service for Kafka you can unlock your teams to work on the projects that make the best use of your data in motion.
In this webinar you will learn about:
• The increasing value of data in motion to your business
• Challenges and costs of self-managing a large-scale Kafka deployment
• Benefits of managed cloud services for non-core activities like data storage, data warehousing, and messaging
• Optimizing time usage for value-generating activities like new product launches
• Potential cost savings for your business with a cloud-native service for Kafka
Best Practices for Streaming IoT Data with MQTT and Apache Kafka® (confluent)
Watch this talk here: https://www.confluent.io/online-talks/best-practices-for-streaming-iot-data-with-MQTT-and-apache-kafka-on-demand
Organizations today are looking to stream IoT data to Apache Kafka. However, connecting tens of thousands or even millions of devices over unreliable networks can create some architecture challenges.
In this session, we will identify and demo some best practices for implementing a large scale IoT system that can stream MQTT messages to Apache Kafka.
Telco 4.0 - Payment and FinServ Integration for Data in Motion with 5G and Apache Kafka (Kai Wähner)
The Era of Telco 4.0: Embracing Digital Transformation with Data in Motion. Learn about Payment and FinServ Integration for Data in Motion with 5G and Apache Kafka.
1) The rise of Telco 4.0 and the future forward
2) Data in Motion in the Telco industry
3) Real-world Fintech and Payment examples powered by Data in Motion
Confluent Cloud Networking | Rajan Sundaram, Confluent (HostedbyConfluent)
An introduction to the networking options available in Confluent Cloud: self-serve provisioning of Confluent Kafka clusters; VPC peering, VNet peering, Transit Gateway and Private Link options for the AWS, GCP and Azure networking offerings; caveats of Confluent's cloud networking solutions that customers should be aware of; and details of two major pieces of the Confluent Cloud architecture, the Data Plane Network and the Control Plane.
New Features in Confluent Platform 6.0 / Apache Kafka 2.6 (Kai Wähner)
New features in Confluent Platform 6.0 / Apache Kafka 2.6, including REST Proxy and API, Tiered Storage for AWS S3 and GCP GCS, Cluster Linking (on-premise, edge, hybrid, multi-cloud), Self-Balancing Clusters and ksqlDB.
Let's Build a Simple Ingest to Cloud Data Warehouse with Low Code (Timothy Spann)
From DevOps Stage 2020
Using NiFi to load IoT, sensor, REST, CSV, network, event, log and all other types of data into various SQL, NoSQL, cloud and on-premises data stores, including cloud data warehouses.
Tim Spann, Principal DataFlow Field Engineer
Building a real-time data processing pipeline using Apache Kafka, Kafka Connect, Elasticsearch and Kibana (Paul Brebner)
With the rapid onset of the global COVID-19 pandemic at the start of this year, the US Centers for Disease Control and Prevention (CDC) had to quickly implement a new COVID-19-specific pipeline to collect testing data from all of the USA's states and territories, and to carry out other critical steps including integration, cleaning, checking, enrichment, analysis, and enforcement of data governance and privacy. The pipeline produces multiple consumable results for federal and public agencies. They did this in under 30 days, using Apache Kafka. In this presentation we'll build a similar (but simpler) pipeline for ingesting, integrating, indexing, searching/analysing and visualising some publicly available tidal data. We'll briefly introduce each technology and component, and walk through the steps of using Apache Kafka, Kafka Connect, Elasticsearch and Kibana to build the pipeline and visualise the results.
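As a rough illustration of the Kafka Connect step, this sketch registers an Elasticsearch sink connector through Connect's REST API. The connector name, topic, ports and the Confluent Elasticsearch sink connector class are assumptions for a local demo setup, not details taken from the talk.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterElasticsearchSink {
    public static void main(String[] args) throws Exception {
        // Connector name, topic and URLs are placeholders for a local demo.
        String connectorJson = """
            {
              "name": "tidal-elasticsearch-sink",
              "config": {
                "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
                "tasks.max": "1",
                "topics": "tidal-data",
                "connection.url": "http://localhost:9200",
                "key.ignore": "true"
              }
            }""";

        // POST the connector config to the Kafka Connect REST endpoint.
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:8083/connectors"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```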
Kafka at the Edge: an IoT scenario with OpenShift Streams for Apache Kafka (Red Hat Developers)
Apache Kafka is taking the world by storm and is rapidly becoming the de-facto event bus for event-driven and streaming applications that respond to events and data in real time. OpenShift Streams for Apache Kafka is Red Hat's fully hosted and managed Apache Kafka service targeting development teams that want to incorporate streaming data and scalable messaging in their applications, without the burden of setting up and maintaining a Kafka cluster infrastructure.
In this session you will discover how Apache Kafka can be used in an IoT scenario to ingest data from devices and make it available in real time to other applications.
More specifically you will learn how to:
Simulate devices that send MQTT messages to an MQTT broker
Use Apache Camel and Camel K to bridge MQTT with Apache Kafka
Use Kafka Streams in a Quarkus application to process the device messages (see the sketch after this list)
Query the state of the devices using GraphQL
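A minimal sketch of the Kafka Streams step might look like the following. Topic names and the plain-Java bootstrap are assumptions; in a Quarkus application you would typically expose the Topology as a CDI bean and let the extension manage the KafkaStreams lifecycle.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class DeviceMessageProcessor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "device-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> devices = builder.stream("device-messages");
        // Drop empty payloads and forward the cleaned device messages downstream.
        devices.filter((deviceId, payload) -> payload != null && !payload.isEmpty())
               .to("device-messages-clean");

        new KafkaStreams(builder.build(), props).start();
    }
}
```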
IoT Sensor Analytics with Kafka, ksqlDB and TensorFlow (Kai Wähner)
Use cases and architectures for IoT projects leveraging Apache Kafka, ksqlDB, machine learning / deep learning frameworks like TensorFlow, and cloud infrastructure.
Large numbers of IoT devices lead to big data and the need for further processing and analysis. Apache Kafka is a highly scalable and distributed open source streaming platform, which can connect to MQTT and other IoT standards. Kafka ingests, stores, processes and forwards high volumes of data from thousands of IoT devices.
The rapidly expanding world of stream processing can be daunting, with new concepts such as various types of time semantics, windowed aggregates, changelogs, and programming frameworks to master. KSQL is the streaming SQL engine on top of Apache Kafka that simplifies all this and makes stream processing available to everyone without the need to write source code.
This talk shows how to leverage Kafka and KSQL in an IoT sensor analytics scenario for predictive maintenance and integration with real time monitoring systems. A live demo shows how to embed and deploy Machine Learning models - built with frameworks like TensorFlow, DeepLearning4J or H2O - into mission-critical and scalable real time applications.
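To make the windowed-aggregate idea concrete, here is a minimal Kafka Streams sketch counting per-sensor threshold breaches in five-minute windows. The topic names and the fixed threshold (standing in for a real model score) are assumptions; the equivalent KSQL would be a windowed GROUP BY query.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.TimeWindows;

public class SensorAnomalyCounter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sensor-anomalies");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("sensor-readings", Consumed.with(Serdes.String(), Serdes.Double()))
               // A fixed threshold stands in for a real model score (e.g. TensorFlow).
               .filter((sensorId, value) -> value != null && value > 80.0)
               .groupByKey(Grouped.with(Serdes.String(), Serdes.Double()))
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
               .count()
               .toStream()
               .foreach((window, count) ->
                       System.out.printf("%s had %d anomalies in window %s%n",
                               window.key(), count, window.window()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```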
Enabling Smarter Cities and Connected Vehicles with an Event Streaming Platform (Kai Wähner)
Many cities are investing in technologies to transform themselves into smart city environments, in which data collection and analysis are used to manage assets and resources efficiently. Modern technology can help connect the right data, at the right time, to the right people, processes and systems. Innovations around smart cities and the Internet of Things give cities the ability to improve motorist safety, unify and manage transportation systems and traffic, save energy and provide a better experience for residents.
By utilizing an event streaming platform, like Confluent, cities are able to process data in real-time from thousands of sources, such as sensors. By aggregating that data and analyzing real-time data streams, more informed decisions can be made and fine-tuned operations developed for a positive impact on everyday challenges faced by cities.
Learn how to:
-Overcome challenges for building a smarter city
-Build a real time infrastructure to correlate relevant events
-Connect thousands of devices, machines, and people
-Leverage open source and fully managed solutions from the Apache Kafka ecosystem
Building Event Driven (Micro)services with Apache Kafka (Guido Schmutz)
What is a Microservices architecture and how does it differ from a Service-Oriented Architecture? Should you use traditional REST APIs to bind services together? Or is it better to use a richer, more loosely-coupled protocol? This talk will start with a quick recap of how we have created systems over the past 20 years and how different architectures evolved from that. The talk will show how we piece services together in event-driven systems, how we use a distributed log (event hub) to create a central, persistent history of events, and what benefits we achieve from doing so.
Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk will show the difference between a request-driven and event-driven communication and show when to use which. It highlights how the modern stream processing systems can be used to hold state both internally as well as in a database and how this state can be used to further increase independence of other services, the primary goal of a Microservices architecture.
With Apache Kafka’s rise for event-driven architectures, developers require a specification to design effective event-driven APIs. AsyncAPI has been developed based on OpenAPI to define the endpoints and schemas of brokers and topics. For Kafka applications, the broker’s design to handle high throughput serialized payloads brings challenges for consumers and producers managing the structure of the message. For this reason, a registry becomes critical to achieve schema governance. Apicurio Registry is an end-to-end solution to store API definitions and schemas for Kafka applications. The project includes serializers, deserializers, and additional tooling. The registry supports several types of artifacts including OpenAPI, AsyncAPI, GraphQL, Apache Avro, Google protocol buffers, JSON Schema, Kafka Connect schema, WSDL, and XML Schema (XSD). It also checks them for validity and compatibility.
In this session, we will be covering the following topics:
● The importance of having a contract-first approach to event-driven APIs
● What is AsyncAPI, and how it helps to define Kafka endpoints and schemas
● The Kafka challenges on message structure when serializing and deserializing
● Introduction to Apicurio Registry and schema management for Kafka
● Examples of how to use Apicurio Registry with popular Java frameworks like Spring and Quarkus
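As a rough sketch of the registry-backed serialization described above, the producer configuration below points Apicurio's Avro serializer at a local registry. The class and property names assume the Apicurio Registry 2.x serde module and a local demo setup; treat them as assumptions rather than the session's exact code.

```java
import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ApicurioProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        // Apicurio's Avro serializer registers/looks up the schema in the registry
        // (class and property names assume the 2.x serde module).
        props.put("value.serializer", "io.apicurio.registry.serde.avro.AvroKafkaSerializer");
        props.put("apicurio.registry.url", "http://localhost:8080/apis/registry/v2");
        props.put("apicurio.registry.auto-register", "true");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // producer.send(new ProducerRecord<>("devices", deviceId, avroRecord));
            // The serializer enforces the registered schema on every send.
        }
    }
}
```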
At BAADER, we design and engineer innovative and holistic solutions that ensure intelligent, safe, efficient and sustainable food processing in all phases, from the handling of live and raw protein materials to the finished food products. As a key player in the food value chain, we aim to take further significant steps toward greater efficiency, traceability, transparency, profitability, and sustainability through new digital solutions.
During our digital transformation we are working from two ends: on the one hand there are many brownfield factories unprepared for the digital journey, and on the other hand we have powerful greenfield technologies like Apache Kafka. Now we have to bring two mindsets together: robust food processing machinery and highly scalable software technologies. In this talk, we will present how we successfully started to ingest various kinds of IoT data into our Kafka cluster, spotlighted from both ends.
Technical Deep Dive: Using Apache Kafka to Optimize Real-Time Analytics in Financial Services and IoT Applications (confluent)
Watch this talk here: https://www.confluent.io/online-talks/using-apache-kafka-to-optimize-real-time-analytics-financial-services-iot-applications
When it comes to the fast-paced nature of capital markets and IoT, the ability to analyze data in real time is critical to gaining an edge. It’s not just about the quantity of data you can analyze at once, it’s about the speed, scale, and quality of the data you have at your fingertips.
Modern streaming data technologies like Apache Kafka and the broader Confluent platform can help detect opportunities and threats in real time. They can improve profitability, yield, and performance. Combining Kafka with Panopticon visual analytics provides a powerful foundation for optimizing your operations.
Use cases in capital markets include transaction cost analysis (TCA), risk monitoring, surveillance of trading and trader activity, compliance, and optimizing profitability of electronic trading operations. Use cases in IoT include monitoring manufacturing processes, logistics, and connected vehicle telemetry and geospatial data.
This online talk will include in depth practical demonstrations of how Confluent and Panopticon together support several key applications. You will learn:
-Why Apache Kafka is widely used to improve performance of complex operational systems
-How Confluent and Panopticon open new opportunities to analyze operational data in real time
-How to quickly identify and react immediately to fast-emerging trends, clusters, and anomalies
-How to scale data ingestion and data processing
-How to build new analytics dashboards in minutes
The Rise Of Event Streaming – Why Apache Kafka Changes Everything (Kai Wähner)
Business digitalization trends like microservices, the Internet of Things or machine learning are driving the need to process events at a whole new scale, speed and efficiency. Traditional solutions like ETL/data integration or messaging are not built to serve these needs.
Today, the open source project Apache Kafka® is being used by thousands of companies, including over 60% of the Fortune 100, to power and innovate their businesses by focusing their data strategies around event-driven architectures leveraging event streaming. We will discuss the market and technology changes that have given rise to Kafka and to event streaming, and we will introduce the audience to the key aspects of building an event streaming platform with Kafka. Examples of production use cases from the automotive, manufacturing and transportation sectors will showcase the power of event streaming.
What's New in Confluent Platform 5.4 Online Talk (confluent)
To stay informed about the latest features in Confluent Platform 5.4, join Martijn Kieboom, Solutions Engineer at Confluent, for 'What's New in Confluent 5.4?' on February 12 at 11 am GMT / 12 noon CET. Martijn will talk through the new features, including:
Role-Based Access Control and how it enables highly granular control of permissions and platform access
Structured Audit Logs and how they enable the capture of authorization logs
How Multi-Region Clusters deliver asynchronous replication at the topic level, allowing companies to run a single Kafka cluster across multiple data centres
Schema Validation's role in enabling businesses that run Kafka at scale to deliver data compatibility across platforms
0-60: Tesla's Streaming Data Platform (Jesse Yates, Tesla) Kafka Summit SF 2019 (confluent)
Tesla ingests trillions of events every day from hundreds of unique data sources through our streaming data platform. Find out how we developed a set of high-throughput, non-blocking primitives that allow us to transform and ingest data into a variety of data stores with minimal development time. Additionally, we will discuss how these primitives allowed us to completely migrate the streaming platform in just a few months. Finally, we will talk about how we scale team size sub-linearly to data volumes, while continuing to onboard new use cases.
5 lessons learned for successful migration to Confluent cloud | Natan Silinit... (HostedbyConfluent)
Confluent Cloud makes DevOps engineers' lives a lot easier.
Yet moving 1,500 microservices, 10K topics and 100K partitions to a multi-cluster Confluent Cloud can be a challenge.
In this talk you will hear about 5 lessons that Wix has learned in order to successfully meet this challenge.
These lessons include:
1. Automation, Automation, Automation - the whole process has to be completely automated at such a scale
2. Prefer a gradual approach - e.g. migrate topics in small chunks and not all at once; this reduces risk if things go wrong
3. Cleanup first - avoid migrating unused topics or topics with too many unnecessary partitions
GCP for Apache Kafka® Users: Stream Ingestion and Processing (confluent)
Watch this talk here: https://www.confluent.io/online-talks/gcp-for-apache-kafka-users-stream-ingestion-processing
In private and public clouds, stream analytics commonly means stateless processing systems organized around Apache Kafka® or a similar distributed log service. GCP took a somewhat different tack, with Cloud Pub/Sub, Dataflow, and BigQuery, distributing the responsibility for processing among ingestion, processing and database technologies.
We compare the two approaches to data integration and show how Dataflow allows you to join and transform and deliver data streams among on-prem and cloud Apache Kafka clusters, Cloud Pub/Sub topics and a variety of databases. The session will have a mix of architectural discussions and practical code reviews of Dataflow-based pipelines.
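For orientation, a Dataflow pipeline of the kind reviewed in the session, written with the Apache Beam Java SDK, might look like this minimal sketch; project, topic and broker names are placeholders.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaToPubsubSketch {
    public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        pipeline
            // Read a Kafka topic (on-prem or cloud) as an unbounded source.
            .apply(KafkaIO.<String, String>read()
                    .withBootstrapServers("broker:9092")
                    .withTopic("events")
                    .withKeyDeserializer(StringDeserializer.class)
                    .withValueDeserializer(StringDeserializer.class)
                    .withoutMetadata())
            // Keep only the value payload for delivery.
            .apply(MapElements.into(TypeDescriptors.strings())
                    .via((KV<String, String> kv) -> kv.getValue()))
            // Deliver into a Cloud Pub/Sub topic.
            .apply(PubsubIO.writeStrings().to("projects/my-project/topics/events"));
        pipeline.run();
    }
}
```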
Connected Vehicles and V2X with Apache Kafka (Kai Wähner)
This session discusses use cases leveraging the Apache Kafka open source ecosystem as a streaming platform to process IoT data.
See use cases, architectural alternatives and a live demo of how devices connect to Kafka via MQTT. Learn how to analyze the IoT data either natively on Kafka with Kafka Streams/KSQL, or on an external big data cluster like Spark, Flink or Elastic leveraging Kafka Connect, and how to leverage TensorFlow for Machine Learning.
The focus is on connected cars / connected vehicles, V2X use cases and mobility services.
A live demo shows how to build a cloud-native IoT infrastructure on Kubernetes to connect and process streaming data in real time from 100,000 cars and do predictive maintenance at scale.
Code for the live demo on Github:
https://github.com/kaiwaehner/hivemq-mqtt-tensorflow-kafka-realtime-iot-machine-learning-training-inference
IoT Sensor Analytics with Apache Kafka, KSQL, TensorFlow and MQTT => Kafka-Native End-to-End IoT Data Integration and Processing.
Beyond the brokers - A tour of the Kafka ecosystem (Damien Gasparina)
Beyond the brokers - A tour of the Kafka ecosystem. Presentation given on 28/03/2019 (Lyon JUG: https://www.meetup.com/Lyon-Java-User-Group-LyonJUG/events/259569434/)
Beyond the brokers - Un tour de l'écosystème Kafka (Florent Ramiere)
Apache Kafka is not just the brokers; a whole open source ecosystem revolves around it. This talk introduces the main components, such as Kafka Streams, KSQL, Kafka Connect, REST Proxy, Schema Registry, MirrorMaker, etc.
Au delà des brokers, un tour de l'environnement Kafka | Florent Ramière (confluent)
During the Confluent Streaming event in Paris, Florent Ramière, Technical Account Manager at Confluent, goes beyond brokers, introducing a whole new ecosystem with Kafka Streams, KSQL, Kafka Connect, Rest proxy, Schema Registry, MirrorMaker, etc.
Kafka for Real-Time Replication between Edge and Hybrid Cloud (Kai Wähner)
Not all workloads allow cloud computing. Low latency, cybersecurity, and cost-efficiency require a suitable combination of edge computing and cloud integration.
This session explores architectures and design patterns for software and hardware considerations to deploy hybrid data streaming with Apache Kafka anywhere. A live demo shows data synchronization from the edge to the public cloud across continents with Kafka on Hivecell and Confluent Cloud.
Budapest Data/ML - Building Modern Data Streaming Apps with NiFi, Flink and Kafka (Timothy Spann)
Apache NiFi, Apache Flink, Apache Kafka
Timothy Spann
Principal Developer Advocate, Data in Motion
Cloudera (US)
https://budapestdata.hu/2023/en/speakers/timothy-spann/
June 8 · Online · English talk
Building Modern Data Streaming Apps with NiFi, Flink and Kafka
In my session, I will show you some best practices I have discovered over the last 7 years in building data streaming applications including IoT, CDC, Logs, and more.
In my modern approach, we utilize several open-source frameworks to maximize the best features of all. We often start with Apache NiFi as the orchestrator of streams flowing into Apache Kafka. From there we build streaming ETL with Apache Flink SQL. We will stream data into Apache Iceberg.
We use the best streaming tools for the current applications with FLaNK. flankstack.dev
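A minimal sketch of the Kafka-to-Flink-SQL step described above; the table schema, topic and server names are invented for illustration, and the Iceberg sink would be declared as a second table with the Iceberg connector.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaFlinkSqlSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Declare the Kafka topic as a dynamic table (names/servers are placeholders).
        tEnv.executeSql(
            "CREATE TABLE sensor_events (" +
            "  device_id STRING, reading DOUBLE, ts TIMESTAMP(3)" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'sensor-events'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'scan.startup.mode' = 'earliest-offset'," +
            "  'format' = 'json')");

        // Streaming ETL: a continuous aggregation over the Kafka-backed table.
        tEnv.executeSql(
            "SELECT device_id, AVG(reading) AS avg_reading " +
            "FROM sensor_events GROUP BY device_id").print();
    }
}
```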
BIO
Tim Spann is a Principal Developer Advocate in Data In Motion for Cloudera. He works with Apache NiFi, Apache Pulsar, Apache Kafka, Apache Flink, Flink SQL, Apache Pinot, Trino, Apache Iceberg, DeltaLake, Apache Spark, Big Data, IoT, Cloud, AI/DL, machine learning, and deep learning. Tim has over ten years of experience with the IoT, big data, distributed computing, messaging, streaming technologies, and Java programming.
Previously, he was a Developer Advocate at StreamNative, Principal DataFlow Field Engineer at Cloudera, a Senior Solutions Engineer at Hortonworks, a Senior Solutions Architect at AirisData, a Senior Field Engineer at Pivotal and a Team Leader at HPE. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton & NYC on Big Data, Cloud, IoT, deep learning, streaming, NiFi, the blockchain, and Spark. Tim is a frequent speaker at conferences such as ApacheCon, DeveloperWeek, Pulsar Summit and many more. He holds a BS and MS in computer science.
Safer Commutes & Streaming Data | George Padavick, Ohio Department of Transportation (HostedbyConfluent)
The Ohio Department of Transportation has adopted Confluent as the event driven enabler of DriveOhio, a modern Intelligent Transportation System. DriveOhio digitally links sensors, cameras, speed monitoring equipment, and smart highway assets in real time, to dynamically adjust the surface road network to maximize the safety and efficiency for travelers. Over the past 24 months the team has increased the number and types of devices within the DriveOhio environment, while also working to see their vendors adopt Kafka to better participate in data sharing.
DataOps on Streaming Data: From Kafka to InfluxDB via Kubernetes Native Flows (InfluxData)
In this session, we are going to create a Lenses DataOps hub for IoT data with Apache Kafka and InfluxDB flows over Kubernetes. We will demonstrate how to create streaming flows and securely explore and monitor real-time data. We will use Kubernetes to spin up scalable flows and show how we can easily provision such flows with secret management and end-to-end monitoring capabilities.
Spark Streaming + Kafka 0.10: an integration story by Joan Viladrosa Riera at Big Data Spain 2017 (Big Data Spain)
Spark Streaming has supported Kafka since its inception, but a lot has changed since then, on both the Spark and Kafka sides, to make this integration more fault-tolerant and reliable.
https://www.bigdataspain.org/2017/talk/spark-streaming-kafka-0-10-an-integration-story
Big Data Spain 2017
16th - 17th, Kinépolis Madrid
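For context, the spark-streaming-kafka-0-10 integration the talk covers looks roughly like this in Java; broker, group and topic names are placeholders.

```java
import java.util.*;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import org.apache.spark.streaming.kafka010.*;

public class DirectStreamSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("kafka-0-10-integration");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "spark-demo");

        // The direct stream reads partitions in parallel without a receiver.
        JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(jssc,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(
                                Collections.singletonList("events"), kafkaParams));

        stream.map(ConsumerRecord::value).print();
        jssc.start();
        jssc.awaitTermination();
    }
}
```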
Kubernetes connectivity to Cloud Native Kafka | Evan Shortiss and Hugo Guerre... (HostedbyConfluent)
If you want to build an ecosystem of streaming data on your Kafka platform, you will need a much easier way for your developers to quickly move what's on the source to your cluster. Better yet, making the connector serverless so it does NOT waste any resources while idle, and having a trusted partner manage your Kafka infrastructure for you. In this session, we will show you how easy we have made streaming data, with a great user experience and flexible resource management, using our new secret weapon in the Apache Camel project: Kamelets. We'll also demonstrate how Red Hat OpenShift Streams for Apache Kafka simplifies provisioning Kafka deployments in a public cloud, managing the cluster and topics, and configuring secure access to the Kafka cluster for your developers.
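A Kamelet binding is declared in YAML on the managed service, but the underlying bridge is easy to picture as a plain Apache Camel route. Here is a minimal Java DSL sketch with illustrative endpoint URIs and topic names.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class MqttToKafkaRoute {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Bridge an MQTT topic into a Kafka topic; URIs are illustrative.
                from("paho:sensors/+?brokerUrl=tcp://localhost:1883")
                    .to("kafka:iot-sensors?brokers=localhost:9092");
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive for the demo
    }
}
```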
Streaming Data Ingest and Processing with Apache Kafka (Attunity)
Apache™ Kafka is a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system. It offers higher throughput, reliability and replication. To manage growing data volumes, many companies are leveraging Kafka for streaming data ingest and processing.
Join experts from Confluent, the creators of Apache™ Kafka, and the experts at Attunity, a leader in data integration software, for a live webinar where you will learn how to:
-Realize the value of streaming data ingest with Kafka
-Turn databases into live feeds for streaming ingest and processing
-Accelerate data delivery to enable real-time analytics
-Reduce skill and training requirements for data ingest
The recorded webinar on slide 32 includes a demo using automation software (Attunity Replicate) to stream live changes from a database into Kafka and also includes a Q&A with our experts.
For more information, please go to www.attunity.com/kafka.
Bringing Streaming Data To The Masses: Lowering The "Cost Of Admission" For Your Streaming Data Platform (confluent)
(Bob Lehmann, Bayer) Kafka Summit SF 2018
You’ve built your streaming data platform. The early adopters are “all in” and have developed producers, consumers and stream processing apps for a number of use cases. A large percentage of the enterprise, however, has expressed interest but hasn’t made the leap. Why?
In 2014, Bayer Crop Science (formerly Monsanto) adopted a cloud first strategy and started a multi-year transition to the cloud. A Kafka-based cross-datacenter DataHub was created to facilitate this migration and to drive the shift to real-time stream processing. The DataHub has seen strong enterprise adoption and supports a myriad of use cases. Data is ingested from a wide variety of sources and the data can move effortlessly between an on premise datacenter, AWS and Google Cloud. The DataHub has evolved continuously over time to meet the current and anticipated needs of our internal customers. The “cost of admission” for the platform has been lowered dramatically over time via our DataHub Portal and technologies such as Kafka Connect, Kubernetes and Presto. Most operations are now self-service, onboarding of new data sources is relatively painless and stream processing via KSQL and other technologies is being incorporated into the core DataHub platform.
In this talk, Bob Lehmann will describe the origins and evolution of the Enterprise DataHub with an emphasis on steps that were taken to drive user adoption. Bob will also talk about integrations between the DataHub and other key data platforms at Bayer, lessons learned and the future direction for streaming data and stream processing at Bayer.
IoT Architectures for Apache Kafka and Event Streaming - Industry 4.0, Digita... (Kai Wähner)
The Internet of Things (IoT) is getting more and more traction as valuable use cases come to light. Whether you are in healthcare, telecommunications, manufacturing, banking or retail, to name a few industries, there is one key challenge: the integration of backend IoT data logs and applications, business services and cloud services to process the data in real time and at scale.
In this talk, we will share how Kafka has become the leading technology used throughout the business to provide real-time event streaming. Explore real-life use cases of Kafka Connect, Kafka Streams and KSQL, independent of the deployment, be it in a private or public cloud, on premises or at the edge:
Audi - Connected car infrastructure
Robert Bosch Power Tools - Track and Trace of devices and people at construction areas
Deutsche Bahn - Customer 360 for train timetable updates
E.ON - IoT Streaming Platform to integrate and build smart home, smart building and smart grid infrastructures
Enhancing Performance with Globus and the Science DMZ (Globus)
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the threat research team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with OpsMx customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and a well-known leader in continuous delivery and in integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at the SYNERGY workshop at AVI 2024, Genoa, Italy, 3rd June 2024.
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications, maintain autonomy in a rapidly changing tech landscape, and make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren't traditionally found in software curriculums, so many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, where I will share these foundational concepts to build on.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates Artificial Intelligence into the development and use of automations.
📕 Together we will look at some examples of using Autopilot across different tools of the UiPath Suite:
Autopilot for Studio Web
Autopilot for Studio
Autopilot for Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
4. IoT initiative challenges: BUILD, SCALE, EVOLVE
★ 94% of organisations face challenges collecting and analysing IoT data
★ 26+ billion connected devices by 2020 (Gartner)
More data, more complexity:
★ 2.5 quintillion (2.5 × 10¹⁸) bytes produced daily
REAL-TIME DATA INGESTION + IMMEDIATE QUERIES = ACTIONABLE, TIMELY INSIGHTS
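To make the "immediate queries" half of that equation concrete: once sensor readings land in InfluxDB (the demo later in this deck writes averaged readings to a sensorMeasure measurement), they can be queried the moment they arrive. A minimal InfluxQL sketch, with the measurement, field and tag names assumed from that demo:

-- Hypothetical InfluxQL query: measurement, field and tag names are assumptions
SELECT MEAN(temperature)
FROM sensorMeasure
WHERE time > now() - 1h
GROUP BY time(1m), id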
IoT Data Flows
8. Demo data flow: sensor data → MQTT → Kafka → InfluxDB
[Architecture diagram] Synthetic sensor data is generated and published to a single Moquette MQTT broker. A Kafka Connect MQTT source connector ingests the readings into Kafka, a Lenses SQL processor (running on Kafka Streams) analyses them, and a Kafka Connect InfluxDB sink connector writes the results to InfluxDB running in Docker.
9. Apache Kafka®
An open source streaming framework with messaging semantics, where records are key-value pairs.
★ IoT: high volumes from N devices at irregular intervals
★ Unlimited streams of data, asynchronous transfers: producers and consumers operate decoupled from each other
★ Real-time analytics & microservices: the Kafka Streams API processes and analyses data in motion
★ Multiple sources of data & long-term storage: the Kafka Connect API moves data with pluggable, reusable & scalable connectors
10. LENSES BOX – for Kafka development
A single Docker image with all services. FREE! All you need to start working with the Apache Kafka ecosystem:
★ Growing list of examples & synthetic data generators
★ Stream Reactor with 25+ connectors
★ Kafka CLI autocomplete
★ REST and WebSocket endpoints
★ Redux JS client
★ Testing libraries
11. MQTT Source connector
• Message Queueing Telemetry Transport (MQTT) messaging protocol
• Optimised for connected devices and M2M communication
• Designed for low-bandwidth, high-latency, unreliable networks
• Plain or SSL connections
• Kafka Connect Query Language (KCQL):
INSERT INTO kafka_topic
SELECT * FROM /mqtt_source_topic
[WITHCONVERTER=`myclass`]
Distributed, fault tolerant, scalable · No code required · CLI / UI / Monitor · KCQL support · Multiple distributions
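On a Kafka Connect worker, a connector like this is configured through a properties file that embeds the KCQL. Below is a minimal sketch assuming the Stream Reactor MQTT source; the class name, the connect.mqtt.* property names and the broker address are illustrative assumptions and vary across Stream Reactor versions:

# Hypothetical configuration: property names and values are assumptions, check your Stream Reactor version
name=mqtt-sensor-source
connector.class=com.datamountaineer.streamreactor.connect.mqtt.source.MqttSourceConnector
tasks.max=1
# single Moquette broker, as in the demo setup
connect.mqtt.hosts=tcp://mqtt-broker:1883
connect.mqtt.service.quality=1
# KCQL: copy every message from the MQTT topic /sensor_data into the Kafka topic sensor_data
connect.mqtt.kcql=INSERT INTO sensor_data SELECT * FROM /sensor_data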
12. Process & analyse data in motion
• Process data in motion with Lenses SQL
• Create & deploy Lenses SQL processors
• Native Kafka & Kubernetes support for scalability
• Visual topology
• Monitor and manage processors via Lenses
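As a concrete sketch of such a processor, a continuous query along the following lines could compute the per-device averages that feed the sensor_data_avg topic consumed by the sink on the next slide. Treat it as illustrative only: the exact Lenses SQL streaming syntax (the SET option and the windowing clause) differs between Lenses versions, and the field names are assumptions:

-- Hypothetical Lenses SQL processor: windowing syntax and field names are assumptions
SET autocreate = true;
INSERT INTO sensor_data_avg
SELECT STREAM id, AVG(temperature) AS temperature
FROM sensor_data
GROUP BY TUMBLE(1, m), id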
13. InfluxDB Sink connector
• AVRO & JSON support
• Error handling (Noop, Retry, Throw)
• Field selection & field aliases
• Multiple inserts supported
• Kafka Connect Query Language (KCQL): measurements via INSERT INTO, tags via WITHTAG, timestamps via WITHTIMESTAMP, plus DURATION
INSERT INTO sensorMeasure
SELECT * FROM sensor_data_avg
WITHTAG (id, poweredBy=Lenses)
Distributed, fault tolerant, scalable · No code required · CLI / UI / Monitor · KCQL support · Multiple distributions
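The matching worker configuration wraps that KCQL in a properties file. A minimal sketch assuming the Stream Reactor InfluxDB sink; the class name and the connect.influx.* property names are illustrative assumptions and version-dependent:

# Hypothetical configuration: property names and values are assumptions, check your Stream Reactor version
name=influxdb-sensor-sink
connector.class=com.datamountaineer.streamreactor.connect.influx.InfluxSinkConnector
tasks.max=1
topics=sensor_data_avg
connect.influx.url=http://influxdb:8086
connect.influx.db=iot
# KCQL from above: write the averaged readings into the sensorMeasure measurement, tagged with id and poweredBy
connect.influx.kcql=INSERT INTO sensorMeasure SELECT * FROM sensor_data_avg WITHTAG (id, poweredBy=Lenses)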
14. InfluxDB Sink connector: limitations
• Timestamps at nanosecond precision
• KCQL: no filter support, no function support
• Achieving good throughput requires domain knowledge
• Kafka Connect rebalances
Distributed, fault tolerant, scalable · No code required · CLI / UI / Monitor · KCQL support · Multiple distributions
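Because the sink's KCQL supports neither filters nor functions, a common workaround is to push that logic upstream into a Lenses SQL processor and let the sink copy the pre-filtered topic verbatim. An illustrative sketch, with the topic name, field name and threshold all assumed:

-- Hypothetical upstream filter: syntax, names and threshold are assumptions
INSERT INTO sensor_data_hot
SELECT STREAM * FROM sensor_data_avg
WHERE temperature > 30
-- the sink KCQL then becomes a plain copy:
-- INSERT INTO sensorMeasure SELECT * FROM sensor_data_hot WITHTAG (id, poweredBy=Lenses)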
15. Visual data flow
Visual and interactive topology for a 360° view of your data. End-to-end topology management: visualise your Kafka-based data flows in real time.