In this guest webinar by Kevin Webber, we cover the entire architecture of a Reactive system, from a responsive UI implemented with Vue.js, to a fully event sourced collection of microservices implemented with Java, Lagom, Cassandra, and Kafka.
For the full recording, visit: https://www.lightbend.com/blog/full-stack-reactive-in-practice-webinar
How To Build, Integrate, and Deploy Real-Time Streaming Pipelines On Kubernetes | Lightbend
In this webinar with Craig Blitz and Kiki Carter of Lightbend, we review how Lightbend’s Pipelines module enables you to develop components ("streamlets") using the appropriate technology, wire them together as pipelines, and deploy them with Kubernetes without all the manual, time-consuming labor.
Detecting Real-Time Financial Fraud with Cloudflow on Kubernetes | Lightbend
Deploying a robust streaming data pipeline can be a daunting task when your company’s financial information is at risk. For starters, how do you ensure proper provisioning of resources? How do you preserve end-to-end application and data consistency? How do you make all of this work in the cloud with Kubernetes and avoid YAML hell? Answer: Cloudflow, a new open-source toolkit for simplifying the development, deployment, and operation of streaming data pipelines.
Time Series Analysis Using an Event Streaming Platform | Dr. Mirko Kämpf
Advanced time series analysis (TSA) requires very special data preparation procedures to convert raw data into useful and compatible formats.
In this presentation you will see some typical processing patterns for time series based research, from simple statistics to reconstruction of correlation networks.
The first case is relevant for anomaly detection and for protecting safety.
Reconstruction of graphs from time series data is a very useful technique to better understand complex systems like supply chains, material flows in factories, information flows within organizations, and especially in medical research.
With this motivation we will look at typical data aggregation patterns. We investigate how to apply analysis algorithms in the cloud. Finally, we discuss a simple reference architecture for TSA on top of Confluent Platform or Confluent Cloud.
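The aggregation patterns the talk motivates map naturally onto a streaming DSL. As a rough illustration (mine, not the speaker's), here is a minimal Kafka Streams sketch in Scala that computes per-sensor one-minute windowed sums; the topic names and message format are hypothetical, and TimeWindows.ofSizeWithNoGrace assumes Kafka 3.0 or later.

```scala
import java.time.Duration
import java.util.Properties

import org.apache.kafka.streams.kstream.TimeWindows
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}

object SensorWindowedSum extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "tsa-aggregation")   // hypothetical app id
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker

  val builder = new StreamsBuilder()

  // One reading per message, keyed by sensor id; the value is the measurement.
  builder.stream[String, String]("sensor-readings")
    .mapValues(_.toDouble)
    .groupByKey
    .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
    .reduce(_ + _) // per-sensor sum within each one-minute window
    .toStream
    .map((wk, sum) => (s"${wk.key}@${wk.window.start}", sum.toString))
    .to("sensor-minute-sums")

  new KafkaStreams(builder.build(), props).start()
}
```

The same shape extends to the talk's richer statistics: swap the reduce for an aggregate that carries counts, means, or pairwise products for correlation estimates.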
Akka at Enterprise Scale: Performance Tuning Distributed Applications | Lightbend
Organizations like Starbucks, HPE, and PayPal (see our customers) have selected the Akka toolkit for their enterprise-scale distributed applications, and when it comes to squeezing out the best possible performance, the secret is using two particular modules in tandem: Akka Cluster and Akka Streams.
In this webinar by Nolan Grace, Senior Solution Architect at Lightbend, we look at these two Akka modules and discuss the features that will push your application architecture to the next tier of performance.
For the full blog post, including the video, visit: https://www.lightbend.com/blog/akka-at-enterprise-scale-performance-tuning-distributed-applications
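Two of the most commonly cited Akka Streams performance levers are mapAsync parallelism and explicit async boundaries. A minimal sketch of both, assuming a hypothetical enrich call in place of a real service (this is an illustration, not the webinar's code):

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

object ThroughputSketch extends App {
  implicit val system: ActorSystem = ActorSystem("tuning")
  import system.dispatcher

  // Hypothetical enrichment standing in for a real (slow) service call.
  def enrich(n: Int): Future[Int] = Future(n * 2)

  Source(1 to 1000000)
    .mapAsync(parallelism = 16)(enrich) // run up to 16 calls concurrently, preserving order
    .async                              // put downstream stages on their own actor
    .fold(0L)((count, _) => count + 1)
    .runWith(Sink.foreach(count => println(s"processed $count elements")))
}
```

Tuning the parallelism value and the placement of async boundaries against real workloads is exactly the kind of exercise the webinar walks through.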
Leveraging services in stream processor apps at Ticketmaster (Derek Cline, Ti... | confluent
Is your organization adopting Kafka as its messaging bus, but you've found that it will take too long to migrate your existing service-oriented architecture to a log-oriented architecture? One of the biggest challenges in building a new stream processor can be implementing all the business logic again. It has become increasingly common for companies with high-throughput source streams and change-data-capture logs to want to build systems fast. At Ticketmaster, we have found a solution to this problem by leveraging the business logic in our existing services and calling them efficiently from our Java-based Kafka Streams processor applications. In this talk, we will examine the initial challenges we faced in our transition, then explore the solutions we built to address the use cases at Ticketmaster. The primary focus will be our workflow around calling services to bring stream processor applications to market fast. We will review our challenges and share tips for success.
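As a generic illustration of that pattern (not Ticketmaster's actual code), here is a hedged Scala sketch of a Kafka Streams topology that calls an existing service and caches results so hot keys don't trigger repeated calls; PricingService and the topic names are hypothetical:

```scala
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder
import scala.collection.concurrent.TrieMap

// Hypothetical client wrapping an existing business service.
trait PricingService { def priceFor(eventId: String): String }

class EnrichmentTopology(pricing: PricingService) {
  // In-memory cache so a hot key does not trigger repeated service calls.
  private val cache = TrieMap.empty[String, String]

  def build(): StreamsBuilder = {
    val builder = new StreamsBuilder()
    // Events are keyed by event id; enrich each with a price from the service.
    builder.stream[String, String]("ticket-events")
      .mapValues((eventId, _) => cache.getOrElseUpdate(eventId, pricing.priceFor(eventId)))
      .to("priced-ticket-events")
    builder
  }
}
```

A production version would bound the cache and handle service failures; the point here is only the shape of reusing service logic inside a topology.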
Lessons From HPE: From Batch To Streaming For 20 Billion Sensors With Lightbe... | Lightbend
In this guest webinar with Chris McDermott, Lead Data Engineer at HPE, learn how HPE InfoSight–powered by Lightbend Platform–has emerged as the go-to solution for providing real-time metrics and predictive analytics across various network, server, storage, and data center technologies.
Machine Learning At Speed: Operationalizing ML For Real-Time Data Streams | Lightbend
Audience: Architects, Data Scientists, Developers
Technical level: Introductory
From home intrusion detection, to self-driving cars, to keeping data center operations healthy, Machine Learning (ML) has become one of the hottest topics in software engineering today. While much of the focus has been on the actual creation of the algorithms used in ML, the less talked-about challenge is how to serve these models in production, often utilizing real-time streaming data.
The traditional approach to model serving is to treat the model as code, which means that the ML implementation has to be continually adapted for model serving. As the number of machine learning tools and techniques grows, the efficiency of such an approach becomes more questionable. Additionally, machine learning and model serving are driven by very different quality-of-service requirements: while machine learning is typically batch-oriented, dealing with scalability and processing power, model serving is mostly concerned with performance and stability.
In this webinar with O’Reilly author and Lightbend Principal Architect, Boris Lublinsky, we will define an alternative approach to model serving, based on treating the model itself as data. Using popular frameworks like Akka Streams and Apache Flink, Boris will review how to implement this approach, explaining how it can help you:
* Achieve complete decoupling between the model implementation for machine learning and model serving, enforcing better standardization of your model serving implementation.
* Enable dynamic updates of the served model without having to restart the system.
* Utilize TensorFlow and PMML as model representations for building a “real-time updatable” model serving architecture.
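One minimal way to express the model-as-data idea with Akka Streams is to merge the data stream with a stream of model updates and hold the current model as local state. The sketch below illustrates that idea only; it is not Boris's implementation, and the Model and Record types and in-memory sources are stand-ins for Kafka-backed streams of PMML or TensorFlow payloads.

```scala
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}

object ModelServingSketch extends App {
  implicit val system: ActorSystem = ActorSystem("model-serving")

  final case class Record(features: Vector[Double])
  trait Model { def score(r: Record): Double }

  // Stand-ins for Kafka-backed streams of scoring requests and model updates.
  val records: Source[Record, NotUsed] = Source.repeat(Record(Vector(1.0, 2.0)))
  val models: Source[Model, NotUsed] =
    Source.single((r: Record) => r.features.sum) // trivially "trained" model

  // Model messages swap the local state; data messages are scored against
  // whatever model is current. Nothing is served until the first model arrives.
  records.map(r => Right(r): Either[Model, Record])
    .merge(models.map(m => Left(m): Either[Model, Record]))
    .statefulMapConcat { () =>
      var current: Option[Model] = None
      {
        case Left(model) => current = Some(model); Nil
        case Right(rec)  => current.map(_.score(rec)).toList
      }
    }
    .take(3)
    .runWith(Sink.foreach(score => println(s"score: $score")))
}
```

Records arriving before the first model are simply dropped here; a production version would buffer or route them instead, and could swap models at any time without restarting the stream.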
Digital Transformation with Kubernetes, Containers, and Microservices | Lightbend
See the full presentation here: https://www.lightbend.com/blog/digital-transformation-kubernetes-containers-microservices
In this talk by David Ogren, Principal Enterprise Architect at Lightbend, we draw from experiences helping our clients successfully create, migrate to, and manage cloud-native system architectures.
Pivoting Spring XD to Spring Cloud Data Flow with Sabby Anandan | PivotalOpenSourceHub
Pivoting Spring XD to Spring Cloud Data Flow: A microservice based architecture for stream processing
Microservice based architectures are not just for distributed web applications! They are also a powerful approach for creating distributed stream processing applications. Spring Cloud Data Flow enables you to create and orchestrate standalone executable applications that communicate over messaging middleware such as Kafka and RabbitMQ and that, when run together, form a distributed stream processing application. This allows you to scale, version, and operationalize stream processing applications following microservice-based patterns and practices on a variety of runtime platforms such as Cloud Foundry, Apache YARN, and others.
About Sabby Anandan
Sabby Anandan is a Product Manager at Pivotal. Sabby is focused on building products that eliminate the barriers between application development, cloud, and big data.
A Marriage of Lambda and Kappa: Supporting Iterative Development of an Event ... | confluent
With increasing data volumes typically comes a corresponding increase in (non-windowed) batch processing times, and many companies have looked to streaming as a way of delivering data processing results faster and more reliably. Event Driven Architectures further enhance these offerings by breaking centralised Data Platforms into loosely coupled and distributed solutions, supported by linearly scalable technologies such as Apache Kafka(TM) and Apache Kafka Streams(TM).
However, there remains a problem of how to handle changes to operational systems: if a record is the result of business logic, and that business logic changes, what do we do? Do we recalculate everything on the fly, adding in additional latencies for all data requests and potentially breaching non-functional requirements? Or do we run a batch job, risking that incorrect data will be served whilst the job is running?
This talk covers how 6point6 leveraged Kafka and Kafka Streams to transition a customer from a traditional business flow onto an Event Driven Architecture, with business logic triggered directly by real-time events across over 3000 loosely coupled business services, whilst ensuring that the active development of these services (and their containing logic and models) would not affect components which relied on data served by the platform.
Learn how we:
– Used versioning of topics, data and business logic to facilitate iterative development, ensuring that the reprocessing of large volumes of data would not result in incorrect or stale data being delivered.
– Handled distributed versioning of JSON event messages between separate teams/services, using discovery, automated contract negotiation and version porting.
– Developed technical patterns and libraries to allow rapid development and deployment of new event driven services using Kafka Streams.
– Developed functionality and approaches for deploying defensive services, including strategies for event retry and failure.
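As a toy illustration of the versioning ideas above (not 6point6's code), here is a plain Scala sketch of a version-carrying envelope, topic-per-version routing, and porting a legacy event forward; all names and the payload shape are hypothetical:

```scala
// Hypothetical envelope: every event carries an explicit schema version so
// old and new consumers can coexist while large volumes are reprocessed.
final case class EventEnvelope(version: Int, eventType: String, payload: String)

object VersionedRouting {
  // Topic-per-version isolates readers of the old shape from new business logic.
  def topicFor(e: EventEnvelope): String = s"${e.eventType}-v${e.version}"

  // Port a legacy event forward for a consumer that only understands v2.
  def portToV2(e: EventEnvelope): EventEnvelope =
    if (e.version >= 2) e
    else e.copy(version = 2, payload = s"""{"migrated":true,"legacy":${e.payload}}""")
}
```

The talk's contract negotiation and discovery machinery generalizes this: instead of a hard-coded portToV2, services advertise the versions they speak and a porting chain is assembled automatically.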
Watch this webcast here: https://www.confluent.io/online-talks/whats-new-in-confluent-platform-55/
Join the Confluent Product Marketing team as we provide an overview of Confluent Platform 5.5, which makes Apache Kafka and event streaming more broadly accessible to developers with enhancements to data compatibility, multi-language development, and ksqlDB.
Building an event-driven architecture with Apache Kafka allows you to transition from traditional silos and monolithic applications to modern microservices and event streaming applications. With these benefits has come an increased demand for Kafka developers from a wide range of industries. The Dice Tech Salary Report recently ranked Kafka as the highest-paid technological skill of 2019, a year removed from ranking it second.
With Confluent Platform 5.5, we are making it even simpler for developers to connect to Kafka and start building event streaming applications, regardless of their preferred programming languages or the underlying data formats used in their applications.
This session will cover the key features of this latest release, including:
-Support for Protobuf and JSON schemas in Confluent Schema Registry and throughout our entire platform
-Exactly once semantics for non-Java clients
-Admin functions in REST Proxy (preview)
-ksqlDB 0.7 and ksqlDB Flow View in Confluent Control Center
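From the producer side, enabling the new Protobuf support mostly comes down to configuration. A hedged sketch in Scala: the serializer class name is as documented by Confluent for Platform 5.5 (treat it as an assumption), and the bootstrap and Schema Registry URLs are placeholders.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.ProducerConfig

object ProtobufProducerConfig {
  def props(bootstrap: String, registryUrl: String): Properties = {
    val p = new Properties()
    p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap)
    p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringSerializer")
    // Confluent's Protobuf serializer registers the message schema on first use.
    p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
      "io.confluent.kafka.serializers.protobuf.KafkaProtobufSerializer")
    p.put("schema.registry.url", registryUrl)
    // Idempotent producing, one building block of exactly-once delivery.
    p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
    p
  }
}
```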
Live Event Debugging With ksqlDB at Reddit | Hannah Hagen and Paul Kiernan, R... | HostedbyConfluent
Convincing developers to write tests for new code is hard; convincing developers to write tests for new event data is even harder. At Reddit, engineers have often deployed new app versions, only to find out later that the event wasn’t firing at all, or it was missing critical fields. So this begs the question, “How can engineers at Reddit be confident that the events they instrument are accurate and complete?”
In this session, we will learn about an internal tool developed at Reddit to QA events in real-time. This KSQL-powered web app streams events from our pipeline, allowing developers to filter events they care about using criteria like User ID, Device ID or the type of user interaction. With a backbone of KSQL and Kafka Streams, engineers can get real-time feedback on how accurate (or how erroneous) their event data is.
Stateful, Stateless and Serverless - Running Apache Kafka® on Kubernetes | confluent
Speakers: Joe Beda, Co-founder and CTO, Heptio + Gwen Shapira, Principal Data Architect, Confluent
With the rapid adoption of microservices, there is a growing need for solutions to manage deployment, resources and data for fleets of microservices. Kubernetes is a resource management framework for containers that is rapidly growing in popularity. Apache Kafka is a streaming platform that makes data accessible to the edges of an organization. It's no wonder the question of running Kafka on Kubernetes keeps coming up!
In this online talk, Joe Beda, CTO of Heptio and co-creator of Kubernetes, and Gwen Shapira, principal data architect at Confluent and Kafka PMC member, will help you navigate through the hype, address frequently asked questions and deliver critical information to help you decide if running Kafka on Kubernetes is the right approach for your organization.
You will:
-Get an introduction to the basic concepts you need to know as you plan to deploy services on Kubernetes.
-Learn which parts of the Kafka ecosystem fit Kubernetes like a glove, and which require special attention.
-Pick up useful tips for getting started.
-See why Confluent Platform for Kubernetes is the simplest solution to deploying and orchestrating Kafka on Kubernetes, using container images and a Kubernetes operator.
Watch the recording: https://videos.confluent.io/watch/yoZcuazDjDDTcj1sRnaD3J
Monitoring Large-Scale Apache Spark Clusters at Databricks | Anyscale
At Databricks, we manage Apache Spark clusters for customers to run various production workloads. In this talk, we share our experiences in building a real-time monitoring system for thousands of Spark nodes, including the lessons we learned and the value we’ve seen from our efforts so far.
This was part of a talk presented at the #monitorSF Meetup held at Databricks HQ in SF.
Dennis Wittekind, Confluent, Senior Customer Success Engineer
Perhaps you have heard of Kafka Connect and think it would be a great fit in your application's architecture, but you'd like to know how things work before you propose them to your team? Perhaps you know enough Connect to be dangerous, but you haven't had the time to really understand all the moving pieces? This meetup talk is for you! We'll briefly introduce Connect to the uninitiated, and then jump into the underlying concepts and considerations you should make when running Connect in production! We'll even run a live demo! What could go wrong!?
https://www.meetup.com/Saint-Louis-Kafka-meetup-group/events/272687113/
Migrating from One Cloud Provider to Another (Without Losing Your Data or You... | HostedbyConfluent
If you’re considering -- or planning -- a cloud migration, you may be concerned about risks to your data and your mental health. Migrations at scale are fraught with risk. You absolutely can’t lose data, compromise its integrity, or suffer downtime, so you want to be slow and careful. On the other hand, you’re paying two providers for every day the migration goes on, so you need to move as fast as possible.
Unity Technologies accumulates lots of data. We recently moved our data infrastructure as part of a major cloud migration from Amazon Web Services (AWS) to Google Cloud Platform (GCP).
To minimize risk and costs, our team used Apache Kafka and Confluent Platform, while engaging Confluent Platform Professional Services to help ensure a speedy and seamless migration. Kafka was already serving as the backbone of our data infrastructure, which handles over half a million events per second, and during the migration it also served as the bridge between AWS and GCP.
Join us at this session to learn about the processes and tools used, the challenges faced, and the lessons learned as we moved our operations and petabytes of data from AWS to GCP with zero downtime.
Bravo Six, Going Realtime. Transitioning Activision Data Pipeline to Streamin... | HostedbyConfluent
The Activision Data team has been running a data pipeline for a variety of Activision games for many years. Historically we used a mix of micro-batch microservices coupled with classic Big Data tools like Hadoop and Hive for ETL. As a result, it could take up to 4-6 hours for data to be available to the end customers.
In the last few years, the adoption of data in the organization skyrocketed. We needed to de-legacy our data pipeline and provide near-realtime access to data in order to improve reporting, gather insights faster, and power web and mobile applications. I want to tell a story about heavily leveraging Kafka Streams and Kafka Connect to reduce the end-to-end latency to minutes, while at the same time making the pipeline easier and cheaper to run. We were able to successfully validate the new data pipeline by launching two massive games just 4 weeks apart.
Flattening the Curve with Kafka (Rishi Tarar, Northrop Grumman Corp.) Kafka S... | confluent
Responding to a global pandemic presents a unique set of technical and public health challenges. The real challenge is the ability to gather data coming in via many data streams in a variety of formats, because that data influences real-world outcomes and impacts everyone. The Centers for Disease Control and Prevention's CELR (COVID Electronic Lab Reporting) program was established to rapidly aggregate, validate, transform, and distribute laboratory testing data submitted by public health departments and other partners. Confluent Kafka with KStreams and Connect plays a critical role in the program's objectives to:
- Track the threat of the COVID-19 virus
- Provide comprehensive data for local, state, and federal response
- Better understand locations with an increase in incidence
Event Driven Architecture with a RESTful Microservices Architecture (Kyle Ben... | confluent
Tinder’s Quickfire Pipeline powers all things data at Tinder. It was originally built using AWS Kinesis Firehoses and has since been extended to use both Kafka and other event buses. It is the core of Tinder’s data infrastructure. This rich data flow of both client and backend data has been extended to service a variety of needs at Tinder, including Experimentation, ML, CRM, and Observability, allowing backend developers easier access to shared client-side data. We perform this using many systems, including Kafka, Spark, Flink, Kubernetes, and Prometheus. Many of Tinder’s systems were natively designed in an RPC-first architecture.
Things we’ll discuss about decoupling your system at scale via event-driven architectures include:
– Powering ML, backend, observability, and analytical applications at scale, including an end-to-end walkthrough of our processes that allow non-programmers to write and deploy event-driven data flows.
– Showing the end-to-end usage of dynamic event processing that creates other stream processes, via a dynamic control-plane topology pattern and a broadcast state pattern
– How to manage the unavailability of cached data that would normally come from repeated API calls for data that’s being backfilled into Kafka, all online! (and why this is not necessarily a “good” idea)
– Integrating common OSS frameworks and libraries like Kafka Streams, Flink, Spark and friends to encourage the best design patterns for developers coming from traditional service oriented architectures, including pitfalls and lessons learned along the way.
– Why and how to avoid overloading microservices with excessive RPC calls from event-driven streaming systems
– Best practices in common data flow patterns, such as shared state via RocksDB + Kafka Streams as well as the complementary tools in the Apache Ecosystem.
– The simplicity and power of streaming SQL with microservices
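As one concrete reading of the shared-state bullet above (a generic sketch, not Tinder's code), counts can be materialized into a named RocksDB-backed store that other services reach through Kafka Streams interactive queries; the topic and store names are hypothetical:

```scala
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.kstream.Materialized

object SharedStateSketch {
  def build(): StreamsBuilder = {
    val builder = new StreamsBuilder()
    // Counts land in a RocksDB-backed store named "interaction-counts-store",
    // which other services can query via Kafka Streams interactive queries.
    builder.stream[String, String]("user-interactions")
      .groupByKey
      .count()(Materialized.as("interaction-counts-store"))
      .toStream
      .to("interaction-counts")
    builder
  }
}
```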
How Credit Karma Makes Real-Time Decisions For 60 Million Users With Akka Str... | Lightbend
In this webinar, Engineering Manager at Credit Karma, Dustin Lyons, discusses how not long ago his team was facing a common challenge shared by many financial services architects and engineering leaders: not only how to move from the offline, batch-mode processing of Big Data to streaming Fast Data, but also how to enable real-time decision making based on the information flowing in from over 60 million members.
Dustin reviews how his team migrated away from PHP and successfully implemented Akka Streams with Apache Kafka to ingest, process and route real-time events throughout their data ecosystem. At the end of this presentation, you’ll better understand:
* The design considerations for new Fast Data architectures, from streaming to microservices to real-time analysis.
* Some lessons learned when it comes to progressing from batch to streaming using Akka, Spark and Kafka
* Why Akka’s self-healing actor model and the resilience that it provides are actually what matters most when delivering real-time customer experiences
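The ingestion side of such a pipeline is commonly written with Alpakka Kafka (akka-stream-kafka). A minimal consumption sketch, with hypothetical topic and group names rather than anything from Credit Karma's system:

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer

object IngestSketch extends App {
  implicit val system: ActorSystem = ActorSystem("ingest")

  val settings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092") // placeholder broker
      .withGroupId("decision-engine")         // hypothetical consumer group

  // Consume member events and fan them into downstream processing stages.
  Consumer.plainSource(settings, Subscriptions.topics("member-events"))
    .map(record => record.value) // parse and route the event here
    .runWith(Sink.foreach(println))
}
```

From here, the same stream can fan out to scoring, enrichment, and routing stages with backpressure applied end to end.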
In this talk, Confluent co-founder and CEO Jay Kreps will cover the rise of two trends:
1. The rise of Apache Kafka and event streams
2. The rise of the public cloud and cloud-native data systems
... and the problems we need to solve as these two trends come together.
The need to handle increasingly large volumes of data, to quickly drive decisions (via streaming technologies and machine learning algorithms), to scale systems effectively, to guarantee the right level of continuity, and to move data across systems efficiently is becoming a critical and challenging set of requirements. During this talk we’ll demonstrate how to design reactive, resilient, message-driven, and elastic applications by combining technologies such as Akka, Kafka, Cassandra, and Spark with architectural patterns like CQRS, Event Sourcing, and others, in order to achieve the previously mentioned needs.
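The event-sourcing half of CQRS/ES is compactly expressed with Akka Persistence Typed. Below is a minimal sketch of an event-sourced entity with illustrative names only; a real system would configure a Cassandra journal and add query-side projections.

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object Account {
  sealed trait Command
  final case class Deposit(amount: BigDecimal) extends Command

  sealed trait Event
  final case class Deposited(amount: BigDecimal) extends Event

  final case class State(balance: BigDecimal)

  def apply(id: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(s"Account|$id"),
      emptyState = State(0),
      // Commands are validated, then persisted as immutable events.
      commandHandler = (_, cmd) => cmd match {
        case Deposit(amount) => Effect.persist(Deposited(amount))
      },
      // State is rebuilt by replaying the event log.
      eventHandler = (state, evt) => evt match {
        case Deposited(amount) => state.copy(balance = state.balance + amount)
      }
    )
}
```

The command side persists events to a journal (Cassandra in the stack above), while the query side projects those events, often via Kafka, into read models served by Spark or dedicated views.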
Andrii Dembitskyi "Events in our applications Event bus and distributed systems" | Fwdays
Events are quite a powerful tool for applications:
communication between system components;
a history of actions performed on data;
triggers for operations;
integration with external systems.
In this talk, I will cover the uses of events, the pitfalls you can hit when rushing to choose a tool, and the place events have in our architecture.
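As a toy illustration of the first use above, communication between system components, here is a minimal synchronous in-process event bus in Scala; real systems would add asynchrony, error isolation, and persistence:

```scala
// A minimal synchronous event bus; all names here are illustrative.
object EventBus {
  private var handlers = Map.empty[Class[_], List[Any => Unit]]

  def subscribe[E](eventClass: Class[E])(handler: E => Unit): Unit =
    handlers = handlers.updated(eventClass,
      handler.asInstanceOf[Any => Unit] :: handlers.getOrElse(eventClass, Nil))

  def publish[E](event: E): Unit =
    handlers.getOrElse(event.getClass, Nil).foreach(_(event))
}

final case class OrderPlaced(orderId: String)

object Demo extends App {
  EventBus.subscribe(classOf[OrderPlaced])(e => println(s"audit log: ${e.orderId}"))
  EventBus.subscribe(classOf[OrderPlaced])(e => println(s"notify warehouse: ${e.orderId}"))
  EventBus.publish(OrderPlaced("42")) // both subscribers react to one event
}
```

The same publish/subscribe shape scales out when the in-memory map is replaced by a broker such as Kafka or RabbitMQ.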
Tuning and optimizing webcenter spaces application white paper | Vinay Kumar
This white paper focuses on Oracle WebCenter Spaces performance problems and analysis after production deployment. We will tune the JVM (JRockit), WebCenter Portal, WebCenter Content, and ADF task flows.
SOAP Web Services have a well established role in the enterprise, but aside from the many benefits of the WS-* standards, SOAP and XML also carry additional baggage for developers. Consequently, REST Web Services are gaining tremendous popularity within the developer community. This session will begin by comparing and contrasting the basic concepts of both SOAP and REST Web Services. Building on that foundation, Sam Brannen will show attendees how to implement SOAP-based applications using Spring-WS 2.0. He will then demonstrate how to build a similar REST-ful application using Spring MVC 3.0. The session will conclude with an in-depth look at both server-side and client-side development as well as efficient integration testing of Web Services using the Spring Framework.
Backbone.js gives structure to web applications by providing models with key-value binding and custom events, collections with a rich API of functions, views with declarative event handling, and connects it all to your existing API over a RESTful JSON interface.
From Surendra Reddy's presentation "Walking Through Cloud Serving at Yahoo!" at the 2009 Cloud Computing Expo in Santa Clara, CA, USA. Here's the talk description on the Expo's site: http://cloudcomputingexpo.com/event/session/508
This presentation covers Struts in Java, including its methods and a brief overview of the framework. For more info about Struts and free projects built on it, please visit: http://s4al.com/category/study-java/
The presentation includes the following topics:
Introduction to ReactJS.
Component workflow.
State management and useful life-cycles.
React hooks.
Server Side Rendering.
This is a presentation given on October 24 by Michael Uzquiano of Cloud CMS (http://www.cloudcms.com) at the MongoDB Boston conference.
In this presentation, we cover Hazelcast - an in-memory data grid that provides distributed object persistence across multiple nodes in a cluster. When backed by MongoDB, objects are naturally written to Mongo by Hazelcast. The integration points are clean and easy to implement.
We cover a few simple cases along with code samples to provide the MongoDB community with some ideas of how to integrate Hazelcast into their own MongoDB Java applications.
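The integration point referred to is Hazelcast's MapStore callback interface. Below is a hedged Scala sketch of a MongoDB-backed store; note the package path varies across Hazelcast versions (com.hazelcast.map.MapStore assumes Hazelcast 4+), and the connection string, database, and collection names are placeholders.

```scala
import java.lang.{Iterable => JIterable}
import java.util.{Collection => JCollection, Map => JMap}

import com.hazelcast.map.MapStore
import com.mongodb.client.model.ReplaceOptions
import com.mongodb.client.{MongoClients, MongoCollection}
import org.bson.Document

import scala.jdk.CollectionConverters._

// Write-through persistence: Hazelcast invokes these callbacks so map
// entries are stored in and loaded from a MongoDB collection.
class MongoMapStore extends MapStore[String, String] {
  private val collection: MongoCollection[Document] =
    MongoClients.create("mongodb://localhost:27017")
      .getDatabase("demo")
      .getCollection("objects")

  override def store(key: String, value: String): Unit =
    collection.replaceOne(
      new Document("_id", key),
      new Document("_id", key).append("value", value),
      new ReplaceOptions().upsert(true)) // insert or update

  override def storeAll(map: JMap[String, String]): Unit =
    map.asScala.foreach { case (k, v) => store(k, v) }

  override def delete(key: String): Unit =
    collection.deleteOne(new Document("_id", key))

  override def deleteAll(keys: JCollection[String]): Unit =
    keys.asScala.foreach(delete)

  override def load(key: String): String =
    Option(collection.find(new Document("_id", key)).first())
      .map(_.getString("value")).orNull

  override def loadAll(keys: JCollection[String]): JMap[String, String] =
    keys.asScala.flatMap(k => Option(load(k)).map(k -> _)).toMap.asJava

  override def loadAllKeys(): JIterable[String] =
    collection.find().asScala.map(_.getString("_id")).toList.asJava
}
```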
IoT 'Megaservices' - High Throughput Microservices with Akka | Lightbend
**********
Watch this presentation on-demand!
https://info.lightbend.com/iot-megaservices-high-throughput-microservices-with-akka-register.html
**********
In this interactive presentation by Hugh McKee, Developer Advocate at Lightbend, we’ll share our experiences helping our clients create a system architecture that can support high throughput microservices (aka "Megaservices"). We’ll do that using IoT demo applications designed to push cloud service providers like Amazon and Google to their limits. Using sample code that you can later run on your own machine, we’ll look at:
* Modeling real-life digital twins for hundreds of thousands of IoT devices in the field, looking into how these megaservices are implemented in Akka.
* Visualizing Akka Actors–which represent IoT digital twins–in a “crop circle” formation that represents a complete distributed Reactive application, and watching as messages are processed across Akka Cluster nodes using cluster sharding.
* Some code behind the whole set up, which is built using OSS like Akka, Java, JavaScript, and Kubernetes.
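The heart of the digital-twin pattern is one sharded actor per device. Below is a minimal Akka Cluster Sharding sketch (an illustration with hypothetical names, not the demo's code):

```scala
import akka.actor.typed.scaladsl.Behaviors
import akka.actor.typed.{ActorSystem, Behavior}
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityTypeKey}

object DeviceTwin {
  final case class Telemetry(deviceId: String, temperature: Double)

  val TypeKey: EntityTypeKey[Telemetry] = EntityTypeKey[Telemetry]("DeviceTwin")

  // One twin actor per device id; the cluster spreads twins across nodes.
  def apply(deviceId: String): Behavior[Telemetry] =
    Behaviors.receiveMessage { msg =>
      println(s"twin $deviceId observed ${msg.temperature}")
      Behaviors.same
    }

  def init(system: ActorSystem[_]): Unit =
    ClusterSharding(system).init(Entity(TypeKey)(ctx => DeviceTwin(ctx.entityId)))
}
```

A caller routes telemetry with ClusterSharding(system).entityRefFor(DeviceTwin.TypeKey, deviceId) ! Telemetry(...), and the cluster places, moves, and restarts twins as nodes join and leave.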
Follow us on social:
TW: https://twitter.com/lightbend
LI: https://www.linkedin.com/company/lightbend-inc-/
FB: https://www.facebook.com/lightbendOfficial/
For more about Lightbend:
Blog: https://www.lightbend.com/blog
Newsletter: https://www.lightbend.com/newsletter
How Akka Cluster Works: Actors Living in a Cluster | Lightbend
Hugh McKee, Developer Advocate at Lightbend, demonstrates how Akka Actors work inside of a cluster, including the code and in-browser visualizations you need to grok it.
See the full content with videos here: https://www.lightbend.com/blog/how-akka-cluster-works-actors-living-in-a-cluster
The Reactive Principles: Eight Tenets For Building Cloud Native Applications | Lightbend
In this presentation by Jonas Bonér, creator of Akka and founder/CTO of Lightbend, we review a set of eight Reactive Principles that enable the design and implementation of Cloud Native applications–applications that are highly concurrent, distributed, performant, scalable, and resilient, while at the same time conserving resources when deploying, operating, and maintaining them.
Putting the 'I' in IoT - Building Digital Twins with Akka Microservices | Lightbend
In this webinar with Hugh McKee, Developer Advocate for Akka Platform, we’ll look at “What on Earth”, a demo exploring how Akka Microservices serves as an ideal solution for high-scale digital twinning for IoT.
For the full presentation, including video, visit: https://www.lightbend.com/blog/iot-building-digital-twins-with-akka-microservices
In this webinar by Jonas Bonér, creator of Akka and CTO/Co-Founder of Lightbend, we take a look at Cloudstate, an OSS tool built on Akka, gRPC, Knative, GraalVM, and Kubernetes. Cloudstate lets you model, manage, and scale stateful services while preserving responsiveness by designing for resilience and elasticity.
Digital Transformation from Monoliths to Microservices to Serverless and Beyond | Lightbend
Join this highly-visual presentation by Hugh McKee, Developer Advocate at Lightbend, to learn more about the ramifications and opportunities along the evolution from monolithic systems, to microservices architectures, to serverless (FaaS).
See the video presentation on the Lightbend blog at: https://www.lightbend.com/blog/digital-transformation-from-monoliths-to-microservices-to-serverless-and-beyond
Akka Anti-Patterns, Goodbye: Six Features of Akka 2.6 | Lightbend
In this special guest webinar with Akka expert and Reactive System Consultant, Manuel Bernhardt, we review Akka 2.6 release highlights and a selection of 6 former anti-patterns that have now been rendered impossible by design.
Microservices, Kubernetes, and Application Modernization Done Right | Lightbend
In this talk by David Ogren, Enterprise Architect at Lightbend, we draw from experiences helping our clients successfully create, migrate to, and manage cloud-native system architectures. We look at some of the common pitfalls and anti-patterns of modernization efforts, and some of the best practices for taking an incremental approach to transforming legacy systems.
See the full post with video on the Lightbend blog: https://www.lightbend.com/blog/microservices-kubernetes-application-modernization
Akka and Kubernetes: A Symbiotic Love Story | Lightbend
In this webinar by Hugh McKee, Developer Advocate at Lightbend, we take a look at how Akka and Kubernetes enjoy a symbiotic relationship, using live “crop circle” visuals to help. See the full video, slides, and additional resources here:
https://www.lightbend.com/blog/akka-and-kubernetes-a-symbiotic-love-story
Scala 3 Is Coming: Martin Odersky Shares What To Know | Lightbend
Join Dr. Martin Odersky, the creator of Scala and co-founder of Lightbend, on a tour of what is in store, as he highlights some of his favorite features of Scala 3!
Migrating From Java EE To Cloud-Native Reactive Systems | Lightbend
A lot of businesses that never before considered themselves as “technology companies” are now faced with digital modernization imperatives that force them to rethink their application and infrastructure architecture. On the path to becoming a digital, on-demand provider, development speed is the ultimate competitive advantage.
This presents challenges to many organizations that have huge investments in legacy Java EE infrastructure, where technical debt and monolithic system architectures require modernization in order to confront various business risks. Usually, changes need to be made within existing frameworks to keep pace with new web-scale organizations.
If your legacy monolith is no longer serving the expanding needs of your business, then join Markus Eisele, Director of Developer Advocacy at Lightbend, to learn what you can do to migrate from Java EE to cloud-native, Reactive systems—as defined by the Reactive Manifesto.
Running Kafka On Kubernetes With Strimzi For Real-Time Streaming Applications | Lightbend
In this talk by Sean Glover, Principal Engineer at Lightbend, we will review how the Strimzi Kafka Operator, a supported technology in Lightbend Platform, makes many operational tasks in Kafka easy, such as the initial deployment and updates of a Kafka and ZooKeeper cluster.
See the blog post containing the YouTube video here: https://www.lightbend.com/blog/running-kafka-on-kubernetes-with-strimzi-for-real-time-streaming-applications
Designing Events-First Microservices For A Cloud Native World | Lightbend
In this talk by Jonas Bonér, Lightbend CTO/Co-Founder and creator of Akka, we will explore the nature of events, what it means to be event-driven, and how we can unleash the power of events and commands by applying an events first, domain-driven design to microservices-based architectures.
For more information, head over to lightbend.com/blog!
Scala Security: Eliminate 200+ Code-Level Threats With Fortify SCA For Scala | Lightbend
Join Jeremy Daggett, Solutions Architect at Lightbend, to see how Fortify SCA for Scala works differently from existing Static Code Analysis tools to help you uncover security issues early in the SDLC of your mission-critical applications.
A Glimpse At The Future Of Apache Spark 3.0 With Deep Learning And Kubernetes | Lightbend
In this special guest webinar with Holden Karau, speaker, author and Developer Advocate at Google, we’ll take a walk through some of the interesting JIRAs, look at external components being developed (like deep learning support), and also talk about the future of running real-time Spark workloads on Kubernetes.
Akka and Kubernetes: Reactive From Code To Cloud | Lightbend
In this webinar with special guest Fabio Tiriticco, we will explore how Akka is the perfect companion to Kubernetes, providing the application level requirements needed to successfully deploy and manage your cloud-native services with technologies built specifically for cloud-native applications, like Kubernetes.
Hands On With Spark: Creating A Fast Data Pipeline With Structured Streaming ... | Lightbend
In this talk by Gerard Maas, O’Reilly author and Senior Software Engineer at Lightbend, we focus on choosing the right Fast Data stream processing features of Apache Spark, taking a practical, code-driven look at the two APIs available for this: the mature Spark Streaming and its younger sibling, Structured Streaming.
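For flavor, here is a minimal Structured Streaming sketch of the kind of pipeline the talk walks through: reading a Kafka topic as an unbounded table and counting events per one-minute window. The topic name and servers are placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object StructuredStreamingSketch extends App {
  val spark = SparkSession.builder
    .appName("fast-data").master("local[*]").getOrCreate()
  import spark.implicits._

  // Read the Kafka topic as an unbounded table of records.
  val events = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .select($"timestamp", $"value".cast("string").as("body")) // parse body here

  // Count events per one-minute window, tolerating 2 minutes of lateness.
  val counts = events
    .withWatermark("timestamp", "2 minutes")
    .groupBy(window($"timestamp", "1 minute"))
    .count()

  counts.writeStream
    .outputMode("update")
    .format("console")
    .start()
    .awaitTermination()
}
```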
How Akka Works: Visualize And Demo Akka With A Raspberry-Pi Cluster | Lightbend
In this webinar by Lightbend’s Eric Loots, Scala & Tooling Practice Lead, and Kiki Carter, Principal Enterprise Architect, we use a simple yet powerful visualization of a 5-node, Raspberry Pi-based cluster to reveal the inner workings of Akka Cluster. In a matter of minutes, you will gain a strong understanding of clustering, even if you don’t know anything about Akka.
Ready for Fast Data: How Lightbend Enables Teams To Build Real-Time, Streamin... | Lightbend
In this webinar with Mike Kelland, VP of Global Services at Lightbend, we will share some details of our specialized enablement strategy that allows teams of all sizes to successfully adopt Fast Data technologies and techniques. Based on over a decade of experience developing technologies that support real-time data streaming applications, Lightbend has the tools, expertise, and training courses you need to ramp up your team for Fast Data.
Akka Streams And Kafka Streams: Where Microservices Meet Fast Data | Lightbend
In a recent survey, 90% of over 2400 developers reported having at least some real-time functionality in their systems. Enterprises are realizing that the ability to extract value from streaming data in near real-time is the new competitive advantage.
Two technologies–Akka Streams and Kafka Streams–have emerged as popular tools to use with Apache Kafka for addressing the shared requirements of availability, scalability, and resilience for both streaming microservices and Fast Data. So which one should you use for specific use cases?
Your Digital Assistant.
Making a complex approach simple. A straightforward process saves time. No more waiting to connect with the people that matter to you. Safety first is not a cliché: securely protect information in cloud storage to prevent any third party from accessing data.
Would you rather make your visitors feel burdened by making them wait? Or choose VizMan for a stress-free experience? VizMan is an automated visitor management system that works for any industry, including factories, societies, government institutes, and warehouses. It is a new-age, contactless way of logging information about visitors, employees, packages, and vehicles. VizMan is a digital logbook, so it eliminates unnecessary use of paper and space: there is no need for bundles of registers left to collect dust in a corner of a room. It records visitors' essential details, helps schedule meetings between visitors and employees, and assists in supervising employee attendance. With VizMan, visitors don't need to wait for hours in long queues. VizMan handles visitors with the value they deserve, because we know time is important to you.
Feasible Features
One Subscription, Four Modules – Admin, Employee, Receptionist, and Gatekeeper ensure confidentiality and prevent data from being manipulated
User Friendly – can be easily used on Android, iOS, and Web Interface
Multiple Accessibility – Log in through any device from any place at any time
One app for all industries – a Visitor Management System that works for any organisation.
Stress-free Sign-up
Visitor is registered and checked-in by the Receptionist
Host gets a notification, where they opt to Approve the meeting
Host notifies the Receptionist of the end of the meeting
Visitor is checked-out by the Receptionist
Host enters notes and remarks of the meeting
Customizable Components
Scheduling Meetings – Host can invite visitors for meetings and also approve, reject and reschedule meetings
Single/Bulk invites – Invitations can be sent individually to a visitor or collectively to many visitors
VIP Visitors – Additional security of data for VIP visitors to avoid misuse of information
Courier Management – Keeps a check on deliveries like commodities being delivered in and out of establishments
Alerts & Notifications – Get notified on SMS, email, and application
Parking Management – Manage availability of parking space
Individual log-in – Every user has their own log-in id
Visitor/Meeting Analytics – Evaluate notes and remarks of the meeting stored in the system
A visitor management system is a secure, user-friendly database manager that records, filters, and tracks the visitors to your organization.
"Secure Your Premises with VizMan (VMS) – Get It Now"
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my extensions reached 63K downloads (possibly powering tens of thousands of websites).
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined, on-demand data workflows capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resultant analysis products (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Accelerate Enterprise Software Engineering with PlatformlessWSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Why React Native as a Strategic Advantage for Startup Innovation.pdfayushiqss
Did you know that React Native is increasingly being adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. Demand for this framework in the job market has been growing accordingly, making it a valuable skill.
But what makes React Native so popular for mobile application development? Among other benefits, it offers excellent cross-platform capabilities: developers can write code once and run it on both iOS and Android devices, saving time and resources, shortening development cycles, and speeding time-to-market for your app.
Take the example of a startup that wanted to release its app on both iOS and Android at once. Using React Native, it built the app and brought it to market in a very short period, gaining an advantage over competitors through quick access to a large, revenue-generating user base.
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Advanced Flow Concepts Every Developer Should KnowPeter Caitens
Tim Combridge from Sensible Giraffe and Salesforce Ben presents some important tips that all developers should know when dealing with Flows in Salesforce.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll know how to organize and improve your code review process.
Multiple Your Crypto Portfolio with the Innovative Features of Advanced Crypt...Hivelance Technology
Cryptocurrency trading bots are computer programs designed to automate buying, selling, and managing cryptocurrency transactions. These bots utilize advanced algorithms and machine learning techniques to analyze market data, identify trading opportunities, and execute trades on behalf of their users. By automating the decision-making process, crypto trading bots can react to market changes faster than human traders.
Hivelance, a leading provider of cryptocurrency trading bot development services, stands out as a premier choice for crypto traders and developers. Hivelance boasts a team of seasoned cryptocurrency experts and software engineers who deeply understand the crypto market and the latest trends in automated trading. The company leverages the latest technologies and tools in the industry, including advanced AI and machine learning algorithms, to create highly efficient and adaptable crypto trading bots.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In these slides, we show a simulation example and how to compile the solver.
With this solver, the Helmholtz equation can be solved by helmholtzFoam. The Helmholtz equation with uniformly dispersed bubbles can also be simulated by helmholtzBubbleFoam.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Strategies for Successful Data Migration Tools.pptxvarshanayak241
Data migration is a complex but essential task for organizations aiming to modernize their IT infrastructure and leverage new technologies. By understanding common challenges and implementing these strategies, businesses can achieve a successful migration with minimal disruption. Data migration tools like Ask On Data play a pivotal role in this journey, offering features that streamline the process, ensure data integrity, and maintain security. With the right approach and tools, organizations can turn the challenge of data migration into an opportunity for growth and innovation.
Field Employee Tracking System| MiTrack App| Best Employee Tracking Solution|...informapgpstrackings
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
How Recreation Management Software Can Streamline Your Operations.pptxwottaspaceseo
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
Discovering the most beautiful recreational routes with RouteYou and FME
Full Stack Reactive In Practice
1.
2. Full Stack Reactive in Practice
Presenting a real-world CQRS
(command query responsibility
segregation) and ES (event
sourcing) architecture.
Kevin Webber
Director of Engineering @
Reading Plus
w: kevinwebber.ca
m: medium.com/@kvnwbbr
t: @kvnwbbr
3. — Reading Plus is an adaptive platform that helps K-12
students become successful readers and lifelong
learners
— We will be applying reactive techniques and
technologies to a mission with purpose
— Find us at https://readingplus.com and stay tuned for
upcoming engineering opportunities
4. Why Reactive?
— React, Vue, and other front-end frameworks are
enabling innovative user experiences
5. Why Reactive?
— React, Vue, and other front-end frameworks are
enabling innovative user experiences
— Meanwhile, relational databases continue to dominate
the server side
6. Why Reactive?
— React, Vue, and other front-end frameworks are
enabling innovative user experiences
— Meanwhile, relational databases continue to dominate
the server side
— Reactive principles and frameworks help create an
end-to-end event-driven channel, all the way up and
all the way down
7. Traditional architecture
— Unbounded in complexity and
side effects
— Putting React or Vue on top
won't help much
— Batch-mode, high-latency,
high-failure rates
— Has an impact
— Customer happiness?
— Developer joy?
— Visit TheDailyWTF :)
8. What we will cover
All references and examples will be in Java.
1. Designing reactive systems → Event Storming, DDD
2. Implementing reactive systems → Lagom, Play, Akka
3. Deploying reactive systems → sbt, Kubernetes
9. Part 1. Designing Reactive Systems
— Raw ingredients
— Commands & events
— Inception
— Event storming & DDD
— Fidelity
— Event modeling
— Handoff
— User stories
12. Event sourcing
— Commands are applied
against an entity, the entity
can accept or reject a
command
— If accepted, an entity will
generate an event and then
apply that event in order to
change state
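To make this concrete, here is a minimal plain-Java sketch of the accept/reject flow (all names are illustrative, not taken from the deck): the command handler validates and either rejects or emits an event, and only the event handler changes state.

import java.math.BigDecimal;
import java.util.Optional;

final class AccountEntity {
    private BigDecimal balance = BigDecimal.ZERO;

    // Command handler: validate, then accept (emit and apply an event) or reject
    Optional<FundsDeposited> handle(DepositFunds cmd) {
        if (cmd.amount.signum() <= 0) {
            return Optional.empty();    // rejected: no event, no state change
        }
        FundsDeposited evt = new FundsDeposited(cmd.amount);
        apply(evt);                     // accepted: state changes only via the event
        return Optional.of(evt);
    }

    // Event handler: the only code path that mutates state
    private void apply(FundsDeposited evt) {
        balance = balance.add(evt.amount);
    }
}

final class DepositFunds {
    final BigDecimal amount;
    DepositFunds(BigDecimal amount) { this.amount = amount; }
}

final class FundsDeposited {
    final BigDecimal amount;
    FundsDeposited(BigDecimal amount) { this.amount = amount; }
}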
13. Defining aggregate boundaries
— Structures implementation
around related items
— The yellow sticky in the
middle represents state
14. Defining bounded contexts
— Group together related
business concepts
— Contain domain complexity
— A single person can master an
aggregate
— A single team can master a
bounded context
15. How do we model a bounded context?
— Arrange events in linear order by time
— Events cluster into closely related groups (aggregates)
— Aggregates cluster into closely related groups
(bounded contexts)
— Event Storming helps discover the problem space
— Event Modeling creates a blueprint for a solution
17. Essential learning resources
— Book: Event Storming by Alberto Brandolini
— Book: Domain-Driven Design Distilled by Vaughn
Vernon
— Website: Event Modeling by Adam Dymitruk
18. Part 2. Implementation
— Tools
— Lagom, Play, and Akka
— Cassandra and Kafka
— Patterns
— Event sourcing, CQRS
— Architecture
— Interface, BFF, services
— Integration
— Point to point, streaming,
pub/sub
21. Placing a wire transfer
We'll cover the command channel first.
1. Create command
2. Send over REST
3. Direct command to entity
4. Create event
5. Change state
6. Publish event
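A minimal sketch of the command produced in step 1 follows. It is hypothetical: it assumes Lombok-style @Value/@Builder annotations, which the builder() calls in the Play snippet on slide 25 suggest, and the field names mirror the getters used there.

import java.math.BigDecimal;
import lombok.Builder;
import lombok.Value;

// Simplified: in the deck this is nested as TransferCommand.TransferFunds
@Value
@Builder
public class TransferFunds {
    String source;       // source account ID
    String destination;  // destination account ID
    BigDecimal amount;   // funds to transfer
}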
24. API gateway
— Backend for Frontend (BFF) is
a pattern that involves
creating a 1:1 relationship
between backends and UIs
— The BFF then handles all calls
to various underlying
microservices (Portfolio,
Broker, Wires, etc)
— Also performs authentication,
authorization, etc
25. Play (command side API)
@Override
public ServiceCall<Transfer, TransferId> transferFunds() {
TransferId transferId = TransferId.newId();
return transfer ->
transferRepository
.get(transferId) // 1 (get reference to entity)
.ask(TransferCommand.TransferFunds.builder()
.source(transfer.getSourceAccount())
.destination(transfer.getDestinationAccount())
.amount(transfer.getFunds())
.build()
) // 2 (ask pattern)
.thenApply(done -> transferId);
}
26. BFF and Service Integration
— Play routes all commands and
queries to Lagom
microservices
— Lagom is a framework built on
top of Akka for developing
reactive microservices
— Based on the principles of
CQRS (Command Query
Responsibility Segregation)
27. Lagom (aggregates / entities)
— Based around persistent entities (aka, aggregates)
— Stored in memory for fast access
— Can be recovered via Cassandra as the event log
public class TransferEntity extends
PersistentEntity<TransferCommand,
TransferEvent,
Optional<TransferState>> {
// ...
}
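The elided body typically defines the entity's behavior: which commands it accepts, which events it persists, and how each event updates state. A minimal sketch using Lagom's javadsl follows; the TransferInitiated event and the factory methods are hypothetical, not the project's actual code.

@Override
public Behavior initialBehavior(Optional<Optional<TransferState>> snapshot) {
    BehaviorBuilder b = newBehaviorBuilder(snapshot.orElse(Optional.empty()));

    // Command handler: validate, persist an event, and reply once it is journaled
    b.setCommandHandler(TransferCommand.TransferFunds.class, (cmd, ctx) ->
        ctx.thenPersist(
            new TransferEvent.TransferInitiated(cmd),   // hypothetical event
            evt -> ctx.reply(Done.getInstance())));

    // Event handler: the only place state changes
    b.setEventHandler(TransferEvent.TransferInitiated.class,
        evt -> Optional.of(TransferState.from(evt)));   // hypothetical factory

    return b.build();
}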
28. Lagom (event sourcing)
— No mutable data, only
immutable state
— State is only changed on
successful event application
— Events are journaled
— Entity can always recover in-memory by replaying events from the journal
— Entities are not queried
directly, read-side queries are
backed by views
29. State machines
Let's say we're modeling an ATM
machine...
— This ATM machine has three
different states
— Each state has a different
behaviour for the same input
https://www.uml-diagrams.org/state-machine-diagrams.html
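The same idea in plain Java, as a minimal sketch (the states and inputs are illustrative, not from the deck): each state reacts differently to the same input.

enum AtmState {
    IDLE, SERVING, OUT_OF_SERVICE;

    AtmState next(String input) {
        switch (this) {
            case IDLE:    return "insertCard".equals(input) ? SERVING : this;
            case SERVING: return "ejectCard".equals(input)  ? IDLE    : this;
            default:      return this;  // OUT_OF_SERVICE ignores all input
        }
    }
}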
35. Views
— Read side processors build a
precomputed table
— Queries are only against
precomputed tables
— Use this to populate data on
initial page load
36. Creating a portfolio
1. When each portfolio is
created, we create a portfolio
entity and then persist the
create event to the journal
2. A read side processor
subscribes to journal updates
3. With each update, the read
side processor updates a
projected view
Let's cover #3...
37. Lagom ReadSide processor
Update a precomputed query table (Cassandra) on every
new event we subscribe to.
@Override
public ReadSideHandler<PortfolioEvent> buildHandler() {
return readSide.<PortfolioEvent>builder("portfolio_offset") // 1: offset ID that tracks read-side progress
.setGlobalPrepare(this::prepareCreateTables) // 2: runs once, before any events, to create tables
.setPrepare(tag -> prepareWritePortfolios()) // 3: runs per tag to prepare the insert statement
.setEventHandler(Opened.class, this::processPortfolioChanged) // 4: invoked for each Opened event
.build();
}
private CompletionStage<Done> prepareWritePortfolios() {
return session
.prepare("INSERT INTO portfolio_summary (portfolioId, name) VALUES (?, ?)")
.thenApply(ps -> {
this.writePortfolios = ps;
return Done.getInstance();
});
}
38. Queries are then executed against the projected view for
a vastly lower latency user experience.
@Override
public ServiceCall<NotUsed, PSequence<PortfolioSummary>> getAllPortfolios() {
return request -> {
CompletionStage<PSequence<PortfolioSummary>> result =
db.selectAll("SELECT portfolioId, name FROM portfolio_summary;")
.thenApply(rows -> {
List<PortfolioSummary> summary = rows.stream().map(row ->
PortfolioSummary.builder()
.portfolioId(new PortfolioId(
row.getString("portfolioId")))
.name(row.getString("name"))
.build())
.collect(Collectors.toList());
return TreePVector.from(summary);
});
return result;
};
}
39. Streaming
1. Render initial page with precomputed view over HTTP/REST
2. Switch to unidirectional streaming for updates (events
over WS)
3. Commands over REST will still cause full page
refreshes (can change unidirectional stream to BiDi
stream in future)
Instead, let's use the Lagom PubSub API to push updates
to the UI in real-time...
40. Streaming (Lagom)
This code exposes a Reactive Streams Source via Lagom,
which Play can then "attach" to.
@Override
public ServiceCall<NotUsed, Source<String, ?>> transferStream() {
return request -> {
// subscribe to events on a specific topic ("transfer")
final PubSubRef<String> topic =
pubSub.refFor(
TopicId.of(String.class,
"transfer"));
// return the Source as a future (standard async Java 8)
return CompletableFuture.completedFuture(topic.subscriber());
};
}
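The Play side of that "attach" is not shown in the deck; a minimal sketch might look like the following, assuming a Lagom service client for WireTransferService is injected into a Play controller (names are hypothetical).

import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Sink;
import javax.inject.Inject;
import play.libs.F;
import play.mvc.WebSocket;

public class TransferStreamController {
    private final WireTransferService wireTransferService; // injected Lagom client

    @Inject
    public TransferStreamController(WireTransferService wireTransferService) {
        this.wireTransferService = wireTransferService;
    }

    // Bridges the Lagom Source into the WebSocket the Vue client connects to;
    // client messages are ignored, server events flow down the socket
    public WebSocket stream() {
        return WebSocket.Text.acceptOrResult(request ->
            wireTransferService.transferStream().invoke()
                .thenApply(source -> F.Either.Right(
                    Flow.fromSinkAndSource(Sink.ignore(), source))));
    }
}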
41. Streaming (architecture)
PubSub works by broadcasting
events to subscribers:
— Publisher is TransferEntity
— Subscriber is
WireTransferServiceImpl
— This will create a streaming
Source
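The publisher side is symmetrical. A minimal sketch, assuming the PubSubRegistry is injected where the transfer events are handled and that a toJson helper serializes them (both assumptions, not shown in the deck):

// Look up the same named topic the subscriber above uses...
final PubSubRef<String> topic =
    pubSub.refFor(TopicId.of(String.class, "transfer"));

// ...and broadcast each event once it has been persisted; every
// connected subscriber's Source receives the message
topic.publish(toJson(event));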
44. WebSockets (Vue.js)
connect() {
this.socket = new WebSocket(
"ws://localhost:9000/api/transfer/stream");
this.socket.onopen = () => {
this.socket.onmessage = (e) => {
let event = JSON.parse(e.data);
var index = -1;
// 1. determine if we're updating a row (initiated)
// or adding a new row (completed)
for (var i = 0; i < this.transfers.length; i++) {
if (this.transfers[i].id === event.id) {
index = i;
break;
}
}
if (index === -1) {
// unshift is similar to push, but prepends
this.transfers.unshift({
// ... 3. create object with id, status, etc
});
} else {
let t = {
// ... 4. create object with id, status, etc
};
this.transfers.splice(index, 1, t);
this.updateCashOnHand();
}
};
};
}
45. Part 3. Deployment
— Packaging
— sbt, service boundaries
— Testing
— Deploying to Minikube
— Handling dependencies in
k8s such as Cassandra,
Kafka
— Deployment
— Production deployments
47. Reactive Stock Trader needs...
— Formation of a cluster so Lagom entities can stay
available in memory
— Connection to Cassandra and Kafka (in a "high-availability" clustered configuration)
— Cassandra for the backing journal
— Kafka for the Lagom Message Broker API for
communication between services
49. In this example, we have a three node Kubernetes
cluster, with two bounded contexts spread across those
nodes. This offers resilience and scale.
50. Deploying to Minikube
Instructions are available here to help deploy Reactive
Stock Trader (along with Cassandra and Kafka) to
Minikube:
https://github.com/RedElastic/reactive-stock-trader/tree/master/deploy/instructions
51. Conclusion
— CQRS separates writes and reads for reliability and
performance
— Event sourcing eliminates mutability concerns of
relational databases while preserving their query
capabilities
— Operationally Lagom is cloud-native and ready to
deploy to AWS, Azure, GCP, etc, via Kubernetes
52. What about serverless?
— For the foreseeable future we
need to understand how our
software interacts with the
runtime environment
— A serverless component may
be part of a reactive system,
but is not a replacement for a
reactive system
— Reactive systems enable
portability across cloud
vendors, whereas many
serverless offerings lock us in
53. Reactive in Practice
For a complete look at this
material, visit IBM Developer and
check out Reactive in Practice, a
12 part series (parts 8-12 to be
published mid-September).
https://developer.ibm.com/series/reactive-in-practice/
Thanks to Dana Harrington,
Lightbend, and IBM.