see labs at https://github.com/advantageous/j1-talks-2016
Imported from the PPT, so it has more notes attached. This is from our JavaOne Talk 2016 on Reakt, reactive Java programming with promises, circuit breakers, and streams. Reakt is a reactive Java lib that provides promises, streams, and a reactor to handle asynchronous call coordination. It was influenced by the design of promises in ES6. You want to async-call serviceA and then serviceB, take the results of serviceA and serviceB, and then call serviceC. Then, based on the results of call C, call D or E and then return the results to the original caller. Calls to A, B, C, D, and E are all async calls, and none should take longer than 10 seconds. If they do, then return a timeout to the original caller. The whole async call sequence should time out in 20 seconds if it does not complete and should also check for circuit breakers and provide back pressure feedback so the system does not have cascading failures. Learn more in this session.
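The call sequence described above can be sketched with the JDK's CompletableFuture in place of Reakt's promise API (a stand-in swap so the example is self-contained; serviceA through serviceE are hypothetical stubs that complete immediately, and the branch condition on C's result is purely illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class CallSequence {
    // Hypothetical stand-in services; real ones would do async I/O.
    static CompletableFuture<String> serviceA() { return CompletableFuture.completedFuture("a"); }
    static CompletableFuture<String> serviceB() { return CompletableFuture.completedFuture("b"); }
    static CompletableFuture<String> serviceC(String ab) { return CompletableFuture.completedFuture(ab + "c"); }
    static CompletableFuture<String> serviceD(String c)  { return CompletableFuture.completedFuture(c + "d"); }
    static CompletableFuture<String> serviceE(String c)  { return CompletableFuture.completedFuture(c + "e"); }

    static CompletableFuture<String> callSequence() {
        // Each individual call gets a 10-second timeout.
        CompletableFuture<String> a = serviceA().orTimeout(10, TimeUnit.SECONDS);
        CompletableFuture<String> b = serviceB().orTimeout(10, TimeUnit.SECONDS);
        return a.thenCombine(b, (ra, rb) -> ra + rb)              // wait for both A and B
                .thenCompose(ab -> serviceC(ab).orTimeout(10, TimeUnit.SECONDS))
                .thenCompose(c -> (c.startsWith("ab")             // branch on C's result
                        ? serviceD(c) : serviceE(c)).orTimeout(10, TimeUnit.SECONDS))
                .orTimeout(20, TimeUnit.SECONDS);                 // whole-sequence timeout
    }

    public static void main(String[] args) {
        System.out.println(callSequence().join());
    }
}
```

The per-call `orTimeout(10, ...)` and the final `orTimeout(20, ...)` mirror the 10-second per-service and 20-second overall budgets in the scenario; circuit breaking and back-pressure feedback are the parts Reakt layers on top that this plain-JDK sketch leaves out.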
Reactive Java: Promises and Streams with Reakt (JavaOne talk 2016) - Rick Hightower
see labs at https://github.com/advantageous/j1-talks-2016
Imported from the PDF. This is from our JavaOne Talk 2016 on Reakt, reactive Java programming with promises, circuit breakers, and streams. Reakt is a reactive Java lib that provides promises, streams, and a reactor to handle asynchronous call coordination. It was influenced by the design of promises in ES6. You want to async-call serviceA and then serviceB, take the results of serviceA and serviceB, and then call serviceC. Then, based on the results of call C, call D or E and then return the results to the original caller. Calls to A, B, C, D, and E are all async calls, and none should take longer than 10 seconds. If they do, then return a timeout to the original caller. The whole async call sequence should time out in 20 seconds if it does not complete and should also check for circuit breakers and provide back pressure feedback so the system does not have cascading failures. Learn more in this session.
High-Speed Reactive Microservices - trials and tribulations - Rick Hightower
Covers how we built a set of high-speed reactive microservices and optimized cloud/hardware costs while meeting objectives in resilience and scalability. This version has more notes attached, as it is based on the PPT rather than the PDF.
This session endeavors to explain high-speed reactive microservice architecture, a set of patterns for building services that can readily back mobile and web applications at scale. It uses a scale-up-and-out model, rather than a scale-out-only model, to do more with less hardware. A scale-up model uses in-memory operational data, efficient queue handoff, microbatch streaming, and async calls to handle more calls on a single node. High-speed microservice architecture endeavors to get back to OOP roots, where data and logic live together in a cohesive, understandable representation of the problem domain, and away from the separation of data and logic: data lives with the service logic that operates on it.
Covers how we built a set of high-speed reactive microservices and optimized cloud/hardware costs while meeting objectives in resilience and scalability. Talks about Akka, Kafka, QBit, and in-memory computing from a practitioner's point of view. Based on the talks delivered by Geoff Chandler, Jason Daniel, and Rick Hightower at JavaOne 2016 and SF Fintech at Scale 2017, but updated.
Explores the problem of microservices communication and how both Kafka and service mesh solutions address it. We then look at some approaches for combining both.
Presented at the inaugural Kafka Summit (2016), hosted by Confluent in San Francisco.
Abstract:
Kafka is a backbone for various data pipelines and asynchronous messaging at LinkedIn and beyond. 2015 was an exciting year at LinkedIn in that we hit a new level of scale with Kafka: we now process more than 1 trillion published messages per day across nearly 1300 brokers. We ran into some interesting production issues at this scale, and I will dive into some of the most critical incidents that we encountered at LinkedIn in the past year:
Data loss: We have extremely stringent SLAs on latency and completeness that were violated on a few occasions. Some of these incidents were due to subtle configuration problems or even missing features.
Offset resets: As of early 2015, Kafka-based offset management was still a relatively new feature and we occasionally hit offset resets. Troubleshooting these incidents turned out to be extremely tricky and resulted in various fixes in offset management/log compaction as well as our monitoring.
Cluster unavailability due to high request/response latencies: Such incidents demonstrate how even subtle performance regressions and monitoring gaps can lead to an eventual cluster meltdown.
Power failures! What happens when an entire data center goes down? We experienced this first hand and it was not so pretty.
and more…
This talk will go over how we detected, investigated and remediated each of these issues and summarize some of the features in Kafka that we are working on that will help eliminate or mitigate such incidents in the future.
Kafka and Storm - event processing in realtime - Guido Schmutz
Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. It is designed to allow a single cluster to serve as the central data backbone for a large organization. It can be elastically and transparently expanded without downtime. Storm is a distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. This session presents the main concepts of Kafka and Storm and then shows how a simple stream processing application is implemented using these two technologies.
Troubleshooting Kafka's socket server: from incident to resolution - Joel Koshy
LinkedIn’s Kafka deployment is nearing 1300 brokers that move close to 1.3 trillion messages a day. While operating Kafka smoothly even at this scale is a testament to both Kafka’s scalability and the operational expertise of LinkedIn SREs, we occasionally run into some very interesting bugs at this scale. In this talk I will dive into a production issue that we recently encountered as an example of how even a subtle bug can suddenly manifest at scale and cause a near meltdown of the cluster. We will go over how we detected and responded to the situation, how we investigated it after the fact, and summarize some lessons learned and best practices from this incident.
Reducing Microservice Complexity with Kafka and Reactive Streams - jimriecken
My talk from ScalaDays 2016 in New York on May 11, 2016:
Transitioning from a monolithic application to a set of microservices can help increase performance and scalability, but it can also drastically increase complexity. Layers of inter-service network calls add latency and an increasing risk of failure where previously only local function calls existed. In this talk, I'll speak about how to tame this complexity using Apache Kafka and Reactive Streams to:
- Extract non-critical processing from the critical path of your application to reduce request latency
- Provide back-pressure to handle both slow and fast producers/consumers
- Maintain high availability, high performance, and reliable messaging
- Evolve message payloads while maintaining backwards and forwards compatibility.
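The back-pressure idea in the list above, consumers signalling demand so that fast producers cannot overwhelm slow ones, can be sketched with the JDK's java.util.concurrent.Flow API (a minimal built-in Reactive Streams implementation; this is an illustrative stand-in, not the speaker's Kafka setup):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackPressureDemo {
    /** Publishes 1..n to a subscriber that requests one item at a time. */
    static List<Integer> run(int n) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        Flow.Subscriber<Integer> slowConsumer = new Flow.Subscriber<>() {
            private Flow.Subscription subscription;
            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);               // initial demand: exactly one item
            }
            @Override public void onNext(Integer item) {
                received.add(item);         // "process" the item...
                subscription.request(1);    // ...then signal readiness for the next
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        };

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(slowConsumer);
            for (int i = 1; i <= n; i++) {
                publisher.submit(i);        // blocks when the buffer fills: back-pressure
            }
        }                                   // close() signals onComplete
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(5));
    }
}
```

The key mechanism is that items flow only as fast as `request(n)` calls arrive from the subscriber, so a slow consumer throttles the producer instead of being buried by it.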
Reactive Design Patterns: a talk by Typesafe's Dr. Roland Kuhn - Zalando Technology
We had the great pleasure of hosting a talk by Dr. Roland Kuhn: leader of Typesafe’s Akka project, and coauthor of the book Reactive Design Patterns and the Reactive Manifesto. For a standing-room-only crowd, Roland highlighted the importance of making reactive software: of considering responsiveness, maintainability, elasticity and scalability from the outset of development. He explored several architecture elements that are commonly found in reactive systems, such as the circuit breaker, various replication techniques, and flow control protocols. These patterns are language-agnostic and also independent of the abundant choice of reactive programming frameworks and libraries. Check out his slides!
Deep Dive into the Pulsar Binary Protocol - Pulsar Virtual Summit Europe 2021 - StreamNative
To achieve maximum performance, some important choices have been made when designing the Pulsar binary protocol.
This session will explain how Pulsar implements all the features of a high quality streaming protocol such as frame multiplexing, session establishment, keep-alive, flow control, authentication and authorisation, encoding, zero-copy capabilities and more.
Uber has one of the largest Kafka deployments in the industry. To improve scalability and availability, we developed and deployed a novel federated Kafka cluster setup that hides the cluster details from producers and consumers. Users do not need to know which cluster a topic resides on; the clients see a "logical cluster". The federation layer maps the clients to the actual physical clusters and keeps the location of the physical cluster transparent to the user. Cluster federation brings us several benefits that support our business growth and ease our daily operation:
Client control: Inside Uber there are a large number of applications and clients on Kafka, and it is challenging to migrate a topic with live consumers between clusters. Coordination with the users is usually needed to shift their traffic to the migrated cluster. Cluster federation enables much more control of the clients from the server side by enabling consumer traffic redirection to another physical cluster without restarting the application.
Scalability: With federation, the Kafka service can horizontally scale by adding more clusters when a cluster is full. Topics can freely migrate to a new cluster without notifying the users or restarting the clients. Moreover, no matter how many physical clusters we manage per topic type, from the user perspective, they view only one logical cluster.
Availability: With a topic replicated to at least two clusters, we can tolerate a single cluster failure by redirecting the clients to the secondary cluster without performing a region failover. This also provides much freedom and alleviates the risks of carrying out important maintenance on a critical cluster: before the maintenance, we mark the cluster as secondary and migrate off the live traffic and consumers.
We will present the details of the architecture and several interesting technical challenges we overcame.
Bulletproof Kafka with Fault Tree Analysis (Andrey Falko, Lyft) Kafka Summit ... - confluent
We recently learned about “Fault Tree Analysis” and decided to apply the technique to bulletproof our Apache Kafka deployments. In this talk, learn about fault tree analysis and what you should focus on to make your Apache Kafka clusters resilient.
This talk should provide a framework for answering the following common questions a Kafka operator or user might have:
What guarantees can I promise my users?
What should my replication factor be?
What should the ISR setting be?
Should I use RAID or not?
Should I use external storage such as EBS or local disks?
Strata+Hadoop 2017 San Jose: Lessons from a year of supporting Apache Kafka - confluent
The number of deployments of Apache Kafka at enterprise scale has greatly increased in the years since Kafka’s original development in 2010. Along with this rapid growth has come a wide variety of use cases and deployment strategies that transcend what Kafka’s creators imagined when they originally developed the technology. As the scope and reach of streaming data platforms based on Apache Kafka has grown, the need to understand monitoring and troubleshooting strategies has as well.
Dustin Cote and Ryan Pridgeon share their experience supporting Apache Kafka at enterprise scale and explore monitoring and troubleshooting techniques to help you avoid pitfalls when scaling large Kafka deployments.
Topics include:
- Effective use of JMX for Kafka
- Tools for preventing small problems from becoming big ones
- Efficient architectures proven in the wild
- Finding and storing the right information when it all goes wrong
Visit www.confluent.io for more information.
Sharing is Caring: Toward Creating Self-tuning Multi-tenant Kafka (Anna Povzn... - HostedbyConfluent
Deploying Kafka to support multiple teams or even an entire company has many benefits. It reduces operational costs, simplifies onboarding of new applications as your adoption grows, and consolidates all your data in one place. However, this makes applications sharing the cluster vulnerable to any one or few of them taking all cluster resources. The combined cluster load also becomes less predictable, increasing the risk of overloading the cluster and data unavailability.
In this talk, we will describe how to use the quota framework in Apache Kafka to ensure that a misconfigured client or an unexpected increase in client load does not monopolize broker resources. You will get a deeper understanding of bandwidth and request quotas and how they are enforced, and gain intuition for setting the limits for your use cases.
While quotas limit individual applications, there must be enough cluster capacity to support the combined application load. Onboarding new applications or scaling the usage of existing applications may require manual quota adjustments and upfront capacity planning to ensure high availability.
We will describe the steps we took toward solving this problem in Confluent Cloud, where we must immediately support unpredictable load with high availability. We implemented a custom broker quota plugin (KIP-257) to replace static per-broker quota allocation with dynamic, self-tuning quotas based on the available capacity (which we also detect dynamically). By learning from our journey, you will gain more insight into the relevant problems and techniques to address them.
Stream-Native Processing with Pulsar Functions - Streamlio
The Apache Pulsar messaging solution can perform lightweight, extensible processing on messages as they stream through the system. This presentation provides an overview of this new functionality.
The Alpakka initiative brings together existing systems and their technologies with the Reactive Streams implementation in Akka. This gives you high-level APIs to model streams of data that form Reactive Enterprise Integrations.
We’ll look at examples where we see Alpakka in action with Kafka, MQTT and JMS.
KafkaConsumer - Decoupling Consumption and Processing for Better Resource Uti... - confluent
When working with KafkaConsumer, we usually employ a single thread for both reading and processing of messages. KafkaConsumer is not thread-safe, so using a single thread fits in well. The downside of this approach is that you are limited to a single thread for processing messages.
By decoupling consumption and processing, we can achieve processing parallelization with a single consumer and get the most out of the multi-core CPU architectures available today. While this can be very useful in certain use cases, it is not trivial to implement.
How do we use multiple threads with KafkaConsumer, which is not thread-safe? How do we react to consumer group rebalancing? Can we get the desired processing and ordering guarantees? In this talk we'll try to answer these questions and explore the challenges we face along the way.
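The decoupling pattern the abstract describes can be sketched in plain JDK terms: a single "consumer" thread that only polls and hands records off, plus a worker pool that does the processing. Here a BlockingQueue stands in for the records a real KafkaConsumer.poll() would return, so the example is self-contained:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DecoupledProcessing {
    /** One consuming thread polls; a worker pool does the processing. */
    static int run(List<String> records) throws InterruptedException {
        // Stand-in for records returned by KafkaConsumer.poll().
        BlockingQueue<String> topic = new LinkedBlockingQueue<>(records);
        ExecutorService workers = Executors.newFixedThreadPool(4); // processing threads
        AtomicInteger processed = new AtomicInteger();

        // The single consumer thread never processes; it only hands off.
        Thread consumer = new Thread(() -> {
            String record;
            while ((record = topic.poll()) != null) {
                workers.submit(processed::incrementAndGet); // real work would go here
            }
        });
        consumer.start();
        consumer.join();
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed " + run(List.of("r1", "r2", "r3", "r4")) + " records");
    }
}
```

With a real KafkaConsumer you would additionally pause() partitions when the hand-off queue fills up and commit offsets only after the workers finish their records, which is exactly the rebalancing and ordering territory the talk digs into.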
Grokking TechTalk #24: Kafka's principles and protocols - Grokking VN
The talk introduces Kafka and digs deep into Kafka's principles and the design choices that make Kafka fast, scalable, and highly stable. It also covers how Kafka servers interact with Kafka clients.
The talk dives into Kafka's internals and analyzes why its design decisions were made the way they were. It is a good fit for software engineers who have been exploring, or want to explore, the various job queues and message queues out there.
Speaker: Nguyen Quang Minh
- Software Engineer, Technical Lead @ Employment Hero
- Contributor of `ruby-kafka` (the most popular Kafka client for Ruby)
Netty Notes Part 3 - Channel Pipeline and EventLoops - Rick Hightower
Learning more about Netty helps me understand Vert.x better. Netty in Action is a great book. The threading model of Netty is very important to understanding event loops and reactive programming.
This presentation was delivered to a group of Data Wranglers that focus on data processing. It outlined the challenge of the current state of business and explains that asynchronous processing is the way to manage the growing sources and volume of business information.
Laimonas Lileika — Hybrid Project Management: Excellence Behind a BuzzwordAgileLAB
Laimonas Lileika will encourage you to unleash your Project Management creativity by combining Agile and Waterfall paradigms.
This speech is for you if you are interested in:
- Importance of Context in Project Management;
- Most frequent misperceptions about Agile and Waterfall models;
- Pragmatic approach to project management: how to make a hybrid work in real.
SenchaCon 2016 - How to Auto Generate a Back-end in MinutesSpeedment, Inc.
Connecting your JavaScript application to a database is tedious. Back-end developers spend hours modeling the database, securing connections, writing SQL, optimizing queries, deploying to a server, and fixing bugs. In this session, you'll learn how Ext Speeder gives your front-end team a tool to automatically generate a full back-end. In minutes, a REST API between a Sencha Ext JS Grid application and a relational database is created. This will save you a huge amount of time and also minimizes the risk of human error. Application time-to-market has never been shorter.
2016 Mastering SAP Tech - 2 Speed IT and lessons from an Agile Waterfall eCom...Eneko Jon Bilbao
A recent clash of worlds occurred when a local client asked to deliver their Hybris eCommerce portal on top of their global template SAP system. The backend SAP team jogged along in the traditional waterfall pace whilst the frontend Hybris team sought to sprint along in agile fashion. This is the story of how we managed the different worlds, the skills required and the lessons learned from both teams.
This is the presentation I would have loved to see when I started using Composer with Drupal. Based on my experience working with Composer and Drupal 7 + Drupal 8.
Learn about the basics working with the Dependency Management for PHP: Composer. Dicover the commands, files (composer.lock and composer.json), the pros but also the cons of using the tool.
This was presented in October 2016 in Cebu for Cebu Drupal Meetups, and Drupalcamp Japan 2017 in Tokyo in January 2017.
Management of Distributed TransactionsAnkita Dubey
Distributed Database System
A distributed database system consists of loosely coupled sites that share no physical component
Database systems that run on each site are independent of each other
Transactions may access data at one or more sites
The management of distributed transactions require dealing with several problems which are strictly interconnected, like-
Reliability
Concurrency control
Efficient utilization of the resources of the whole system.
Agile is gathering momentum but its not easy to switch to Agile especially in complex environments like banking or multinationals. Many companies can’t refuse Waterfall but understand the value of Agile and want to start applying it. How to combine Waterfall and Agile in one project, do it effectively and get value? In every standard Waterfall phase from initiation till closure Agile is able to help Project manager, team and stakeholders be more effective, adaptive, meet end user expectations better and have a fun. There are cases from CISCO Systems, NASA, US health care program to learn from.
I want to demonstrate that it is possible and often necessary to combine both Waterfall and Agile in one project. We will review challenges of big complex environments that have absorbed Waterfall and have strict procedures and guidelines but are willing to gradually move to Agile.
WebSocket MicroService vs. REST MicroserviceRick Hightower
Comparing the speed of RPC calls over WebScoket Microservices versus REST based microservices. Using wrk, QBit, and examples in Java we show how much faster WebSocket is for doing RPC service calls.
SpringOne Platform 2017
Stéphane Maldini, Pivotal; Simon Basle, Pivotal
"In 2016, Project Reactor was the foundation before Spring Reactive story, in particular with Reactor Core 3.0 fueling our initial Spring Framework 5 development.
2017 and 2018 are the years Project Reactor empowers the final Spring Framework 5 GA and an entire ecosystem, thus including further refinement, feedbacks and incredible new features. In fact, the new Reactor Core 3.1 and Reactor Netty 0.7 are the very major versions used by the like of Spring Boot 2.0, and they have dramatically consolidated around a simple but yet coherent API.
Discover those changes and the new Reactor capabilities including support for Reactive AOP, Observability, Tracing, Error Strategies for long-running streams, new Netty driver, improved test support, community driven initiatives and much more
Finally, the first java framework & ecosystem gets the reactive library it needs !"
20160609 nike techtalks reactive applications tools of the tradeshinolajla
An update to my talk about concurrency abstractions, including event loops (node.js and Vert.x), CSP (Go, Clojure), Futures, CPS/Dataflow (RxJava) and Actors (Erlang, Akka)
DevFest Belgium 2016.
Overview on some of the reactive frameworks for Android and Java (RxJava 1.x/2.x, Reactor, Akka, Agera). Examples, comparison and interoperability.
Reactive Programming, Traits and Principles. What is Reactive, where does it come from, and what is it good for? How does it differ from event driven programming? It only functional?
Performance measurement methodology — Maksym Pugach | Elixir Evening Club 3Elixir Club
Доповідь Максима Пугача, Team Lead/Software Engineer at LITSLINK, на Elixir Evening Club 3, Kyiv, 13.12.2018
Наступна конференція - http://www.elixirkyiv.club/
A boss of mine once told me "Just see, my poorly written Vert.x app outperforms my poorly written Elixir app". Now it is time to take up the gauntlet.
Cлідкуйте за нами у соцмережах @ElixirClubUA та #ElixirClubUA
Анонси та матеріали конференцій - https://www.fb.me/ElixirClubUA
Новини - https://twitter.com/ElixirClubUA
Фото та невимушена атмосфера - https://www.instagram.com/ElixirClubUA
*Канал організаторів мітапа - https://t.me/incredevly
Reactive Qt - Ivan Čukić (Qt World Summit 2015)Ivan Čukić
Reactive programming is an emerging discipline which achieves concurrency using events-based programming. Today, It is mostly used for writing very scalable web services that can achieve high concurrency levels even on a single thread.
The concept is simple - make a system that is fully event-based, and look at events not as isolated instances, but as streams. When we have streams, we can manipulate them as if they were simple ranges. We can filter them, modify them, combine multiple streams into one etc.
Reactive programming is not only applicable to the web services, it can be used in any event-based environment. In our case, in normal Qt applications, to enrich the power of signals and slots.
[...]
Reactors.io fuses the best parts of functional reactive programming and the Actor Model. Reactors are the basic units of concurrent execution which can perform computations as well. They allow you to create concurrent and distributed applications more easily, by providing correct, robust and composable programming abstractions.
One of the most common performance issues in serverless architectures is elevated latencies from external services, such as DynamoDB, ElasticSearch or Stripe.
In this webinar, we will show you how to quickly identify and debug these problems, and some best practices for dealing with poor performing 3rd party services.
Beyond fault tolerance with actor programming - Fabio Tiriticco - Codemotion ...Codemotion
The Actor model has been around for a while, but only the Reactive revolution is bringing it to trend. Find out how your application can benefit from Actors to achieve Resilience - the ability to spring back into shape from a failure state. Akka is a toolkit that brings Actors to the JVM - think Java or Scala - and that leverages on them to help you build concurrent, distributed and resilient applications.
Beyond Fault Tolerance with Actor ProgrammingFabio Tiriticco
Actor Programming is a software building approach that lets you can go beyond fault tolerance and achieve Resilience, which is the capacity of a system to self-heal and spring back into a fresh shape. First I'll introduce the difference between Reactive Programming and Reactive Systems, and then we'll go over a couple of implementation examples using Scala and Akka.
The coupled GitHub repository with the code is here: https://github.com/ticofab/ActorDemo
Reactive Card Magic: Understanding Spring WebFlux and Project ReactorVMware Tanzu
Spring Framework 5.0 and Spring Boot 2.0 contain groundbreaking technologies known as reactive streams, which enable applications to utilize computing resources efficiently.
In this session, James Weaver will discuss the reactive capabilities of Spring, including WebFlux, WebClient, Project Reactor, and functional reactive programming. The session will be centered around a fun demonstration application that illustrates reactive operations in the context of manipulating playing cards.
Presenter : James Weaver, Pivotal
Similar to Reactive Java: Promises and Streams with Reakt (JavaOne Talk 2016) (20)
5. Reakt
General-purpose library for callback coordination and streams
Implements JavaScript-style promises and adapts them to a multi-threaded (MT) world
Can be used with
Reactor pattern systems,
actor systems, event bus systems,
and traditional forms of async Java programming
Lambda-expression friendly
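The "JavaScript-style promises" bullet can be made concrete with a minimal, self-contained sketch of the then/catchError shape. This is illustrative only, not Reakt's actual code; Reakt's real promises additionally handle multi-threaded completion, which this sketch omits.

```java
// Minimal sketch of the JavaScript-style then/catchError shape a
// Reakt-like promise exposes. Illustrative only, not Reakt's code.
import java.util.function.Consumer;

public class MiniPromise<T> {
    private Consumer<T> thenHandler = value -> { };
    private Consumer<Throwable> errorHandler = error -> { };

    public MiniPromise<T> then(Consumer<T> handler) {
        this.thenHandler = handler;
        return this;  // fluent, lambda-friendly chaining
    }

    public MiniPromise<T> catchError(Consumer<Throwable> handler) {
        this.errorHandler = handler;
        return this;
    }

    // The async service invokes one of these, possibly from another thread.
    public void resolve(T value) { thenHandler.accept(value); }
    public void reject(Throwable error) { errorHandler.accept(error); }

    public static void main(String[] args) {
        MiniPromise<String> promise = new MiniPromise<>();
        promise.then(name -> System.out.println("Got " + name))
               .catchError(err -> System.err.println("Failed: " + err));
        promise.resolve("Rick");  // prints "Got Rick"
    }
}
```

The handlers are plain `Consumer` lambdas, which is what makes this style a natural fit for Java 8.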
8. Goals
Small and focused
Easy to use
Lambda friendly
Scalar async calls and streams
Fluent
Evolve it (no change for the sake of change, but get it right)
Semantic versioning
9. More Project Goals
Should work with
the actor model,
the multi-threaded (MT) model,
and the Reactor pattern (event loop)
Supports
async call coordination,
complex call coordination,
and streams
10. Problem: Async Coordination is Tricky
Results can comeback on foreign threads
Call several async services, one fails or times out? Now what?
Need to combine results of several calls before you can respond to the caller
What if a downstream service goes down?
What if your async handler throws an exception?
You can’t use a blocking future, or you will tie up the event handling thread
How do you test async services using a standard unit test framework?
11. Status
We use it often. We like it.
Integration libs for Guava, Vert.x, Netty (in the works), Kinesis, Cassandra,
DynamoDB, etc.
Wrote async call handling for Lokate and Elekt
QBit now uses it instead of its own callbacks and async coordination
Version 3.1 has major stability improvements (a lot of work was done)
Version 4 - Major update to simplify and refine interfaces (after JavaOne)
Won’t change how it is used, but will clean up the interfaces
14. Implementations of Reactor Pattern
Browser DOM model and client side JavaScript
AWT/GTK+/Windows C API
Twisted - Python
Akka's I/O Layer Architecture
Node.js - JavaScript
Vert.x - (Netty)
Multi-Reactor pattern
Spring 5 Reactor
15. Our experience with Async, Reactor Pattern, and Actors
Worked with Actor system / high-speed messaging / PoCs (2011)
Used Vert.x to handle a large amount of traffic on fewer resources (2012)
Wrote QBit to batch stream service calls to optimize thread hand-off
and IO throughput (2013)
Needed high-speed call coordination for OAuth rate limiter (2014)
fronting many backend services
Worked on the QBit Reactor; the interface was better than before
but still too complicated
Worked on many microservices in 12 factor cloud env - lots of async
call coordination and circuit breaker - lots of retries (2015)
16. Why async, responsive and reactive?
Reactive Manifesto - what is it and why it matters
Reactor Pattern - the most common form of reactive programming; adopts
Promises
Microservices
Async programming preferred for resiliency
Avoiding cascading failures (“synchronous calls considered harmful”)
Async programming is key: how do you manage async call coordination?
18. Reactor Pattern
Reactor pattern:
event-driven,
handlers register for events,
events can come from multiple sources,
single-threaded system - for handling events
handles multiple event loops
Can aggregate events from other IO threads
What is the most popular Reactor Pattern ecosystem? The oldest?
19. Promise from most popular Reactor system
AHA : This looks nice and makes sense
Client-side JavaScript: the most popular reactor pattern of all time
What does JavaScript use to simplify async callback coordination?
Promises!
Node.js
Most popular server-side reactor pattern, and growing
What does JavaScript use to simplify async callback coordination?
Promises!
Reakt was born!
20. AHA Moment!
Wrote client libs in Java with Reakt Promises and in ES6
(JavaScript) with Promises
Code looked very similar
Semantics were same
Slight syntactic differences
“Wow! That is clean.”
Hard to look at the old way
22. Reakt Concepts
Promise: Handler for registering events from an async call
Callback: Handler for resolving responses to an async call (scalar async result) / Mostly
internal
Stream: Like a Callback but has N numbers or results (stream of results)
Breaker: Async circuit breakers
Expected: Results that could be missing
Reactor: Replays callbacks on Actor, Verticle, or event handler thread (event loop);
repeating tasks, delayed tasks
Result: Async result, success, failure, result from a Callback or Stream
24. Promise Concepts
Like ES6 promises, a Reakt promise can be:
Completed States:
Resolved: callback/action relating to the promise succeeded
Rejected: callback/action relating to the promise failed
When a promise has been resolved or rejected it is marked completed
Completed: callback/action has been fulfilled/resolved or rejected
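The state machine above can be sketched in plain Java (a toy illustration of the rules, not Reakt's actual implementation): a promise starts pending, can resolve or reject exactly once, and is completed thereafter.

```java
// Toy sketch of the promise state machine (illustrative, not Reakt's code):
// a promise resolves or rejects at most once, then stays completed.
public class PromiseState {
    enum State { PENDING, RESOLVED, REJECTED }

    private State state = State.PENDING;

    // Resolve only if still pending; later transitions are ignored.
    public synchronized boolean resolve() {
        if (state != State.PENDING) return false;
        state = State.RESOLVED;
        return true;
    }

    public synchronized boolean reject() {
        if (state != State.PENDING) return false;
        state = State.REJECTED;
        return true;
    }

    // A promise is completed once it has been resolved or rejected.
    public synchronized boolean isCompleted() {
        return state != State.PENDING;
    }

    public static void main(String[] args) {
        PromiseState promise = new PromiseState();
        System.out.println(promise.isCompleted()); // false while pending
        promise.resolve();
        System.out.println(promise.isCompleted()); // true once resolved
        System.out.println(promise.reject());      // false: cannot switch to rejected
    }
}
```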
26. Special concerns with Java MT
JavaScript is single-threaded - Java is not.
Three types of Reakt promises:
Callback promises (async) (Promise)
Blocking promises (for unit testing and legacy integration)
(BlockingPromise)
Replay promises (ReplayPromise)
allow promises to be handled on the same thread as caller
Works with Vert.x verticles, QBit service actors, other actors and even bus reactor (Netty)
28. Handler methods
then() - handle the result of an async call
thenExpect() - handle async calls whose result could be null (returns Expected)
thenSafe() - like then but handles exceptions thrown from the handler
thenSafeExpect() - same as thenSafe but the result could be null
catchError() - handles an exception
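A minimal sketch of the fluent handler style above, assuming nothing about Reakt's real Promise interface: a toy promise that registers then and catchError handlers and triggers one of them when the async result arrives.

```java
import java.util.function.Consumer;

// Toy promise (not Reakt's Promise interface): shows how then() and
// catchError() handlers are registered fluently and later triggered
// by an async result arriving.
public class TinyPromise<T> {
    private Consumer<T> thenHandler = value -> { };
    private Consumer<Throwable> errorHandler = error -> { };

    public TinyPromise<T> then(Consumer<T> handler) {
        this.thenHandler = handler;
        return this; // fluent
    }

    public TinyPromise<T> catchError(Consumer<Throwable> handler) {
        this.errorHandler = handler;
        return this;
    }

    // Called by the async service when it finishes.
    public void resolve(T value) { thenHandler.accept(value); }
    public void reject(Throwable error) { errorHandler.accept(error); }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        TinyPromise<String> promise = new TinyPromise<String>()
                .then(result -> log.append("got ").append(result))
                .catchError(error -> log.append("failed: ").append(error.getMessage()));
        promise.resolve("employee-42"); // simulate the async call returning
        System.out.println(log);
    }
}
```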
29. Promises.all
Promises.all(promises)
You create a promise whose then or catchError triggers (it resolves) when all
the promises passed in async return (all resolve)
If any promise fails, the all promise fails
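The all semantics can be illustrated with the JDK's own CompletableFuture (an analogy only, not Reakt's Promises.all): the combined future resolves only when every child resolves, and fails if any child fails.

```java
import java.util.concurrent.CompletableFuture;

// JDK analogy for "all" semantics: allOf completes only when every
// child future completes; then we read the individual results.
public class AllExample {
    public static String combine() {
        CompletableFuture<String> serviceA = CompletableFuture.supplyAsync(() -> "A");
        CompletableFuture<String> serviceB = CompletableFuture.supplyAsync(() -> "B");

        // Resolves when both resolve; join() never blocks here because
        // allOf guarantees both children are already done.
        return CompletableFuture.allOf(serviceA, serviceB)
                .thenApply(ignored -> serviceA.join() + serviceB.join())
                .join();
    }

    public static void main(String[] args) {
        System.out.println(combine()); // prints "AB" once both async calls return
    }
}
```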
36. Reactor
Manages callbacks (ReplayPromises) that execute in caller's thread (thread
safe, async callbacks)
Promise handlers that are triggered in caller’s thread
Timeouts for async calls
Manages tasks that run in the caller's thread
Repeating tasks that run in a caller's thread
one shot timed tasks that run in the caller's thread
Adapts to event loop, Verticle, Actor
37. Notable Reactor Methods
addRepeatingTask(interval, runnable) add a task that repeats every interval
runTaskAfter(afterInterval, runnable) run a task after an interval expires
deferRun(runnable) run a task on this thread as soon as you can
all(...) creates an all promise; resolves with Reactor (you can pass a timeout)
any(...) creates an any promise with Reactor (you can pass a timeout)
promise() creates a ReplayPromise so the Reactor manages the promise (you can
pass a timeout)
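A toy, single-threaded reactor sketch of the methods above (our own illustration, not Reakt's Reactor): deferRun queues one-shot work and addRepeatingTask registers a task run on every process() tick. A real reactor would also track intervals against a clock; this sketch omits timing for brevity.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy single-threaded reactor (illustrative, not Reakt's Reactor):
// deferred work runs once, repeating tasks run on every process() tick.
public class ToyReactor {
    private final Deque<Runnable> deferred = new ArrayDeque<>();
    private final List<Runnable> repeating = new ArrayList<>();

    public void deferRun(Runnable task) { deferred.add(task); }
    public void addRepeatingTask(Runnable task) { repeating.add(task); }

    // Called from the owning thread (actor, verticle, event loop).
    public void process() {
        while (!deferred.isEmpty()) deferred.poll().run();
        repeating.forEach(Runnable::run);
    }

    public static void main(String[] args) {
        ToyReactor reactor = new ToyReactor();
        StringBuilder log = new StringBuilder();
        reactor.deferRun(() -> log.append("deferred;"));
        reactor.addRepeatingTask(() -> log.append("tick;"));
        reactor.process();
        reactor.process();
        System.out.println(log); // deferred runs once, the repeating task each tick
    }
}
```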
41. Circuit Breaker
Breaker is short for circuit breaker
Wraps access to a service, resource, repo, connection, queue, etc.
Tracks errors which can trigger the breaker to open
Breaker is just an interface / contract
Implementers can be creative in what is considered an open or broken breaker
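A minimal breaker sketch using only the JDK (illustrating the idea, not Reakt's Breaker interface; the threshold and method names here are ours): wrap a resource, count errors, and report the circuit as open once a threshold is exceeded.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy circuit breaker (illustrative, not Reakt's Breaker): wraps a
// resource, tracks errors, and opens once errors exceed a threshold.
public class ToyBreaker<T> {
    private final T service;
    private final int maxErrors;
    private final AtomicInteger errors = new AtomicInteger();

    public ToyBreaker(T service, int maxErrors) {
        this.service = service;
        this.maxErrors = maxErrors;
    }

    public void recordError() { errors.incrementAndGet(); }

    public boolean isBroken() { return errors.get() > maxErrors; }

    // Only hand out the wrapped service while the breaker is closed.
    public T service() {
        if (isBroken()) throw new IllegalStateException("circuit open");
        return service;
    }

    public static void main(String[] args) {
        ToyBreaker<String> breaker = new ToyBreaker<>("db-connection", 2);
        System.out.println(breaker.isBroken()); // false: no errors yet
        breaker.recordError();
        breaker.recordError();
        breaker.recordError();
        System.out.println(breaker.isBroken()); // true: threshold exceeded
    }
}
```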
42. Come to the lab tomorrow to see a full use case using async circuit breakers
48. Stream
Handler for N results
While a Callback or Promise is for one Result, a Stream is for N results
Callback/Promise for Scalar returns
Stream is for many returns
Similar to Java 9 Flow, RxJava or Reactive Streams
Java 8/9 lambda expression friendly
(Fuller example as extra material: depending on time, go to the end of the
slide deck or just cover the next two slides)
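The scalar-versus-stream distinction can be sketched with a toy stream (not Reakt's Stream API): a Callback fires once, while a stream's onNext-style handler fires once per result.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy stream (illustrative, not Reakt's Stream): registered handlers
// are invoked once per published result, i.e., N times for N results.
public class ToyStream<T> {
    private final List<Consumer<T>> handlers = new ArrayList<>();

    public void onNext(Consumer<T> handler) { handlers.add(handler); }

    // Called by the producer for each result in the stream.
    public void publish(T item) { handlers.forEach(h -> h.accept(item)); }

    public static void main(String[] args) {
        ToyStream<String> userStream = new ToyStream<>();
        List<String> seen = new ArrayList<>();
        userStream.onNext(seen::add);
        userStream.publish("alice"); // N results arrive over time...
        userStream.publish("bob");
        System.out.println(seen);    // the handler ran once per result
    }
}
```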
52. Example Recommendation Service
Recommendation service
Watch what user does, then suggest recommended items
Recommendation service runs many recommendation engines per microservice
53. Worked example will show
User recommendation service
Delay giving recommendations to a user until that user is loaded from a backend service store
Users are streamed in (uses streams)
Stream comes in on a foreign thread and we use the reactor to move the handler to the service actor thread
If the user is already in the service actor, then recommend a list of recommendations right away
If the user is not in the system, batch load the user from the backend service store
Requests are batched to reduce IO overhead
Users can come from many sources in the service store (cache, disk cache, DB), and are delivered as soon as found in a continuous stream of user lists
55. Every 50 ms, check to see if userIdsToLoad is greater than 0; if so, request those users now.
When a user is not found, loadUserFromStoreService is called. If there are 100 outstanding requests, then load those users now.
Listen to the userStoreService’s userStream
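The batching rule above can be sketched in plain Java (illustrative only; this is not the lab code): user-id load requests accumulate, and a flush is triggered either when 100 requests are outstanding or by the periodic 50 ms timer tick.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the batching rule (not the lab code): flush when the
// batch hits 100 outstanding requests, or on the periodic timer tick.
public class UserLoadBatcher {
    static final int MAX_OUTSTANDING = 100;

    private final List<String> userIdsToLoad = new ArrayList<>();
    private final List<List<String>> flushedBatches = new ArrayList<>();

    public void requestUser(String userId) {
        userIdsToLoad.add(userId);
        if (userIdsToLoad.size() >= MAX_OUTSTANDING) flush(); // size-based flush
    }

    // Invoked by the reactor's repeating task every 50 ms.
    public void timerTick() {
        if (!userIdsToLoad.isEmpty()) flush();
    }

    private void flush() {
        flushedBatches.add(new ArrayList<>(userIdsToLoad));
        userIdsToLoad.clear(); // one IO round-trip per batch instead of per user
    }

    public List<List<String>> flushedBatches() { return flushedBatches; }

    public static void main(String[] args) {
        UserLoadBatcher batcher = new UserLoadBatcher();
        for (int i = 0; i < 100; i++) batcher.requestUser("user-" + i);
        batcher.requestUser("late-user");
        batcher.timerTick(); // the 50 ms tick picks up the straggler
        System.out.println(batcher.flushedBatches().size()); // 2 batches flushed
    }
}
```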
56. Process the stream result.
Populate the user map (or use a simple cache with an expiry and a max number of users allowed to be in the system).
Since the user is now loaded, see if there are outstanding calls (promises) and resolve those calls.
58. Next steps
1) Get rid of invoke and detect when a frame drops (let Geoff explain this one)
2) Simplify the interface for Promise/Callback in Reakt 4.0
a) We use semantic versioning; even from version to version, the interfaces so far are fairly compatible for 97% of use cases
3) More Reakt libs
4) Refine the streaming interface
5) Add more support for Vert.x Reakt
a) Support streaming via Reakt
60. Conclusion
Reakt provides an easy-to-use lib for handling async callbacks
It uses Promise concepts from ES6, which seem well thought out and natural
We worked with many async libs and wrote a few ourselves, and really like the ES6 terminology and ease of use
Since Java is MT and JavaScript is not, there are some differences
Java 8/9 lambda expression friendly
Async call coordination can be difficult, but all promises, any promises, and the reactor with replay promises and timeouts make it easier
Reakt is evolving and we welcome feedback and contributions
63. Author Geoff Chandler
Senior Director at a large Media Company.
Works with Node.js, Cassandra, Mesos, QBit, EC2, and reactive programming. Major
Contributor to QBit, Spring Boot, Reakt, and more.
Creator of Lokate, ddp-client-java, guicefx, and various devops tools for gradle.
64. Author Bio Rick Hightower
Rick frequently writes about and develops high-speed microservices. He focuses on
streaming, monitoring, alerting, and deploying microservices. He was the founding
developer of QBit and Reakt as well as Boon.
66. Example
Taken from a real-world scenario which gave birth to us using Vert.x, and later creating
QBit and Reakt
Example uses Streams and Promises
This is not the actual code from the actual project (this is just an example)
74. Streams vs Service Calls
Microservices / RESTful services / SOA services
REST / HTTP calls common denominator
Even messaging can be request/reply
Streams vs. Service Calls
Level of abstraction differences,
Calls can be streamed, Results can be streamed
What level of abstraction fits the problem you are trying to solve
Are streams an implementation detail or a direct concept?
75. Related projects
QBit Java Microservice (built on top of Vert.x for IO)
Using Reakt reactor to manage callbacks,
REST and WebSocket services (WebSocket RPC) use Reakt Promises and Reakt Callbacks
Lokate - service discovery lib for DNS-A, DNS-SRV, Consul, Mesos,
Marathon
Uses Reakt invokeable promises (Vert.x for IO)
Elekt - leadership lib that uses tools like Consul to do leadership election
(uses promises)
Reakt-Guava - Reakt bridge to Guava listenable futures
76. Promise
Promises can be used for all manner of async programming
not just the Reactor Pattern
You can use it with the standard Java lib
Bridges for Guava, Vert.x and Cassandra
QBit uses it (Service Actor/Microservices),
Lokate (Discovery)
77. Other Async Models
Messaging (Golang, Erlang, RabbitMQ, JMS, Kafka)
Actors (Erlang, Akka)
Active Objects (Akka typed actors, DCOM)
Common problems when dealing with handling calls to
services:
Handling the call
78. Reactor works with
Works with Reactor architecture (Vert.x, Spring Reactor)
Works with the Actor model and Active Objects (Akka actors, Akka typed actors,
QBit, etc.)
ReplayPromises need a Reactor
Reactor is an interface
Replace it with one optimized for your environment
Or manage ReplayPromises and tasks with something else like a Reactor
These slides are from a talk we gave at JavaOne 2016 for work we did in the past, and open-source projects that we are working on.
10 seconds
10 seconds
Give example of not small
Looked at RXJava, Reactive Streams, Actors
Used Vert.x extensively
Not tied to QBit.
The headline is the slide, briefly cover the concepts.
Results can come back on foreign threads. If you need to call several services, what if one of them fails or times out? What if you need to combine the results of several calls before you can respond to the caller? What if a downstream service goes down? What if your async handler throws an exception? If you use a blocking future, you can tie up the event handling thread. How do you test async services with a unit test framework?
The best way to improve a lib is to use it.
Many methods were added to simplify daily dev life.
Scratching our own itches.
Each slide in this section is to be covered quickly. No more than 30 seconds per slide.
Geoff tells anecdotes about experience with ES6. Breaking a large monolithic Node.js app into microservices calling into services written in Java/Vert.x.
Picked Vert.x to handle large amount of traffic on fewer boxes for in-memory service
Vert.x + early QBit handled more load on 13 boxes than 2,000 boxes did for similar system
Created stream/batch, service actor system to simplify Vert.x dev (circa 2012/2013)
maximize throughput by minimizing IO and thread hand-off (QBit Java Microservices Lib)
Trial by fire, Callback coordination rough
Needed high-speed call coordination for OAuth rate limiter
fronting many backend services
Worked on QBit Reactor but interface was better than before but still too complicated
2016 - Worked on Vert.x / QBit project / Node.js project
Started using Node.js / JavaScript promises for client libs
Nice abstraction for dealing with async service calls
JS Promises were just right
We have been doing async callback coordination for some time. Using messaging, and handler ids, and streams, and reactor pattern, and all forms of async programming models.
It is clunky. We tend to think about things as Future. We tend to think in a MT world not a reactor, async, streaming world. Promises are a nice abstraction to not just reactor pattern but many forms of Java async programming. Having written three or four async callback handlers systems and not really liking any of them per se, and seeing the similarities to JS Promise but also seeing the eloquence of async promises, it seemed to make sense to adopt the terminology of JavaScript promises and the simplicity of it.
The point of this slide and the next is to make the claim that streaming is not the only way to do reactive programming.
The most common form of reactive programming is the reactor pattern / event loop, i.e., Browser DOM, Node.JS, Twisted and Vert.x.
Streaming fits many problem domains but so do service calls which are also more common.
Also streaming can be an implementation detail (as you can stream calls and stream responses).
If streaming is not the only way to handle async programming, and not even the most common, how do people do service-style programming and what tools do they use?
Node.js and Browser DOM JS are the two most common forms of async programming and they use Promises.
Akka has promises, Netty has promises, Vert.x has async result which is similar and QBit had something like promises before Reakt.
Reakt attempts to be separate from any particular implementation to focus on being a good promise lib for any sort of async programming in Java.
From Microservices paper by Martin Fowler et al: “Synchronous calls considered harmful: Any time you have a number of synchronous calls between services you will encounter the multiplicative effect of downtime. Simply, this is when the downtime of your system becomes the product of the downtimes of the individual components. ...” http://martinfowler.com/articles/microservices.html
“The reactor design pattern is an event handling pattern for handling service requests delivered concurrently to a service handler by one or more inputs. The service handler then demultiplexes the incoming requests and dispatches them synchronously to the associated request handlers.” https://en.wikipedia.org/wiki/Reactor_pattern
“The Reactor pattern has been introduced in [Schmidt95] as a general architecture for event-driven systems. It explains how to register handlers for particular event types, and how to activate handlers when events occur, even when events come from multiple sources, in a single-threaded environment. In other words, the reactor allows for the combination of multiple event-loops, without introducing additional threads.” http://www.cs.vu.nl/~eliens/online/oo/I/2/reactor.html
A good anecdote would be good here.
Having multiple projects open.
Loading the wrong one. Thinking it is the Java client when it was the JavaScript client.
Wrote client libs in Java with Reakt Promises and ES6 (JavaScript) with Promises
Code looked very similar
Semantics were same
Slight syntactic differences
We were both “Wow!” That is clean
It became hard to look at old way
10 seconds.. We cover this again in detail
This has been adapted from this article on ES6 promises. A promise can be:
fulfilled The callback/action relating to the promise succeeded
rejected The callback/action relating to the promise failed
pending The callback/action has not been fulfilled or rejected yet
completed The callback/action has been fulfilled/resolved or rejected
Java is not single threaded, meaning that two bits of code can run at the same time, so the design of this promise and streaming library takes that into account.
JavaScript is single-threaded - You can make call, then register for callbacks and that is ok because call won’t be made until event loop moves on. Not so in Java! Java is MT to the core.
There are three types of promises:
Callback promises
Blocking promises (for testing and legacy integration)
Replay promises (allow promises to be handled on the same thread as caller)
Replay promises are the most like their JS cousins. Replay promises are usually managed by the Reakt Reactor and support environments like Vert.x and QBit. See the wiki for more details on Replay promises.
It is common to make async calls to store data in a NoSQL store or to call a remote REST interface or deal with a distributed cache or queue. Also Java is strongly typed so the library that mimics JS promises is going to look a bit different. We tried to use similar terminology where it makes sense.
Events and Streams are great for things that can happen multiple times on the same object: keyup, touchstart, or even a user action stream from Kafka, etc.
With those events you don't really care about what happened before when you attached the listener.
But often, when dealing with services and data repositories, you want to handle a response with a specific next action, and a different action if there was an error or timeout in the response. You essentially want to call and handle a response asynchronously, and that is what promises allow.
This is not our first time at bat with Promises. QBit has had Promises for a few years now; we just called them CallbackBuilders instead. We wanted to use more standard terminology, and to use the same terminology and modeling on projects that do not use QBit, like Conekt, Vert.x, RxJava, and Reactive Streams.
At their most basic level, promises are like event listeners except:
A promise can only succeed or fail once. A promise cannot succeed or fail twice, neither can it switch from success to failure. Once it enters its completed state, then it is done.
thenExpect and thenSafeExpect
The handlers thenExpect and thenSafeExpect return a Reakt Expected instance. Expected is like Option in Java 8: it has methods like map, filter, etc., and adds the methods ifEmpty and isEmpty. This gives a nice fluent API when you don't know whether a successful return is null or not.
then and thenSafe
The methods then and thenSafe async return the result not wrapped in an Expected object, i.e., the raw result. Use then and thenSafe when you know the async return will not be null. Use thenExpect and thenSafeExpect if the value could be null or if you want to map or filter the result.
thenMap
Use thenMap when a promise returns for example a List<Employee>, but you only want the first Employee. See Promise.thenMap for more details.
safe thenSafe thenSafeExpect
Unless you are using a reactor, custom promises, or blocking promises, the then* handlers will typically run in a foreign thread, and if they throw an exception it could get logged in an odd way, depending on the library. If you think your handler could throw an exception (not the service you are calling, but your handler), then you might want to use thenSafe or thenSafeExpect. These wrap your async then* handler code in a try/catch and pass any thrown exception as a ThenHandlerException to catchError. They ensure that exceptions thrown in your handler don't get dropped by the system you are using. If your code ever hangs when making an async call, try thenSafe or thenSafeExpect; they help you debug async problems.
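What thenSafe adds can be sketched like this (a toy, not Reakt's implementation): the handler runs inside a try/catch so an exception thrown inside your handler is routed to catchError instead of being silently dropped.

```java
import java.util.function.Consumer;

// Toy sketch of the thenSafe idea (not Reakt's code): wrap the then
// handler in try/catch and route handler bugs to the error handler.
public class SafeHandlerSketch {
    public static <T> void runSafely(T result,
                                     Consumer<T> thenHandler,
                                     Consumer<Throwable> errorHandler) {
        try {
            thenHandler.accept(result);
        } catch (Throwable handlerBug) {
            errorHandler.accept(handlerBug); // the handler bug surfaces here
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        runSafely("ok",
                value -> { throw new RuntimeException("bug in handler"); },
                error -> log.append("caught: ").append(error.getMessage()));
        System.out.println(log); // the buggy handler no longer hangs the call
    }
}
```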
We have "all" functionality with promises: you can create a promise that waits (async triggers) on all the promises passed to it to be async returned.
The Promise.all(list or array) method returns a promise that resolves when all of the promises have resolved, or rejects with the reason of the first passed promise that rejects.
This example is from the lab. It saves a Todo item into two tables into cassandra.
If either promise fails, the all promise fails.
If they both succeed, the all promise succeeds.
If every promise you pass to an all promise is invokeable, then the all promise is invokeable as well, and calling invoke on it will invoke all of the child promises.
The Promise.any(promises...) method returns a promise that resolves as soon as one of the promises resolves. It will ignore errors from any of the other promises as long as they don't all error.
It is similar to, but different from, Promise.race in JavaScript. Many JS libs have an any equivalent. Promise.race does not seem as useful as Promise.any, but we might add Promise.race as well.
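The any semantics can be illustrated with the JDK's CompletableFuture.anyOf (an analogy, not Reakt's Promise.any). Note that anyOf is closer to JS Promise.race, since it also completes on the first failure, whereas Reakt's any tolerates early errors.

```java
import java.util.concurrent.CompletableFuture;

// JDK analogy for "any" semantics: anyOf completes as soon as the
// first child future completes (here, the already-completed one).
public class AnyExample {
    public static Object first() {
        CompletableFuture<String> fast = CompletableFuture.completedFuture("fast");
        CompletableFuture<String> slow = new CompletableFuture<>(); // never completes

        return CompletableFuture.anyOf(fast, slow).join();
    }

    public static void main(String[] args) {
        System.out.println(first()); // resolves with the first completed child
    }
}
```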
This is an example of adapting a Guava style future handling to Reakt so you can use fluent promises with Cassandra.
Guava gets used by many libraries for its async support. Many NoSQL drivers use Guava, e.g., Cassandra.
Guava is JDK 1.6 backwards compatible.
Reakt provides composable promises that support lambda expressions, and a fluent API.
This bridge allows you to use Reakt's promises, reactive streams and callbacks to have a more modern Java experience with libs like Cassandra and other libs that use Guava.
The above shows adapting a Vert.x async handler to a Reakt promise.
We used this quite a bit.
Reactor is similar to QBit Reactor (but cleaner, 2nd attempt) or Vert.x Verticle context.
The Reactor is a class that enables
callbacks that execute in caller's thread (thread safe, async callbacks)
tasks that run in the caller's thread
repeating tasks that run in a caller's thread
one shot after time period tasks that run in the caller's thread
The Reakt Reactor is a lot like the QBit Reactor or the Vert.x context. It allows you to enable tasks that run in that actor's or verticle's thread.
The Reakt Reactor creates replay promises. Replay promises execute in the same thread as the caller; they are "replayed" in the caller's thread.
QBit implements a service actor model (similar to Akka type actors), and Vert.x implements a Reactor model (like Node.js).
QBit, for example, ensures that all method calls are queued and handled by the service/actor thread. You can also use the QBit Reactor to ensure that callbacks happen on the same thread as the caller. This allows your callbacks to be thread safe. The Reakt Reactor is a drop-in replacement for the QBit Reactor, except that the Reakt Reactor uses Reakt Promises, async results and Callbacks. QBit 2 and Conekt will use Reakt's API and not their own.
You can use the Reakt Reactor with RxJava, Vert.x, or Spring Reactor and other similarly minded projects to manage repeating tasks, one-shot tasks, and callbacks on the same thread as the caller (which you do not always need to do).
The Reactor is just an interface so you could replace it with an optimized version.
Reactor Methods of note
Here is a high level list of Reactor methods.
addRepeatingTask(interval, runnable) add a task that repeats every interval
runTaskAfter(afterInterval, runnable) run a task after an interval expires
deferRun(Runnable runnable) run a task on this thread as soon as you can
static reactor(...) creates a reactor
all(...) creates a promise that does not async return until all the promises async return (you can pass a timeout)
any(...) creates a promise that does not async return until one of the promises async returns (you can pass a timeout)
process() processes all tasks and callbacks
Here is the Reactor interface that can be implemented by anyone.
Promise invokeWithReactor (shorthand) runs as replay on the reactor
This might be a good slide to actually pull up the invokeWithReactor code and show what it is doing.
This slide might make more sense now.
A Breaker is short for Circuit Breaker. The idea behind the breaker is to wrap access to a service so that errors can be tracked and the circuit breaker can open if errors are exceeded. Like all things in Reakt there is an interface for Breaker that defines a contract but other implementations can get creative on how they detect the Breaker has been thrown.
In this example we are using this with a database connect, but imagine this could be a downstream REST client or a downstream WebSocket client.
It has been our experience, especially in cloud environments, that downstream clients can be restarted; they not only have to be reconnected to, but they may have to be looked up again in DNS or some other form of client discovery. They may have moved addresses entirely.
@Override
public Promise<Boolean> connect() {
return invokablePromise(promise -> {
serviceMgmt.increment("connect.called");
discoveryService.lookupService(cassandraURI).thenSafe(cassandraUris -> {
serviceMgmt.increment("discovery.service.success");
final Builder builder = builder();
cassandraUris.forEach(cassandraURI1 -> builder.withPort(cassandraURI1.getPort())
.addContactPoints(cassandraURI1.getHost()).build());
futureToPromise(builder.build().connectAsync()) //Cassandra / Guava Reakt bridge.
.catchError(error -> promise.reject("Unable to load initial session", error))
.then(sessionToInitialize ->
buildDBIfNeeded(sessionToInitialize)
.thenSafe(session -> {
cassandraErrors.set(0);
sessionBreaker = Breaker.operational(session, 10, theSession ->
!theSession.isClosed() && cassandraErrors.incrementAndGet() > 25
);
promise.resolve(true);
})
.catchError(error ->
promise.reject(
"Unable to create or initialize session", error)
).invokeWithReactor(reactor)
).invokeWithReactor(reactor);
}).catchError(error -> serviceMgmt.increment("discovery.service.fail")).invokeWithReactor(reactor);
});
}
A BlockingPromise is very much like a Java Future. It is blocking. This is useful for unit testing and for legacy integration.
Promises returns a blocking promise as follows:
Blocking Promise
/**
* Create a blocking promise.
* NOTE BLOCKING PROMISES ARE FOR LEGACY INTEGRATION AND TESTING ONLY!!!
* After you create a promise you register its then and catchError and then you use it to
* handle a callback.
*
* @param <T> type of result
* @return new promise
*/
static <T> Promise<T> blockingPromise() {
return new BlockingPromise<>();
}
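The same blocking idea can be sketched with the JDK only (a toy analogue, not Reakt's BlockingPromise): the test thread blocks until the async call resolves or a timeout expires.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Toy blocking promise for unit tests (illustrative, not Reakt's
// BlockingPromise): the caller blocks until resolve() fires.
public class ToyBlockingPromise<T> {
    private final CountDownLatch latch = new CountDownLatch(1);
    private final AtomicReference<T> value = new AtomicReference<>();

    public void resolve(T result) {
        value.set(result);
        latch.countDown(); // wake up the blocked test thread
    }

    // Block (with a timeout) until the async result arrives.
    public T get(long timeoutMillis) {
        try {
            if (!latch.await(timeoutMillis, TimeUnit.MILLISECONDS))
                throw new IllegalStateException("timed out");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted", e);
        }
        return value.get();
    }

    public static void main(String[] args) {
        ToyBlockingPromise<String> promise = new ToyBlockingPromise<>();
        new Thread(() -> promise.resolve("async result")).start(); // simulated service
        System.out.println(promise.get(5000));
    }
}
```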
You could show what invokeAsBlockingPromise looks like.
Show the actual code.
The second code listing shows how to use an expected as well.
Stream is a generic event handler for N results, i.e., a stream of results. This is like a type of Callback for streaming results. While a Callback can be considered for scalar results, a Stream is more appropriate for non-scalar results, i.e., Stream.onNext will get called many times.
StreamResult is the result of an async operation, with optional methods for cancel and for requesting more items.
Break it down
Recently, Rick helped develop many resilient, fault-tolerant microservices running in a Heroku clone based on Mesos, in Scala and Java. He adapted QBit, a microservice lib, so that when you drop in a service, it hooks into all of the 12-factor features that Orchard supports. QBit plugs into Orchard: port binding, DNS discovery, KPI stats, monitoring/distributed logging, the health system, etc. Rick also wrote support for API gateway services via built-in Swagger support, which allows generation of service docs as well as REST clients for Python, Ruby, Java, Scala, etc. For the JVM, you also get auto-generated WebSocket client runtime stubs which can invoke 500K method calls (request/response) over WebSocket a second (per thread). Rick also wrote the OAuth rate limiter microservice, which does OAuth rate limiting by application id and service load balancing for another online media service.
Before that, Rick wrote a 100-million-user in-memory content preference engine microservice for a large media company with a custom NoSQL service store (2014). As part of this effort he wrote a high-speed JSON REST/WebSocket framework for a reactive computing model based on Boon and Vert.x, and a disk batcher capable of writing 720 MB per second (the disk batcher was later used by Beats). Rick wrote over 150,000 lines of open source code in 2013-2016. Rick also contributed to the reference implementations of Grid Computing and enterprise caches, and was a member of several spec committees (JSR-347, JSR-107, etc.). Rick is the primary author of Boon, SlumberDB and QBit.
Rick is the author of the best-selling book Java Tools for Extreme Programming (#1 software development book on Amazon for 3 months) and other books. Rick also wrote a book on Python that covered programming and OO basics, which was used as a college text for introduction to programming and software development. Rick set up computer science programs at an elementary school and taught classes for three years.
"Rick has the distinction of writing the single most popular article/series ever published on the Java technology zone." --Jenni Aloi, IBM DeveloperWorks. Rick also wrote a book on Java Web Development, which is the number one download on TheServerSide.com, and he wrote about NoSQL and scalability on InfoQ, and was the NoSQL editor for a short period of time. Prior to becoming a consultant, Rick helped create a startup that hosted 2,000 online stores on commodity hardware.
The point of this slide and the previous is to make the claim that streaming is not the only way to do reactive programming.
Works well with Reactor architecture (Vert.x, Spring Reactor)
Works well with Actor model and Active Objects (Akka actors, Akka typed actor, QBit, etc.)
ReplayPromises need a Reactor (or something that fulfills that role)
Reactor is just an interface so you can replace it with one optimized for your environment
You can talk about changes in 4.0 and how we are simplifying the interface.