This document discusses reactive programming concepts using Java 9 and Spring Reactor. It introduces reactive streams interfaces in Java 9 like Publisher, Subscriber, and Subscription. It then covers Spring Reactor implementations of these interfaces using Mono and Flux. Code examples are provided for creating simple reactive streams and combining them using operators. The threading model and use of schedulers in Spring Reactor is also briefly explained.
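The Publisher/Subscriber/Subscription contract summarized above can be illustrated with a minimal sketch. The real Java 9 interfaces live in `java.util.concurrent.Flow`; the Python below is a hypothetical stand-in that shows the demand-driven (back pressure) handshake, not the actual API.

```python
# Minimal sketch of the Reactive Streams contract: the subscriber signals
# demand via request(n), and the publisher never emits more than requested.

class Subscription:
    def __init__(self, items, subscriber):
        self._items = iter(items)
        self._subscriber = subscriber
        self._cancelled = False

    def request(self, n):
        # Emit at most n items in response to the subscriber's demand.
        for _ in range(n):
            if self._cancelled:
                return
            try:
                self._subscriber.on_next(next(self._items))
            except StopIteration:
                self._subscriber.on_complete()
                return

    def cancel(self):
        self._cancelled = True

class ListPublisher:
    def __init__(self, items):
        self._items = items

    def subscribe(self, subscriber):
        subscriber.on_subscribe(Subscription(self._items, subscriber))

class CollectingSubscriber:
    def __init__(self, demand):
        self.received = []
        self.completed = False
        self._demand = demand

    def on_subscribe(self, subscription):
        subscription.request(self._demand)  # pull-based: ask, don't flood

    def on_next(self, item):
        self.received.append(item)

    def on_complete(self):
        self.completed = True

pub = ListPublisher([1, 2, 3, 4, 5])
sub = CollectingSubscriber(demand=3)
pub.subscribe(sub)
print(sub.received)  # only the 3 requested items are delivered
```

Reactor's `Flux` and `Mono` implement this same contract, adding the operator library and scheduler support on top.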
This document summarizes a presentation about Alpakka, a Reactive Enterprise Integration library for Java and Scala based on Reactive Streams and Akka Streams. Alpakka provides connectors to various data sources and messaging systems that allow them to be accessed and processed using Akka Streams. Examples of connectors discussed include Kafka, MQTT, JMS, Elasticsearch and various cloud platforms. The document also provides an overview of Akka Streams and how they allow building responsive, asynchronous and resilient data processing pipelines.
Choosing the right high availability strategy (MariaDB plc)
This document discusses different high availability strategies for MariaDB databases. It covers asynchronous and semi-synchronous replication, which provide redundancy and failover capabilities but can have data loss risks. Synchronous replication with Galera Cluster is also described, which guarantees no data loss but has higher latency. Other topics include terminology, data redundancy approaches, and how features can be combined for resilient configurations.
This document discusses high availability strategies for MariaDB databases. It defines high availability as a system that is continuously operational for a desirably long period of time. It then examines different levels of availability based on uptime percentages and corresponding downtime. Various high availability components are described, including data redundancy, failover/switchover solutions, and monitoring. Asynchronous and synchronous replication techniques are summarized, along with how MaxScale can implement read/write splitting and failover automation. The benefits and limitations of Galera cluster synchronous replication are also provided.
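The relationship between uptime percentages and downtime that the document describes is simple arithmetic; a quick sketch (assuming a 365-day year, with figures that are arithmetic rather than MariaDB-specific numbers):

```python
# Downtime per year implied by the common availability "nines".

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability_pct):
    return MINUTES_PER_YEAR * (100 - availability_pct) / 100

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_minutes(pct):.1f} min downtime/year")
```

For example, "three nines" (99.9%) still allows roughly 8.8 hours of downtime per year, which is why failover automation and monitoring matter alongside redundancy.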
After the first two sessions, we start to have a feeling for how to get a program to take some user input, perform calculations based on that input to generate output, and finally communicate that output back to the user. In the third session, we will concentrate a bit more on the basics of computation. We will learn how to build more complicated expressions, how to make our program choose among different alternatives (selection), and how to repeat a computation many times (iteration). Finally, we will learn how to keep code clean, reusable and maintainable by separating a particular sub-computation into a function.
The document discusses reliability guarantees in Apache Kafka. It explains that Kafka provides reliability through replication of data across multiple brokers. As long as the minimum number of in-sync replicas (ISRs) is maintained, messages will not be lost even if individual brokers fail. It also discusses best practices for producers and consumers to ensure data is not lost such as using acks=all for producers, disabling unclean leader election, committing offsets only after processing is complete, and monitoring for errors, lag and reconciliation of message counts.
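The best practices listed above map onto concrete configuration keys. The sketch below uses setting names from the Kafka documentation with illustrative values; it is a summary of the advice, not a recommended production configuration (idempotence, in particular, arrived in clients newer than this 2015 talk).

```python
# Illustrative settings corresponding to the reliability advice above.

producer_config = {
    "acks": "all",                # wait for all in-sync replicas to ack
    "retries": 2147483647,        # retry transient send failures
    "enable.idempotence": True,   # avoid duplicates on retry (newer clients)
}

broker_config = {
    "min.insync.replicas": 2,                 # minimum ISR size for acks=all
    "unclean.leader.election.enable": False,  # never elect a stale leader
    "default.replication.factor": 3,
}

# On the consumer side the key practice is procedural rather than a flag:
consumer_practice = (
    "disable auto-commit and commit offsets only after processing completes"
)
```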
Apache Kafka Reliability Guarantees, StrataHadoop NYC 2015 (Jeff Holoman)
Kafka provides reliability guarantees through replication and configuration settings. It replicates data across multiple brokers to protect against failures. Producers can ensure data reaches the brokers through configuration of request.required.acks. Consumers can commit offsets to prevent data loss. Monitoring is also important to detect any potential data loss between producers and consumers.
The document discusses data loss and duplication in Apache Kafka. It begins with an overview of Kafka and how it works as a distributed commit log. It then covers sources of data loss, such as failures at the producer or cluster level. Data duplication can occur when producers retry messages or consumers restart from an unclean shutdown. The document provides configurations and techniques to minimize data loss and duplication, such as using producer acknowledgments and storing metadata to validate messages. It also discusses monitoring Kafka using JMX metrics to detect issues.
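One common technique for tolerating the duplicates described above (producer retries, unclean consumer restarts) is to make processing idempotent by tracking message identifiers. The sketch below is illustrative; the in-memory set stands in for the durable metadata store the document mentions.

```python
# Idempotent processing: skip side effects for already-seen message IDs.

processed_ids = set()
results = []

def process_once(message_id, payload):
    if message_id in processed_ids:
        return False           # duplicate delivery: skip side effects
    processed_ids.add(message_id)
    results.append(payload)    # the real side effect would go here
    return True

process_once("m-1", "a")
process_once("m-2", "b")
process_once("m-1", "a")       # redelivery of m-1 is ignored
print(results)                 # ['a', 'b']
```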
The Good, The Bad, and The Avro (Graham Stirling, Saxo Bank and David Navalho..., Confluent)
- Saxo Bank is migrating to a data mesh architecture using Apache Kafka and Avro schemas to distribute data across domains and enable data sharing.
- They are working to automate the onboarding process for new data domains and producers/consumers to simplify development and ensure governance.
- Some challenges include limited support for .NET in Confluent platforms, compatibility issues between code generators and the schema registry, and mapping complex database schemas to Avro schemas.
Reactive Java: Promises and Streams with Reakt, JavaOne talk 2016 (Rick Hightower)
see labs at https://github.com/advantageous/j1-talks-2016
Import based on PDF. This is from our JavaOne Talk 2016 on Reakt, reactive Java programming with promises, circuit breakers, and streams. Reakt is a reactive Java lib that provides promises, streams, and a reactor to handle asynchronous call coordination. It was influenced by the design of promises in ES6. You want to async-call serviceA and then serviceB, take the results of serviceA and serviceB, and then call serviceC. Then, based on the results of call C, call D or E and then return the results to the original caller. Calls to A, B, C, D, and E are all async calls, and none should take longer than 10 seconds. If they do, then return a timeout to the original caller. The whole async call sequence should time out in 20 seconds if it does not complete and should also check for circuit breakers and provide back pressure feedback so the system does not have cascading failures. Learn more in this session.
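The call-coordination scenario described above (A and B in parallel, then C, then D or E, with per-call and overall timeouts) can be sketched with standard-library asyncio. Service names, delays, and timeout values below are hypothetical stand-ins for the talk's example, not Reakt's API.

```python
# Async orchestration: gather A and B, feed both to C, branch to D or E,
# with a 10s per-call timeout and a 20s overall deadline.
import asyncio

PER_CALL_TIMEOUT = 10.0
OVERALL_TIMEOUT = 20.0

async def service(name, delay=0.01):
    await asyncio.sleep(delay)   # stand-in for a remote async call
    return name

async def call(coro):
    return await asyncio.wait_for(coro, timeout=PER_CALL_TIMEOUT)

async def orchestrate():
    a, b = await asyncio.gather(call(service("A")), call(service("B")))
    c = await call(service(f"C({a},{b})"))
    if "A" in c:                 # branch on C's result, as in the D-or-E step
        return await call(service("D"))
    return await call(service("E"))

result = asyncio.run(asyncio.wait_for(orchestrate(), timeout=OVERALL_TIMEOUT))
print(result)  # "D" along this path
```

A promise library like Reakt packages the same coordination (plus circuit breakers and back pressure signals) behind a composable API rather than hand-written awaits.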
Grokking TechTalk #24: Kafka's principles and protocols (Grokking VN)
The talk introduces Kafka and digs into Kafka's principles and the design choices that make it fast, scalable, and highly stable. It also covers how Kafka servers interact with Kafka clients.
The talk dives into Kafka's internals and analyzes why its design decisions were made the way they were. It is a good fit for software engineers who have been exploring, or want to explore, the various job queues and message queues.
Speaker: Nguyen Quang Minh
- Software Engineer, Technical Lead @ Employment Hero
- Contributor of `ruby-kafka` (the most popular Kafka client for Ruby)
The document discusses various topics related to the Tungsten Connector including:
- The role of the Connector in routing connections to the appropriate database nodes.
- Best practices for deploying Connectors in different topologies including on application servers, dedicated nodes, or database nodes with load balancing.
- How to perform zero-downtime maintenance on a Tungsten cluster by manually switching the master role between nodes using the cctrl utility.
- How Connectors route connections in a composite cluster with multiple local clusters and how affinity can be set to prefer local reads from a particular cluster.
- That a Connector can provide access to multiple clusters or composite clusters by configuring the dat
Kafka Reliability - When it absolutely, positively has to be there (Gwen (Chen) Shapira)
Kafka provides reliability guarantees through replication and configuration settings. It replicates data across multiple brokers to protect against failures. Producers can ensure data is committed to all in-sync replicas through configuration settings like request.required.acks. Consumers maintain offsets and can commit after processing to prevent data loss. Monitoring is also important to detect any potential issues or data loss in the Kafka system.
The document discusses using a Mule ESB until successful scope component. The until successful scope will retry processing a message until it succeeds or reaches the maximum number of retries. It provides examples of using it for calling web services, database operations, and subflows. It also describes the key attributes like max retries, milliseconds between retries, and failure expression. The document concludes by providing XML code for a flow that uses an until successful scope to retry an HTTP request until it receives a 200 response code.
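The until-successful semantics described above (retry until success or until max retries, with a fixed wait between attempts) can be sketched in plain Python. Names and values here are illustrative, not Mule's XML attributes.

```python
# Retry an operation until it succeeds or retries are exhausted.
import time

def until_successful(operation, max_retries=5, millis_between_retries=0):
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return operation()
        except Exception as err:
            last_error = err
            time.sleep(millis_between_retries / 1000.0)
    raise RuntimeError(f"gave up after {max_retries} attempts") from last_error

calls = {"n": 0}

def flaky_http_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("non-200 response")
    return 200  # succeeds on the third attempt

print(until_successful(flaky_http_request))  # 200, after 2 failures
```

Mule's failure expression plays the role of the `except` clause here: it decides which outcomes count as failures worth retrying.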
Apache Kafka is a distributed messaging system that provides fast, highly scalable messaging through a publish-subscribe model. It was built at LinkedIn as a central hub for messaging between systems and focuses on scalability and fault tolerance. Kafka uses a distributed commit log architecture with topics that are partitioned for scalability and parallelism. It provides high throughput and fault tolerance through replication and an in-sync replica set.
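The partitioned commit log described above can be sketched in a few lines: hash the record key to pick a partition, then append with a per-partition offset. This is an illustration of the idea only; Kafka's default partitioner actually uses murmur2 over the key bytes.

```python
# Partitioned log sketch: same key -> same partition -> per-key ordering.

NUM_PARTITIONS = 4
log = {p: [] for p in range(NUM_PARTITIONS)}

def partition_for(key):
    # Stable stand-in hash; Kafka uses murmur2 on the serialized key.
    return sum(key.encode()) % NUM_PARTITIONS

def append(key, value):
    p = partition_for(key)
    log[p].append(value)
    return p, len(log[p]) - 1   # (partition, offset)

p1, o1 = append("user-42", "click")
p2, o2 = append("user-42", "purchase")
assert p1 == p2                 # ordering is guaranteed per key, per partition
print(p1, o1, o2)               # offsets grow monotonically within a partition
```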
1. The document discusses process synchronization and solving the critical section problem. It covers solutions like Peterson's algorithm, mutex locks, semaphores, monitors and condition variables.
2. Classical synchronization problems like the bounded buffer, readers-writers, and dining philosophers problems are used to test new synchronization schemes.
3. The document also discusses topics like deadlock, starvation, serializability and concurrency control algorithms in transaction processing.
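The bounded-buffer problem mentioned above has a compact solution with a mutex and condition variables; a sketch using Python's standard threading primitives:

```python
# Bounded buffer: producers block when full, consumers block when empty.
import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:
            while len(self.items) >= self.capacity:
                self.not_full.wait()       # wait while the buffer is full
            self.items.append(item)
            self.not_empty.notify()

    def take(self):
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()      # wait while the buffer is empty
            item = self.items.pop(0)
            self.not_full.notify()
            return item

buf = BoundedBuffer(capacity=2)
consumed = []
consumer = threading.Thread(
    target=lambda: consumed.extend(buf.take() for _ in range(4)))
consumer.start()
for i in range(4):
    buf.put(i)        # may block while the consumer drains the buffer
consumer.join()
print(consumed)       # [0, 1, 2, 3]
```

The `while` (not `if`) around each wait is the classic guard against spurious wakeups.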
Task Scheduler is a Windows component that allows scheduling programs and scripts to launch at predefined times or intervals. It was introduced in Windows 95 and renamed to Task Scheduler in Windows 98. Task schedulers in general vary in how tasks are executed and how devices are handled, ranging from simple endless loops to more complex priority-based preemptive schedulers that can handle interrupts. Precise timing of task execution is a limitation of cyclic executives, where each task must complete before the next one starts.
Adding Real-time Features to PHP Applications (Ronny López)
It's possible to introduce real-time features to PHP applications without deep modifications of the current codebase.
Using WAMP you can build distributed systems out of application components which are loosely coupled and communicate in (soft) real-time.
There is no need to learn a whole new language, with all the implications that has.
It also opens the door to write reactive, event-based, distributed architectures and to achieve easier scalability by distributing messages to multiple systems.
Producer Performance Tuning for Apache Kafka (Jiangjie Qin)
Kafka is well known for high throughput ingestion. However, to get the best latency characteristics without compromising on throughput and durability, we need to tune Kafka. In this talk, we share our experiences to achieve the optimal combination of latency, throughput and durability for different scenarios.
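The latency/throughput/durability trade-off usually comes down to a handful of producer settings. The names below come from the Kafka documentation; the values are illustrative examples of each direction of tuning, not recommendations from the talk.

```python
# Three illustrative tuning directions for the Kafka producer.

low_latency = {
    "linger.ms": 0,        # send immediately, accepting smaller batches
    "batch.size": 16384,
    "acks": "1",           # leader-only ack returns sooner
}

high_throughput = {
    "linger.ms": 50,       # wait briefly to fill larger batches
    "batch.size": 262144,
    "compression.type": "lz4",
}

high_durability = {
    "acks": "all",         # wait for the full in-sync replica set
    "retries": 2147483647,
    "max.in.flight.requests.per.connection": 1,  # preserve order on retry
}
```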
AMIS SIG - Introducing Apache Kafka - Scalable, reliable Event Bus & Message ... (Lucas Jellema)
Introduction of Apache Kafka - the open source platform for real time message queuing and reliable, scalable, distributed event handling and high volume pub/sub implementation.
see GitHub https://github.com/MaartenSmeets/kafka-workshop for the workshop resources.
The document discusses intra-cluster replication in Apache Kafka, including its architecture where partitions are replicated across brokers for high availability. Kafka uses a leader and in-sync replicas approach to strongly consistent replication while tolerating failures. Performance considerations in Kafka replication include latency and durability tradeoffs for producers and optimizing throughput for consumers.
Architecting for the cloud: elasticity, security (Len Bass)
Concurrency and state management are important considerations for achieving elasticity in cloud systems. There are three types of state: session state kept by clients, server-side state kept in processes, and persistent state stored externally. Server-side state makes scaling difficult, while stateless servers allow elasticity. Memcached provides a way to synchronize small amounts of in-memory state across servers to support stateless services running elastically in the cloud.
Sadayuki Furuhashi created Kumofs, a distributed key-value store, and MessagePack, a cross-language object serialization library. Kumofs is optimized for low latency with zero-hop reads and no single points of failure. It scales out linearly as servers are added without impacting applications. MessagePack is a compact binary format like JSON used for cross-language communication. MessagePack-RPC is a cross-language messaging library that uses an asynchronous, pipelined protocol over an event-driven I/O model.
Architecting for the cloud: cloud providers (Len Bass)
The document discusses cloud providers and services available on Amazon Web Services. It provides an overview of compute, storage, database, and other services and how they can provide redundancy across availability zones and regions. Examples are given of different outage scenarios that can occur at the zone, region, or provider level and strategies for architecting applications to mitigate risks from these outages.
Reactive programming is quite a popular topic these days. For a long time, reactive programming was confined to interactive user interface design. With advances in hardware (multi-core CPUs) and the internet, the scale, complexity, and responsiveness demands of software began to rise, which led to reactive programming being regarded as a major programming paradigm.
Read more from here: https://blog.lftechnology.com/introduction-to-reactive-programming-part-1-5b7c63685586
By: Subash Poudel (Software Engineer @ Leapfrog Technology, Inc.)
Get ready to experience fast and scalable performance in your web applications as we dive into the world of Reactive Programming. Our guide using WebFlux is perfect for beginners and experts alike.
Reactive Streams, linking Reactive Application to Spark Streaming by Luc Bour... (Spark Summit)
This document summarizes a presentation about linking reactive applications to Spark Streaming using Reactive Streams. It discusses back pressure in Spark Streaming, how Spark 1.5 introduced dynamic rate limiting to support back pressure, and how the rate is estimated using a PID controller. It also describes reactive applications as being responsive, resilient, elastic, and message-driven. Reactive Streams is presented as a specification that allows connecting systems using a back pressure interface in the JVM. Finally, it demonstrates how end-to-end back pressure can be achieved between a reactive application, Spark Streaming, and a Reactive Streams receiver.
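The PID-style rate estimation mentioned above can be sketched as follows: adjust the ingest rate based on the error between the scheduled rate and the rate actually processed. The gains and inputs here are illustrative, not Spark's exact estimator.

```python
# PID-style back pressure: reduce the rate when processing falls behind.

def pid_rate(latest_rate, error, last_error, dt,
             kp=1.0, ki=0.0, kd=0.0, accumulated_error=0.0):
    # error = scheduled rate - actually processed rate (elements/sec)
    derivative = (error - last_error) / dt
    new_rate = latest_rate - kp * error - ki * accumulated_error - kd * derivative
    return max(new_rate, 0.0)   # a rate can never go negative

# Processing keeps up (error 0): the rate holds steady.
print(pid_rate(100.0, 0.0, 0.0, dt=1.0))    # 100.0
# Processing falls 20 elements/sec behind: the rate is cut back.
print(pid_rate(100.0, 20.0, 0.0, dt=1.0))   # 80.0
```

The integral and derivative terms (off by default here) damp oscillation and react to trends, which is why a full PID controller beats naive proportional throttling.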
This document provides an overview of reactive programming in Java and Spring 5. It discusses reactive programming concepts like reactive streams specification, Reactor library, and operators. It also covers how to build reactive applications with Spring WebFlux, including creating reactive controllers, routing with functional endpoints, using WebClient for HTTP requests, and testing with WebTestClient.
Resilience Planning & How the Empire Strikes Back (C4Media)
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1pGpnbd.
Bhakti Mehta approaches best practices for building resilient, stable and predictable services: preventing cascading failures, timeouts pattern, retry pattern, circuit breakers and other techniques which have been pervasively used at Blue Jeans Network. Filmed at qconsf.com.
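The circuit-breaker pattern mentioned above can be sketched in a few lines: after a threshold of consecutive failures the breaker opens and fails fast, then allows a trial call once a cooldown elapses. Thresholds and names below are illustrative, not Blue Jeans Network's implementation.

```python
# Minimal circuit breaker: closed -> open on repeated failure -> half-open.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None    # None means the circuit is closed

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None        # half-open: allow one trial call
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                # success closes the circuit
        return result

breaker = CircuitBreaker(failure_threshold=2, reset_timeout=60.0)

def failing():
    raise ConnectionError("downstream timeout")

for _ in range(2):
    try:
        breaker.call(failing)
    except ConnectionError:
        pass
print(breaker.opened_at is not None)   # True: the breaker is now open
```

Failing fast while open is what prevents the cascading failures the talk describes: callers stop queueing up behind a dead dependency.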
Bhakti Mehta is the author of "RESTful Java Patterns and Best practices” and "Developing RESTful Services with JAX-RS 2.0, WebSockets, and JSON”. Bhakti is a Senior Software Engineer at Blue Jeans Network. As part of her current role, she works on developing RESTful services that can be consumed by ISV partners and the developer community.
Unit 1: Computer organization and Instructions (Balaji Vignesh)
The document discusses computer architecture and organization. It defines a computer as a programmable machine that can manipulate data according to instructions. It describes how calculations were originally done by human computers before electronic computers were developed. It discusses computer components, applications, and generations of computers. It outlines eight design ideas for computer architecture, including designing for Moore's Law, using abstraction, prioritizing common tasks, and incorporating parallelism, pipelining, prediction, memory hierarchy, and redundancy. Performance metrics like execution time and throughput are also covered.
Reactive programming with Rx-Java allows building responsive systems that can handle varying workloads and failures. It promotes asynchronous and non-blocking code using observable sequences and operators. Rx-Java was created at Netflix to address issues like network chattiness and callback hell in their API. It transforms callback-based code into declarative pipelines. Key concepts are Observables that emit notifications, Operators that transform Observables, and Subscribers that receive emitted items. Rx-Java gained popularity due to its support for concurrency, error handling, and composability.
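The Observable/Operator/Subscriber trio described above can be illustrated with a tiny sketch. The Python below mirrors the concepts only; it is not the RxJava API, whose real types are `Observable`, operators such as `map` and `filter`, and `Subscriber`.

```python
# Minimal observable: operators build new observables; nothing runs
# until subscribe() is called (lazy, declarative pipelines).

class Observable:
    def __init__(self, source):
        self._source = source    # a function taking an on_next callback

    @staticmethod
    def from_iterable(items):
        return Observable(lambda on_next: [on_next(i) for i in items])

    def map(self, fn):           # an operator returns a new Observable
        return Observable(
            lambda on_next: self._source(lambda i: on_next(fn(i))))

    def filter(self, pred):
        return Observable(
            lambda on_next: self._source(
                lambda i: on_next(i) if pred(i) else None))

    def subscribe(self, on_next):
        self._source(on_next)    # subscription triggers the whole pipeline

received = []
(Observable.from_iterable([1, 2, 3, 4])
    .map(lambda x: x * 10)
    .filter(lambda x: x > 15)
    .subscribe(received.append))
print(received)  # [20, 30, 40]
```

This laziness is what lets Rx libraries transplant a pipeline onto different schedulers and add error handling without changing the pipeline's shape.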
While developing distributed apps, most teams are focused in delivery of business value. Sometimes, after production deployment, a few moments later, we realize that exceptions arise, time-outs blow. The system need more fault tolerance. Presentations overviews a few patterns and principles of fault and latency tolerance for such systems.
Fault Tolerance in Distributed EnvironmentOrkhan Gasimov
The document discusses various techniques for achieving fault tolerance in distributed systems, including service coordination, handling high load, RPC mechanics, circuit breakers, N-modular redundancy, recovery blocks, actors and error kernels, and instance healers. It describes common issues that can occur like network failures and overloaded services, and explains solutions such as service discovery, load balancing, timeouts, and dynamically scaling services horizontally.
This document discusses data microservices and related concepts. It introduces data microservices as lightweight services that input or output data and can be composed into pipelines. Spring Cloud Stream is presented as a framework that simplifies developing data microservices through annotations like @EnableBinding, @StreamListener, and @SendTo. The document also covers topics like communication models, transactions, event sourcing, and CQRS as patterns for ensuring data consistency across distributed systems of microservices.
Multi-service reactive streams using Spring, Reactor, RSocketStéphane Maldini
This document discusses multi-service reactive streams using RSocket, Reactor, and Spring. It introduces reactive programming concepts and how RSocket provides a binary protocol for efficient machine-to-machine communication through request-response, fire-and-forget, request-stream, and request-channel interaction models. It also addresses how RSocket features like resumption, metadata, fragmentation, and leasing improve performance and flexibility compared to other protocols.
Journey into Reactive Streams and Akka StreamsKevin Webber
Are streams just collections? What's the difference between Java 8 streams and Reactive Streams? How do I implement Reactive Streams with Akka? Pub/sub, dynamic push/pull, non-blocking, non-dropping; these are some of the other concepts covered. We'll also discuss how to leverage streams in a real-world application.
- Designed and developed simplified version of single thread event-based web server engine with Java
- Designed overall architectures and implemented event loop and socket programming
- Attained higher amount of throughput and lower error rate of http request handling comparing with NodeJS
With the advent of “big data”, it has become inevitable to analyze huge volumes of data in real-time to make sense out of it. For this to happen seamlessly, the streaming of that data is necessary. This is where Reactive Streams step in.
Akka Streams is built on top of the Reactive Streams interface. This webinar will be an introduction to Akka Streams and how it simplifies the aspect of back-pressure in real-time streaming.
Here’s an outline of the webinar -
~ Introduction to the problem set
~ How do Akka Streams help simplify the problem of back-pressure?
~ Basic terminologies of Akka Streams
~ Live demo of a real-life problem being solved with Akka Streams
The document provides an overview of Confluent Control Center and how it can be used to monitor Apache Kafka deployments. It discusses how Control Center provides visibility into key metrics for brokers, topics, consumers and connectors. It also describes how Control Center helps answer important business questions about whether applications are receiving all data, showing the latest data, if the applications or cluster need to scale, and ensures data is not lost. Control Center provides dashboards, alerts and visibility to help operators effectively manage Kafka clusters and identify and address issues.
This document discusses process synchronization and the classical producer-consumer problem. It begins with a recap of semaphores and an outline of topics to be covered, including process synchronization, synchronization hardware, mutex locks, and classical synchronization problems. It then explains process synchronization and gives examples. Next, it describes synchronization hardware and mutex locks. It introduces the producer-consumer problem and bounded buffer problem. Finally, it provides pseudocode to solve the producer-consumer problem using semaphores to control access to the shared buffer by the producer and consumer processes.
Process management in Operating System_Unit-2mohanaps
In this PPT Of operating system it covers:
Process Concept; Process Control Block; Process Scheduling; CPU Scheduling - Basic Concepts; Scheduling Algorithms – FIFO; RR; SJF; Multi- level; Multi-level feedback. Process Synchronization and deadlocks: The Critical Section Problem; Synchronization hardware; Semaphores; Classical problems; Deadlock: System model; Characterization; Deadlock prevention; Avoidance and Detection.
We have a DREAM: Distributed Reactive Programming with Consistency Guarantees...Alessandro Margara
This document introduces DREAM, a middleware for distributed reactive programming with consistency guarantees. Reactive programming simplifies development of reactive systems that react to external changes by making data dependencies explicit. DREAM allows distributed reactive programming in Java while providing three levels of consistency: causal, glitch-free, and atomic. It relies on a decentralized event-based middleware and uses subscriptions and notifications to propagate updates between observables and reactives. An evaluation compares the consistency levels and shows advantages of distribution in terms of delay and traffic under different scenarios.
Similar to Reactive solutions using java 9 and spring reactor (20)
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge on how to organize and improve your code review proces
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
Do you want Software for your Business? Visit Deuglo
Deuglo has top Software Developers in India. They are experts in software development and help design and create custom Software solutions.
Deuglo follows seven steps methods for delivering their services to their customers. They called it the Software development life cycle process (SDLC).
Requirement — Collecting the Requirements is the first Phase in the SSLC process.
Feasibility Study — after completing the requirement process they move to the design phase.
Design — in this phase, they start designing the software.
Coding — when designing is completed, the developers start coding for the software.
Testing — in this phase when the coding of the software is done the testing team will start testing.
Installation — after completion of testing, the application opens to the live server and launches!
Maintenance — after completing the software development, customers start using the software.
Launch Your Streaming Platforms in MinutesRoshan Dwivedi
The claim of launching a streaming platform in minutes might be a bit of an exaggeration, but there are services that can significantly streamline the process. Here's a breakdown:
Pros of Speedy Streaming Platform Launch Services:
No coding required: These services often use drag-and-drop interfaces or pre-built templates, eliminating the need for programming knowledge.
Faster setup: Compared to building from scratch, these platforms can get you up and running much quicker.
All-in-one solutions: Many services offer features like content management systems (CMS), video players, and monetization tools, reducing the need for multiple integrations.
Things to Consider:
Limited customization: These platforms may offer less flexibility in design and functionality compared to custom-built solutions.
Scalability: As your audience grows, you might need to upgrade to a more robust platform or encounter limitations with the "quick launch" option.
Features: Carefully evaluate which features are included and if they meet your specific needs (e.g., live streaming, subscription options).
Examples of Services for Launching Streaming Platforms:
Muvi [muvi com]
Uscreen [usencreen tv]
Alternatives to Consider:
Existing Streaming platforms: Platforms like YouTube or Twitch might be suitable for basic streaming needs, though monetization options might be limited.
Custom Development: While more time-consuming, custom development offers the most control and flexibility for your platform.
Overall, launching a streaming platform in minutes might not be entirely realistic, but these services can significantly speed up the process compared to building from scratch. Carefully consider your needs and budget when choosing the best option for you.
Mobile app Development Services | Drona InfotechDrona Infotech
Drona Infotech is one of the Best Mobile App Development Company In Noida Maintenance and ongoing support. mobile app development Services can help you maintain and support your app after it has been launched. This includes fixing bugs, adding new features, and keeping your app up-to-date with the latest
Visit Us For :
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives towards enhancing productivity. Here are a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
When deliberating between CodeIgniter vs CakePHP for web development, consider their respective strengths and your project requirements. CodeIgniter, known for its simplicity and speed, offers a lightweight framework ideal for rapid development of small to medium-sized projects. It's praised for its straightforward configuration and extensive documentation, making it beginner-friendly. Conversely, CakePHP provides a more structured approach with built-in features like scaffolding, authentication, and ORM. It suits larger projects requiring robust security and scalability. Ultimately, the choice hinges on your project's scale, complexity, and your team's familiarity with the frameworks.
Introducing Crescat - Event Management Software for Venues, Festivals and Eve...Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Atelier - Innover avec l’IA Générative et les graphes de connaissancesNeo4j
Atelier - Innover avec l’IA Générative et les graphes de connaissances
Allez au-delà du battage médiatique autour de l’IA et découvrez des techniques pratiques pour utiliser l’IA de manière responsable à travers les données de votre organisation. Explorez comment utiliser les graphes de connaissances pour augmenter la précision, la transparence et la capacité d’explication dans les systèmes d’IA générative. Vous partirez avec une expérience pratique combinant les relations entre les données et les LLM pour apporter du contexte spécifique à votre domaine et améliorer votre raisonnement.
Amenez votre ordinateur portable et nous vous guiderons sur la mise en place de votre propre pile d’IA générative, en vous fournissant des exemples pratiques et codés pour démarrer en quelques minutes.
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
2. session flow
• Abstract concept
• Java 9 additions
• Write a simple Java 9 implementation
• Spring Reactor overview
• Write a Spring Reactor implementation using Spring Boot
• WebFlux overview
• Write an E2E WebFlux app (with or without UI)
4. Reactive Streams - concept
• Many times blocking code is wasteful
• Concurrency and parallelism are good but usually require more resources
• Is async code the solution?
• Threads are not good enough / easy enough to code
• Callbacks, Futures and even CompletableFuture are not good enough
• Pull was not good enough - blocking or simply wasting resources
• Push was not good enough - “suffocating” the consumer
• Pull-Push was the solution
• Overall throughput vs. single request performance
5. Reactive Streams - concept
• Simpler code, making it more readable.
• Abstracts away from
• Boilerplate code --> focus on business logic.
• Low-level threading, synchronization, and concurrency issues.
• Stream processing implies memory efficiency
• The model can be applied almost everywhere, both in-memory as well as
over protocols like http, to solve almost any kind of problem.
• Reactive Streams vs. Java Streams
• Flow of elements with operations between flow parts
• Subscriber will initiate processing
• Async vs. sync
6. Reactive Streams - concept
From the spec:
Reactive Streams is an initiative to provide a standard for asynchronous
stream processing with non-blocking back pressure. This encompasses
efforts aimed at runtime environments (JVM and JavaScript) as well as
network protocols.
7. Reactive Streams - characteristics
• Elastic
• The system stays responsive under varying workload.
• Reactive Systems can react to changes in the input rate by increasing or
decreasing the resources allocated to service these inputs.
• Implies an ability to replicate components and split inputs among them.
• Predictive/Reactive scaling algorithms supported by relevant live
performance measures.
• Resilient
• The system stays responsive in case of failures
• Failures are contained in each isolated component
• High availability is achieved by replication where needed
8. Reactive Streams - characteristics
• Responsive
• The system responds in a timely manner if at all possible
• Implies that problems are detected quickly and dealt with effectively.
• Rapid and consistent response times (consistent service quality)
• Message Driven
• Reactive Systems rely on asynchronous message-passing
to establish a boundary between components
• Implies load management, elasticity, and flow control by monitoring the
message queues
• Back-pressure and failure delegation
9. Reactive Streams - semantics
• A Concept to deal with synchronous/asynchronous stream
processing with non-blocking back-pressure.
(notifying producers of data to reduce their message rate.)
• Processing might be synchronous or asynchronous, but the model shines
in “communication” use cases
• Consumers are in control of the Data Flow
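The "consumers are in control" point can be sketched with the Java 9 Flow API: the Subscriber pulls exactly one element at a time via Subscription.request(), so the producer can never outrun it. A minimal, illustrative sketch (class and method names are mine, not from the slides):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {
    // The Subscriber decides the pace: it requests exactly one element at a time.
    static List<Integer> run() {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>();
        publisher.subscribe(new Flow.Subscriber<Integer>() {
            private Flow.Subscription subscription;
            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);                 // initial demand: a single element
            }
            @Override public void onNext(Integer item) {
                received.add(item);
                subscription.request(1);      // pull the next element only when ready
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete()          { done.countDown(); }
        });
        for (int i = 1; i <= 5; i++) publisher.submit(i);
        publisher.close();                    // delivers pending items, then onComplete
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(run());            // [1, 2, 3, 4, 5]
    }
}
```

Note how demand flows upstream (request) while data flows downstream (onNext), which is the pull-push model from the concept slides.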
11. Interfaces
• The Java 9 addition is the java.util.concurrent.Flow class and its nested interfaces
• Publisher – the producer of elements.
• Subscriber – the handler of elements.
• Subscription - the specific Publisher-Subscriber binding
• Processor – a typical node in a reactive “chain”; it consumes
elements from the previous node and produces elements for the next one
• 1:1 semantic equivalence between the Java 9 APIs and their respective
Reactive Streams counterparts.
• org.reactivestreams.FlowAdapters bridges between the Reactive Streams
interfaces and the Java 9 Flow interfaces
14. Interfaces - disengaging (Publisher / Subscription / Subscriber)
• Publisher failed or finished – call onError() or onComplete() accordingly
• Call cancel() to signal the Subscription that it is terminated and that
the Subscription is revoked
• The Subscriber must not call any Subscription/Publisher methods after it
was signaled for complete/error
• Call cancel() on unrecoverable failure, if the Subscription is no longer
needed, or if onSubscribe() is called while an active Subscription is
already in place
• Call onError() on unrecoverable failure of the Subscription
15. Interfaces
• Publisher – the producer. Provides a single method: subscribe().
• Subscriber – subscribes to a Publisher. Has the following methods:
• onSubscribe – will be invoked before any other method.
• onNext – invoked with the next item to process.
• onError – unrecoverable error, no more methods will be invoked on this
instance.
• onComplete – no additional data will be received (last method to be
invoked).
16. Interfaces
• Subscription – provided to the onSubscribe invocation.
Provides the following methods:
• request(n) – demands n more elements from the producer.
• cancel – best effort method to cancel the subscription.
• Processor – implements both Publisher and Subscriber.
• Publisher/Subscription/Subscriber/Processor—
all methods defined by these interfaces return “void”
to support asynchronicity
17. Reactive Streams
• The JDK doesn’t provide implementations for reactive-streams
interfaces.
• Available popular implementations:
• Project Reactor – used by Spring 5.
• Akka Streams.
• RxJava.
• Vert.x.
• Slick.
• MongoDB Java driver.
• Reactive Rabbit (RabbitMQ).
• Ratpack.
18. Flow interface - exercise 1
• Implement a naive solution for Flow interfaces
• Publisher
• Subscriber
• Subscription
• Impl should be minimal - ex. printing the msg you are given
• Stick to the required API impl and avoid internal impl.
• Add a main() for running your solution
19. Flow interface - exercise 2
• Change your solution to use SubmissionPublisher
• Add Processor impl and chain it to the flow.
• Impl should be minimal - ex. altering the msg you are given
• Stick to the required API impl and avoid internal impl.
• Run your flow and verify both subscriptions are working
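One possible shape for such a chained flow, assuming the common idiom of extending SubmissionPublisher so the Processor can re-publish downstream (class names are illustrative, not a prescribed solution):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class ProcessorDemo {
    // A Processor node: consumes from the previous stage, alters the element, re-publishes it.
    static class UpperCaseProcessor extends SubmissionPublisher<String>
            implements Flow.Processor<String, String> {
        private Flow.Subscription subscription;
        @Override public void onSubscribe(Flow.Subscription s) { subscription = s; s.request(1); }
        @Override public void onNext(String item) {
            submit(item.toUpperCase());   // publish the altered message downstream
            subscription.request(1);
        }
        @Override public void onError(Throwable t) { closeExceptionally(t); }
        @Override public void onComplete()          { close(); }
    }

    static List<String> run() {
        List<String> out = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        SubmissionPublisher<String> source = new SubmissionPublisher<>();
        UpperCaseProcessor processor = new UpperCaseProcessor();
        source.subscribe(processor);      // source -> processor -> subscriber
        processor.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription s;
            @Override public void onSubscribe(Flow.Subscription sub) { s = sub; s.request(1); }
            @Override public void onNext(String item) { out.add(item); s.request(1); }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete()          { done.countDown(); }
        });
        source.submit("hello");
        source.submit("reactive");
        source.close();                   // completion propagates through the chain
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(run());        // [HELLO, REACTIVE]
    }
}
```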
20. Internal semantics
• Subscriber’s buffer bounds are known & controlled by the subscribers
• Back-pressure is mandatory so the use of unbounded buffers can be
avoided
• Publisher must respect Subscriber’s limits. It can either
• buffer exceeding elements
• drop exceeding elements.
• Maximum number of elements that may still arrive = P − B − N
(until more demand is signaled to the Publisher)
• P - total number of elements requested
• B - number of elements in the input buffer
• N - number of elements that have been processed
22. Reactor semantics
• Mono and Flux are implementations of Publisher
• The assembly line metaphor
• The “belt”
• The raw materials pouring on it
• The workstations - introducing transformation, might be overloaded
• The end product ready to be consumed
• Multiple operators as part of the fluent API
• Cold vs. Hot publishers
24. Reactor - Code examples
• Flux/Mono docs including diagrams
• Creating a Flux using generate()
Flux<UUID> uuids = Flux.generate(
(sink)->sink.next(UUID.randomUUID()));
• Creating a Flux using create()
Flux<Person> flux = Flux.create(sink -> { /* emit via sink.next(...) */ });
We will see more about create() soon...
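A small runnable sketch contrasting generate() (synchronous and stateful, exactly one signal per round) with create() (bridging an arbitrary source through a FluxSink), assuming reactor-core is on the classpath; the class and method names are illustrative:

```java
import java.util.List;
import reactor.core.publisher.Flux;

public class CreateDemo {
    // generate(): synchronous and stateful -- exactly one signal per generator round
    static List<Integer> generated() {
        Flux<Integer> counted = Flux.generate(
                () -> 0,                          // state supplier
                (state, sink) -> {
                    sink.next(state);
                    if (state >= 2) sink.complete();
                    return state + 1;             // next state
                });
        return counted.collectList().block();
    }

    // create(): bridges an external (possibly asynchronous) source into a Flux
    static List<String> created() {
        Flux<String> bridged = Flux.create(sink -> {
            sink.next("a");
            sink.next("b");
            sink.complete();
        });
        return bridged.collectList().block();
    }

    public static void main(String[] args) {
        System.out.println(generated());  // [0, 1, 2]
        System.out.println(created());    // [a, b]
    }
}
```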
25. Reactor exercise - Clock 1
• Create a “ticker”
• Generate a flow of values representing the time
• Print second’s ticks to the console
27. Reactor exercise - Clock 2
• Change your “ticker”
• To show only 10 second’s ticks
• To finish after 20 ticks.
• Display a “closing statement” for your flux
• Try to implement it using create() …..
28. Reactor exercise - Clock 3
• Change your “ticker”
• To use the state supplier for the time increment
29. Reactor - Code examples
• Creating a “heartbeat” Flux using interval() and a Duration
Flux<Long> intervals=Flux.interval(Duration.ofSeconds(2))
• Creating a “join” point for multiple publishers
Mono<Void> joinPoint = Mono.when(publisher1, pub2, p3);
• Creating special publishers
Mono<Integer> empty = Mono.empty();
Flux<String> nothing = Flux.never();
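The interval() and when() operators above can be exercised end to end with blocking extractors; a minimal sketch (assumes reactor-core; names are illustrative):

```java
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class SpecialPublishersDemo {
    // interval() ticks 0, 1, 2, ... on a timer thread; blockFirst() waits for the first tick
    static Long firstTick() {
        return Flux.interval(Duration.ofMillis(50)).blockFirst();
    }

    // Mono.when() completes empty once *all* source publishers have completed
    static boolean allCompleted() {
        return !Mono.when(Mono.just(1), Mono.just("x")).blockOptional().isPresent();
    }

    public static void main(String[] args) {
        System.out.println(firstTick());     // 0
        System.out.println(allCompleted());  // true
    }
}
```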
30. Reactor - Code examples
• Combining Fluxes using merge()
Flux<Integer> nums = Flux.merge(evens, odds);
• Combining Fluxes using concat()
Flux<Integer> nums = Flux.concat(lows, highs);
• Combining Fluxes using zip()
Flux<Tuple2<Integer,Integer>> columnValue = Flux.zip(lefts, rights);
• Same goes for the xxxWith() methods:
evens.mergeWith(odds);
lows.concatWith(highs);
lefts.zipWith(rights);
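The difference between these combinators is easiest to see on concrete data: concat() keeps the sources strictly sequential, while zip() pairs elements index by index. A sketch assuming reactor-core on the classpath (here a combinator function collapses each zipped pair instead of producing Tuple2s):

```java
import java.util.List;
import reactor.core.publisher.Flux;

public class CombineDemo {
    static final Flux<Integer> lows  = Flux.just(1, 2, 3);
    static final Flux<Integer> highs = Flux.just(10, 20, 30);

    // concat(): strictly sequential -- highs is only subscribed after lows completes
    static List<Integer> concatenated() {
        return Flux.concat(lows, highs).collectList().block();
    }

    // zip(): pairs elements index by index; the BiFunction combines each pair
    static List<Integer> zipped() {
        return Flux.zip(lows, highs, (l, h) -> l + h).collectList().block();
    }

    public static void main(String[] args) {
        System.out.println(concatenated()); // [1, 2, 3, 10, 20, 30]
        System.out.println(zipped());       // [11, 22, 33]
    }
}
```

merge(), by contrast, subscribes to all sources eagerly, so its interleaving depends on emission timing and is not asserted here.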
32. Reactor exercise - Clock 4
• Change your “ticker”
• use interval()
• Use zipWith to achieve the same “ticking” functionality
33. Reactor - Code examples
• Combining Fluxes using zip() to throttle the data flow:
Flux<Long> delay = Flux.interval(Duration.ofMillis(500));
Flux<String> pacedFlux = dataFlux.zipWith(delay, (d, l) -> d);
• Extracting data from a publisher using blocking operators, with or
without a timeout Duration:
Person person = mono.block();
Person person = flux.blockFirst(timeoutDuration);
Person person = flux.blockLast();
34. Spring & Reactive Streams
• Traditional stack: Servlet container (Tomcat, Jetty, etc.), Spring MVC,
Spring Data Repos, Spring Security
• Reactive stack: NIO runtime (Netty, Servlet 3.1, etc.), Spring WebFlux,
Spring Data Reactive Repos, Spring Reactive Security
• Both stacks sit on top of Spring Boot
35. Spring 5 / Boot 2 - offering for reactive
• Reactor
• Spring MVC
• Flux/ Mono support in controllers
• Handler / Router functions
• Reactive data stores (Mongo, Redis, Cassandra)
• WebFlux starter and Netty as default
• Multiple WebSocketClients for Netty, Undertow, etc.
36. Spring 5 / Reactor - Testing
• StepVerifier
• WebClient
• Client side reactive end point
• Its API is working with publishers and subscribers
• Both on the response side but also on the request side (building the body)
• Filtering and strategies are available too
• Error handling using onStatus()
• WebTestClient
• For unit testing specific handlers or routes
• For E2E integration tests with real server
• TCK - Reactive Streams Technology Compatibility Kit
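StepVerifier expresses expectations about a publisher's signals step by step and then subscribes and verifies them. A minimal sketch, assuming reactor-test (which provides StepVerifier) is on the classpath:

```java
import reactor.core.publisher.Flux;
import reactor.test.StepVerifier;

public class StepVerifierDemo {
    static void verifyFlux() {
        Flux<Integer> flux = Flux.just(1, 2, 3).map(i -> i * 10);

        StepVerifier.create(flux)
                .expectNext(10, 20, 30)   // assert each emitted element, in order
                .verifyComplete();        // assert the terminal onComplete signal
    }

    public static void main(String[] args) {
        verifyFlux();                     // throws AssertionError on any mismatch
        System.out.println("ok");
    }
}
```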
37. Reactive - exercises - "Stocks"
• V1
• Generate a stream of stock quotes (make sure to throttle)
• Use WebFlux to expose this stream over HTTP
• Use WebClient and the SSE media type
• V2
• Generate a stream of stock quotes (make sure to throttle)
• Use reactive MongoDB to store this stream in Mongo
• “Prove” stocks keep flowing into Mongo using your WebClient
• V3
• Reactively consume the data from Mongo
• Use WebClient to consume your stocks endpoints
• Try to keep it coming…
• V4
• Add a filter to retrieve stocks by symbol name
• Change to a hot stream
38. Spring 5 / Reactor - Threading
• Publishers and Subscriber are not dealing with threads.
Operators are dealing with them instead.
• Dealing with threading by selecting the type of Scheduler you are
using
• Schedulers are used with publishOn(), subscribeOn(), cancelOn()
• publishOn() applies down stream whereas subscribeOn() applies up stream
• Flux.range(1, 100).publishOn(Schedulers.parallel());
• Flux.interval(Duration.ofMillis(10), Schedulers.newSingle("dedicatedOne"));
• You should handle blocking calls in a specific way:
Mono<String> blockingWrapper = Mono.fromCallable(() -> {
return /* blocking synchronous call */;
});
blockingWrapper = blockingWrapper.subscribeOn(Schedulers.elastic());
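The publishOn() rule above can be observed directly by capturing the thread name downstream of the operator; everything after publishOn() runs on the chosen Scheduler's workers. A sketch assuming reactor-core on the classpath:

```java
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class SchedulerDemo {
    // publishOn() moves everything downstream of it onto the given Scheduler's workers
    static List<String> threadNames() {
        return Flux.range(1, 3)
                .publishOn(Schedulers.parallel())
                .map(i -> Thread.currentThread().getName())
                .collectList()
                .block();                  // block() waits on the calling thread
    }

    public static void main(String[] args) {
        System.out.println(threadNames()); // thread names from the "parallel" scheduler
    }
}
```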
39. Spring 5 / Reactor - downsides
• Harder to debug
• Easy to add blocking code while fixing something
• Most of the traditional integration libraries are still blocking
• Limited options for Reactive data stores
• Spring Security is still not supported.
40. • Typical UCs that are good
candidates for reactive solutions:
• Handling high rate of external
transactions (stocks, financial, etc.)
• Logging
• UI events
• Sensor’s metrics
• IoT readings
• Let’s see a typical UC from
the domain of UI event
handling
From: https://gist.github.com/staltz/868e7e9bc2a7b8c1f754
41. Summary
• The hardest part is thinking in Reactive
• let go of imperative / stateful habits of typical programming
• forcing your brain to work in a different paradigm
• Instead of "X changes Y" go for "Y is changed by X".
• Reactive Programming is programming with asynchronous data streams
• Never block
• Always handle your exceptions
• You observe these streams and react when a value is emitted.
• Separation of concerns is fundamental to function chaining
• Immutability of streams is fundamental too
42. Summary exercise
• “Who to follow suggestion” (based on the idea from: https://gist.github.com/staltz/868e7e9bc2a7b8c1f754)
• FE track - implement a suggestion window / popup
• Fetch GitHub users (or any other API from https://github.com/toddmotto/public-apis)
• Support selecting / removing a specific suggestion
• Add the relevant page for rendering the suggestions
• BE track - implement a RESTful API
• Schedule the fetch of GitHub users
• Persist users in MongoDB
• Retrieve users using a REST controller
• Use WebClient / WebTestClient to consume your API
• Make sure you render/list only 10 users at a time
• Support ‘refresh’ functionality to replace the 10 retrieved users
(make sure you don’t “lose” users and you don’t re-fetch them)
• Take care of extreme cases, like initialization