This document discusses integrating applications in a reactive way. It begins by defining reactive programming and reactive systems, emphasizing asynchronous, non-blocking architectures. It then discusses application integration patterns like those in Apache Camel, which supports over 200 components and many integration patterns. The document ends by discussing how to build larger reactive systems through microservice integration and communication across reactive and non-reactive applications.
2. Nicola Ferraro - JBCNConf Barcelona 2017
About Me
Follow me on twitter: @ni_ferraro
Nicola Ferraro
Software Engineer at Red Hat
Working on Apache Camel, Fabric8, JBoss Fuse, Fuse Integration Services for Openshift
3. Nicola Ferraro - JBCNConf Barcelona 2017
Agenda
● What does it mean to be reactive?
○ Reactive Programming
○ Reactive Systems
● Application Integration
○ Enterprise Integration Patterns
○ Apache Camel
● Demo
● Integration in Reactive Systems
○ Patterns
○ Future Challenges
4. Nicola Ferraro - JBCNConf Barcelona 2017
What is Reactive Programming?
The goal of your application is to “put a marble into the bucket”.
(Image: Phineas and Ferb, “Chain Reaction”, Disney)
5. Nicola Ferraro - JBCNConf Barcelona 2017
What is Reactive Programming?
You design all the steps (map, flatMap, filter, kicks and punches) that lead to putting a marble in the bucket.
A fixed schema that is activated only when a marble is kicked in (reactive).
(Image: a gameplay video on YouTube, Phineas and Ferb “Chain Reaction”, Disney)
7. Nicola Ferraro - JBCNConf Barcelona 2017
Streams vs. Request/Response
Is reactive programming only about streams?
No, but even request/response patterns are internally mapped as a sequence of events (at the event loop level).
And there’s flatMap.
// for each event, call a function
// and take the results in the stream
stream.flatMap(e -> compute(e))
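As a concrete illustration of that idea (a hedged sketch in RxJava 2, not taken from the deck; compute is a stand-in for any asynchronous request/response call):

import io.reactivex.Flowable;

public class FlatMapDemo {
    // Stand-in for a request/response call (e.g. an async HTTP client).
    static Flowable<String> compute(int event) {
        return Flowable.fromCallable(() -> "response-for-" + event);
    }

    public static void main(String[] args) {
        Flowable.just(1, 2, 3)
                // Each event triggers a call; results are merged back into the stream.
                .flatMap(FlatMapDemo::compute)
                .subscribe(System.out::println);
    }
}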
8. Nicola Ferraro - JBCNConf Barcelona 2017
What does a “standard” application look like?
Multiple “moving pieces” (threads):
● Concurrency
● Resource contention
● Lock/Wait/Notify
● “One thread per request” model
● “Thread migration” time
It is fun to play, but inefficient!
(Image: Super Mario Bros 3, Nintendo)
9. Nicola Ferraro - JBCNConf Barcelona 2017
What’s wrong with “1 thread per request”?
At some point in the past (~2011), Node.js (it was single threaded) was faster than many (multithreaded) Java web servers, according to some benchmarks, also on multi-core machines!
How the hell was this possible?!?!?
(Chart: number of requests handled per second; higher is better.)
10. Nicola Ferraro - JBCNConf Barcelona 2017
The reactor pattern (event loop)
(Diagram: a “Reactor”, illustrated with The Simpsons. The reactor thread takes an event from the queue and executes the matching handler, in a loop.)
And the multi-reactor (Vert.x): multiple event loops, two per physical core, leveraging asynchronous I/O.
A few event-loop threads serve all requests, one event at a time, instead of one thread per request, and with no concurrency issues!
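A minimal sketch of the multi-reactor in practice (hedged, not from the deck): a Vert.x 3 HTTP server whose request handler runs on an event-loop thread.

import io.vertx.core.Vertx;

public class EventLoopServer {
    public static void main(String[] args) {
        // Starts the multi-reactor: by default two event loops per core.
        Vertx vertx = Vertx.vertx();

        vertx.createHttpServer()
             .requestHandler(req -> {
                 // Runs on an event-loop thread, so it must never block.
                 req.response().end("handled on " + Thread.currentThread().getName());
             })
             .listen(8080);
    }
}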
11. Nicola Ferraro - JBCNConf Barcelona 2017
The Golden Rule
Don’t block the event loop!
● Thread.sleep(...)
● synchronized(...)
● statement.executeQuery()
● myLongWorkflow.execute()
● outputStream.write(...)
Blocking operations can be executed in an external thread pool.
Do not sleep! Do not block the reactor!
Is asynchronous IO always possible at OS level? https://lwn.net/Articles/724198/
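When blocking is unavoidable, Vert.x can offload the work to a worker pool and hand the result back to the event loop; a hedged sketch (Vert.x 3.8+ API; slowJdbcQuery is a hypothetical stand-in):

import io.vertx.core.Vertx;

public class BlockingOffload {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.executeBlocking(promise -> {
            // Runs on a worker thread: the event loop stays free.
            String result = slowJdbcQuery();
            promise.complete(result);
        }, res -> {
            // Back on the event loop with the result.
            System.out.println("got: " + res.result());
        });
    }

    // Hypothetical stand-in for a blocking call (JDBC, file I/O, ...).
    static String slowJdbcQuery() { return "row"; }
}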
12. Nicola Ferraro - JBCNConf Barcelona 2017
What’s wrong with “1 thread per request”?
Performance comparison of Tomcat (1 thread per request) vs RxNetty (2015).
Reasons:
● Thread migration + context switch
● Slower garbage collection
Details: https://github.com/Netflix-Skunkworks/WSPerfLab/blob/master/test-results/RxNetty_vs_Tomcat_April2015.pdf
13. Nicola Ferraro - JBCNConf Barcelona 2017
Limits of the “1 thread per request” model
How many concurrent requests can you handle?
1 thread requires 1 MiB of stack memory by default.
10k connections ~= 10 GiB of stack memory (just for the threads): the C10k problem.
What about the C10m problem? http://c10m.robertgraham.com/p/manifesto.html
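The 1 MiB figure is the usual JVM default thread stack size (tunable globally with -Xss); as a small illustration, a stack size can also be requested per thread:

public class StackSizeDemo {
    public static void main(String[] args) {
        // Each Java thread reserves stack memory up front (~1 MiB by default
        // on most 64-bit JVMs). One thread per connection means 10,000
        // connections reserve roughly 10,000 x 1 MiB ~= 10 GiB just for stacks.
        Runnable handler = () -> System.out.println("serving one connection");
        Thread worker = new Thread(null, handler, "conn-1", 256 * 1024); // request a 256 KiB stack
        worker.start();
    }
}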
14. Nicola Ferraro - JBCNConf Barcelona 2017
Reactive Programming vs. Reactive Systems
“Reactive: Readily responsive to a stimulus”, Merriam Webster
● Responsive (react to user requests):
○ Having rapid response times
● Resilient (react to failures)
○ Being responsive also in case of failures (e.g. replication, retry)
● Elastic (react to load)
○ No bottlenecks, can scale according to load
● Message driven (react to events/messages)
○ Communication based on asynchronous message passing, with location
transparency and backpressure
The reactive manifesto: http://www.reactivemanifesto.org/
15. Nicola Ferraro - JBCNConf Barcelona 2017
Reactive “packages”
(Slide shows logos of reactive libraries, e.g. Project Reactor, loosely grouped into “Toolkits for building Reactive Systems” and “Reactive Programming Frameworks”.)
Help me to classify them ...
16. Nicola Ferraro - JBCNConf Barcelona 2017
Agenda
● What does it mean to be reactive?
○ Reactive Programming
○ Reactive Systems
● Application Integration
○ Enterprise Integration Patterns
○ Apache Camel
● Demo
● Integration in Reactive Systems
○ Patterns
○ Future Challenges
17. Nicola Ferraro - JBCNConf Barcelona 2017
Integration
Nobody lives in isolation.
Integration is about:
● Communication (Messaging)
● Converting protocols
● Mapping Bounded Contexts
● Message Correlation
● Routing
● Flow Control
● ...
18. Nicola Ferraro - JBCNConf Barcelona 2017
The integration platform
Apache Camel is a powerful integration framework based on enterprise integration patterns!
More than 200 components; can connect to any platform.
(Image: the new logo proposal by Zoran Regvart)
20. Nicola Ferraro - JBCNConf Barcelona 2017
Basic Usage
// Simple routing
from("jms:queue/orders")
    .log("Processing order: ${body}")
    .to("http://myservice")
    .to("smtp:localhost:25");

Isn’t “.to()” close to “.map()”?

// A (not so much) complicated example
from("hdfs:/home/nicola/data")
    .unmarshal().json()   // json array
    .split().body()
    .choice()
        .when(...)
            .to("jdbc:...")
        .otherwise()
            .log("Skipped ${body}");
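For context, a self-contained runnable variant of such a route (a hedged sketch on the Camel 2.x API; the JMS/HTTP/SMTP endpoints are swapped for timer/log endpoints so it runs without external systems):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class BasicRoute {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Same shape as the slide's route, with in-memory endpoints.
                from("timer:orders?period=1000")
                    .setBody().simple("order-${header.CamelTimerCounter}")
                    .log("Processing order: ${body}")
                    .to("log:processed");
            }
        });
        context.start();
        Thread.sleep(5000); // let a few messages flow
        context.stop();
    }
}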
21. Nicola Ferraro - JBCNConf Barcelona 2017
Camel: some EIPs
Resequencer: fix out-of-order messages.
from("...").resequence(header("timestamp"))...
Aggregator: aggregate results in groups.
from("...").aggregate(header("orderId"))...
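A hedged sketch of the aggregator with explicit completion conditions (Camel 2.x; endpoints are placeholders):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.GroupedExchangeAggregationStrategy;

public class AggregateRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:items")
            // Group messages that share the same orderId header.
            .aggregate(header("orderId"), new GroupedExchangeAggregationStrategy())
            // Close a group after 10 messages or 1 second of silence.
            .completionSize(10)
            .completionTimeout(1000)
            .to("log:groups");
    }
}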
22. Nicola Ferraro - JBCNConf Barcelona 2017
Camel: some EIPs
Recipient List:
from("...").recipientList(header("recipient"))...
Content Based Routing:
from("...").choice().when()...otherwise()...
23. Nicola Ferraro - JBCNConf Barcelona 2017
Camel: some EIPs
Hystrix (circuit breaker): open and close the circuit to an external service (S1 → S2) and provide fallback responses to the client.

from("...")
    .hystrix()
        .to("http://service")
    .onFallback()
        .transform()...

Others (important!):
● Redelivery Policy: set up the number of redelivery attempts and delays
● Throttler: adjust message speed for slow consumers
● Service Call: integrate with an external service registry (consul, ribbon, kubernetes)
● Load Balancer: balance load to multiple endpoints using custom strategies
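A hedged, self-contained version of the Hystrix EIP route (Camel 2.18+ DSL; the service URL is a placeholder):

import org.apache.camel.builder.RouteBuilder;

public class CircuitBreakerRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:orders")
            .hystrix()
                // Protected call: failures and timeouts trip the circuit.
                .to("http://service/api/orders")
            .onFallback()
                // Served locally while the circuit is open.
                .transform().constant("{\"status\":\"degraded\"}")
            .end()
            .to("log:result");
    }
}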
24. Nicola Ferraro - JBCNConf Barcelona 2017
Is Camel Reactive?
Camel 3.0 will have a fully reactive core.
Camel 2.20 is not fully reactive, but:
● Uses asynchronous processing by default (no “1 thread per request”)
● Supports backpressure and throttling
● Has multiple components for asynchronous I/O
No event loops and reactors, but it’s fast (especially the latest version)!
25. Nicola Ferraro - JBCNConf Barcelona 2017
Integration in Reactive Systems
The goal: create a bigger reactive system.
(Diagram: a browser talks to Vert.x microservices through the EventBus, which provides resiliency and location transparency; an integration layer, optionally a microservice itself, connects them to another reactive or non-reactive ecosystem of microservices MS 1..MS 4.)
26. Nicola Ferraro - JBCNConf Barcelona 2017
Communication in Reactive Applications
Inside the JVM: Reactive Streams
http://www.reactive-streams.org/
A specification for asynchronous stream processing with non-blocking backpressure.
(Logos: JVM implementations of Reactive Streams, e.g. Project Reactor.)
27. Nicola Ferraro - JBCNConf Barcelona 2017
Reactive Streams Visualized
public interface Publisher<T> {
    public void subscribe(Subscriber<? super T> s);
}

public interface Subscriber<T> {
    public void onSubscribe(Subscription s);
    public void onNext(T t);
    public void onError(Throwable t);
    public void onComplete();
}

public interface Subscription {
    public void request(long n);
    public void cancel();
}

public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}

Just these 4 interfaces (and the rules to use them).
Java 9 Flow API: Flow.Publisher, Flow.Subscriber, Flow.Subscription, Flow.Processor
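To make the contract concrete, a hedged sketch of a Java 9 Flow subscriber that requests one element at a time (the class and its name are mine, not from the deck):

import java.util.concurrent.Flow;

// A Subscriber that pulls one element at a time: the publisher may
// never emit more than has been request()-ed. That is backpressure.
class OneByOne<T> implements Flow.Subscriber<T> {
    private Flow.Subscription subscription;

    @Override public void onSubscribe(Flow.Subscription s) {
        this.subscription = s;
        s.request(1);                    // ask for the first element
    }
    @Override public void onNext(T item) {
        System.out.println("got " + item);
        subscription.request(1);         // ask for the next one
    }
    @Override public void onError(Throwable t) { t.printStackTrace(); }
    @Override public void onComplete() { System.out.println("done"); }
}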
28. Nicola Ferraro - JBCNConf Barcelona 2017
Agenda
● What does it mean to be reactive?
○ Reactive Programming
○ Reactive Systems
● Application Integration
○ Enterprise Integration Patterns
○ Apache Camel
● Demo
● Integration in Reactive Systems
○ Patterns
○ Future Challenges
29. Nicola Ferraro - JBCNConf Barcelona 2017
Demo
Use camel-reactive-streams to exchange data with a reactive library (Vert.x → rx-java2 → Camel).
Use camel-netty-http to connect to a Spring Boot 2 (Spring 5) WebFlux service.
Use camel-grpc to forward a stream to a remote service and get the response stream back.
● https://github.com/nicolaferraro/reactive-demo (note: requires camel 2.20-snapshot)
30. Nicola Ferraro - JBCNConf Barcelona 2017
Demo: considerations
I’ve used the generic camel-reactive-streams component, but there’s also a specific connector for Vert.x (it connects directly to the EventBus):

// Using the camel-vertx component
from("vertx:raw-points")
    .to("...")
    .to("vertx:enhanced-points");
31. Nicola Ferraro - JBCNConf Barcelona 2017
Agenda
● What does it mean to be reactive?
○ Reactive Programming
○ Reactive Systems
● Application Integration
○ Enterprise Integration Patterns
○ Apache Camel
● Demo
● Integration in Reactive Systems
○ Patterns
○ Future Challenges
32. Nicola Ferraro - JBCNConf Barcelona 2017
Backpressure?
(Diagram: water pipes as an analogy. In the normal scenario, events flow through an operator or a boundary between reactive streams. When too much water (too many events) reaches a slow operator, its buffer fills up and pressure builds in the pipes, just like with water: backpressure.)
“Remember, you can’t put too much water in a nuclear reactor”, Saturday Night Live, 1984
33. Nicola Ferraro - JBCNConf Barcelona 2017
Backpressure in Reactive Streams
(Sequence diagram, Subscriber ↔ Publisher: the Subscriber calls subscribe(); the Publisher answers with onSubscribe(sub). The Subscriber calls request(1) and receives onNext(“m1”); after request(2) it receives onNext(“m2”) and onNext(“m3”). The Publisher never emits more than what has been requested.)
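The same handshake can be reproduced with the JDK’s SubmissionPublisher and the OneByOne subscriber sketched earlier (a hedged illustration):

import java.util.concurrent.SubmissionPublisher;

public class HandshakeDemo {
    public static void main(String[] args) throws InterruptedException {
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        publisher.subscribe(new OneByOne<>()); // triggers onSubscribe + request(1)

        // submit() honors demand: it blocks when a subscriber's buffer is full.
        publisher.submit("m1");
        publisher.submit("m2");
        publisher.submit("m3");

        publisher.close();                     // eventually delivers onComplete
        Thread.sleep(500);                     // let the asynchronous delivery finish
    }
}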
34. Nicola Ferraro - JBCNConf Barcelona 2017
Limited Resources: microservices - no backpressure
(Diagram: Microservices 1, 2 and 3 all push events to Microservice 4, which can’t process all events: timeouts, HTTP 503.)
In req/resp mode, you can do circuit breaking.
In in-only (streaming) mode, you will retry (increasing the load).
35. Nicola Ferraro - JBCNConf Barcelona 2017
Limited Resources: microservices with backpressure
(Diagram: backpressure propagates from Microservice 4 back to Microservices 1, 2 and 3, so you can buffer at the source.)
Flow control at system level: less responsive, but you can handle peaks.
Later, you can scale out “Microservice 4” to make the system more responsive.
36. Nicola Ferraro - JBCNConf Barcelona 2017
End-to-End Backpressure: In-Only Stream
(Diagram: Service 1 streams onNext/onError/onComplete signals to Service 2, each side written in Java / JS / Python, with the stream backpressured across the network. How???)
RSocket (http://rsocket.io/): an application protocol providing reactive streams semantics, designed for efficiency at low level.
Other solutions??
37. Nicola Ferraro - JBCNConf Barcelona 2017
End-to-End Backpressure: In-Only Stream
This backpressure stuff is not new…
38. Nicola Ferraro - JBCNConf Barcelona 2017
Back to TCP
TCP implements a sliding window protocol for flow control!
(Diagram: two services, Java / JS / Python, exchange data over TCP/IP; each ack carries a window size.)
You cannot send more data than requested to a TCP recipient: TCP is backpressure aware!
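As a hedged illustration of what that means for application code (host and port are placeholders): a plain blocking socket write stalls once the peer’s receive window and the local send buffer are full.

import java.io.OutputStream;
import java.net.Socket;

public class TcpBackpressureDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.org", 9999)) { // placeholder peer
            OutputStream out = socket.getOutputStream();
            byte[] chunk = new byte[64 * 1024];
            while (true) {
                // If the receiver stops reading, its advertised window shrinks
                // to zero and this write eventually blocks: TCP flow control.
                out.write(chunk);
            }
        }
    }
}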
39. Nicola Ferraro - JBCNConf Barcelona 2017
Backpressure at Application Level: In-Only Stream
(Diagram: App 1 → TCP → TCP → App 2. Local backpressure on the sender: do not write too much. Local backpressure on the receiver: do not read if you can’t process. TCP’s sliding window flow control then carries it across the network: application-level, end-to-end backpressure.)
Can work also with higher level protocols:
● HTTP
● Websocket
● SSE
● gRPC
40. Nicola Ferraro - JBCNConf Barcelona 2017
Backpressure: Stream → Camel
(Diagram: Reactive Streams / Rx-Java → Camel → Camel Producer → External Service, with backpressure flowing back to the stream.)
What about request/response (In-Out) messaging?

// No more than 3 requests in 10 seconds
from("reactive-streams:events")
    .throttle(3).timePeriodMillis(10000)
    .to("http://dinosaurs.io/api/echo");

// No more than 20 concurrent requests
from("reactive-streams:events?maxInflightExchanges=20")
    .to("http://dinosaurs.io/api/echo");
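On the application side, the camel-reactive-streams component exposes these named streams through a small API; a hedged sketch (Camel 2.19+; the publishing side is shown with Reactor just as an example):

import org.apache.camel.CamelContext;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreams;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreamsService;
import org.reactivestreams.Subscriber;
import reactor.core.publisher.Flux;

public class StreamBridge {
    public static void feed(CamelContext context) {
        CamelReactiveStreamsService rs = CamelReactiveStreams.get(context);

        // The "events" stream consumed by the routes above, seen as a Subscriber.
        Subscriber<String> events = rs.streamSubscriber("events", String.class);

        // Any Reactive Streams source can publish into it; here, Reactor.
        Flux.just("e1", "e2", "e3").subscribe(events);
    }
}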
41. Nicola Ferraro - JBCNConf Barcelona 2017
Backpressure: Camel → Stream
(Diagram: Message Source → Camel Consumer → Reactive Streams, e.g. a Reactor-Core Flux<Message>, with backpressure flowing back into Camel.)
What happens when backpressure slows down Camel?
Camel will pause the consumer in case of backpressure:

from("jms:events")
    .routePolicy(maxExchangesPolicy)
    .to("reactive-streams:events");
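The maxExchangesPolicy above is a route policy; a hedged sketch of one way to build it, using Camel 2.x’s stock ThrottlingInflightRoutePolicy:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.ThrottlingInflightRoutePolicy;

public class PausingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Suspends the JMS consumer when too many exchanges are in flight,
        // and resumes it once the downstream stream catches up.
        ThrottlingInflightRoutePolicy maxExchangesPolicy = new ThrottlingInflightRoutePolicy();
        maxExchangesPolicy.setMaxInflightExchanges(20);

        from("jms:events")
            .routePolicy(maxExchangesPolicy)
            .to("reactive-streams:events");
    }
}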
42. Nicola Ferraro - JBCNConf Barcelona 2017
Adding Elasticity: Load Balancer
(Diagram: streams or RPC traffic balanced across canvas1 and canvas2.)

Camel Load Balancer:
from("...")
    .loadBalance().sticky(canvasIdExpr())
    .to("endpoint1", "endpoint2")

Supports:
● Round robin, random, custom
● Failover
● Mixing with ServiceCall EIP (location transparency + load balancing)
Works with any protocol: HTTP, TCP, GRPC, ...
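For instance, a hedged failover variant (endpoints are placeholders):

import org.apache.camel.builder.RouteBuilder;

public class FailoverRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            // Try the first endpoint; on exception, fail over to the next one.
            .loadBalance().failover()
                .to("http://primary/api", "http://backup/api");
    }
}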
43. Nicola Ferraro - JBCNConf Barcelona 2017
Adding (a bit of) Location Transparency
(Diagram: service-1 calls service-2 by name; a Service Registry, e.g. Consul, Etcd, Kubernetes or Ribbon, resolves the name to a concrete address such as 192.168.0.22.)

Camel ServiceCall EIP:
from("...")
    .serviceCall("service-2")
    .to("...")
44. Nicola Ferraro - JBCNConf Barcelona 2017
Adding Location Transparency and Resiliency
(Diagram: Reactive Streams / Rx-Java → Camel Producer → Messaging Broker → Camel Consumer, with backpressure on both Camel legs.)
Messaging broker (your choice):
● JMS
● Kafka (anti-backpressure)
● AMQP
● MQTT
Allows all kinds of messaging patterns:
● P2P In-Only
● P2P In-Out
● Pub/Sub
Send messages to:
● Queues
● Topics
If you have a fast-enough messaging broker, you don’t have to care a lot about backpressure when writing (anti-backpressure).
45. Nicola Ferraro - JBCNConf Barcelona 2017
@ni_ferraro
That’s all folks!