Rest.li is an open source REST framework for building scalable REST architectures using asynchronous, non-blocking I/O. Asynchrony runs through the whole stack: R2 (network communication), D2 (dynamic discovery and load balancing), and the Rest.li layer itself, none of which block on I/O. Rest.li provides both callback-based and ParSeq-based approaches for building asynchronous servers and clients, so application code can avoid blocking and improve throughput. ParSeq expresses complex asynchronous workflows through tasks and promises.
3. What is this Rest.li thing you speak about?
“Rest.li is an open source REST framework for building robust, scalable RESTful architectures using type-safe bindings and asynchronous, non-blocking I/O.”
4. The Stack
Rest.li – Data and REST API
D2 – Dynamic Discovery and load balancing
R2 – Network communication
9. How is R2 async and non-blocking?
• Netty-based async client
• Jetty-based server – a configuration change (link) is needed to run the server in async mode
10. How is D2 async and non-blocking?
• All communication with ZooKeeper uses the asynchronous APIs
• ZooKeeper pushes data to clients in case of updates
11. How is Rest.li async and non-blocking?
• Does not handle I/O
• I/O is taken care of by R2
• R2 is async and non-blocking!
• Uses ParSeq when interacting with and delegating to application code.
13. Async Server Implementations
• The Rest.li framework is async and non-blocking under the hood.
• As a result, any blocking work in your method implementation can hurt application throughput, because threads that Rest.li needs are held up by your application (if you are using Rest.li in async mode).
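To make that point concrete, here is a rough sketch of the kind of blocking implementation the bullet warns about. This is hypothetical code, not from the deck; it mirrors the ZooKeeper-backed resource shown on the next slides, but uses ZooKeeper's synchronous getData call instead of the async API.

@RestMethod.Get
public Greeting get(final Long id) throws Exception {
  String path = "/data/" + id;
  // Synchronous ZooKeeper read: the request thread is parked here until
  // ZooKeeper responds, so Rest.li cannot reuse it to serve other requests.
  byte[] b = _zkClient.getData(path, false, new Stat());
  if (b.length == 0) {
    throw new RestLiServiceException(HttpStatus.S_404_NOT_FOUND);
  }
  return buildGreeting(b);
}

The async versions that follow hand the same work to a callback or a ParSeq Task instead, so the thread goes back to Rest.li while ZooKeeper does its work.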
15. Async Server – Callback
@RestMethod.Get
public void get(final Long id, @CallbackParam final Callback<Greeting> callback) {
  String path = "/data/" + id;
  _zkClient.getData(path, false, new DataCallback() {
    public void processResult(int i, String s, Object o, byte[] b, Stat st) {
      if (b.length == 0) {
        callback.onError(new RestLiServiceException(HttpStatus.S_404_NOT_FOUND));
      }
      else {
        callback.onSuccess(buildGreeting(b));
      }
    }
  }, null);
}
16. Async Server – Callback Signature
@RestMethod.Get
public void get(final Long id, @CallbackParam final Callback<Greeting> callback)

com.linkedin.common.Callback
17. Async Server – Filling out the callback
_zkClient.getData(path, false, new DataCallback() {
  public void processResult(int i, String s, Object o, byte[] b, Stat st) {
    if (b.length == 0) {
      callback.onError(new RestLiServiceException(HttpStatus.S_404_NOT_FOUND));
    }
    else {
      callback.onSuccess(buildGreeting(b));
    }
  }
}, null);
18. Async Server – Callback
@RestMethod.Get
public void get(final Long id, @CallbackParam final Callback<Greeting> callback) {
  String path = "/data/" + id;
  _zkClient.getData(path, false, new DataCallback() {
    public void processResult(int i, String s, Object o, byte[] b, Stat st) {
      if (b.length == 0) {
        callback.onError(new RestLiServiceException(HttpStatus.S_404_NOT_FOUND));
      }
      else {
        callback.onSuccess(buildGreeting(b));
      }
    }
  }, null);
}
20. Key ParSeq Concepts
• Promise – a fully async Java Future + listener mechanism.
• Task – the basic unit of work, similar to a Java Callable: Task<T>
• Engine – runs Tasks.
• Context – used by a Task to run sub-Tasks.
• Composition – par and seq (link)
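Putting those pieces together, here is a minimal sketch of par/seq composition in the style of the older Tasks API used throughout this deck. The resource, the fetchUserTask/fetchCompanyTask helpers, and the User/Company/Profile types are hypothetical, not from the talk.

@RestMethod.Get
public Task<Profile> get(final Long id) {
  final Task<User> userTask = fetchUserTask(id);          // hypothetical async fetch, returns a Task
  final Task<Company> companyTask = fetchCompanyTask(id); // hypothetical async fetch, returns a Task
  final Task<Profile> profileTask = Tasks.callable("buildProfile", new Callable<Profile>() {
    @Override
    public Profile call() throws Exception {
      // Safe to call get() here: the seq below guarantees both fetches completed first.
      return buildProfile(userTask.get(), companyTask.get());
    }
  });
  // par runs the two fetches in parallel; seq runs the combine step after the par step.
  return Tasks.seq(Tasks.par(userTask, companyTask), profileTask);
}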
21. Async Server – ParSeq
@RestMethod.Get
public Task<Greeting> get(final Long id) {
  final Task<FileData> fileDataTask = buildFileDataTask();
  final Task<Greeting> mainTask = Tasks.callable("main", new Callable<Greeting>() {
    @Override
    public Greeting call() throws Exception {
      FileData fileData = fileDataTask.get();
      return buildGreetingFromFileData(id, fileData);
    }
  });
  return Tasks.seq(fileDataTask, mainTask);
}
22. Async Server – ParSeq signature
@RestMethod.Get
public Task<Greeting> get(final Long id)
23. Async Server – ParSeq body
final Task<FileData> fileDataTask = buildFileDataTask();
24. Async Server – ParSeq body
final Task<Greeting> mainTask = Tasks.callable("main", new Callable<Greeting>() {
  @Override
  public Greeting call() throws Exception {
    FileData fileData = fileDataTask.get();
    return buildGreetingFromFileData(id, fileData);
  }
});
25. Async Server – assembling the Tasks
return Tasks.seq(fileDataTask, mainTask);

fileDataTask → mainTask
26. Async Server – ParSeq
@RestMethod.Get
public Task<Greeting> get(final Long id) {
  final Task<FileData> fileDataTask = buildFileDataTask();
  final Task<Greeting> mainTask = Tasks.callable("main", new Callable<Greeting>() {
    @Override
    public Greeting call() throws Exception {
      FileData fileData = fileDataTask.get();
      return buildGreetingFromFileData(id, fileData);
    }
  });
  return Tasks.seq(fileDataTask, mainTask);
}
27. Async Server – ParSeq
• Can use this approach to build complex call graphs and return the final Task (more on this later).
• Can use ParSeq tracing
• No callback hell!
29. Async Rest.li requests
• RestClient is async and non-blocking under the hood – it uses ParSeq and Netty.
• Applications can still block the thread if they block on the results of the Rest.li call!

Response<Fortune> response = _restClient.sendRequest(BUILDER.get().id(1L).build()).get();
Fortune fortune = response.getEntity();

Blocking on the result of a Future
30. Async Rest.li requests
Two options:
1. Callback based API
2. Use a ParSeqRestClient – this is a wrapper around a RestClient that returns ParSeq Tasks and Promises.
31. Async Rest.li requests – Callbacks
Callback<Response<Greeting>> cb = new Callback<Response<Greeting>>() {
  public void onSuccess(Response<Greeting> response) {
    // do something with the returned Greeting
  }
  public void onError(Throwable e) {
    // whoops
  }
};

Request<Greeting> getRequest = BUILDERS.get().id(1L).build();
_restClient.sendRequest(getRequest, new RequestContext(), cb);
32. Async Rest.li requests – defining a Callback
Callback<Response<Greeting>> callback = new Callback<Response<Greeting>>() {
  public void onSuccess(Response<Greeting> response) {
    // do something with the returned Greeting
  }
  public void onError(Throwable e) {
    // whoops
  }
};
33. Async Rest.li requests – sending the request
Request<Greeting> getRequest = BUILDERS.get().id(1L).build();
_restClient.sendRequest(getRequest, new RequestContext(), callback);
34. Async Rest.li requests – using ParSeq
• The Callback-based API is good for simple use cases (one or two requests and you are done).
• But it is hard to express more complicated call flows using the Callback API.
40. Conclusion
• Rest.li is async and non-blocking under the hood
• You can use Callbacks and ParSeq Tasks and Promises to write async Rest.li clients and servers
41. Links and references
• Rest.li user guide: https://github.com/linkedin/rest.li/wiki/Rest.li-User-Guide
• Asynchronous servers and clients in Rest.li: https://github.com/linkedin/rest.li/wiki/Asynchronous-Servers-and-Clients-in-Rest.li
• ParSeq: https://github.com/linkedin/parseq
R2 – RESTful abstraction built on Netty and Jetty.
D2 – Dynamic discovery and client side load balancing using ZooKeeper.
Rest.li – Data layer plus the layer that exposes the RESTful API that applications use to build RESTful services.
Async I/O basically means that we free up our thread of computation to do other things while we are waiting for the I/O to complete. In other words we make an I/O request and we don’t block waiting on the result of the I/O to appear.
Let’s look at one thread on the server.
Assume both request 1 and request 2 arrive at the same time.
In each request the thread does some work, then makes an I/O request of some sort, and then uses the values from the I/O response to fulfill the request.
In the synchronous model we can’t start processing request 2 until request 1 is done because the thread is blocked on I/O. In the gray portions our server is essentially idle.
In the async model we can start processing request 2 instead of blocking on the I/O to complete.
For the rest of the talk we will assume we are running the R2 server in async mode.
Jetty configuration is needed to run Rest.li in async mode.
If you are running Rest.li in a thread-per-request model, async programming will not make much of a difference, since the thread cannot be reused to serve new requests.
ParSeq is a framework that makes it easier to write asynchronous code in Java. It was created by LinkedIn and is an open source project.
In this example we are fetching data from ZooKeeper using the async ZooKeeper API and then converting that data into a Greeting object to return.
Need to explicitly specify the callback in the method signature.
Notice the void return type. The next slide will explain how a response is sent back to the client.
The callback API exposes two methods – onSuccess and onError.
In this example we invoke the onSuccess method if the data we get from ZooKeeper has a length > 0.
We invoke the onError method otherwise.
Once either onError or onSuccess has been invoked, the framework returns a response back to the client.
ParSeq is a library that makes it easy to write complex async task flows in Java. It was created by LinkedIn and is an open source project.
The difference between a Task and a Callable is that with a Task you can set the result asynchronously.
In this example we read data from the disk asynchronously and use that to build a Greeting object.
buildFileDataTask (implementation not shown here) reads some file on disk using async I/O and returns a Task<FileData>, where FileData (implementation not shown here) is some abstraction for the data being read. We use FileData to build a Greeting to return to the client.
Return ParSeq tasks from methods. These are run by Rest.li using the internal ParSeq engine.
Task<T>: if you run this Task using a ParSeq engine, it will eventually return you a T.
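As an aside to that note, here is a rough standalone sketch of running a Task directly on an Engine (outside Rest.li, which runs your tasks on its internal engine for you). The EngineBuilder setters and engine.shutdown() call shown are assumptions based on older ParSeq examples and may differ between ParSeq versions.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import com.linkedin.parseq.Engine;
import com.linkedin.parseq.EngineBuilder;
import com.linkedin.parseq.Task;
import com.linkedin.parseq.Tasks;

public class ParSeqDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService taskExecutor = Executors.newFixedThreadPool(4);
    ScheduledExecutorService timerExecutor = Executors.newSingleThreadScheduledExecutor();
    Engine engine = new EngineBuilder()
        .setTaskExecutor(taskExecutor)     // runs the tasks
        .setTimerScheduler(timerExecutor)  // used for timeouts and delays
        .build();

    Task<String> hello = Tasks.callable("hello", new Callable<String>() {
      @Override
      public String call() {
        return "hello world";
      }
    });

    engine.run(hello);               // non-blocking: schedules the task on the engine
    hello.await();                   // block only for the sake of this demo
    System.out.println(hello.get()); // the T that the Task<T> eventually produces

    engine.shutdown();
    taskExecutor.shutdown();
    timerExecutor.shutdown();
  }
}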
We first read a file on disk using some async I/O abstraction that gives us back a Task for the data read.
The basic idea to use ParSeq for Rest.li async method implementations is to return Tasks or Promises.
We want to transform the FileData into a Greeting to return to the user. We define a new Task, called mainTask in the example above, to do this.
Within the call method of mainTask we obtain the FileData from the fileDataTask. This is a non-blocking call because of the way we assemble our final call graph (more on this in a bit). Finally, we build a Greeting in the buildGreetingFromFileData (implementation not shown here) method.
This is what will be returned from the overall method.
Tasks.seq is a way to build a call graph between Tasks in ParSeq. In this example we are saying that we want mainTask to run after fileDataTask. This is because mainTask depends on fileDataTask. Once we express our call graph dependency in this way, when we call fileDataTask.get() within the mainTask it will either return right away or throw an exception. This is because ParSeq will make sure that mainTask only gets run AFTER fileDataTask has completed.
Tasks.seq() returns a new Task and this Task will get run by the underlying ParSeq engine within Rest.li.
This call is blocking because sendRequest returns a Future, and calling get() on the Future waits until the operation has been completed.
This is very similar to the callback-based approach found in Node.js and JavaScript.
The first thing we need to do is define the callback that will be executed when the server sends us a response, or there is an error.
Then we use the callback-based sendRequest API to send the request. Rest.li invokes the callback for you upon getting a result.
Each circle is a response from making a Rest.li request. Each layer represents requests being made in parallel. The yellow circle is the final result.
b1 and b2 don’t depend on each other so theoretically b2 could have been run in the same level as a1, a2, or a3.
Use the ParSeqRestClient to create three response tasks.
Combine the results from a1-a3 and use them to make another request task, b1.
Similarly, combine the results of b1 and b2 and create another request task, c1.
Use the Tasks.seq() and Tasks.par() methods to build the call graph, as sketched below. Within each level we want each Task to run in parallel, which is why we use Tasks.par(a1ResponseTask, a2ResponseTask, a3ResponseTask) and Tasks.par(b1ResponseTask, b2ResponseTask). Finally, we want each level to run sequentially, one after the other, which is why we wrap everything in a Tasks.seq.
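Assembled in code, the call graph described in these notes looks roughly like this. This is only a sketch: the a1-a3, b1, b2, and c1 response tasks are assumed to have been created via a ParSeqRestClient (one Task per Rest.li request), and how each request is built from the previous level's results is elided.

// Level 1: a1-a3 run in parallel; level 2: b1 and b2 run in parallel;
// level 3: c1 produces the final result. seq runs the levels one after another.
return Tasks.seq(
    Tasks.par(a1ResponseTask, a2ResponseTask, a3ResponseTask),
    Tasks.par(b1ResponseTask, b2ResponseTask),
    c1ResponseTask);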