The word "Reactive" can be confusing. As the founder of the Reactive Amsterdam meetup, I can tell you there are two main topics here: Functional Reactive Programming (here with reference to Android) and "Reactive" in the sense of the Reactive Manifesto.
Python is popular amongst data scientists and engineers for data processing tasks. The big data ecosystem, however, has traditionally been rather JVM-centric, and Java (or Scala) is often the only viable option for implementing data processing pipelines. That sometimes poses an adoption barrier for organizations that have already invested in other language ecosystems. The Apache Beam project provides a unified programming model for data processing, and its ongoing portability effort aims to enable multiple language SDKs (currently Java, Python, and Go) on a common set of runners. The combination of Python streaming on the Apache Flink runner is one example. Let’s take a look at how the Flink runner translates the Beam model into the native DataStream (or DataSet) API, how the runner is changing to support portable pipelines, how Python user code execution is coordinated with gRPC-based services, and how a sample pipeline runs on Flink.
The magic behind your Lyft ride prices: A case study on machine learning and ... (Karthik Murugesan)
Rakesh Kumar and Thomas Weise explore how Lyft dynamically prices its rides with a combination of various data sources, ML models, and streaming infrastructure for low latency, reliability, and scalability—allowing the pricing system to be more adaptable to real-world changes.
OSDC 2014: Jordan Sissel - Find Happiness in your Logs (NETWAYS)
Got logs? With so much technology powering your business, you need tools to help you identify problems and analyze past behavior. Apache 2.0-licensed Elasticsearch ELK stack is here to help you process, store, and visualize any kind of logging data, in real time, from any source imaginable!
Log management seems so boring. Log rotation, retention policy, grep, yuck! What are your servers doing? Did last night's upgrade break anything? How are your users interacting with your products? Why did the site go down last weekend?
Get ready to turn your log pains into awesome visual insights and more!
BAM! Elasticsearch ELK! ELK stands for Elasticsearch, Logstash, and Kibana. Each of these three are lovely, open source projects that, together, give you and your business log management superpowers.
This talk will primarily be done in three parts: open source and community, technology, and use cases.
* The first part will introduce each project and its success as open source software, most notably through supportive and open communities.
* The second part will discuss each project and the problems it solves.
* The third (and most exciting!) part will highlight a variety of use cases and problems that real humans are using Elasticsearch ELK to solve. Live demos of some use cases will be provided.
Attendees will leave the presentation totally full of excitement about this toolset and bursting with fresh ideas about how to tackle their sour logging problems.
Apache Flink 101 - the rise of stream processing and beyond (Bowen Li)
Apache Flink is the most popular and widely adopted stream processing framework, powering real-time stream event computations at extremely large scale at companies like Uber, Lyft, AWS, Alibaba, Pinterest, Splunk, and Yelp.
In this talk, we will go over use cases and basic (yet hard to achieve!) requirements of stream processing, and how Flink fills the gaps and stands out with some of its unique core building blocks, like pipelined execution, native event time support, state support, and fault tolerance.
We will also take a look at how Flink is going beyond stream processing into areas like unified data processing, enterprise integration, AI/machine learning (especially online ML), and serverless computation, and the distinct value Flink brings to each.
SPEAKER: Bowen Li
SPEAKER BIO: Bowen is a committer of Apache Flink, senior engineer at Alibaba, and host of Seattle Flink Meetup.
What's new in Confluent Platform 5.4 online talk (Confluent)
To stay informed about the latest features in Confluent Platform 5.4, join Martijn Kieboom, Solutions Engineer at Confluent, for ‘What’s New in Confluent 5.4?’ on February 12 at 11 am GMT / 12 noon CET. Martijn will talk through the new features, including:
Role-Based Access Control and how it enables highly granular control of permissions and platform access
Structured Audit Logs and how they enable the capture of authorization logs
How Multi-Region Clusters deliver asynchronous replication at the topic level, allowing companies to run a single Kafka Cluster across multiple data-centres
Schema validation's role in enabling businesses that run Kafka at scale to deliver data compatibility across platforms
Reactive Programming In Java Using: Project Reactor (Knoldus Inc.)
The session provides details about reactive programming with Reactive Streams. The purpose of Reactive Streams is to provide a standard for asynchronous stream processing with non-blocking backpressure.
This concept is explained using Project Reactor.
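The Reactive Streams interfaces were adopted into the JDK as `java.util.concurrent.Flow`, so the non-blocking backpressure idea can be sketched with the standard library alone. This is a minimal illustration, not Project Reactor code (Reactor implements the same publisher/subscriber contract with a far richer operator set): the subscriber signals demand with `request(1)` and only asks for the next item once it has processed the current one.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

    // Collect the integers 1..n through a Flow pipeline, requesting one item at a time.
    static List<Integer> collect(int n) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        Flow.Subscriber<Integer> subscriber = new Flow.Subscriber<>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1); // signal demand for exactly one item: this is the backpressure
            }
            @Override public void onNext(Integer item) {
                received.add(item);
                subscription.request(1); // only ask for the next item once we are ready
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        };

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            for (int i = 1; i <= n; i++) publisher.submit(i); // submit() honours subscriber demand
        } // close() signals onComplete after pending items are delivered

        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect(5)); // prints [1, 2, 3, 4, 5]
    }
}
```

The publisher never overwhelms the subscriber because delivery is driven by the subscriber's own `request` calls, which is exactly the contract the Reactive Streams specification standardizes.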
When Apache Spark Meets TiDB with Xiaoyu Ma (Databricks)
During the past 10 years, big-data storage layers have mainly focused on analytical use cases. For analytical workloads, users usually offload data onto a Hadoop cluster and perform queries on HDFS files. People struggle with modifications on append-only storage and with maintaining fragile ETL pipelines.
On the other hand, although Spark SQL has proven to be an effective parallel query processing engine, some tricks common in traditional databases are not available due to the characteristics of the underlying storage. TiSpark sits directly on top of a distributed database's (TiDB's) storage engine, expands Spark SQL's planning with its own extensions, and utilizes unique features of the database storage engine to achieve functionality not possible for Spark SQL on HDFS. With TiSpark, users are able to perform queries directly on changing/fresh data in real time.
The takeaways from this talk are twofold:
— How to integrate Spark SQL with a distributed database engine and the benefit of it
— How to leverage Spark SQL’s experimental methods to extend its capabilities.
Apache Flink(tm) - A Next-Generation Stream Processor (Aljoscha Krettek)
This talk first gives a brief overview of the current state of streaming data analysis. It then continues with a short introduction to the Apache Flink system for real-time data analysis, before we dive deeper into some of the interesting properties that distinguish Flink from the other players in this space. Along the way, we will look at example use cases that either come directly from users or are based on our experience with users. Specific features we will examine include support for splitting events into individual sessions based on the time an event occurred (event time), determining points in time at which to save the state of a streaming program for later restarts, efficient handling of very large stateful streaming computations, and accessibility of state from the outside.
Improving Mobile Payments With Real time Spark (datamantra)
A talk about a real-world Spark Streaming implementation for improving the mobile payments experience. Presented at the Target data meetup in Bangalore by Madhukara Phatak on 22/08/2015.
In the last several months, MLflow has introduced significant platform enhancements that simplify machine learning lifecycle management. Expanded autologging capabilities, including a new integration with scikit-learn, have streamlined the instrumentation and experimentation process in MLflow Tracking. Additionally, schema management functionality has been incorporated into MLflow Models, enabling users to seamlessly inspect and control model inference APIs for batch and real-time scoring. In this session, we will explore these new features. We will share MLflow’s development roadmap, providing an overview of near-term advancements in the platform.
Bighead: Airbnb’s End-to-End Machine Learning Platform with Krishna Puttaswa... (Databricks)
Airbnb has a wide variety of ML problems ranging from models on traditional structured data to models built on unstructured data such as user reviews, messages and listing images. The ability to build, iterate on, and maintain healthy machine learning models is critical to Airbnb’s success. Many ML Platforms cover data collection, feature engineering, training, deploying, productionalization, and monitoring but few, if any, do all of the above seamlessly.
Bighead aims to tie together various open source and in-house projects to remove incidental complexity from ML workflows. Bighead is built on Python and Spark and can be used in modular pieces as each ML problem presents unique challenges. Through standardization of the path to production, training environments and the methods for collecting and transforming data on Spark, each model is reproducible and iterable.
This talk covers the architecture, the problems that each individual component and the overall system aim to solve, and a vision for the future of machine learning infrastructure. Bighead is widely adopted at Airbnb, and we have a variety of models running in production on it. We have seen overall model development time go down from many months to days with Bighead. We plan to open source Bighead to allow the wider community to benefit from our work.
Using Kafka to integrate DWH and Cloud Based big data systems (Confluent)
Mic Hussey, Senior Systems Engineer, Confluent
Using Kafka to integrate DWH and Cloud Based big data systems
https://www.meetup.com/Stockholm-Apache-Kafka-Meetup-by-Confluent/events/268636234/
Stream processing still evolves and changes at a speed that can make it hard to keep up with the developments. Being at the forefront of stream processing technology, the evolution of Apache Flink has mirrored many of these developments and continues to do so.
We will take you on a journey through the major milestones of stream processing technology in past years, diving into the latest additions that Apache Flink and other communities have introduced to the stream processing landscape, such as Streaming SQL, time-versioned tables, cluster-library duality, and language portability.
We will take a sneak peek into our crystal ball and present what the Flink community is working on next.
Building Scalable Stateless Applications with RxJava (Rick Warren)
RxJava is a lightweight open-source library, originally from Netflix, that makes it easy to compose asynchronous data sources and operations. This presentation is a high-level intro to this library and how it can fit into your application.
Supercharged Java 8: with cyclops-react (John McClean)
An overview of the rationale behind cyclops-react and some of its features, including extended Java Collections, more powerful sequential and parallel streaming, pattern matching, and data types (such as Xor, cyclops-react's Either type, Maybe, and Eval).
Java 8 Stream API and RxJava Comparison (José Paumard)
The slides of my JavaOne talk: Java 8 Stream API and RxJava Comparison: Patterns and Performances.
The spliterators patterns can be found here: https://github.com/JosePaumard/jdk8-spliterators.
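The core contrast in that comparison is pull versus push: a Java 8 Stream is pulled through by its terminal operation on the caller's thread, while an RxJava Observable pushes items to its subscribers. A small sketch of the pull side, with the (assumed) RxJava analog shown only as a comment since RxJava is an external dependency:

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamVsRx {

    // Pull-based Stream pipeline: nothing runs until collect() starts pulling
    // elements through filter and map on the caller's thread.
    static List<Integer> squaresOfEvens(List<Integer> input) {
        return input.stream()
                    .filter(n -> n % 2 == 0)
                    .map(n -> n * n)
                    .collect(Collectors.toList());
    }

    // The equivalent RxJava pipeline would be push-based -- the source emits
    // items to subscribers, potentially on another scheduler:
    //   Observable.fromIterable(input).filter(n -> n % 2 == 0).map(n -> n * n)

    public static void main(String[] args) {
        System.out.println(squaresOfEvens(List.of(1, 2, 3, 4, 5, 6))); // prints [4, 16, 36]
    }
}
```

Same operators, opposite direction of control flow, which is why the two libraries end up with such different threading and backpressure stories.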
Reactive Programming is a relatively new kid on the block: concepts like functional, async and non-blocking have been around for a while but are only now going mainstream. The server side is a front runner, with seminal works as the Reactive Manifesto and many frameworks available. But how about the mobile front-end? In this talk we will experiment with mixing the principles of the Reactive Manifesto to the Android development environment.
For most of us, Reactive Android means using RxJava. In this presentation, I try to borrow a few ideas from the backend world and enrich the concept of Reactive in Android.
Introductory presentation for the Clash of Technologies: RxJS vs RxJava event organized by SoftServe @ betahouse (17.01.2015). Comparison document with questions & answers available here: https://docs.google.com/document/d/1VhuXJUcILsMSP4_6pCCXBP0X5lEVTsmLivKHcUkFvFY/edit#.
Spring 5 Webflux - Advances in Java 2018 (Trayan Iliev)
Brief introduction to distributed stream processing, reactive programming, and novelties in Spring 5, Spring Boot 2, and reactive Spring Data + programming examples in GitHub. More information will be provided during upcoming Spring 5 course: http://iproduct.org/en/courses/spring-mvc-rest/
Microservices with Spring 5 Webflux - jProfessionals (Trayan Iliev)
Spring 5 introduces a new functional and reactive programming model for building web applications and (micro-)services.
The session @jProfessionals dev conference demonstrates how to build REST microservices using Spring WebFlux and Spring Boot using code examples on GitHub. It includes:
- Introduction to reactive programming, Reactive Streams specification, and project Reactor (as WebFlux infrastructure);
- Comparison between annotation-based and functional reactive programming approaches for building REST services with WebFlux;
- Router, handler and filter functions;
- Using reactive repositories and reactive database access with Spring Data;
- Building end-to-end non-blocking reactive web services using Netty-based web runtime;
- Reactive WebClients and integration testing;
- Realtime event streaming to WebClients using JSON Streams, and to JS client using SSE.
Mary Grygleski and I gave a very successful workshop to 51 attendees in NYC on April 15th - here is the updated presentation.
https://www.linkedin.com/in/mary-grygleski/
https://www.linkedin.com/in/grant-steinfeld/
Reactive Programming on Android - RxAndroid - RxJava (Ali Muzaffar)
Introduction to RxJava for reactive programming and how to use RxAndroid to do reactive programming on Android.
There is a sample android app to go with the slides that has all the source shown in the project.
Reactive Card Magic: Understanding Spring WebFlux and Project Reactor (VMware Tanzu)
Spring Framework 5.0 and Spring Boot 2.0 contain groundbreaking technologies known as reactive streams, which enable applications to utilize computing resources efficiently.
In this session, James Weaver will discuss the reactive capabilities of Spring, including WebFlux, WebClient, Project Reactor, and functional reactive programming. The session will be centered around a fun demonstration application that illustrates reactive operations in the context of manipulating playing cards.
Presenter : James Weaver, Pivotal
Presentation from an Angular Sofia Meetup event that focuses on integration between state-of-the-art Angular, component libraries, and the supporting technologies necessary to build scalable and performant single-page apps. Topics include:
- Composing NGRX Reducers, Selectors and Middleware;
- Computing derived data using Reselect-style memoization with RxJS;
- NGRX Router integration;
- Normalization/denormalization and keeping data locally in IndexedDB;
- Processing Observable (hot) streams of async actions, and isolating the side effects using @Effect decorator with NGRX/RxJS reactive transforms;
- Integration of Material Design with third party component libraries like PrimeNG;
- more: lazy loading, AOT...
Sensor data is streamed in real time from an Arduino with accelerometers, 3D gyroscopes and compass, an ultrasound distance sensor, etc., using the UDP protocol. The data processing is done with alternative reactive Java implementations: callbacks, CompletableFutures, and the Spring 5 Reactor library. The 3D web visualization with Three.js is streamed using Server-Sent Events (SSE).
A video for the IoT demo is available @YouTube: https://www.youtube.com/watch?v=AB3AWAfcy9U
All source code of the demo is freely available @GitHub: https://github.com/iproduct/reactive-demos-iot
There are more reactive Java demos in the same repository - callbacks, CompletableFuture, realtime event streaming. Soon I'll add a description of how to build the device and upload the Arduino sketch, as well as describe the CompletableFuture and Reactor demos and the 3D web visualization part with Three.js. Please stay tuned :)
The Internet is asynchronous, people are asynchronous, the universe is asynchronous. They are now and they always will be. Writing applications which deal correctly with asynchronous data is difficult. Or at least it was. Microsoft open sourced ReactiveX in 2010 to make what used to be some of the hairiest kinds of coding almost easy.
The project was so well received that it has been ported to nearly every major programming language. Versions of ReactiveX exist for .NET, JavaScript, Java, Scala, Clojure, C++, Ruby, Python, Groovy, JRuby, Kotlin, and Swift. The project is open source and community maintained, with corporate backing from the likes of Microsoft and Netflix.
Microsoft created ReactiveX, then called Reactive Extensions, from the burnt-out remains of Project Volta. Project Volta's goal was to extend .NET to run both on the server and in the browser, with a compiler deciding which parts were best to put where. It was essentially the Meteor framework in 2007.
In this talk we will take a deep look at ReactiveX. We will use code samples to show how things are done before and after ReactiveX. The code will be in C# and JavaScript. We will see how ReactiveX makes our lives as developers easier and our code more reactive.
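The talk's examples are in C# and JavaScript, but the "before and after" shape is the same on the JVM. Here is a sketch in plain Java, using CompletableFuture as a stand-in for the composition that ReactiveX does with Observables: the `fetch...Async` helpers and their values are hypothetical, invented only to show how nested callbacks flatten into a pipeline.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncComposition {

    // "Before": callback style. Each async step takes a callback, nesting grows
    // with every step, and error handling must be repeated at each level.
    interface Callback<T> { void onResult(T value); }

    static void fetchUserIdAsync(Callback<Integer> cb) { cb.onResult(42); }          // hypothetical
    static void fetchNameAsync(int id, Callback<String> cb) { cb.onResult("user-" + id); }

    // "After": composition style. Steps chain as a flat pipeline; ReactiveX
    // composes Observables the same way, with many more operators.
    static CompletableFuture<String> fetchGreeting() {
        return CompletableFuture.supplyAsync(() -> 42)   // hypothetical id lookup
                .thenApply(id -> "user-" + id)           // derive the name
                .thenApply(name -> "hello, " + name);    // derive the greeting
    }

    public static void main(String[] args) {
        // Callback style: one level of nesting per async step.
        fetchUserIdAsync(id -> fetchNameAsync(id, name ->
                System.out.println("callback style: " + name)));

        // Composed style: read top to bottom, one place to handle errors.
        System.out.println(fetchGreeting().join()); // prints hello, user-42
    }
}
```

CompletableFuture composes a single eventual value; ReactiveX generalizes the same idea to whole streams of values, which is where the "hairiest kinds of coding" really get tamed.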
DevFest Belgium 2016.
Overview on some of the reactive frameworks for Android and Java (RxJava 1.x/2.x, Reactor, Akka, Agera). Examples, comparison and interoperability.
Slides for the talk I gave at the 2020 conference "Sofware Circus: Down The Rabbit Hole" . Attendees are given an overview of Deep Learning and a unique dataset to start experimenting. Code and images are available here: https://github.com/ticofab/deep-learning-with-scala
Slides of Maxim Burgerhout from RedHat ( @MaximBurgerhout ). This presentation was given at the Reactive Amsterdam meetup: https://www.meetup.com/Reactive-Amsterdam , in collaboration with GOTO Nights Amsterdam. Recording of the talk is here: https://www.youtube.com/watch?v=X2NFGHQzQok
Ten Frustrations From The Community Trenches (And How To Deal With Them) (Fabio Tiriticco)
As community managers dealing with people, we are all bound to deal with frustrations at some point. This talk goes over a few common ones and reveals a few tips to deal with them.
We all need friends and Akka just found Kubernetes (Fabio Tiriticco)
We all feel alone sometimes. Akka got along well with the VM crew ever since it was born, but new friends and fresh ideas are always necessary. Which is why lately Akka loves spending time with Kubernetes! Maybe the reason why they like each other so much is their sharing of core values such as transparent scalability and resilience.
How do these two technologies compare from a Reactive standpoint? Does one supersede the other? In fact, their powers can be combined to design distributed systems all the way from application code to cloud instance.
Cloud native Akka and Kubernetes: holy grail to elasticity (Fabio Tiriticco)
Akka is the most mature choice to implement the traits of the Reactive Manifesto, thanks to the Actor model. But we need to rely on some external infrastructure to automatically scale up or down our services. We found Docker & Kubernetes to be a perfect match for clustered Akka applications.
My personal highlights from the Reactive Summit 2017. I loved the conference from the beginning till the end and I shared some of that with my Reactive Amsterdam meetup. All content belongs to the respective speakers.
Beyond Fault Tolerance with Actor Programming (Fabio Tiriticco)
Actor Programming is a software building approach that lets you go beyond fault tolerance and achieve resilience: the capacity of a system to self-heal and spring back into a fresh shape. First I'll introduce the difference between Reactive Programming and Reactive Systems, and then we'll go over a couple of implementation examples using Scala and Akka.
The coupled GitHub repository with the code is here: https://github.com/ticofab/ActorDemo
** Video of this talk is here: https://youtu.be/MQGXrrhGUTw **
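The talk's examples use Scala and Akka; the essence of the actor model can be sketched in plain Java, though. An actor is private state plus a mailbox drained by a single logical thread, so the state is never touched concurrently and a supervisor could restart the whole unit on failure. This toy `CounterActor` is my own illustration, not Akka code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal actor: one worker thread drains a mailbox of messages, so the
// private state needs no locks. Akka layers supervision, location
// transparency, and clustering on top of this same core idea.
public class CounterActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final Thread worker;
    private int count = 0; // touched only by the worker thread

    public CounterActor() {
        worker = new Thread(() -> {
            try {
                String msg;
                while (!(msg = mailbox.take()).equals("stop")) { // process one message at a time
                    if (msg.equals("increment")) count++;
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();
    }

    // Fire-and-forget message send: callers never block on the actor's work.
    public void tell(String msg) { mailbox.add(msg); }

    // Shut the actor down and read its final state; join() makes the read safe.
    public int stopAndGetCount() throws InterruptedException {
        tell("stop");
        worker.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        for (int i = 0; i < 3; i++) actor.tell("increment");
        System.out.println(actor.stopAndGetCount()); // prints 3
    }
}
```

Because all mutation funnels through the mailbox, crashing and rebuilding the actor from scratch (what Akka supervisors do) cannot corrupt anyone else's state, which is what turns mere fault tolerance into resilience.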
The first talk of the Meetup on the 11th of April 2017, hosted by weeronline.nl in their Amsterdam offices.
Streams are everywhere! Akka Streams help us model streaming processes using a very descriptive DSL and optimising resource usage.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
10. RxJava goes hand in hand with Java 8's Lambdas
new Func1<String, Integer>() {
    @Override
    public Integer call(String s) {
        return s.length();
    }
}

(String s) -> {
    return s.length();
}

s -> s.length();

Retrolambda plugin for Android < N
11. RxJava: dealing with a stream of items
class Cat {
    ...
    public String getName()
    public Color getColor()
    public Picture fetchPicture()
    ...
}
12. RxJava: dealing with a stream of items
List<Cat> myCats;
Observable<Cat> obs = Observable.from(myCats);
obs.subscribe(cat -> Timber.d(cat.getName()));

// or, chained:
Observable.from(myCats)
    .subscribe(cat -> Timber.d(cat.getName()));
13. RxJava: work on the stream
Observable.from(myCats)
    .map(cat -> cat.getColor())
    .subscribe(color -> Timber.d(color));

// map: a function that takes a T and outputs an R
14. RxJava: operators to manipulate the stream
Observable.from(myCats)
    .distinct()
    .delay(2, TimeUnit.SECONDS)
    .filter(cat -> cat.getColor().isWhite())
    .map(cat -> cat.getName())
    .subscribe(name ->
        Timber.d("a unique, delayed white cat called " + name));
18. RxJava: other ways of creating Observables
// emits one item and completes
Observable.just
// emits one item after a certain delay
Observable.timer
// emits a series of integers at regular pace
Observable.interval
.. plus many others, and you can create your own
— BUT TRY NOT TO -
19. RxJava: a few disadvantages
• debugging is more difficult
• methods with side effects
• powerful abstraction - a lot of stuff happens under the hood
20. RxJava: demo with Retrofit & Meetup Streams
http://stream.meetup.com/2/rsvp
Meetup Streaming API
25. RxJava: demo with Meetup Streams
https://github.com/ticofab/android-meetup-streams
26.
                      2006                       2016
Servers               ~10                        the sky is the limit
Response time         seconds                    milliseconds
Offline maintenance   hours                      what?
Data amount           Gigabytes                  Petabytes
Machines              single core,               must work across async
                      little distribution        boundaries (location, threads)
Kind of data          request - response         streams (endless)
Reactive in the sense of the Reactive Manifesto
28. The Reactive traits
A reactive computer system must...            Trait
React to its users                            Responsive
React to failure and stay available           Resilient
React to varying load conditions              Elastic
Its components must react to inputs           Message-driven
29. Reactive traits: Responsive
A Reactive system responds to inputs and usage from its user:
• A human who visits a website
• A client which makes a request to a server
• A consumer which contacts a provider
• . . .
30. Reactive traits: Elastic
Scale on demand to react to varying load:
• Scale OUT and IN: use just the right amount
• Elasticity relies on distribution
• Ensure replication in case of failure
35. Reactive Pattern: Simple Component Pattern
“One component should do only one thing but do it in full. The aim is to
maximise cohesion and minimise coupling between components.”
36. Reactive Pattern: Simple Component Pattern
[Diagram: the Actor model. A Supervisor oversees Actor 1, Actor 2 and Actor 3.]
Each actor:
• contains state
• contains behaviour logic
• has a mailbox to receive and send messages
• has a supervisor
37. Example: Synchronous DB access
[Diagram: Activity → DB service → DB]
List<Cat> cats = new ArrayList<Cat>();
try {
    cats = dbService.getAllCats();
} catch (ExampleException e) {
    // TODO
}
38. Example: DB access with callback
[Diagram: Activity → DB service → DB]
dbService.getAllCats(new OnCompleted() {
    @Override
    public void onDataRetrieved(List<Cat> cats) {
        // TODO
    }
});
40. Example: Async Messaging with Actors - messages
// message to ask for cats
public class GetCats {
}

// message to send results back
public class Results {
    ArrayList<Cat> mCats;
    public Results(ArrayList<Cat> cats) {
        mCats = cats;
    }
}
41. Example: Async Messaging with Actors - behaviour
// db actor pseudocode
@Override
public void onReceive(Message message) {
    if (message is GetCats) {
        // retrieve cats from database
        c = contentResolver.query(...)
        ...
        // send cats to the Activity
        Results result = new Results(cats);
        activityAddress.tell(result)
    }
}
42. Reactive Patterns: Let it crash!
"Prefer a full component restart to complex internal failure handling".
• failure conditions WILL occur
• they might be rare and hard to reproduce
• it is much better to start clean than to try to recover
• …which might be expensive and difficult!
43. Reactive Patterns: Let it crash!
[Diagram: actor supervision example. Actor 1 throws a WhateverException and crashes; its Supervisor fixes or restarts it, while Actor 2 is unaffected.]
I work both as an Android and Scala engineer. I mostly use Scala for backend applications. Those are a few of the companies that I worked for over the past few years. Some time ago, I got so fascinated by this Reactive concept that …
The Reactive principles are cross-technology and cross-domain, so I basically started this meetup.
If we look up "Reactive" in the dictionary, it is an adjective meaning [READ]
Upon mentioning this word to people in technology, it somehow comes across as confusing. I think the root cause is that it can be used in (at least) two contexts:
The paradigm makes it easier to reason about problems in concurrent and parallelized applications.
At its core, functional programming encourages the use of pure functions: a function that returns the same value every time it is passed the same inputs, and that has no side effects.
Side effects are operations inside a function that act on things outside of it, such as modifying some state or inserting data into storage. Functions with no side effects are easier to reason about, especially in complex systems.
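To make the distinction concrete, here is a minimal plain-Java sketch (the class and method names are mine, not from the talk):

```java
import java.util.ArrayList;
import java.util.List;

public class Purity {
    // Pure: the same input always yields the same output, nothing outside is touched.
    static int length(String s) {
        return s.length();
    }

    static final List<String> log = new ArrayList<>();

    // Impure: besides returning a value, it mutates state outside the function.
    static int lengthAndLog(String s) {
        log.add(s); // side effect: external state changes
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(length("cat"));       // prints 3, every time
        System.out.println(lengthAndLog("cat")); // prints 3, but log has grown
        System.out.println(log.size());          // prints 1
    }
}
```

Calling `length` twice with the same argument changes nothing; calling `lengthAndLog` twice leaves the program in a different state each time.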
Immutability: mutable state can be dangerous and therefore state mutation should be confined as much as possible.
Higher-order functions: in functional languages, functions are first-class citizens, meaning a function can be passed as an argument to another function. This makes the code more composable.
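A tiny sketch of the idea in plain Java, using `java.util.function.Function` (the example itself is mine):

```java
import java.util.function.Function;

public class HigherOrder {
    // A higher-order function: it receives another function as an argument.
    static int applyTwice(Function<Integer, Integer> f, int x) {
        return f.apply(f.apply(x));
    }

    public static void main(String[] args) {
        Function<Integer, Integer> inc = n -> n + 1;
        System.out.println(applyTwice(inc, 3)); // prints 5
    }
}
```

Because `inc` is just a value, it can be stored, passed around, and composed with other functions.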
Func1 is an implementation of an interface, with a method "call" that takes a string as a parameter and returns its length. Using lambdas, we can rewrite it as… and because the body of the method is a single expression, we can simplify it further, like this.
In the coming examples we will use this simple object: a cat with a name, a color and whose picture we can fetch from some server online.
The Observable class in RxJava has a “from” static method that takes a collection as an input and will output its items one by one. This is called an “Observable”: a stream of data. In our case, a stream of cats.
In order to do something with these items, we need to subscribe through the subscribe function. The subscribe function takes as a parameter the implementation of an interface which takes a cat as an input and does something with it. And it’s usually chained.
You might be wondering "if I want to do something with each cat, I can do it in a classic for-loop". This is true for these simple examples, but the beauty of RxJava is that it allows you to manipulate the stream without breaking it, and later we'll see even more reasons.
In this example we use the map operator from RxJava. This operator lets you change the item that gets propagated through the stream. The map operator takes another function whose return type can be different than the input type. In this case we take a cat and return a color. The subscribe at the end now has a color as input.
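The same shape can be reproduced with plain `java.util.stream` if you want to run something locally without RxJava; this is only an analogy, not RxJava's API (the names here are mine):

```java
import java.util.List;
import java.util.stream.Collectors;

public class MapAnalogy {
    public static void main(String[] args) {
        List<String> catNames = List.of("Felix", "Tom");
        // Same idea as Observable.map: each item in the stream is transformed,
        // and the output type (Integer) can differ from the input type (String).
        List<Integer> lengths = catNames.stream()
                .map(String::length)
                .collect(Collectors.toList());
        System.out.println(lengths); // prints [5, 3]
    }
}
```

As with RxJava's map, the stream itself is never broken: the transformation is declared once and applied to every element flowing through.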
Because a stream is potentially endless, at some point you might want to unsubscribe. The subscribe function actually returns a Subscription. We can use this subscription later to unsubscribe from the stream.
In the beginning we mentioned that RxJava makes threading easy, so let’s look at how that works.
…and you can even create your own, which can be difficult to get right but it’s a lot of fun.
So, in my experience, using RxJava in Android leads to cleaner and more maintainable code plus it’s actually fun! But there are also disadvantages…
..if we want to use this in an Android app,
The flatMap operator that you see on the second line is similar to map: it takes an input (in this case the response body of the request), but instead of returning another type, it returns an observable of another type. Basically, the "events" function takes the responseBody as input, parses the chunks into separate events and emits them as they come. And this will be the observable that we subscribe to at the end.
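Again as a runnable analogy only, `java.util.stream` has the same one-to-many shape: each outer element expands into an inner stream, and flatMap merges them into one flat stream (the "response body" and "event" strings below are made up for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FlatMapAnalogy {
    public static void main(String[] args) {
        // Each "response body" expands into several parsed "events";
        // flatMap merges all those inner streams into one flat stream.
        List<String> bodies = List.of("rsvp1,rsvp2", "rsvp3");
        List<String> events = bodies.stream()
                .flatMap(body -> Stream.of(body.split(",")))
                .collect(Collectors.toList());
        System.out.println(events); // prints [rsvp1, rsvp2, rsvp3]
    }
}
```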
[END] this has come in response to the evolution of internet usage:
As the user base of web services grew, different frameworks and tools came along. The one we’ll be talking about here is the Reactive Manifesto, which defines a sort of design guideline to build reactive systems: it describes a few traits that a system should have in order to be reactive. What are the traits of reactive systems?
These traits are somehow interconnected. Now, if we look at these traits one by one…
The load of a system can change over days, weeks or parts of the year. Online stores have Christmas or Black Friday sales. Banks might have payday, and so on. Elasticity means that you are able to use "just the right amount" of resources for the task.
To ease the distribution of a system, its components must be loosely coupled.
It is defined as the ability of something to spring back into shape after a failure. Resilience means restoring functionality to FULL CAPACITY, not just crawling on. We want to heal fully. Get hit, survive and keep going: this is FAULT TOLERANCE. The problem is that this alone isn't a sustainable strategy. What we need is RESILIENCE. Resilience goes beyond fault tolerance.
One thing that we'll encounter when designing these systems, is that we'll concentrate on Messages. If only messages are exchanged between components, we have achieved separation and error containment. Focus shifts from class diagrams and interface definitions to message flow: the protocol that defines the conversation between components. Who says what at what time.
The idea is that each component doesn’t share its internal state. In the real world, we don't read in each other's brains. Not yet at least. We use communication.
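A minimal sketch of that message-driven style in plain Java, using a `BlockingQueue` as the component's mailbox (this is not Akka, just an illustration of "share messages, not state"):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MailboxSketch {
    public static void main(String[] args) throws InterruptedException {
        // The only things the two components share are the message channels.
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> replies = new ArrayBlockingQueue<>(10);

        // A "DB component" that reacts to messages instead of exposing its state.
        Thread dbComponent = new Thread(() -> {
            try {
                String msg = mailbox.take();
                if (msg.equals("GetCats")) {
                    replies.put("Results: 2 cats"); // answer with a message, not shared state
                }
            } catch (InterruptedException ignored) { }
        });

        dbComponent.start();
        mailbox.put("GetCats");
        System.out.println(replies.take()); // prints Results: 2 cats
        dbComponent.join();
    }
}
```

Neither side ever reads the other's internal state; the entire conversation is the two messages.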
An actor is an isolated component that contains State, Behavior, a Mailbox, Children and a Supervisor.
The supervisor is especially important for error containment and resilience.
In a typical Java app you would see something like this. What you need to do here is run this code in a separate thread.
Maybe getAllCats() could take a callback. In this case, we need some additional mechanism to signal failures, the threading is a bit mysterious, and callbacks can end up in the mythical callback hell.
… the activity will also have a similar method that receives a Result message and does something with it.
.. recovery, which might be more expensive: catching exceptions, storing statistics, etc. The goal of the restart is to wipe all the state that the component has grown into. Trying to recover from sketchy states can be tricky and generate even more mess.
Decoupling and failure containment are really important, as they enable supervision: an actor can be restarted by its supervisor. Failure WILL happen, so fault-avoidance is doomed. The only thing we can do is embrace failure and gracefully deal with the consequences.
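The "let it crash" loop can be sketched in a few lines of plain Java (this is a toy supervisor, not Akka's supervision API; all names are mine):

```java
public class SupervisorSketch {
    // "Let it crash": rather than recovering internally, the supervisor
    // simply restarts the component from a clean state. Returns how many
    // restarts were needed before the component succeeded.
    static int superviseWithRestart(Runnable component, int maxRestarts) {
        int restarts = 0;
        while (true) {
            try {
                component.run();
                return restarts;
            } catch (RuntimeException e) {
                restarts++;
                if (restarts > maxRestarts) throw e; // escalate to our own supervisor
                // restarting == running again from scratch; the old state is gone
            }
        }
    }

    public static void main(String[] args) {
        final int[] attempts = {0};
        // A component that crashes twice before succeeding.
        int restarts = superviseWithRestart(() -> {
            if (attempts[0]++ < 2) throw new RuntimeException("WhateverException!");
        }, 5);
        System.out.println("restarts: " + restarts); // prints restarts: 2
    }
}
```

Note the escalation path: if restarting doesn't help within the budget, the failure is passed up, mirroring how supervision hierarchies handle errors at the right level.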
I reckon you might be more confused than when I started. That’s possibly a good thing.