This document discusses using Akka Streams and Akka HTTP for large-scale production applications and why Akka Streams is useful for high-performance stream processing. Operationalizing such services requires managing the service lifecycle, monitoring pipelines, and ensuring resiliency; common production requirements include lifecycle management, logging, monitoring, and resiliency techniques such as circuit breakers and timeouts. The squbs project provides tools for operationalizing Akka HTTP services at scale through components for lifecycle management, monitoring integration, and resiliency patterns.
Reducing MTTR and False Escalations: Event Correlation at LinkedIn – Michael Kehoe
LinkedIn’s production stack is made up of over 900 applications and over 2,200 internal APIs. With any given application having many interconnected pieces, it is difficult to escalate to the right person in a timely manner.
To combat this, LinkedIn built an Event Correlation Engine that monitors service health and maps dependencies between services to correctly escalate to the SREs who own the unhealthy service.
We’ll discuss the approach we used in building a correlation engine and how it has been used at LinkedIn to reduce incident impact and provide a better quality of life to LinkedIn’s on-call engineers.
LinkedIn serves traffic for its 467 million members from four data centers and multiple PoPs spread geographically around the world. Serving live traffic from many places at the same time has taken us from a disaster recovery model to a disaster avoidance model, where we can take an unhealthy data center or PoP out of rotation and redistribute its traffic to a healthy one within minutes, with virtually no visible impact to users. The geographical distribution of our infrastructure also allows us to optimize the end user's experience by geo-routing users to the best possible PoP and data center.
This talk provides details on how LinkedIn shifts traffic between its PoPs and data centers to provide the best possible performance and availability for its members. We will also touch on the complexities of performance in APAC, how IPv6 is helping our members, and how LinkedIn stress-tests data centers to verify its disaster recovery capabilities.
(ATS6-DEV05) Building Interactive Web Applications with the Reporting Collection – BIOVIA
The reporting component collection in AEP provides powerful tools for building user interfaces for web applications, while leveraging the breadth of functionality of AEP for data querying and manipulation. This session will explore some of the tools available for creating web applications using the reporting collection.
(ATS6-APP05) Deploying Contur ELN to Large Organizations – BIOVIA
Introducing new IT systems that affect many users can be challenging, particularly for large organizations. This session will describe how Contur ELN has been deployed to 1,000+ users in different fields of R&D. Case studies will be used to illustrate strategies and practical considerations.
VMware Monitoring: Discover and Monitor Your Virtual Environment – Site24x7
Gain a holistic view of your VMware infrastructure. Monitor VMware vSphere hosts and virtual machines (VMs). Get graphical views, alarms and thresholds, out-of-the-box reports, comprehensive fault management, and maximum ESX server uptime. Site24x7's vCenter monitoring lets you take control of your virtual resources and VMware infrastructure.
Pipeline Designer allows users to author their processes and provision them on Falcon, which should make building applications on Falcon over Hadoop fairly trivial. Falcon can operate with HCatalog tables natively, meaning there is a one-to-one correspondence between a Falcon feed and an HCatalog table. Between the feed definition in Falcon and the underlying table definition in HCatalog, there is adequate metadata about the data stored underneath. These datasets can then be operated on by a collection of transformations to extract more refined datasets/feeds. This logic (currently represented via Oozie workflows / Pig scripts / MapReduce jobs) is typically expressed through the Falcon process. In this talk we walk through the details of the Pipeline Designer and the current state of this feature.
SMA, the Hybrid Provisioning Engine for Public Clouds – Stijn Callebaut
In this session we are going to demonstrate the power of Service Management Automation (SMA).
The components of the solution and how you can extend the functionality of the engine are the focus points for this automation trip.
Different public clouds are targeted in this automation scenario where we will make use of different System Center products to submit and provision the request.
Fewer slides, lots of demonstrations, where we will explain the complete configuration that supports the demo scenario.
Building REST APIs Using Akka HTTP with Scala – Knoldus Inc.
Akka HTTP helps you to build reactive applications and facilitates seamless integration with Akka, Akka Streams, and Slick. Though there are a number of tools that help build REST APIs, Akka HTTP comes with its unique advantages. It’s more like a general toolkit that provides a complete server and client-side HTTP solution.
The PDF will walk you through the following (a minimal server sketch follows the outline):
1. Brief Introduction of Akka HTTP
2. Akka HTTP Client API
3. Akka HTTP Server API
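As a flavor of the server-side API, here is a minimal, hedged sketch using the high-level Routing DSL (the route, host, and port are illustrative; exact imports vary slightly by Akka HTTP version):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer

object HelloServer extends App {
  implicit val system: ActorSystem = ActorSystem("rest-api")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // One GET /hello route built with the Routing DSL
  val route =
    path("hello") {
      get {
        complete("Hello from Akka HTTP")
      }
    }

  // Bind the route as the request handler on localhost:8080
  Http().bindAndHandle(route, "localhost", 8080)
}
```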
Guido Appenzeller
CEO
Big Switch Networks
ONS2015: http://bit.ly/ons2015sd
ONS Inspire! Webinars: http://bit.ly/oiw-sd
Watch the talk (video) on ONS Content Archives: http://bit.ly/ons-archives-sd
Apache Flink offers a fast, distributed, and failure-tolerant data-processing engine along with APIs for many different use cases, chief among them stateful stream processing. We give a quick overview of the capabilities of Flink before discussing the current state of Flink, the upcoming new release, and future developments.
http://www.learntek.org/product/apache-flink/
Apache Flink is an open source stream processing framework developed by the Apache Software Foundation. The core of Apache Flink is a distributed streaming dataflow engine written in Java and Scala. Apache Flink’s dataflow programming model provides event-at-a-time processing on both finite and infinite datasets. At a basic level, Flink programs consist of streams and transformations. Conceptually, a stream is a (potentially never-ending) flow of data records, and a transformation is an operation that takes one or more streams as input, and produces one or more output streams as a result. Programs can be written in Java, Scala, Python, and SQL and are automatically compiled and optimized into dataflow programs that are executed in a cluster or cloud environment.
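As a hedged illustration of that model, here is a minimal sketch using Flink's Scala DataStream API (the finite source stands in for an unbounded one such as Kafka; the positional keyBy reflects the API of that era):

```scala
import org.apache.flink.streaming.api.scala._

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // One stream and a handful of transformations; the program shape is
    // the same whether the source is finite or never-ending.
    env.fromElements("to be", "or not to be")
      .flatMap(_.toLowerCase.split("\\s+"))
      .map((_, 1))
      .keyBy(0)   // positional key: the word
      .sum(1)     // running count per word
      .print()

    env.execute("streaming word count")
  }
}
```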
http://www.learntek.org
Learntek is a global online training provider for Big Data Analytics, Hadoop, Machine Learning, Deep Learning, IoT, AI, Cloud Technology, DevOps, Digital Marketing, and other IT and management courses. We are dedicated to designing, developing, and implementing training programs for students, corporate employees, and business professionals.
Stream Data from Apache Kafka for Processing with Apache Apex – Apache Apex
Meetup presentation: how Apache Apex consumes from Kafka topics for real-time processing and analytics. Learn about the features of the Apex Kafka Connector, which is one of the most popular operators in the Apex Malhar operator library and powers several production use cases. We explain the advanced features this operator provides for high-throughput, low-latency ingest and how it enables fault-tolerant topologies with exactly-once processing semantics.
Reactive Streams 1.0.0 is now live, and so are our implementations in Akka Streams 1.0 and Slick 3.0.
Reactive Streams is an engineering collaboration between heavy hitters in the area of streaming data on the JVM. With the Reactive Streams Special Interest Group, we set out to standardize a common ground for achieving statically-typed, high-performance, low latency, asynchronous streams of data with built-in non-blocking back pressure—with the goal of creating a vibrant ecosystem of interoperating implementations, and with a vision of one day making it into a future version of Java.
Akka (recent winner of “Most Innovative Open Source Tech in 2015”) is a toolkit for building message-driven applications. With Akka Streams 1.0, Akka has incorporated a graphical DSL for composing data streams, an execution model that decouples the stream’s staged computation—its “blueprint”—from its execution (allowing for actor-based, single-threaded, and fully distributed and clustered execution), type-safe stream composition, an implementation of the Reactive Streams specification that enables back-pressure, and more than 20 predefined stream “processing stages” that provide common streaming transformations that developers can tap into (for splitting streams, transforming streams, merging streams, and more). A small sketch of the blueprint-versus-execution split follows.
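A tiny, hedged sketch of that split (names are illustrative): the graph below is an immutable description built from predefined stages, and nothing runs until it is materialized with run().

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Keep, Sink, Source}

object BlueprintDemo extends App {
  implicit val system: ActorSystem = ActorSystem("streams")
  implicit val mat: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  // Predefined processing stages composed into a reusable blueprint
  val doubleEvens = Flow[Int].filter(_ % 2 == 0).map(_ * 2)
  val blueprint = Source(1 to 10).via(doubleEvens).toMat(Sink.seq)(Keep.right)

  // Execution is a separate step; each run() materializes the blueprint
  blueprint.run().foreach(println) // Vector(4, 8, 12, 16, 20)
}
```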
Slick is a relational database query and access library for Scala that enables loose-coupling, minimal configuration requirements and abstraction of the complexities of connecting with relational databases. With Slick 3.0, Slick now supports the Reactive Streams API for providing asynchronous stream processing with non-blocking back-pressure. Slick 3.0 also allows elegant mapping across multiple data types, static verification and type inference for embedded SQL statements, compile-time error discovery, and JDBC support for interoperability with all existing drivers.
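For the streaming point, a hedged sketch of Slick's Reactive Streams integration (the table, config key, and profile import are illustrative and vary by Slick version):

```scala
import akka.NotUsed
import akka.stream.scaladsl.Source
import slick.jdbc.H2Profile.api._

// Illustrative table mapping
class Users(tag: Tag) extends Table[(Long, String)](tag, "users") {
  def id   = column[Long]("id", O.PrimaryKey)
  def name = column[String]("name")
  def *    = (id, name)
}

object StreamUsers {
  val users = TableQuery[Users]
  val db = Database.forConfig("mydb") // config key is illustrative

  // db.stream produces a Reactive Streams Publisher, so Akka Streams
  // can consume query results with non-blocking back-pressure.
  val userSource: Source[(Long, String), NotUsed] =
    Source.fromPublisher(db.stream(users.result))
}
```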
Kafka: Journey from Just Another Software to Being a Critical Part of PayPal ... – Confluent
PayPal currently processes tens of billions of signals per day from different sources in batch and streaming mode. The data processing platform powers these different analytical needs and use cases, not just at PayPal but at our adjacencies like Venmo, Hyperwallet, and iZettle. End users of this platform demand access to data insights with as much flexibility as possible to explore them with low processing latency.
One such use case is our Switchboard (data de-multiplexer) platform, where we process approximately 20 billion events daily and provide data to different teams and platforms within PayPal, and also to platforms outside PayPal, for more insights. When we started building this platform, Kafka was just another asynchronous message processing platform for us, but we have seen it evolve to a place where it adds value not just for event processing but also for platform resiliency and scalability.
Takeaway for the audience: most people work with and have knowledge about data. With this talk I want to present information that is relevant and meaningful to the audience, with examples that will make it easier for attendees to understand our complex system and, hopefully, take away some practical ways to use Kafka for similar problems at hand.
Modern distributed systems built on the cloud are complicated configurations whose components span many layers. Customer-facing applications are part of products, and service quality targets directly linked to business indicators are needed. Legacy monitoring based on traditional system management is tied neither to business indicators nor to measuring service quality. Google advocates the idea of Site Reliability Engineering (SRE) and describes efforts to measure quality of service. Based on the SRE concept, a service quality monitoring system collects and analyzes logs from all components, not only application code but the whole infrastructure. Since very large amounts of data must be processed in real time, the system must be designed carefully with reference to big data architectures. With such a system you can measure service quality and continuously improve it.
Debugging Microservices - Key Challenges and Techniques - Microservices Odesa... – Lohika_Odessa_TechTalks
Microservice architecture is widespread these days. It comes with a lot of benefits and challenges to solve. The main goal of this talk is to go through troubleshooting and debugging in the distributed microservice world. Topics covered:
- main aspects of logging,
- monitoring,
- distributed tracing,
- debugging services on the cluster.
About the speaker:
Andrey Kolodnitskiy is a Staff Engineer at Lohika; his primary focus is distributed systems, microservices, and JVM-based languages.
Engineers spend the majority of their time debugging and fixing issues. This talk is dedicated to the best practices and tools Andrey's team uses on its project, which help find issues more efficiently.
Service Stampede: Surviving a Thousand Services – Anil Gursel
How many services do you have? 5, 10, 100? How do you even run a large number of services? A microservice may be relatively simple, but services also mean distributed systems, which are inherently complex. 5 services are complex. A thousand services across many generations are at least 200 times as complex. How do we deal with such complexity?
This talk discusses service architecture at Internet scale, the need for larger transaction density, larger horizontal and vertical scale, more predictable latencies under stress, and the need for standardization and visibility. We’ll dive into how we build our latest generation service infrastructure based on Scala and Akka to serve the needs of such a large scale ecosystem.
Lastly, have the cake and eat it too. No, we’re not keeping all the goodies only to ourselves. They are all there for you in open source.
Lessons Learned From PayPal: Implementing Back-Pressure with Akka Streams and... – Lightbend
Akka Streams and its amazing handling of streaming with back-pressure should be no surprise to anyone. But it takes a couple of use cases to really see it in action - especially in use cases where the amount of work continues to increase as you’re processing it. This is where back-pressure really shines.
In this talk for Architects and Dev Managers by Akara Sucharitakul, Principal MTS for Global Platform Frameworks at PayPal, Inc., we look at how back-pressure based on Akka Streams and Kafka is being used at PayPal to handle very bursty workloads.
In addition, Akara will also share experiences in creating a platform based on Akka and Akka Streams that currently processes over 1 billion transactions per day (on just 8 VMs), with the aim of helping teams adopt these technologies. In this webinar, you will:
*Start with a sample web crawler use case to examine what happens when each processing pass expands to a larger and larger workload to process.
*Review how we use the buffering capabilities in Kafka and the back-pressure with asynchronous processing in Akka Streams to handle such bursts.
*Look at lessons learned, plus some constructive “rants” about the architectural components, the maturity, or immaturity you’ll expect, and tidbits and open source goodies like memory-mapped stream buffers that can be helpful in other Akka Streams and/or Kafka use cases.
“Microservices” have become a trendy development strategy. Hosting and running such services used to be pretty painful... but here comes Service Fabric! Let’s take a closer look at this platform, its different development models and all the features it offers, and not only for microservices!
Transforming Legacy Applications into Dynamically Scalable Web Services – Adam Takvam
The tools and technologies used to power the modern data center are evolving at a pace faster than most companies can keep up with. Aging web services built on LAMP, WAMP, or ASP cannot readily take advantage of the latest in scalable web platforms and technologies. In this presentation, we will discuss what factors must be considered in order for your aging web service to take advantage of technologies such as Apache Mesos, Marathon, Docker, Apache Kafka, and more.
This talk is intended for software developers, operations, and IT managers who are looking to modernize existing privately-hosted web applications. We will look at the transformation of the data center from a high-level perspective, examining before-and-after topology examples using key performance indicators and key performance metrics to show how leveraging modern design principles can both improve application performance and reduce operational costs. Next we will look at some example applications and show what needs to be done, from both the software development and infrastructure perspectives, to successfully accomplish the transformation.
Creating a Centralized Consumer Profile Management Service with WebSphere Dat... – Prolifics
In this presentation we will talk about how one of the world's leading financial institutions leveraged WebSphere DataPower to provide a set of centralized consumer profile management services. This central service would be leveraged by internal and external applications and would align with enterprise marketing capabilities. The solution included a complex security model involving the following products: Tivoli Directory Server, Tivoli Access Manager, and Tivoli Federated Identity Manager. We will describe how to build complex orchestrations in WebSphere DataPower and also go through some of the performance tuning options we implemented to achieve a high degree of efficiency.
From the Trenches: Effectively Scaling Your Cloud Infrastructure and Optimizi... – Allan Mangune
Decks I used in my previous presentation at Softcon, where I shared my experience on how to design a cloud infrastructure that easily scales, and how to optimize your database objects and write your SQL code for speed.
Resilience Planning & How the Empire Strikes Back – C4Media
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1pGpnbd.
Bhakti Mehta presents best practices for building resilient, stable, and predictable services: preventing cascading failures, the timeout pattern, the retry pattern, circuit breakers, and other techniques that have been used pervasively at Blue Jeans Network. Filmed at qconsf.com.
Bhakti Mehta is the author of "RESTful Java Patterns and Best practices” and "Developing RESTful Services with JAX-RS 2.0, WebSockets, and JSON”. Bhakti is a Senior Software Engineer at Blue Jeans Network. As part of her current role, she works on developing RESTful services that can be consumed by ISV partners and the developer community.
Microsoft Azure and Windows Application Monitoring – Site24x7
Monitor all your Microsoft applications and Azure services from a single console.
About Site24x7:
Site24x7 offers unified cloud monitoring for DevOps and IT operations. Monitor the experience of real users accessing websites and applications from desktop and mobile devices. In-depth monitoring capabilities enable DevOps teams to monitor and troubleshoot applications, servers and network infrastructure including private and public clouds. End user experience monitoring is done from 50+ locations across the world and various wireless carriers. For more information on Site24x7, please visit http://www.site24x7.com/.
Flink Forward San Francisco 2018: Dave Torok & Sameer Wadkar – "Embedding Fl..." – Flink Forward
Operationalizing Machine Learning models is never easy. Our team at Comcast has been challenged with operationalizing predictive ML models to improve customer care experiences. Using Apache Flink we have been able to apply real-time streaming to all aspects of the Machine Learning lifecycle. This includes data feature exploration and preparation by data scientists, deploying live models to serve near-real-time predictions, and validating results for model retraining and iteration. We will share best practices and lessons learned from Flink’s role in our operationalized lifecycle including:
• Executing as the “Prediction Pipeline” – a model container environment for near-real-time streaming and batch predictions
• Preparing streaming features and data sets for model training, as input for production model predictions, and for a continually-updated customer context
• Using connected streams and savepoints for “Live in the Dark”, multi-variant testing, and validation scenarios
• Incorporating Flink’s Queryable State as an approach to the online “Feature Store” – a data catalog for reuse by multiple models and use cases
• Enabling versioned models, versioned feature sets, and versioned data through DevOps approaches.
Event Bus as Backbone for Decoupled Microservice Choreography (JFall 2017) – Lucas Jellema
Microservices are independent, encapsulated entities that produce meaningful results and business functionality in tentative collaboration. Events and pub/sub are great for allowing such decoupled interaction. Using Apache Kafka as robust, distributed, real-time, high volume event bus, this session demonstrates how microservices packaged with Docker and implemented in Java, Node, Python and SQL collaborate unknowingly. The microservices respond to social (media) events - courtesy of IFTTT - and publish results to multiple channels. The event bus operates across cloud services and on premises platforms such as Kubernetes: both the bus and the microservices can run anywhere. A microservices platform is discussed with generic capabilities.
Outline: presentation summary
- intro microservices objectives, focus on decoupled collaboration
- demo four mservices in different technologies (Node, Java, ...) ; no direct dependencies; show the code (running on its own), show the packing into a container and the step of running the containers on a container management platform, using both Kubernetes and a Container Cloud Service (later on this will further the point of collaborating between microservices that are widely separated)
- discuss generic capabilities of a microservices platform (facilities required in many microservices that should be available as microservices, such as cache, log, authenticate) and compare with a Java EE application server
- demo a microservice providing a generic cache functionality (based on MongoDB)
- outline the desired choreography (a four step workflow that requires participation from various microservices); briefly discuss routing slips and the Saga pattern
- discuss use of events and need of event bus
- intro Kafka
- demo pub and sub from each mservice to Kafka
- link IFTTT to Kafka (for demo: use ngrok to expose local Kafka to IFTTT cloud)
- demo end-to-end Social event=>IFTTT=>Kafka=>choreographed mservices=> final result
- demo: extend one of the microservices: change the code, package a new container image version and update the running version in the container platform; demonstrate that new workflows leverage the new version
- demo: move a microservice from on premises to cloud - showing that the decoupled nature of the mservices mean that this move does not have any impact
- demo: show a change in the logic of the routing slip; none of the mservices require any change for a changed workflow choreography to be executed
- discuss cloud deployment of event bus + mservices
The Oracle Application Container Cloud as the Microservices Platform (APAC OU...) – Lucas Jellema
Microservices are independent, encapsulated entities that produce meaningful results and business functionality in tentative collaboration. Microservices need a platform to run on and to provide generic capabilities such as data caching, an event bus, access to RDBMS and File System. This platform should handle scaling and fail over of the microservices.
The Application Container Cloud runs and automatically scales applications built in various technologies such as Node, Java, PHP and Python, it provides caching and access to an event bus and database in the cloud. This session demonstrates how multiple microservices are deployed to and run on ACC, using these capabilities.
5. Production at Scale… Production What???
• One service is no service
• Services never come alone; they come in flocks, herds, domains, organizations
• Services come in different platforms, implementation languages, technologies
• To be manageable in the big scheme of things, they have to walk like a duck and quack like a duck
• Three basic requirement sets for services:
  • Service Lifecycle
  • Manageability
  • Resiliency
6. Service Lifecycle Requirements
• Observing and acting upon service instance state changes
• Exposing lifecycle state to external infrastructure
• Managing internal state transitions without data loss
7. Manageability Requirements
• Logging
  • Uniformity of logs across services
• Monitoring
  • Collect metrics, consistently across services and technologies
• Tracing/Correlation
  • Within services
  • Across services
• Troubleshooting
  • Log bad requests
  • Log timeouts
• Intrusion detection
  • Suspicious calls
• Metering
  • How many times are you allowed to call
• Authentication
  • Organization’s policies and mechanisms
• Authorization
  • Organization’s policies and mechanisms
10. End-to-End Streams
Why???
• Provides back-pressure through all the components
• No known places, like unbounded buffers or mailboxes, to OOM
• Process what you can take
• End-to-end streams, end-to-end resiliency
A minimal back-pressure sketch follows.
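To make the back-pressure point concrete, here is a minimal, hedged sketch in plain Akka Streams (no squbs required; the rates are illustrative). The fast source never outruns the slow stage, because demand is signaled upstream through every operator:

```scala
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ThrottleMode}
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._

object BackPressureDemo extends App {
  implicit val system: ActorSystem = ActorSystem("demo")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // A fast producer feeding a deliberately slow stage. Every operator
  // participates in Reactive Streams demand signaling, so the source
  // emits only as fast as the slowest stage can accept, instead of
  // piling elements into an unbounded buffer or mailbox (no OOM).
  Source(1 to 1000000)
    .throttle(10, 1.second, 10, ThrottleMode.shaping) // slow stage: ~10/s
    .runWith(Sink.foreach(n => println(s"consumed $n")))
}
```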
11. Managing the Service Lifecycle
• Extended Content Verification (ECV)
  • Enables/disables load balancer traffic
  • Provided through admin UI
  • Traffic enabled when “Active”
• Perpetual Stream
  • Starts when “Active”
  • Stops incoming traffic and drains the stream at “Stopping”
• Stopping is the hard part: doing it without losing data in async systems
Lifecycle states: Starting → Active → Stopping → Stopped
A sketch of the drain idea follows.
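Here is a minimal, hedged sketch of the drain idea using a plain Akka Streams kill switch (squbs' actual PerpetualStream wires this into its lifecycle states; names here are illustrative):

```scala
import akka.Done
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, KillSwitches, UniqueKillSwitch}
import akka.stream.scaladsl.{Keep, Sink, Source}
import scala.concurrent.Future
import scala.concurrent.duration._

object DrainableStream extends App {
  implicit val system: ActorSystem = ActorSystem("lifecycle")
  implicit val mat: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  def processEvent(e: String): String = e // stand-in for real work

  // A kill switch sits at the stream's intake. shutdown() completes the
  // switch so no new elements are accepted, while elements already past
  // it continue downstream: the stream drains rather than aborts.
  val (switch: UniqueKillSwitch, done: Future[Done]) =
    Source.tick(100.millis, 100.millis, "event")
      .viaMat(KillSwitches.single)(Keep.right)
      .map(processEvent)
      .toMat(Sink.ignore)(Keep.both)
      .run()

  // Lifecycle transition Active -> Stopping:
  switch.shutdown()                     // stop intake, drain in-flight
  done.foreach(_ => system.terminate()) // Stopping -> Stopped
}
```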
12. The Pipeline
• Provided by infra, not part of application logic
  • Application can override/add/remove
  • Separation of concerns
• Allows standardization of request/response handling across a large number of applications
• Similar architecture for client and server side
• Pipeline components:
  BidiFlow[RequestContext, RequestContext, RequestContext, RequestContext, NotUsed]
  RequestContext: [Request, Option[Try[Response]], Attributes]
• Allows processing the request, the response, or even short-cutting the request before the biz logic
• Pipeline assembly by putting BidiFlow components together (atop)
• Fully utilizes Akka Streams fusing
A sketch of one handler with this shape follows.
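A hedged sketch of one such handler (this RequestContext is a simplification of squbs' actual pipeline context type; field names are illustrative):

```scala
import akka.NotUsed
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.scaladsl.{BidiFlow, Flow}
import scala.util.Try

// Simplified stand-in for the pipeline context: the request, the
// response once available, and free-form attributes.
case class RequestContext(request: HttpRequest,
                          response: Option[Try[HttpResponse]] = None,
                          attributes: Map[String, Any] = Map.empty)

object TracingHandler {
  // The top flow sees every request on the way in; the bottom flow sees
  // the same context again on the way out, response attached.
  val bidi: BidiFlow[RequestContext, RequestContext,
                     RequestContext, RequestContext, NotUsed] =
    BidiFlow.fromFlows(
      Flow[RequestContext].map { ctx =>
        ctx.copy(attributes = ctx.attributes +
          ("trace-id" -> java.util.UUID.randomUUID().toString))
      },
      Flow[RequestContext].map { ctx =>
        // outbound side: e.g. log ctx.attributes("trace-id") with status
        ctx
      }
    )
}

// Handlers stack with atop, and the finished pipeline wraps the
// business-logic flow with join:
//   val handler = TracingHandler.bidi.atop(otherHandler).join(businessFlow)
```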
16. Stage Gates
• Separate one part of the flow from another
• Synchronous internal state: a custom BidiFlow keeps the gate’s state synchronous and encapsulated
• The best-known implementations are the resiliency components
• Can be used as pipeline components
A toy sketch of the gate shape follows.
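A toy, hedged sketch of the gate idea (real squbs gates such as the stream CircuitBreaker keep richer state inside a custom GraphStage, but the BidiFlow shape and the join composition are the same):

```scala
import akka.NotUsed
import akka.stream.scaladsl.{BidiFlow, Flow}
import scala.util.Try

// Pairs each element with a sequence number on the way in and unwraps
// on the way out, so in-flight elements can be matched up.
def gate[In, Out]: BidiFlow[In, (Long, In), (Long, Try[Out]), Try[Out], NotUsed] =
  BidiFlow.fromFlows(
    // statefulMapConcat allocates the counter per materialization, so
    // the gate's state stays synchronous and encapsulated in the stage
    Flow[In].statefulMapConcat { () =>
      var seq = 0L
      in => { seq += 1; (seq, in) :: Nil }
    },
    Flow[(Long, Try[Out])].map { case (_, out) => out }
  )

// The gate wraps the guarded flow via join; state never escapes:
def guarded[In, Out](inner: Flow[(Long, In), (Long, Try[Out]), NotUsed]): Flow[In, Try[Out], NotUsed] =
  gate[In, Out].join(inner)
```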
19. State Sharing
• Some gates share state
• Gates to a shared resource:
  • Shared local/remote actor
  • Shared service/database
• Create an explicit state holder, like CircuitBreakerState
• Share the state holder between materializations
A sketch of the shared state holder follows.
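A hedged sketch of the state-holder idea, using Akka's own akka.pattern.CircuitBreaker in place of squbs' CircuitBreakerState (names illustrative): one breaker instance is shared by every materialization of the flow, so all connections to the protected resource trip and recover together.

```scala
import akka.actor.ActorSystem
import akka.pattern.CircuitBreaker
import akka.stream.scaladsl.Flow
import scala.concurrent.Future
import scala.concurrent.duration._

class GuardedClient(system: ActorSystem) {
  import system.dispatcher

  // The explicit, shared state holder
  private val breaker = new CircuitBreaker(
    system.scheduler,
    maxFailures = 5,
    callTimeout = 2.seconds,
    resetTimeout = 30.seconds)

  def call(request: String): Future[String] =
    Future.successful(s"response for $request") // stand-in for real I/O

  // Each HTTP connection materializes its own copy of this flow, but
  // all copies consult the same breaker above.
  val flow: Flow[String, String, akka.NotUsed] =
    Flow[String].mapAsync(parallelism = 8) { req =>
      breaker.withCircuitBreaker(call(req))
    }
}
```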
21. What to Monitor
• Request rate
• Open connections – current number of active stream materializations
• New connection rate – HTTP stream materialization rate
• In-flight requests
• Stream collapses, and reasons – client dropping connections, etc.
• HTTP response codes
• Standard system and network statistics
A sketch of hooking a few of these counters into the handler flow follows.
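A hedged sketch of wiring some of these metrics around the handler flow (counter names and the metrics sink are illustrative; a real setup would feed a metrics registry):

```scala
import java.util.concurrent.atomic.AtomicLong
import scala.concurrent.ExecutionContext
import akka.NotUsed
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.scaladsl.Flow

object HttpStats {
  private implicit val ec: ExecutionContext = ExecutionContext.global
  val openConnections  = new AtomicLong // active materializations
  val totalConnections = new AtomicLong // for new-connection rate
  val inFlight         = new AtomicLong

  // One materialization of the handler flow == one HTTP connection, so
  // watchTermination sees every connection open and close (and whether
  // it collapsed with an error).
  def instrument(handler: Flow[HttpRequest, HttpResponse, NotUsed])
      : Flow[HttpRequest, HttpResponse, NotUsed] =
    Flow[HttpRequest]
      .map { req => inFlight.incrementAndGet(); req }
      .via(handler)
      .map { resp => inFlight.decrementAndGet(); resp } // simplified:
      // a collapsed connection may leave in-flight counts to reconcile
      .watchTermination() { (mat, done) =>
        openConnections.incrementAndGet()
        totalConnections.incrementAndGet()
        done.onComplete { result =>
          openConnections.decrementAndGet()
          // result.isFailure => a stream collapse; record the reason
        }
        mat
      }
}
```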
22. Monitoring Internet-Facing Services
• Akka HTTP is not the most tolerant
• The Routing API makes even stricter assumptions about the requests
• Pipeline handlers for internet-facing services:
  • Request sanitizer – also captures stats on non-compliant requests
  • Request logger – captures requests that failed sanitization
Be tolerant with others and strict with yourself.
A sanitizer sketch follows.
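A hedged sketch of the sanitizer idea (the "validity" check here is deliberately naive and illustrative):

```scala
import java.util.concurrent.atomic.AtomicLong
import akka.NotUsed
import akka.http.scaladsl.model.HttpRequest
import akka.stream.scaladsl.Flow

object RequestSanitizer {
  val nonCompliant = new AtomicLong // stats on non-compliant requests

  // Drop malformed headers before they reach the stricter Routing API,
  // and count the offenders so they can be logged and monitored.
  val flow: Flow[HttpRequest, HttpRequest, NotUsed] =
    Flow[HttpRequest].map { req =>
      val (good, bad) = req.headers.partition(h => h.value.forall(_ >= ' '))
      if (bad.nonEmpty) nonCompliant.incrementAndGet()
      req.withHeaders(good)
    }
}
```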
24. squbs is not…
• A framework of its own
• A programming model – use Akka
• All or nothing – components/patterns can mostly be used independently
25. squbs: Akka for large-scale deployments
• Bootstrap
• Lifecycle management
• Loosely-coupled module system
• Integration hooks for logging, monitoring, ops integration
26. squbs: Akka for large-scale deployments
• JSON console
• HttpClient with pluggable resolver and monitoring/logging hooks
• Test tools and interfaces
• Goodies:
  - Activator and Giter8 (G8) templates for Scala & Java
  - Programming patterns and helpers for Akka and Akka Streams use cases…, and growing
27. squbs components available on Alpakka
• PersistentBuffer
• BroadcastBuffer
• Stream Circuit Breaker
• Stream Deduplicator
• Stream Timeout
• Stream Retry
http://developer.lightbend.com/docs/alpakka/current/external-components.html
28. What’s Next?
• squbs 1.0 – on Akka 2.5, Scala 2.12
  • Refined documentation & API
  • More goodies, components
  • Better monitoring & stats
• Beyond squbs 1.0 – operationalizing distributed patterns at large scale!!!
29. Summary
• Akka Streams & Akka HTTP for high-throughput, high-burst services
  • Resilient: back-pressure keeps the system stable under load
• Operationalization is a big deal!
  • Provide the right hooks and tools to understand a running system
  • Lifecycle hooks and the pipeline allow standards and separation of concerns
  • Resiliency gates/components are essential to building resilient systems
• squbs: Eat your cake, too!
  • Functionality without sacrificing performance
  • Provides operationalization hooks and components for Akka HTTP/Akka Streams
30. Q&A – Feedback Appreciated
Join us on – link from https://github.com/paypal/squbs
@squbs, @S_Akara