Building Information Systems using Event Modeling (Bobby Calderwood, Evident ... | confluent
"Event Modeling is a fairly new information system modeling discipline created by Adam Dymitruk that is heavily influenced by CQRS and Event Sourcing. Its lineage follows from Event Storming, Design Thinking, and other modeling practices from the Agile and Domain-Driven Design communities. The methodology emphasizes simplicity (there are only four model ingredients) and inclusion of non-developer participants.
Like other modeling disciplines, Event Modeling is sufficiently general to enable collaborative learning and knowledge exchange among UI/UX designers, software engineers and architects, and business domain experts. But it's also sufficiently expressive and specific to be directly actionable by the implementors of the information system described by the model.
During this talk, we'll:
* Build an Event Model of a simple information system, including wire-framing the UI/UX experience
* Explore how to proceed from model to implementation using Kafka, its Streams and Connect APIs, and KSQL
* Jump-start the implementation by generating code directly from the Event Model
* Track and measure the work of implementation by generating tasks directly from the Event Model"
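As a rough flavor of the command → event → view loop behind Event Modeling and Event Sourcing (my sketch, not material from the talk; all names are hypothetical), the core idea fits in a few lines of Python: commands append immutable events to a log, and views are projections replayed from that log.

```python
from dataclasses import dataclass

@dataclass
class AccountOpened:          # event: an immutable fact
    account_id: str
    owner: str

@dataclass
class FundsDeposited:         # event
    account_id: str
    amount: int

def open_account(log, account_id, owner):   # command handler
    log.append(AccountOpened(account_id, owner))

def deposit(log, account_id, amount):       # command handler
    log.append(FundsDeposited(account_id, amount))

def balances_view(log):       # view: a read model projected from the event log
    balances = {}
    for event in log:
        if isinstance(event, AccountOpened):
            balances[event.account_id] = 0
        elif isinstance(event, FundsDeposited):
            balances[event.account_id] += event.amount
    return balances

log = []
open_account(log, "acc-1", "alice")
deposit(log, "acc-1", 250)
deposit(log, "acc-1", 100)
print(balances_view(log))   # {'acc-1': 350}
```

Because the view is derived, it can be rebuilt at any time by replaying the log, which is what makes the model directly actionable on top of Kafka topics.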
Use ksqlDB to migrate core-banking processing from batch to streaming | Mark ... | HostedbyConfluent
Core banking systems are batch oriented, typically with heavy overnight batch cycles that must finish before business opens each morning. In this talk I will explain some of the common interface points between core-banking infrastructure and event streaming systems. Then I will focus on how to do stream processing with ksqlDB on core-banking-shaped data, showing how to perform common operations using various ksqlDB functions. The key features are Avro record keys and multi-key joins (ksqlDB 0.15), schema management, and state store planning.
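As a toy illustration of what a multi-key join does (plain Python standing in for ksqlDB; the field names are invented), two record streams can be joined on a composite key such as (branch, account):

```python
# Join two record sets on a composite key, analogous to a ksqlDB
# multi-key join between two tables. Field names are illustrative.

def keyed(records, key_fields):
    """Index records by a composite key tuple."""
    return {tuple(r[f] for f in key_fields): r for r in records}

def join(left, right, key_fields):
    right_index = keyed(right, key_fields)
    out = []
    for rec in left:
        key = tuple(rec[f] for f in key_fields)
        match = right_index.get(key)
        if match is not None:           # inner join: keep only matched keys
            out.append({**rec, **match})
    return out

accounts = [{"branch": "B1", "account": "A1", "balance": 500}]
holders  = [{"branch": "B1", "account": "A1", "owner": "alice"}]

joined = join(accounts, holders, ["branch", "account"])
print(joined)  # [{'branch': 'B1', 'account': 'A1', 'balance': 500, 'owner': 'alice'}]
```

In ksqlDB itself the composite key would live in the record key (e.g. an Avro struct key), which is exactly the 0.15 feature the abstract highlights.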
The Problem is Data: Gwen Shapira, Confluent, Serverless NYC 2018 | iguazio
Real-world architectures are different from code samples and small examples, and as we build more complex and mature Serverless architectures, we often encounter unexpected challenges.
This talk will start by discussing the challenges involved in building data processing architectures using stateless infrastructures. We'll review patterns from event-driven architectures, see how they apply to Serverless architectures and propose practical solutions to common pain-points.
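One common pain-point pattern in this space (my illustration, not necessarily one from the talk) is making a stateless serverless handler idempotent by deduplicating on an event id held in external state, so that redelivered events are processed only once:

```python
# A toy idempotent-consumer sketch: a stateless handler plus an external
# "seen ids" store. The in-memory set here stands in for a real database
# or cache that a serverless function would use.

def make_idempotent(handler, seen_ids):
    def wrapped(event):
        if event["id"] in seen_ids:
            return None                 # duplicate delivery: skip
        seen_ids.add(event["id"])
        return handler(event)
    return wrapped

processed = []
handle = make_idempotent(lambda e: processed.append(e["value"]), set())

# Simulate at-least-once delivery: event 1 arrives twice.
for ev in [{"id": 1, "value": "a"}, {"id": 1, "value": "a"}, {"id": 2, "value": "b"}]:
    handle(ev)

print(processed)  # ['a', 'b']
```

The handler itself stays stateless; all the state lives in the shared store, which is what makes the pattern fit serverless infrastructures.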
Full Stack Reactive in Practice | Lightbend
In this guest webinar, Kevin Webber covers the entire architecture of a Reactive system, from a responsive UI implemented with Vue.js to a fully event-sourced collection of microservices implemented with Java, Lagom, Cassandra, and Kafka.
For the full recording, visit: https://www.lightbend.com/blog/full-stack-reactive-in-practice-webinar
The Serverless Native Mindset: Ben Kehoe, iRobot, Serverless NYC 2018 | iguazio
Serverless architecture is a fantastic enabler for lean teams, smaller bills, and more robust systems, but its fundamental advantage is that it allows your organization to focus on creating business value, not solving technology problems. However, gaining this advantage involves giving up control over your technology stack in favor of managed services, and this organizational aspect is more difficult than any of the technological hurdles faced by serverless developers. In this talk, I'll explain how to adopt a mindset that embraces serverless and why, in spite of these pitfalls, serverless architecture is absolutely worth the effort.
MongoDB and Machine Learning with Flowable | Flowable
Joram Barrez, Principal Software Engineer at Flowable, explains how to run Flowable on MongoDB.
It was presented at the Flowfest 2018 in Barcelona, Spain
Nordstrom's Event-Sourced Architecture and Kafka-as-a-Service | Adam Weyant a... | HostedbyConfluent
As a 120-year-old company, Nordstrom was facing numerous challenges as a result of an aging service-oriented architecture. Developers needing to implement reporting for analytics separately from core functionality resulted in questionable data quality for analytical purposes. Scaling dependent services in harmony so as not to overwhelm each other was a struggle faced by many, if not most, teams. Several years into a company-wide transition to an event-sourced architecture, Nordstrom has solved these and various other problems. By leveraging the capabilities of Apache Kafka and Confluent, combined with a deep organizational focus on well-defined business event schemas, a single event can be used for analytical, functional, operational, and model-building purposes. This session will describe this architecture and the lessons learned while building it, with a focus on the internally built, multi-tenant, multi-cluster Kafka-as-a-Service platform that enables it.
Migrating from One Cloud Provider to Another (Without Losing Your Data or You... | HostedbyConfluent
If you’re considering -- or planning -- a cloud migration, you may be concerned about risks to your data and your mental health. Migrations at scale are fraught with risk. You absolutely can’t lose data, compromise its integrity, or suffer downtime, so you want to be slow and careful. On the other hand, you’re paying two providers for every day the migration goes on, so you need to move as fast as possible.
Unity Technologies accumulates lots of data. We recently moved our data infrastructure as part of a major cloud migration from Amazon Web Services (AWS) to Google Cloud Platform (GCP).
To minimize risk and costs our team used Apache Kafka and Confluent Platform, while engaging Confluent Platform Professional Services to help ensure a speedy and seamless migration. Kafka was already serving as the backbone to our data infrastructure, which handles over half a million events per second, and during the migration it also served as the bridge between AWS and GCP.
Join us at this session to learn about the processes and tools used, the challenges faced, and the lessons learned as we moved our operations and petabytes of data from AWS to GCP with zero downtime.
Introduction to Apache Kafka and Confluent... and why they matter! | Paolo Castagna
This is a short introduction to Apache Kafka and Confluent (the company founded by the creator of Kafka). The slides cover the Apache Kafka APIs, including Kafka Connect and Kafka Streams (part of Apache Kafka). Other open-source, ASL-licensed projects are mentioned: KSQL, Schema Registry, REST Proxy, etc.
Many thanks to Codemotion and Seacom for hosting the event.
Event & Data Mesh as a Service: Industrializing Microservices in the Enterpri... | HostedbyConfluent
Kafka is widely positioned as the proverbial "central nervous system" of the enterprise. In this session, we explore how that central nervous system can be used to build a mesh topology and a unified catalog of enterprise-wide events, enabling development teams to build event-driven architectures faster and better.
The central theme also draws on idioms from API management, service meshes, workflow management, and service orchestration, and we compare how these approaches can be harmonized with Kafka.
We will also touch on how this relates to Domain-Driven Design, CQRS, and other microservice patterns.
Some potential takeaways for the discerning audience:
1. Opportunities in a platform approach to Event Driven Architecture in the enterprise
2. Adopting a product mindset around Data & Event Streams
3. Seeking harmony with allied enterprise applications
SingleStore & Kafka: Better Together to Power Modern Real-Time Data Architect... | HostedbyConfluent
To remain competitive, organizations need to democratize access to fast analytics, not only to gain real-time insights into their business but also to power smart apps that need to react in the moment. In this session, you will learn how Kafka and SingleStore enable a modern yet simple data architecture to analyze both fast-paced incoming data and large historical datasets. In particular, you will understand why SingleStore is well suited to processing data streams coming from Kafka.
Maximize the Business Value of Machine Learning and Data Science with Kafka (... | confluent
Today, many companies that have lots of data are still struggling to derive value from machine learning (ML) and data science investments. Why? Accessing the data may be difficult. Or maybe it’s poorly labeled. Or vital context is missing. Or there are questions around data integrity. Or standing up an ML service can be cumbersome and complex.
At Nuuly, we offer an innovative clothing rental subscription model and are continually evolving our ML solutions to gain insight into the behaviors of our unique customer base as well as provide personalized services. In this session, I'll share how we used event streaming with Apache Kafka® and Confluent Cloud to address many of the challenges that may be keeping your organization from maximizing the business value of machine learning and data science. First, you'll see how we ensure that every customer interaction and its business context is collected. Next, I'll explain how we can replay entire interaction histories using Kafka as a transport layer as well as a persistence layer and a business application processing layer. Order management, inventory management, logistics, subscription management – all of it integrates with Kafka as the common backbone. These data streams enable Nuuly to rapidly prototype and deploy dynamic ML models to support various domains, including pricing, recommendations, product similarity, and warehouse optimization. Join us and learn how Kafka can help improve machine learning and data science initiatives that have yet to deliver their full potential.
In this talk, Confluent co-founder and CEO Jay Kreps will cover the rise of two trends:
1. The rise of Apache Kafka and event streams
2. The rise of the public cloud and cloud-native data systems
... and the problems we need to solve as these two trends come together.
How to use Standard SQL over Kafka: From the basics to advanced use cases | F... | HostedbyConfluent
Several different frameworks have been developed to draw data from Kafka and maintain standard SQL over continually changing data. This provides an easy way to query and transform data - now accessible by orders of magnitude more users.
At the same time, using Standard SQL against changing data is a new pattern for many engineers and analysts. While the language hasn’t changed, we’re still in the early stages of understanding the power of SQL over Kafka - and in some interesting ways, this new pattern introduces some exciting new idioms.
In this session, we'll start with some basic use cases of how Standard SQL can be effectively used over events in Kafka, including how these SQL engines can help teams that are brand new to streaming data get started. From there, we'll cover a series of more advanced functions and their implications, including:
- WHERE clauses that contain time change the validity intervals of your data; you can programmatically introduce and retract records based on their payloads!
- LATERAL joins turn streams of query arguments into query results; they will automatically share their query plans and resources!
- GROUP BY aggregations can be applied to ever-growing data collections; reduce data that wouldn't even fit in a database in the first place.
We'll review in-production examples of each of these cases, where unmodified Standard SQL, run and maintained over data streams in Kafka, provides the functionality of bespoke stream processors.
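To see why the GROUP BY point holds, here is a minimal plain-Python sketch (not a SQL engine; the event names are invented) of an incremental aggregation: only the per-group running totals are kept, never the raw records, so the state stays small no matter how long the stream runs.

```python
from collections import defaultdict

def running_group_sum(events):
    """Consume (key, value) events one at a time, keeping only per-key sums."""
    totals = defaultdict(int)
    for key, value in events:            # events may be unbounded
        totals[key] += value             # state is one number per key
        yield dict(totals)               # emit the updated aggregate

events = [("clicks", 1), ("views", 3), ("clicks", 2)]
*_, final = running_group_sum(events)    # take the latest emitted aggregate
print(final)  # {'clicks': 3, 'views': 3}
```

This is the shape of a continuously updating `GROUP BY ... SUM(...)` result: each incoming record revises the aggregate rather than being stored.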
Creating an Elastic Platform Using Kafka and Microservices in OpenShift | confluent
(Pradeep Chintam, American Express Global Business Travel) Kafka Summit SF 2018
When a new project, Global Trip Record, was launched at American Express GBT, we were looking for robust, scalable, and fault-tolerant middleware to handle all of the project's orchestration and connectivity needs.
The existing solution was monolithic, and we wanted to convert that to a microservices framework, but the biggest challenge was managing the increasing number of external applications that are connected to the platform. Any slow external application or partner system connected to the platform was slowing down the entire platform. There is always a need for partner systems to go offline or a need to resend the entire day’s data, especially with a system like our data lake where the data volumes are huge.
After evaluating multiple solutions, we settled on Apache Kafka, and started with a simple implementation of around 100,000 messages to just decouple one partner system and the core platform.
Today, our microservices (Docker) run in OpenShift (Kubernetes), processing Kafka Streams, running real-time anomaly detection with Kafka Streams, powering our data lake through Kafka, feeding our distributed caching layer (Apache Ignite), and connecting all internal and external systems via Kafka. With more than 10 million messages per day, i.e. 1.5 TB of data, on just a small three-node cluster, the platform has been running happily for over a year. Given this stability, flexibility, and success, many other teams have started with Kafka Streams and will soon be in production. The powerful combination of Kafka and OpenShift has proven to be an easily scalable model that gives the entire platform great elasticity.
How to build an event-driven architecture with Kafka and Kafka Connect | Loi Nguyen
How to build an event driven architecture with Kafka & Kafka Connect?
This talk shares two years of experience applying Kafka and Kafka Connect to transform Vexere's system from a monolith into event-driven microservices:
- What is an event-driven architecture?
- How to build an effective event-driven architecture with Kafka and Kafka Connect
- Useful use cases for Kafka & Kafka Connect
- Practical experience and lessons learned
How a Data Mesh is Driving our Platform | Trey Hicks, Gloo | HostedbyConfluent
At Gloo.us, we face a challenge in providing platform data to heterogeneous applications in a way that eliminates access contention, avoids high-latency ETLs, and ensures consistency for many teams. We're solving this problem by adopting Data Mesh principles and leveraging Kafka, Kafka Connect, and Kafka Streams to build an event-driven architecture that connects applications to the data they need. A domain-driven design keeps the boundaries between specialized process domains and singularly focused data domains clear, distinct, and disciplined. Applying the principles of a Data Mesh, process domains assume the responsibility of transforming, enriching, or aggregating data rather than relying on these changes at the source of truth -- the data domains. Architecturally, we've broken centralized big data lakes into smaller data stores that can be consumed into storage managed by process domains.
This session covers how we’re applying Kafka tools to enable our data mesh architecture. This includes how we interpret and apply the data mesh paradigm, the role of Kafka as the backbone for a mesh of connectivity, the role of Kafka Connect to generate and consume data events, and the use of KSQL to perform minor transformations for consumers.
Software Architecture for Cloud Infrastructure | Tapio Rautonen
Distributed systems are hard to build. Software architecture must be carefully crafted to suit cloud infrastructure.
Design for failure. Learn from failure. Adopt new cloud compatible design patterns and follow the guidelines during the journey of building cloud native applications.
'How to build efficient backend based on microservice architecture' by Anton ... | OdessaJS Conf
This speech is about microservices and the approaches and practices used to build them: how to effectively build communication between microservices and which approaches are commonly used for this.
We will talk a little about distributed transactions and touch on infrastructure, monitoring, and scaling components. I want to inspire my listeners to develop themselves in the direction of backend development and to look towards scalable application architecture.
You cannot find this information in the documentation :) This speech also includes real-life examples.
With the increased presence of cloud and hosted services, enterprises are relying more on cloud services to reap benefits of economies of scale, gradually shifting the burdens of maintaining infrastructure to cloud providers. Functions as a service (FaaS) is the next step in this shift. FaaS focuses on running an operation on demand without having to worry about the infrastructure or the scale.
AWS Lambdas provide an easy way to create serverless operations, helping enterprises to reduce their infrastructure costs. Yet at times, these transitions are hindered by the need to change consumer apps. WSO2 API Manager 3.1 makes this transition smoother by allowing organizations to expose RESTful interfaces using Lambdas.
WSO2 API Manager 3.1 enables you to secure, throttle, manage, and monitor APIs created out of Lambda operations, minimizing impact on consumer applications.
This deck explores:
- How you can use Lambdas for Backend processing
- Exposing a Lambda function as a REST API in WSO2 API Manager
- Underlying architecture and different design options that are available for you
Scalable complex event processing on samza @UBERShuyi Chen
The Marketplace data team at Uber has built a scalable complex event processing platform to solve many challenging real time data needs for various Uber products. This platform has been in production for almost a year and it has proven to be very flexible to solve many use cases. In this talk, we will share in detail the design and architecture of the platform, and how we employ Samza, Kafka, and Siddhi at scale.
These slides were presented at the Stream Processing Meetup @ LinkedIn on June 15, 2016.
Skillenza Build with Serverless Challenge - Advanced Serverless ConceptsDhaval Nagar
Skillenza is back with another game-changing virtual hackathon for you. Seize this amazing opportunity to create projects on serverless architecture. For those of you who are not acquainted with it, serverless architectures are system designs that use third-party services to build and run applications.
As developers, this helps you to gain better scalability and flexibility without needing any administration to manage infrastructure. So you can build quicker and at a reduced cost as well.
https://skillenza.com/challenge/build-with-serverless-online-hackathon-aws
[WSO2Con Asia 2018] Architecting for Container-native EnvironmentsWSO2
This slide deck explores architectural choices for making applications and integration services first class citizens in a container native environment.
Learn more: https://wso2.com/library/conference/2018/08/wso2con-asia-2018-architecting-for-container-native-environments/
Introduction to amazon web services for developersCiklum Ukraine
Introduction to Amazon Web Services for developers
About the presenter
Roman Gomolko, with 11 years of experience in development, including 4 years of day-to-day work with Amazon Web Services.
Disclaimer
Cloud hosting has been a buzzword for a while now, and in this talk I would like to give an introduction to Amazon Web Services (AWS).
We will talk about basic building blocks of AWS like EC2, ELB, ASG, S3, CloudFront, RDS, IAM, VPC and other scary or funny abbreviations.
Then we will discuss how to migrate existing applications to AWS. This topic includes:
• how to design the infrastructure and which services to use when migrating
• how to choose proper instance types
• how to estimate infrastructure cost
• how migration will affect the performance of the application
Then we will give an overview of services provided by AWS that you might apply in your current or future applications:
• SQS
• DynamoDB
• Kinesis
• CloudSearch
• CodeDeploy
• CloudFormation
And if we survive, we will talk a little about how to design cloud applications, mainly covering general principles.
My talk is mostly targeted at decision makers and decision pushers in small and medium-sized companies that are considering "going cloud" or are already moving in this direction. Everyone interested in gaining knowledge in these areas is welcome as well.
We will spend around 2–3 hours together, and you will be able to pitch in any questions until we stray too far from the original plan.
How we have used Ansible for real-world industry use cases and integration with enterprise tools: infrastructure provisioning and configuration management with Ansible, and automation of routine tasks.
Highly available and scalable web hosting can be complex and expensive. Learn how Amazon Web Services provides the reliable, scalable, secure, and high performance infrastructure required for web applications while enabling an elastic, scale out and scale down infrastructure to match IT costs in real time as customer traffic fluctuates.
Analyze key aspects to be considered before embarking on your cloud journey. The presentation outlines the strategies, approach, and choices that need to be made, to ensure a smooth transition to the cloud.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
3. What is Serverless Today?
● Fully managed services - from application runtimes to databases to security
● Offloading server management tasks to the cloud provider
● Focused on ease of operation and development
4. Current Value of Serverless
● Goes beyond ease of running and deploying an application
● Value in security with the "least-privilege approach"
○ i.e. a function is only given permission to talk to a single database or service
○ Isolates the blast radius to that limited set of services
● Netflix utilizes a serverless model to fill the "trough" - i.e. run background jobs like building ML models when resource utilization is low / off-peak
● GraalVM optimizing cold starts and image size
5. Example of Serverless - Knative + Tekton
● Building blocks to simplify deploying and running functions on Kubernetes and Istio (or any L7 load balancer)
● Git Source -> Container -> Deploy -> Run -> Manage
● Auto scaling / scale to 0
● Load balancing and request routing to efficiently utilize resources
● Monitoring to make sure the service is still running well
● Migration to new instances as they become available
● Downsides - YAML
6. Focus on both new economic models and ease of development and operations
What should serverless be?
7. Beyond Today's Serverless
● We need a better name - it's not about serverless or even the abstraction over servers
● Serverless is about a model that matches supply with demand - James Ward
8. Berkeley View on Serverless Computing
● Decoupled computation and storage, AND computation and storage are also provisioned and priced independently
● Executing code without managing resource allocation
● Paying in proportion to resources used instead of for resources allocated
Cloud Programming Simplified: A Berkeley View on Serverless Computing
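The "pay for what you use, not what you allocate" point can be made concrete with a bit of arithmetic. The sketch below compares a per-invocation billing model against an always-on VM; the rates and workload numbers are purely illustrative assumptions, not any provider's actual pricing.

```python
# Sketch: pay-per-use vs. pay-for-allocation, with hypothetical rates.
# All rates and workload figures below are illustrative assumptions.

GB_SECOND_RATE = 0.0000166667  # $ per GB-second of function execution (assumed)
VM_HOURLY_RATE = 0.10          # $ per hour for an always-on VM (assumed)

def serverless_cost(invocations, avg_duration_s, memory_gb):
    """Pay only for resources actually consumed during execution."""
    return invocations * avg_duration_s * memory_gb * GB_SECOND_RATE

def allocated_cost(hours):
    """Pay for the VM whether or not it serves any traffic."""
    return hours * VM_HOURLY_RATE

# A spiky workload: 100k invocations/month, 200 ms each, 512 MB of memory.
per_use = serverless_cost(100_000, 0.2, 0.5)
always_on = allocated_cost(24 * 30)  # one small VM running all month
```

For intermittent workloads the proportional model is cheap; for sustained high utilization, allocated capacity can win, which is part of the Berkeley paper's pricing discussion.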
10. Challenges in True Serverless
● Almost impossible to auto-scale the data store
● Pricing model of many back-end services doesn't match the pricing model of application runtimes
11. Where is serverless useful today?
● Parallel processing tasks—invoked on demand & intermittently
● Low-traffic applications—enterprise IT services, and spiky workloads
● Stateless web applications—serving static content from S3 (or similar)
● Orchestration functions—integration/coordination of calls to third-party services
● Composing chains of functions—stateless workflow management, connected via data dependencies
● Job scheduling—CRON jobs, triggers, etc.
13. Common Limitations of Serverless
● Functions are stateless, ephemeral, short-lived: expensive to lose computational context & rehydrate
● Durable state is always "somewhere else"
● No co-location of state and processing
● No direct addressability—all communication goes over external storage
● Limited options for managing & coordinating distributed state
● Limited options for modelling data consistency guarantees
16. Abstract over State
● Similar to how Spark abstracts over state
● Currently we abstract over computing resources and communication with Kubernetes and Istio
● The future of serverless will be abstracting over state
● State will be automatically managed on the way IN and OUT
● State managed by a framework
● Monitor state management for automation
18. Traditional CRUD Database Access Won't Work
● Difficult to impossible to manage DB connection pools
● Unconstrained DB access => hard to automate operations
● Hard to understand the intention of each access:
○ Is this operation a read or a write?
○ Can it be cached?
○ Can consistency be relaxed, or is strong consistency needed?
○ Can operations proceed during partitions/failures?
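One way to answer those questions is to make the intent of each data access explicit, so the platform can decide about caching, consistency, and behavior under partition. The sketch below is a hypothetical illustration of that idea; the `Access` type and its fields are invented for this example, not a real framework API.

```python
# Sketch: declaring data-access intent instead of issuing opaque CRUD calls.
# The Mode/Consistency/Access types are hypothetical, for illustration only.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    READ = "read"
    WRITE = "write"

class Consistency(Enum):
    STRONG = "strong"      # must observe the latest value
    EVENTUAL = "eventual"  # stale reads are acceptable

@dataclass(frozen=True)
class Access:
    mode: Mode
    consistency: Consistency
    cacheable: bool

# With unconstrained CRUD the platform knows none of this; declared intent
# lets it cache eventually-consistent reads and keep serving during partitions.
get_profile = Access(Mode.READ, Consistency.EVENTUAL, cacheable=True)
debit_account = Access(Mode.WRITE, Consistency.STRONG, cacheable=False)

def can_serve_during_partition(access: Access) -> bool:
    # Eventually-consistent operations can proceed; strong ones must wait.
    return access.consistency is Consistency.EVENTUAL
```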
19. Better Models for Distributed State
● CRDTs
○ Counters
○ Registers
○ Sets
○ Maps
● Event Sourcing
○ Append-only logging
20. What are CRDTs?
Data types that guarantee convergence to the same value in spite of network delays, partitions and message reordering
http://book.mixu.net/distsys/eventual.html
21. CRDTs Provide a Distinct View of Data
● Not just a place to dump values like a traditional data store
● Abstraction of the data type
● A data structure that tells how to build the value
22. CRDT Convergent Operations
● Associative (a+(b+c)=(a+b)+c) - grouping doesn't matter
● Commutative (a+b=b+a) - order of application doesn't matter
● Idempotent (a+a=a) - duplication doesn't matter
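These three properties are easy to see in the simplest CRDT, a grow-only counter (G-Counter): each replica increments only its own slot, and merging takes the element-wise maximum, which is associative, commutative, and idempotent. A minimal sketch, not taken from any particular library:

```python
# Sketch of a grow-only counter (G-Counter) CRDT. Each replica increments
# only its own slot; merge takes the element-wise max.

def increment(state, replica, amount=1):
    """Return a new state with this replica's slot increased."""
    new = dict(state)
    new[replica] = new.get(replica, 0) + amount
    return new

def merge(a, b):
    # Element-wise max: associative, commutative, and idempotent.
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state):
    """The counter's value is the sum over all replica slots."""
    return sum(state.values())

# Three replicas increment independently, with no coordination.
a = increment({}, "replica-a", 3)
b = increment({}, "replica-b", 2)
c = increment({}, "replica-c", 1)
```

Because `merge` has these algebraic properties, the replicas converge to the same value no matter how merges are grouped, ordered, or duplicated.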
23. Value of CRDTs in Distributed Systems
● Replicate data across the network without any synchronization mechanism
● Avoid distributed locks, two-phase commit, etc.
● Consistency without consensus
24. Event Sourcing in Stateful Serverless
● Event Sourcing is an ideal model for serverless computing
● The event log tracks which events have been applied and what the current state is
● Snapshotting to avoid replay of the entire event history
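The replay-plus-snapshot idea can be sketched in a few lines: state is rebuilt by folding events, and a snapshot lets recovery skip the prefix of the log it has already applied. This is purely illustrative and not the API of any specific framework.

```python
# Sketch of event sourcing with snapshotting: state is a fold over events,
# and a snapshot records the state after some prefix of the log.

def apply_event(balance, event):
    """Fold one event into the current state (a simple account balance)."""
    kind, amount = event
    return balance + amount if kind == "deposit" else balance - amount

def replay(events, snapshot_state=0, snapshot_index=0):
    # Start from the snapshot and apply only the events recorded after it.
    state = snapshot_state
    for event in events[snapshot_index:]:
        state = apply_event(state, event)
    return state

log = [("deposit", 100), ("withdraw", 30), ("deposit", 50)]

full = replay(log)  # fold the whole log from scratch
snap = replay(log, snapshot_state=70, snapshot_index=2)  # snapshot after 2 events
```

Recovery from the snapshot applies one event instead of three but reaches the same state, which is exactly why snapshotting keeps cold-start replay bounded.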
25. State needs to be ACID 2.0
● Associative
● Commutative
● Idempotent
● Distributed
29. Azure Durable Functions
● https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview
● Executed using the Azure Functions runtime
● Built using the Durable Task Framework
● Maintains execution state via Event Sourcing
● State is backed by Azure Storage tables
● Uses an actor-like programming model
30. Lightbend Cloud State
● https://cloudstate.io/
● Built on top of Akka Actors with Akka Persistence and Clustering
● Leverages the familiar paradigm of the Actor programming model
33. Cloud State - Low-Latency & High-Throughput Use Cases
● Managing in-memory durable session state across individual requests
● Low-latency serving of dynamic in-memory models
● Real-time stream processing
● Distributed resilient transactional workflows
● Shared collaborative workspaces
● Leader election, counting, voting
35. Enables New Programming Models
"The promise of Stateful Serverless is revolutionary and will grow to dominate the future of Cloud" - Jonas Bonér
36. Resources
● Lightbend Cloud State
● Azure Durable Functions
● Flink Stateful Functions
● Microsoft Dapr
● Towards Stateful Serverless - Jonas Bonér
● Akka Distributed Data and CRDTs - Ryan Knight