This document summarizes a presentation about building serverless applications using event streams. It discusses what serverless computing means for developers and common use cases, like APIs and stream processing, for functions as a service (FaaS). It also covers using event streams and message buses to build event-driven architectures and decouple services. Key aspects covered include event-driven design principles, using queues to control parallelism, and designing independent, scalable queue workers.
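The summary's point about using queues to control parallelism can be sketched in code. This is a minimal, hypothetical illustration (the function name, event shape and worker count are assumptions, not from the presentation): a bounded pool of queue workers processes events independently, so throughput is capped by the worker count rather than the event arrival rate.

```python
import queue
import threading

def run_workers(events, handler, num_workers=4):
    """Process events with a fixed-size worker pool.

    The queue decouples producers from consumers, and the worker
    count caps parallelism regardless of how fast events arrive.
    """
    work = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            event = work.get()
            if event is None:          # sentinel: shut this worker down
                work.task_done()
                return
            out = handler(event)
            with lock:                 # results list is shared across workers
                results.append(out)
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for event in events:
        work.put(event)
    for _ in threads:                  # one shutdown sentinel per worker
        work.put(None)
    work.join()
    for t in threads:
        t.join()
    return results
```

In a real FaaS setup the queue would be a managed service (e.g. a message bus) and each worker an independent function instance; the scaling property is the same.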
Billions of Messages in Real Time: Why Paypal & LinkedIn Trust an Engagement ... (confluent)
(Bruno Simic, Solutions Engineer, Couchbase)
Breakout during Confluent’s streaming event in Munich. This three-day hands-on course focused on how to build, manage, and monitor clusters using industry best-practices developed by the world’s foremost Apache Kafka™ experts. The sessions focused on how Kafka and the Confluent Platform work, how their main subsystems interact, and how to set up, manage, monitor, and tune your cluster.
How to build 1000 microservices with Kafka and thrive (Natan Silnitsky)
This talk is about the Wix ecosystem for event driven architecture on top of Kafka.
I share the best practices, SDKs and tools we have created in order to be able to scale our distributed system to more than 1000 microservices.
Implementing data center to data center replication for a distributed database (J On The Beach)
Implementing data center to data center replication for a distributed database by Ewout Prangsma
ArangoDB is a scalable, distributed multi-model database. However, for this talk it is not necessary to know what that means; the only crucial facts are that it is distributed and written in C++.
Before you stop reading: this talk is about a golang success story. We had to implement resilient data center to data center (DC2DC) replication for ArangoDB clusters from scratch within six weeks (plus some time for testing and debugging). To pull this off, we built upon:
* ArangoDB’s HTTP-based API for asynchronous replication,
* the existing golang driver,
* Kafka, the fault-tolerant, scalable message queue system,
* a lot of existing golang libraries, and
* golang’s fantastic capabilities for parallelism, communication and data manipulation.
This talk is the story of this project, with its many challenges and successes, and it ends with a surprising revelation about which of the above we did not actually need in the end.
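The core replication idea described above, replaying an asynchronous operation log onto another data center, can be illustrated with a small sketch. This is not ArangoDB's actual protocol; it is a hypothetical example of idempotent log replay, with sequence numbers standing in for Kafka offsets:

```python
def apply_replication_log(target, log, applied_seq=-1):
    """Replay a replication log onto a follower datastore.

    Each entry carries a monotonically increasing sequence number, so a
    restarted follower can resume where it left off and duplicate
    deliveries are ignored: the replay is idempotent.
    """
    for seq, op, key, value in log:
        if seq <= applied_seq:      # already applied: skip duplicates
            continue
        if op == "set":
            target[key] = value
        elif op == "delete":
            target.pop(key, None)
        applied_seq = seq           # remember progress for the next resume
    return applied_seq
```

Idempotent replay is what makes an at-least-once transport like Kafka safe to use as the replication channel.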
Scylla Summit 2022: An Odyssey to ScyllaDB and Apache Kafka (ScyllaDB)
Will LaForest is the Public Sector CTO for Confluent. In his current position, Will evangelizes how Apache Kafka, event-driven data-in-motion architecture, and open-source software are addressing mission challenges in government. He has spent 25 years wrangling data at massive scale. His technical career spans software engineering, NoSQL, data science, cloud computing, machine learning, and building statistical visualization software, but it began with code slinging at DARPA as a teenager. Will holds degrees in mathematics and physics from the University of Virginia.
To watch all of the recordings hosted during Scylla Summit 2022 visit our website here: https://www.scylladb.com/summit.
Battle-tested event-driven patterns for your microservices architecture - Sca... (Natan Silnitsky)
During the past couple of years I’ve implemented or have witnessed implementations of several key patterns of event-driven messaging designs on top of Kafka that have facilitated creating a robust distributed microservices system at Wix that can easily handle increasing traffic and storage needs with many different use-cases.
In this talk I will share these patterns with you, including:
* Consume and Project (data decoupling)
* End-to-end Events (Kafka+websockets)
* In memory KV stores (consume and query with 0-latency)
* Events transactions (Exactly Once Delivery)
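The first pattern in the list, Consume and Project, can be sketched as a fold over an event stream into a local key-value view, which is also the essence of the in-memory KV store pattern. The event shape here is hypothetical, not Wix's actual schema:

```python
def project(events, view=None):
    """Fold a stream of domain events into an in-memory key-value view.

    Queries then hit the local dict instead of calling the service
    that owns the data, decoupling the reader from the writer.
    """
    view = {} if view is None else view
    for event in events:
        if event["type"] in ("created", "updated"):
            view[event["id"]] = event["payload"]
        elif event["type"] == "deleted":
            view.pop(event["id"], None)
    return view
```

A long-running consumer would call this incrementally on each Kafka batch, keeping the projection continuously up to date for zero-latency reads.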
Connecting Kafka Across Multiple AWS VPCs (confluent)
(Benoit Carrière, Expedia) Kafka Summit SF 2018
As Expedia, the world’s largest online travel agency, moved to a multi-Virtual Private Cloud (VPC) strategy in AWS, we faced the challenge of making our systems accessible, and able to use other systems, across many VPCs. In most cases, a secure internet-facing endpoint or VPC Peering should do the job, right?
But what if the system isn’t a typical HTTP-based microservice? What if it’s a distributed, partitioned, binary-protocol-based system where anyone talks to everyone all the time? That’s exactly what we encountered when we tried to make our Kafka accessible to our clients. We solved this problem by leveraging Apache Kafka®’s distributed nature, using AWS’ new VPC Endpoint technology and their recent Network Load Balancer, some Route53 records and a bit of creativity!
In this session, I’ll dive into:
-Our use case: Kafka accessible to other VPCs
-Why we didn’t go with internet-facing endpoint or use VPC Peering
-A brief description on how VPC endpoints work
-Our solution to the problem: That’s where the fun starts.
High cardinality time series search: A new level of scale - Data Day Texas 2016 (Eric Sammer)
Modern search systems provide incredible feature sets, developer-friendly APIs, and low latency indexing and query response. By some measures, these systems operate "at scale," but rarely is that quantified. Customers of Rocana typically look to push ingest rates in excess of 1 million events per second, retaining years of data online for query, with the expectation of sub-second response times for any reasonably sized subset of data.
We quickly found that the tradeoffs made by general purpose search systems, while right for common use cases, were less appropriate for these high cardinality, large scale use cases.
This session details the architecture, tradeoffs, and interesting implementation decisions made in building a new time series optimized distributed search system using Apache Lucene, Kafka, and HDFS. Data ingestion and durability, index and metadata organization, storage, query scheduling and optimization, and failure modes will be covered. Finally, a summary of the results achieved will be shown.
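The index organization the session covers can be hinted at with a toy time-bucketed index. This is an assumption-laden sketch, not Rocana's implementation: events are routed into fixed time buckets so a range query scans only the buckets that overlap the requested interval.

```python
from collections import defaultdict

BUCKET_SECONDS = 3600  # illustrative granularity: one bucket per hour

def index_event(index, timestamp, event):
    """Route an event into its time bucket."""
    index[timestamp // BUCKET_SECONDS].append((timestamp, event))

def query_range(index, start, end):
    """Scan only the buckets overlapping [start, end)."""
    hits = []
    for bucket in range(start // BUCKET_SECONDS, (end - 1) // BUCKET_SECONDS + 1):
        for ts, event in index.get(bucket, []):
            if start <= ts < end:
                hits.append(event)
    return hits
```

Bucketing by time keeps queries over a recent slice cheap even when total retention spans years, which is the property the abstract emphasizes.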
"The Grail: React based Isomorph apps framework" Эльдар Джафаров (Fwdays)
Ever since Node.js came into my life, I have pursued the idea of an architecture that would allow me to build SPA apps that render on the server as well as on the client. With Grail, React and React Router this is possible right now, without any side effects and with any kind of backend API.
Advanced Caching Patterns used by 2000 microservices - Devoxx Ukraine (Natan Silnitsky)
Wix runs at a huge scale of traffic: more than 500 billion HTTP requests and more than 1.5 billion Kafka business events per day. This talk goes through 3 caching patterns used by Wix’s 2000 microservices to provide the best experience for Wix users while saving costs and increasing availability.
A cache reduces latency by avoiding a costly query to a DB, an HTTP request to another Wix service, or a call to a 3rd-party service. It reduces the scale needed to serve these costly requests. It also improves reliability, by making sure some data can be returned even if the aforementioned DB or 3rd-party service is currently unavailable.
The patterns include:
* Configuration Data Cache - persisted locally or to S3
* HTTP Reverse Proxy Caching - using Varnish Cache
* (Dynamo)DB+CDC based Cache, and more - for unlimited capacity, with a continuously updating LRU cache on top.
Each pattern is optimal for different use cases, but all of them reduce costs and improve performance and resilience.
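The third pattern can be sketched as an LRU cache kept fresh by change-data-capture events. This is a simplified, hypothetical illustration (class and method names invented here), not Wix's production code:

```python
from collections import OrderedDict

class CdcLruCache:
    """A bounded LRU cache kept fresh by change-data-capture events.

    Reads fall back to `loader` on a miss; CDC events from the source
    DB refresh cached entries, so the cache never serves data staler
    than the CDC lag.
    """
    def __init__(self, loader, capacity=1024):
        self.loader = loader
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)          # mark as recently used
            return self.entries[key]
        value = self.loader(key)                   # miss: hit the DB
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)       # evict least recently used
        return value

    def on_cdc_event(self, key, new_value):
        """Called by the CDC consumer for every change to the source table."""
        if key in self.entries:                    # refresh only cached keys
            self.entries[key] = new_value
```

The DB provides the unlimited capacity; the LRU layer only holds the hot subset, and the CDC stream replaces TTL-based invalidation.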
Building an Event-oriented Data Platform with Kafka, Eric Sammer (confluent)
While we frequently talk about how to build interesting products on top of machine and event data, the reality is that collecting, organizing, providing access to, and managing this data is where most people get stuck. Many organizations understand the use cases around their data – fraud detection, quality of service and technical operations, user behavior analysis, for example – but are not necessarily data infrastructure experts. In this session, we’ll follow the flow of data through an end to end system built to handle tens of terabytes an hour of event-oriented data, providing real time streaming, in-memory, SQL, and batch access to this data. We’ll go into detail on how open source systems such as Hadoop, Kafka, Solr, and Impala/Hive are actually stitched together; describe how and where to perform data transformation and aggregation; provide a simple and pragmatic way of managing event metadata; and talk about how applications built on top of this platform get access to data and extend its functionality.
Attendees will leave this session knowing not just which open source projects go into a system such as this, but how they work together, what tradeoffs and decisions need to be addressed, and how to present a single general purpose data platform to multiple applications. This session should be attended by data infrastructure engineers and architects planning, building, or maintaining similar systems.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1FQYcP0.
Gian Merlino presents the advantages, challenges, and best practices to deploying and maintaining lambda architectures in the real world, using the infrastructure at Metamarkets as a case study. Filmed at qconsf.com.
Gian Merlino is a senior software engineer at Metamarkets, responsible for the infrastructure behind its data ingestion pipelines and is a committer on the Druid project.
Speaker: Matt Howlett, Software Engineer, Confluent
This presentation provides a technical overview of Apache Kafka® and covers some of its popular use cases.
Eventing Things - A Netflix Original! (Nitin Sharma, Netflix) Kafka Summit SF... (confluent)
Netflix Studio spent 8 billion dollars on content in 2018. When the stakes are so high, it is paramount to track changes to the core studio metadata, spend on our content, forecasting and more, to enable the business to make efficient and effective decisions. Embracing a Kappa architecture with Kafka enables us to build an enterprise-grade message bus. By making event processing the de-facto paved path for syncing core entities, we get traceability and data quality verification as first-class citizens for every change published. This talk will also get into the nuts and bolts of the eventing and stream processing paradigm and why it is the best fit for our use case versus alternative architectures with similar benefits. We will do a deep dive into the fascinating world of Netflix Studios and how eventing and stream processing are revolutionizing the world of movie productions and the production finance infrastructure.
Get familiar with the capabilities of Google Apps Script and ways to extend the existing functionality of a documentation workflow. A quick look through the types of scripts, how the services interplay, and their weak points, plus a few examples from my own experience of how to optimize your routine work.
Stream Processing Live Traffic Data with Kafka Streams (Tom Van den Bulck)
In this workshop we will set up a streaming framework which will process realtime data of traffic sensors installed within the Belgian road system.
Starting with the intake of the data, you will learn best practices and the recommended approach to split the information into events in a way that won't come back to haunt you.
With some basic stream operations (count, filter, ... ) you will get to know the data and experience how easy it is to get things done with Spring Boot & Spring Cloud Stream.
But since simple data processing is not enough to fulfill all your streaming needs, we will also let you experience the power of windows.
After this workshop, tumbling, sliding and session windows hold no more mysteries and you will be a true streaming wizard.
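The tumbling windows covered in the workshop have simple semantics that can be sketched outside of Kafka Streams (whose API is Java). This toy groups events into fixed, non-overlapping windows by rounding each timestamp down to a window boundary; the event shape is invented for illustration:

```python
from collections import defaultdict

def tumbling_counts(events, window_ms):
    """Count events per key in fixed, non-overlapping time windows.

    Each event is (timestamp_ms, key). The window start is the
    timestamp rounded down to a multiple of window_ms, so every event
    falls in exactly one window: the tumbling-window semantics.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_ms)
        counts[(window_start, key)] += 1
    return dict(counts)
```

Sliding and session windows differ only in how window membership is assigned: sliding windows overlap, and session windows are bounded by inactivity gaps rather than fixed boundaries.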
Scylla Summit 2022: Overcoming the Performance Cost of Streaming Transactions (ScyllaDB)
For a long time, distributed transactions have been known to make systems slow and unavailable. However, recent academic and industry advancements such as RAMP, Occult, Calvin and TAPIR have changed the landscape and made transactions remarkably fast.
In this talk, Denis Rystsov, Staff Engineer at Vectorized, will explain how the Redpanda streaming data platform utilizes modern transactional approaches and pushes the envelope further by adjusting these concepts to the streaming workload. He’ll share benchmarks and explain what makes Redpanda transactions so fast.
To watch all of the recordings hosted during Scylla Summit 2022 visit our website here: https://www.scylladb.com/summit.
Building Microservices with Apache Kafka by Colin McCabe (Data Con LA)
Abstract: Building distributed systems is challenging. Luckily, Apache Kafka provides a powerful toolkit for putting together big services as a set of scalable, decoupled components. In this talk, I'll describe some of the design tradeoffs when building microservices, and how Kafka's powerful abstractions can help. I'll also talk a little bit about what the community has been up to with Kafka Streams, Kafka Connect, and exactly-once semantics.
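One building block behind exactly-once semantics, deduplicating redeliveries on the consumer side, can be sketched as follows. This illustrates the general idempotent-consumer idea, not Kafka's actual transactional protocol; the message shape is invented:

```python
def consume_once(messages, handler, seen=None):
    """Deduplicate redeliveries so the handler's effect happens once.

    A broker with at-least-once delivery may redeliver a message after
    a retry or consumer rebalance; tracking processed message IDs turns
    that into effectively-once processing on the consumer side.
    """
    seen = set() if seen is None else seen
    for msg_id, payload in messages:
        if msg_id in seen:          # redelivery: already processed, skip
            continue
        handler(payload)
        seen.add(msg_id)
    return seen
```

In production the `seen` set would be persisted atomically with the handler's side effects, otherwise a crash between the two steps reintroduces duplicates.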
Building a distributed Key-Value store with Cassandra (aaronmorton)
Slides from my talk at Kiwi Pycon in 2010.
Covers why we chose Cassandra, an overview of its features and data model, and how we implemented our application.
Enterprise systems such as enterprise resource planning (ERP), system monitoring, and stock and financial transaction systems generate an enormous number of events that can contain useful data; processing this data efficiently can help gain a competitive advantage. Implementing custom solutions to capture, analyze, and act on this Big Data imposes challenges and operational complexities. WSO2 CEP and WSO2 BAM together provide a solution delivering a low-latency, high-volume, and scalable environment that enables data collection, real-time as well as batch analysis, and firing notifications of multiple types across numerous endpoints.
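A minimal complex-event-processing rule of the kind such platforms evaluate can be sketched as a sliding-window threshold alert. The window size and threshold are arbitrary illustrative values, and this is a generic sketch rather than WSO2 CEP's query language:

```python
from collections import deque

def moving_average_alerts(values, window=3, threshold=100.0):
    """Emit the index of each reading whose moving average breaches a limit.

    Keeps a sliding window of the most recent readings and fires
    whenever their mean exceeds the threshold: a minimal CEP rule.
    """
    recent = deque(maxlen=window)   # oldest reading drops out automatically
    alerts = []
    for i, v in enumerate(values):
        recent.append(v)
        if len(recent) == window and sum(recent) / window > threshold:
            alerts.append(i)
    return alerts
```

A CEP engine generalizes this idea to declarative rules over joins, sequences and time windows, evaluated continuously as events arrive.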
Talk presented by Aarón Fas & Andrés Viedma at the JBcnConf 2015.
'Microservices' is one of the most popular buzzwords in the industry right now, but are they really a step forward? Or might they be more of a problem than a solution? When are they really helpful? How should they be approached? What challenges will we face if we decide to implement a microservices-based architecture?
One year ago, Tuenti moved from a monolithic PHP backend to a Java + PHP microservices architecture. In this talk, we'll share our experiences so far: how we addressed the change, how we implemented it, why we think it's been valuable for us (and how is that related to the company culture), why it might not be a good idea for your company / application and, mostly, what lessons we have learned from this experience.
BDM39: HP Vertica BI: Sub-second big data analytics your users and developers... (Big Data Montreal)
Despite how fantastic pigs look with lipstick on and how magical elephants look with wings attached, there remains a large gap between what popular big data stacks offer and what end users demand in terms of reporting agility and speed. Join us to learn how Montreal-based AdGear, an advertising technology company, faced challenges as its data volume increased. You will hear how AdGear's data stack evolved to meet these challenges, and how HP Vertica's architecture and features changed the game.
(by Mina Naguib, Technical Director of Platform Engineering at AdGear).
https://youtu.be/tzQUUCuVjVc
OSDC 2018 | From Monolith to Microservices by Paul Puschmann (NETWAYS)
Scaling up from two developer teams supporting a monolith to more than 20 developer teams powering a microservice landscape is not only a matter of technical excellence but also a matter of culture and collaboration. This talk will show the positive aspects of our evolution as well as the things we learned to improve on.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/2lGNybu.
Stefan Krawczyk discusses how his team at StitchFix uses the cloud to enable over 80 data scientists to be productive. He also talks about prototyping ideas, algorithms and analyses, and how they set up and keep schemas in sync between Hive, Presto, Redshift & Spark while making access easy for their data scientists. Filmed at qconsf.com.
Stefan Krawczyk is Algo Dev Platform Lead at StitchFix, where he’s leading development of the algorithm development platform. He spent formative years at Stanford, LinkedIn, Nextdoor & Idibon, working on everything from growth engineering, product engineering, data engineering, to recommendation systems, NLP, data science and business intelligence.
Apache Flink 101 - the rise of stream processing and beyond (Bowen Li)
Apache Flink is the most popular and widely adopted stream processing framework, powering real-time event computations at extremely large scale in companies like Uber, Lyft, AWS, Alibaba, Pinterest, Splunk, Yelp, etc.
In this talk, we will go over use cases and basic (yet hard to achieve!) requirements of stream processing, and how Flink fills the gaps and stands out with some of its unique core building blocks, like pipelined execution, native event time support, state support, and fault tolerance.
We will also take a look at how Flink is going beyond stream processing into areas like unified data processing, enterprise integration, AI/machine learning (especially online ML), and serverless computation, and how Flink fits in with its distinct value.
SPEAKER: Bowen Li
SPEAKER BIO: Bowen is a committer of Apache Flink, senior engineer at Alibaba, and host of Seattle Flink Meetup.
Single Source of Truth for Network Automation (Andy Davidson)
The importance of building a single source of truth for information within your organisation, when you embark upon a network automation project. Simply automating router configuration steps is not "network automation".
What is a data platform? Why do we need one? And how do you build one in the cloud? This talk covers the essential engineering facets of a data platform: flows, persistence, access, standardization and data processing; how these facets combine into a unified platform; and how cloud technologies such as managed services and serverless help, and challenge, us to build it into a powerful business tool.
These are slides from a presentation at a "code naturally" meetup we held on April 30, 2018.
How do you apply modern Cloud-native patterns to your apps? In this talk, you'll find how to use frameworks like Spring Boot & Spring Cloud to build agile & resilient apps, leveraging Cloud platforms. Get the app source code here: https://github.com/alexandreroman/yatc.
How Netflix Monitors Applications in Near Real-time w Amazon Kinesis - ABD401... (Amazon Web Services)
Thousands of services work in concert to deliver millions of hours of video streams to Netflix customers every day. These applications vary in size, function, and technology, but they all make use of the Netflix network to communicate. Understanding the interactions between these services is a daunting challenge both because of the sheer volume of traffic and the dynamic nature of deployments. In this session, we first discuss why Netflix chose Kinesis Streams to address these challenges at scale. We then dive deep into how Netflix uses Kinesis Streams to enrich network traffic logs and identify usage patterns in real time. Lastly, we cover how Netflix uses this system to build comprehensive dependency maps, increase network efficiency, and improve failure resiliency. From this session, you'll learn how to build a real-time application monitoring system using network traffic logs and get real-time, actionable insights.
The need for gleaning answers from data in real time is moving from a nicety to a necessity. There are few options to analyze the never-ending stream of unbounded data at scale. Let's compare and contrast the core principles and technologies of the different open source solutions available to help with this endeavor, and where processing engines need to evolve to solve processing needs at scale. These findings are based on the experience of continuing to build a scalable solution in the cloud to process over 700 billion events at Netflix, and how we are embarking on the next journey to evolve unbounded data processing engines.
1. Using Event Streams to orchestrate a Serverless Application
Jonathan Dee (jon) jd@tikisprings.com
November 20, 2018, Serverless Toronto Meetup Group
2. Where are we ??
Minecraft content and materials are trademarks and copyrights of Mojang and its licensors. All rights reserved.
3. Serverless means...
● No servers to provision or manage
● Scales with usage
● Never pay for idle
● Built-in high-availability and durability
4. Serverless means...
But what does Serverless mean to developers and architects?
5. Serverless means...
But what does Serverless mean to developers and architects?
● Abstracts away the idea of a server node (virtual machine, instance, container)
● Presents compute resources as high-level, reliable APIs
6. Leverage what cloud providers and serverless development kits give you
● compute
● storage, state
● streams
● queues
● network
● observability
● analytics
● build, deploy
● security
● backup, audit
11. Common use cases (faas)
● Endpoint: API-Gateway → faas
● Trigger: Object Store → faas
● Stream Processing: Event Source → faas → store (repeated in parallel)
These are all singleton faas implementations.
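The three patterns above can be sketched as Lambda-style handlers. This is a minimal, provider-neutral sketch: the handler names, event shapes, and in-memory "store" are illustrative assumptions, not any vendor's actual API.

```python
# Sketch of the three common faas patterns as singleton handlers.
import json

def endpoint_handler(event, context):
    """Endpoint: API-Gateway -> faas. Returns an HTTP-style response."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"hello": name})}

def trigger_handler(event, context):
    """Trigger: Object Store -> faas. Reacts to object-created records."""
    keys = [r["object"]["key"] for r in event.get("records", [])]
    return {"processed": keys}

def stream_handler(event, context):
    """Stream Processing: Event Source -> faas -> store (store is stubbed)."""
    store = []
    for record in event.get("records", []):
        store.append(record["data"].upper())  # stand-in for a real write
    return {"written": len(store), "store": store}
```

Each handler is stateless and can be invoked locally with a sample event dict, which is also how you would unit-test it before deploying.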
25. ❏ Event Streams
❏ Message Bus
● Simplify integration
● Create an Extensible architecture
● Promote Event Driven Design
26. Event Driven Design
Not so much about the Things
● Domain Objects
○ Customer
○ Order
● Entities
○ Customer
○ CustomerType
○ OrderHeader
○ OrderDetail
More about the Verbs
● What's Happened
○ new customer was created
○ order was updated
● The Events
○ customerAdded
○ orderUpdated
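The naming convention above can be made concrete. A minimal sketch, assuming Python dataclasses; the field names are made up for illustration — the point is that event types are past-tense verbs, while entities are the nouns they refer to.

```python
# Events name what happened (past tense); entities name the things involved.
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerAdded:          # "new customer was created"
    customer_id: str
    name: str

@dataclass(frozen=True)
class OrderUpdated:           # "order was updated"
    order_id: str
    status: str
```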
27. Event Driven Design
What is an Event? What are its characteristics?
● Facts of Information
○ Immutable (can't change, or be retracted)
● Events might invalidate, or supersede past Facts
● Events can be ignored by certain observers
● Knowledge is the accumulation of Facts!
Common use cases:
● A Notification
● State, or State Transfer
● Causality
● History
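A minimal sketch of these characteristics, with made-up names: facts are immutable records appended to a log, and each observer accumulates knowledge only from the facts it cares about, ignoring the rest.

```python
# Facts are immutable and append-only; observers filter what they consume.
from dataclasses import dataclass, field

@dataclass(frozen=True)                      # immutable: a fact can't be changed
class Fact:
    kind: str                                # e.g. "customerAdded", "orderUpdated"
    payload: dict = field(default_factory=dict)

log = []                                     # append-only history of facts

def publish(fact, observers):
    log.append(fact)                         # facts are never retracted
    for interested_in, handler in observers:
        if fact.kind in interested_in:       # other observers simply ignore it
            handler(fact)

customers, orders = [], []
observers = [
    ({"customerAdded"}, lambda f: customers.append(f.payload["name"])),
    ({"orderUpdated"},  lambda f: orders.append(f.payload["order_id"])),
]
```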
28. Event Driven Design (compare / contrast)
Commands
● METHOD / ACTION on an Object
● Imperative, eg: CreateOrder, ShipProduct
1. About Intent
2. Directed
3. Targeted destination
4. Control Focused
Events
● Represent something that HAS HAPPENED
● Past-Tense, eg: OrderCreated, ProductShipped
1. Intentless
2. Anonymous
3. Others Observe, some Ignore
4. Autonomy
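The contrast can be shown in code. A hedged sketch with illustrative names: a command (CreateOrder) carries intent and is directed at one handler, which may reject it; the resulting event (OrderCreated) is an immutable fact that cannot be rejected after it has happened.

```python
from dataclasses import dataclass

@dataclass
class CreateOrder:            # command: intent, directed, may be rejected
    order_id: str
    amount: float

@dataclass(frozen=True)
class OrderCreated:           # event: a past-tense fact, observable by anyone
    order_id: str
    amount: float

def handle(command):
    """The single, targeted destination for CreateOrder commands."""
    if command.amount <= 0:
        raise ValueError("rejected: amount must be positive")
    return OrderCreated(command.order_id, command.amount)
```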
30. Synchronous call stack (held in a single RUNTIME)
API Handler
Program Logic
Database Lookup
Processing
Database Write
↵ Processing
↵ Database Lookup
↵ Program Logic
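The call stack above can be sketched as ordinary blocking calls; the point is that every layer waits on the one below, so the entire chain occupies a single runtime until the deepest call returns. The function names mirror the slide; the dict-backed database is a stand-in.

```python
# Each layer blocks on the next; the whole chain lives in one runtime.
def database_lookup(key, db):
    return db.get(key)

def processing(value):
    return value.upper()            # stand-in for real business processing

def database_write(key, value, db):
    db[key] = value
    return value

def api_handler(key, db):
    value = database_lookup(key, db)        # blocks
    result = processing(value)              # blocks
    return database_write(key, result, db)  # blocks; only now do we return
```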
40. In a synchronous systems flow
- Fixing one bottleneck can result in just moving the bottleneck elsewhere
- Not always easy to apply back-pressure where needed
What about doing it the other way? Can you provide a sustained Request Rate by adjusting concurrency, with a feedback loop? c.f. Little's Law
** Inspired by: "When Serverless Gets In the Way of Scalability" by Lily Li and Christian Zommerfelds, D2L, @ function18, Toronto, 2018
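Little's Law (L = λ · W) makes the concurrency question concrete: the mean number of requests in flight equals the arrival rate times the mean time each request spends in the system. A small sketch (the function names are mine, not from the talk):

```python
# Little's Law: L = lambda * W, rearranged both ways.
def required_concurrency(request_rate_per_s, mean_latency_s):
    """Concurrency needed to sustain a given request rate at a given latency."""
    return request_rate_per_s * mean_latency_s

def sustainable_rate(concurrency, mean_latency_s):
    """The inverse: max sustained rate for a fixed concurrency budget."""
    return concurrency / mean_latency_s
```

So 100 req/s at 200 ms mean latency needs about 20 requests in flight; capping workers at 20 is also how a queue-fed system holds that rate with a feedback loop.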
46. Concept analogous to what we've seen in Microservices Design (moving from a monolith)
(Jonas Bonér, QCon New York 2018, Designing Events-First Microservices)
47. (Jonas Bonér, QCon New York 2018, Designing Events-First Microservices)
56. Publish/Subscribe, Queues, Streams: Which?
From: "Serverless Streams, Topics, Queues, & APIs! How to Pick the Right Serverless Application Pattern", Chris Munns, Senior Developer Advocate, AWS Serverless, August 2018
58. Event Sourcing
● The Event Stream is the source of truth
● The database is just a snapshot of accumulated
events at a certain point in time
62. Event Sourcing
● The Event Stream is the source of truth
✓ RDBMS already works in a similar way internally
✓ Microservices only keep subset snapshots of what they're interested in
✓ Can replay the log whenever needed
○ For auditing, tracing, adding observability metrics
○ On Failure
○ For Replication
○ For historic debugging
● The database is just a snapshot of accumulated
events at a certain point in time
Time Travel !!
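A minimal event-sourcing sketch (the event types and fields are made up): the stream is the source of truth and the "database" is just a fold over it, so replaying a prefix of the log reconstructs the snapshot at that point in time — the "time travel" the slide mentions.

```python
# The database is a fold over the event log; replaying a prefix time-travels.
def apply(snapshot, event):
    kind, payload = event
    if kind == "customerAdded":
        snapshot[payload["id"]] = {"name": payload["name"], "orders": 0}
    elif kind == "orderPlaced":
        snapshot[payload["customer"]]["orders"] += 1
    return snapshot

def replay(log, upto=None):
    """Rebuild the snapshot from the log; upto picks a point in time."""
    snapshot = {}
    for event in (log if upto is None else log[:upto]):
        apply(snapshot, event)
    return snapshot
```

The same replay function serves auditing, replication, and historic debugging: they are all just folds over different slices of the log.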
63. Go build something!
● Take advantage of Free Tiers (all major cloud providers offer some form of this)
● Check out: AWS AppSync (build data-driven apps with real-time and offline support)
● Check out: AWS Amplify (easily integrate cloud services into your front-end framework)
● Check out: AWS Serverless Application Repository
64. Jonathan Dee
jd@tikisprings.com
● cloud architecture
● serverless computing
● microservices design
● decoupling monolithic systems
● legacy migration
● database evolution