goto; London: Keeping your Cloud Footprint in Check (Coburn Watson)
Presented on the "Lean" track at goto; London September 17th, 2015. Covers how Netflix manages cloud cost efficiency in light of innovation and reliability drivers.
Cloud Capacity Planning Tooling - South Bay SRE Meetup Aug-09-2016 (Coburn Watson)
Sebastien de Larquier from our Data Analytics and Engineering team discusses the tools and associated methodology we apply to tackle our cloud capacity planning needs at Netflix.
#lspe Q1 2013 dynamically scaling netflix in the cloud (Coburn Watson)
Meetup presentation on how Netflix dynamically scales in the cloud. It covers topics primarily related to AWS autoscaling and provides some "day-in-the-life" data.
Surge 2013: Maximizing Scalability, Resiliency, and Engineering Velocity in t... (Coburn Watson)
Surge 2013 presentation which covers how Netflix maximizes engineering velocity while keeping risks to scalability, reliability, and performance in check.
Introduction to Apache Kafka and Confluent... and why they matter! (Paolo Castagna)
This is a short introduction to Apache Kafka and Confluent (the company founded by the creators of Kafka). The slides cover the Apache Kafka APIs, including Kafka Connect and Kafka Streams (part of Apache Kafka). Other open-source, ASL-licensed projects are mentioned: KSQL, Schema Registry, REST Proxy, etc.
Many thanks to Codemotion and Seacom for hosting the event.
Modernization patterns to refactor a legacy application into event driven mic... (Bilgin Ibryam)
A use-case-driven introduction to the most common design patterns for modernizing monolithic legacy applications to microservices using Apache Kafka, Debezium, and Kubernetes.
Applying ML on your Data in Motion with AWS and Confluent | Joseph Morais, Co... (HostedbyConfluent)
Event-driven application architectures are becoming increasingly common as a large number of users demand more interactive, real-time, and intelligent responses. Yet it can be challenging to decide how to capture and perform real-time data analysis and deliver differentiating experiences. Join experts from Confluent and AWS to learn how to build Apache Kafka®-based streaming applications backed by machine learning models. Adopting the recommendations will help you establish repeatable patterns for high performing event-based apps.
Simplifying Event Streaming: Tools for Location Transparency and Data Evoluti... (confluent)
At Under Armour Connected Fitness, we’ve built an event streaming platform on top of Kafka and the Confluent stack that makes it easy for developers to produce and consume schema-based events without requiring direct knowledge of Kafka. We are constantly trying to improve the developer experience. The platform consists of multiple federated Kafka clusters, a schema registry, a topology service, an archiver and specialized client libraries and Web / CLI tools that assist developers with producer and consumer workflows.
In this talk, we will take a deeper dive into the design and implementation of a Scala/Java implementation of our client library that allows developers to produce or consume events without worrying about the underlying infrastructure and their location while enjoying the benefits of data compatibility through schemas. We’ll also look at an HTTP based client proxy that exposes the same API but for languages without our native support. Finally, we’ll walk through Web and CLI tools we built to make working with the platform easier.
The content of this talk will be primarily aimed at software developers looking for ideas on how to build Kafka client tools that allow producer/consumer interactions protected by schema-based event definitions while hiding details of the underlying infrastructure.
Real-Time Bidding (RTB) is a service offered by advertising networks to agencies. The agencies decide on the value of advertising opportunities in real-time and bid accordingly on behalf of their advertising clients. Typically the window of opportunity for bids to be calculated from provided consumer details (e.g. cookies) and then submitted is 100ms.
Real-Time Dynamic Data Export Using the Kafka Ecosystem (confluent)
(Preston Thompson, Braze) Kafka Summit SF 2018
If you collect billions of data points every day and create billions more sending and tracking messages, then you know you need to get your infrastructure right. Our clients use Braze to engage their users over their lifecycle via push notifications, emails, in-app messages and more. Using our Currents product, clients can enable multiple configurable integrations to export this event data in real time to a variety of third-party systems, allowing them to tightly integrate with the rest of their operations and understand the impacts of their engagement strategy.
We use Kafka and the Kafka ecosystem to power this high volume real-time export. As you’d expect in a big data environment, we take data collected from a variety of sources—our SDKs, email partner APIs, our own systems—and produce it to Kafka, with topics for each type of event (about 30 types). Kafka Streams filters and transforms this data according to the configurations set by our clients. Clients can choose which types of events should be sent to which third-party systems. Kafka Connect helps to export the data to third-party systems in real time using custom developed connectors. We run a connector instance for each integration for each customer that consumes from the integration-specific topic. On top of it all, we built a service to manage the pipeline. The service provides configurations to the Streams application and also creates topics for new integrations and uses the Connect REST API to create and manage connectors.
In this talk, I will discuss:
- How we started our journey in designing this large-scale streaming architecture
- Why streaming technologies were necessary to solve our technology and business issues
- The lessons we learned along the way that can help you with your Kafka-based architecture
Confluent On Azure: Why you should add Confluent to your Azure toolkit | Alic... (HostedbyConfluent)
As a data professional, you are the glue that makes cross-platform integrations possible. With the increase in adoption of hybrid cloud architectures, Kafka is an increasingly relevant tool for building data pipelines between platforms and accelerating delivery on cloud projects. Early exposure to Kafka on Azure capabilities gives you an edge to build better mousetraps at the design phase.
Customers who are already running Kafka on premises and are looking to extend their Kafka systems to Azure can get started quickly with Confluent Cloud. Additionally, self-managed options can scale easily through DevOps tooling such as Ansible for virtual machines, or containers via Azure Kubernetes Service or Azure Container Instances.
This session is presented from the Microsoft Solution Architect perspective by Israel Ekpo, Microsoft Cloud Solution Architect and Alicia Moniz, Microsoft MVP. They will cover use cases and scenarios, along with key Azure integration points and architecture patterns.
Billions of Messages in Real Time: Why Paypal & LinkedIn Trust an Engagement ... (confluent)
(Bruno Simic, Solutions Engineer, Couchbase)
Breakout during Confluent’s streaming event in Munich. This three-day hands-on course focused on how to build, manage, and monitor clusters using industry best-practices developed by the world’s foremost Apache Kafka™ experts. The sessions focused on how Kafka and the Confluent Platform work, how their main subsystems interact, and how to set up, manage, monitor, and tune your cluster.
Kafka for Real-Time Event Processing in Serverless Environments (confluent)
(Jeff Sharpe + Alex Srisuwan, Capital One) Kafka Summit SF 2018
Using Kafka as a platform messaging bus is common, but bridging communication between real-time and asynchronous components can become complicated, especially when dealing with serverless environments. This has become increasingly common in modern banking where events need to be processed at near-real-time speed. Serverless environments are well-suited to address these needs, and Kafka remains an excellent solution for providing the reliable, resilient communication layer between serverless components and dedicated stream processing services.
In this talk, we will examine some of the strengths and weaknesses of using Kafka for real-time communication, some tips for efficient interactions with Kafka and AWS Lambda, and a number of useful patterns for maximizing the strengths of Kafka and serverless components.
Kafka and Stream Processing, Taking Analytics Real-time, Mike Spicer (confluent)
Do you think that analytics are only run on stored data sets? Think again: the combination of Apache Kafka and stream processing enables analytics on real-time data streams.
First, we give a brief overview of stream processing and how it differs from the request response model of analytics on stored data. Next, we cover the characteristics of Kafka which make it such a good fit for Stream processing and why they matter. Finally, we show a number of use cases which highlight how stream processing is being used to do real-time analytics at scale with very low latency.
MongoDB and Machine Learning with Flowable (Flowable)
Joram Barrez, Principal Software Engineer at Flowable, explains how to run Flowable on MongoDB.
It was presented at the Flowfest 2018 in Barcelona, Spain
Event Driven Architecture with a RESTful Microservices Architecture (Kyle Ben...) (confluent)
Tinder’s Quickfire Pipeline powers all things data at Tinder. It was originally built using AWS Kinesis Firehoses and has since been extended to use both Kafka and other event buses. It is the core of Tinder’s data infrastructure. This rich flow of both client and backend data has been extended to serve a variety of needs at Tinder, including Experimentation, ML, CRM, and Observability, giving backend developers easier access to shared client-side data. We do this using many systems, including Kafka, Spark, Flink, Kubernetes, and Prometheus. Many of Tinder’s systems were natively designed in an RPC-first architecture.
Things we’ll discuss about decoupling your system at scale via event-driven architectures include:
– Powering ML, backend, observability, and analytical applications at scale, including an end-to-end walkthrough of our processes that allow non-programmers to write and deploy event-driven data flows.
– Showing end-to-end usage of dynamic event processing that creates other stream processes, via a dynamic control-plane topology pattern and the broadcast state pattern
– How to manage the unavailability of cached data that would normally come from repeated API calls for data that’s being backfilled into Kafka, all online! (and why this is not necessarily a “good” idea)
– Integrating common OSS frameworks and libraries like Kafka Streams, Flink, Spark and friends to encourage the best design patterns for developers coming from traditional service oriented architectures, including pitfalls and lessons learned along the way.
– Why and how to avoid overloading microservices with excessive RPC calls from event-driven streaming systems
– Best practices in common data flow patterns, such as shared state via RocksDB + Kafka Streams as well as the complementary tools in the Apache Ecosystem.
– The simplicity and power of streaming SQL with microservices
Joram Barrez and Tijs Rademakers, Principal Software Engineers at Flowable, present the current state of (Flowable) things.
It was presented at the Flowfest 2018 in Barcelona, Spain
Keeping Analytics Data Fresh in a Streaming Architecture | John Neal, Qlik (HostedbyConfluent)
Qlik is an industry leader across its solution stack, both on the Data Integration side of things with Qlik Replicate (real-time CDC) and Qlik Compose (data warehouse and data lake automation), and on the Analytics side with Qlik Sense. These two “sides” of Qlik are coming together more frequently these days as the need for “always fresh” data increases across organizations.
When real-time streaming applications are the topic du jour, companies look to Apache Kafka to provide the architectural backbone those applications require. Those same companies turn to Qlik Replicate to put the data from their enterprise database systems into motion at scale, whether that data resides in “legacy” mainframe databases; traditional relational databases such as Oracle, MySQL, or SQL Server; or applications such as SAP and Salesforce.
In this session we will look in depth at how Qlik Replicate can be used to continuously stream changes from a source database into Apache Kafka. From there, we will explore how a purpose-built consumer can be used to provide the bridge between Apache Kafka and an analytics application such as Qlik Sense.
Getting Started with Serverless Architectures | AWS Public Sector Summit 2016 (Amazon Web Services)
By building your application with AWS Lambda, Amazon API Gateway, and Amazon DynamoDB, you can free yourself from the burden of managing servers while gaining agility and simple scaling. After introducing the basics of building microservices with AWS Lambda and Amazon API Gateway, the session highlights how the Democratic National Committee (DNC) Technology Team uses AWS Lambda and Amazon DynamoDB microservices to provide campaigns and state parties customized applications on top of a core data platform. This serverless architecture has helped the DNC Technology Team improve their microservice functionality and development process, ensuring their applications are performant through the extremely erratic usage levels of a campaign cycle.
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with K... (confluent)
Microservices, events, containers, and orchestrators are dominating our vernacular today. As operations teams adapt to support these technologies in production, cloud-native platforms like Cloud Foundry and Kubernetes have quickly risen to serve as force multipliers of automation, productivity and value. Kafka is providing developers a critically important component as they build and modernize applications to cloud-native architecture. This talk will explore:
• Why cloud-native platforms and why run Kafka on Kubernetes?
• What kind of workloads are best suited for this combination?
• Tips to determine the path forward for legacy monoliths in your application portfolio
• Running Kafka as a Streaming Platform on Container Orchestration
[Capitole du Libre] #serverless - put it to work in your company... (Ludovic Piot)
Just like IaaS cloud before it, serverless promises to make your projects more successful by shortening time to market and smoothing the relationship between Devs and Ops.
But implementing it within a company remains complex and costly.
After two years spent building managed platforms of this kind, we share our experience of what it takes to implement serverless in the enterprise, avoiding the pain points and keeping constraints to a minimum.
First, the technical architecture, with two very different implementations: Kubernetes and Helm on one side, Clever Cloud on-premise on the other.
Next, setting up and using OpenFaaS: how to test and version Function-as-a-Service code; the issues of blue/green deployment, rolling updates, and A/B testing; and how to quickly diagnose dependencies and communication between services.
Finally, the topics dear to production: vulnerability management and patch management, heterogeneous infrastructure, monitoring and alerting, handling obsolete stacks, etc.
Leapfrog into Serverless - a Deloitte-Amtrak Case Study | Serverless Confere... (Gary Arora)
This talk was delivered at the Serverless Conference in New York City in 2017. Deloitte and Amtrak built a serverless, cloud-native solution on AWS for a real-time operational data store and a near-real-time reporting data mart that modernized Amtrak's legacy systems and applications. With serverless solutions, we were able to leapfrog over several rungs of computing evolution.
Gary Arora is a Cloud Solutions Architect at Deloitte Consulting, specializing in Azure & AWS.
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with A... (confluent)
Microservices, events, containers, and orchestrators are dominating our vernacular today. As operations teams adapt to support these technologies in production, cloud-native platforms like Pivotal Cloud Foundry and Kubernetes have quickly risen to serve as force multipliers of automation, productivity and value.
Apache Kafka® is providing developers a critically important component as they build and modernize applications to cloud-native architecture.
This talk will explore:
• Why cloud-native platforms and why run Apache Kafka on Kubernetes?
• What kind of workloads are best suited for this combination?
• Tips to determine the path forward for legacy monoliths in your application portfolio
• Demo: Running Apache Kafka as a Streaming Platform on Kubernetes
2016-06 - Design your API management strategy - AWS - Microservices on AWS (SmartWave)
The morning session started with a presentation on working with a microservices API gateway in hybrid architectures, by Jean-Pierre LeGoaller, Architect at AWS. We learned how to greatly reduce coding effort, make applications far more efficient, and decrease errors, all at the same time, using small and flexible microservices with an API gateway. Jean-Pierre then illustrated the benefits of AWS Lambda functions, which run code seamlessly as a service on AWS's high-availability compute infrastructure.
AWS re:Invent 2016: The State of Serverless Computing (SVR311) (Amazon Web Services)
Join us to learn about the state of serverless computing from Dr. Tim Wagner, General Manager of AWS Lambda. Dr. Wagner discusses the latest developments from AWS Lambda and the serverless computing ecosystem. He talks about how serverless computing is becoming a core component in how companies build and run their applications and services, and he also discusses how serverless computing will continue to evolve.
Amazon AWS Big Data Demystified | Introduction to streaming and messaging flu... (Omid Vahdaty)
Amazon AWS Big Data Demystified meetup:
https://www.meetup.com/AWS-Big-Data-Demystified/
Introduction to streaming and messaging: Flume, Kafka, SQS, Kinesis
What if there were an easier way to perform big data analysis with less setup, instant scaling, and no servers to provision and manage? With serverless computing, you can perform real-time stream processing of multiple data types without needing to spin up servers or install software. Come learn how you can use AWS Lambda with Amazon Kinesis to analyze streaming data in real-time and then store the results in a managed NoSQL database such as Amazon DynamoDB. You’ll learn tips and tricks for doing in-line processing, data manipulation, and even distributed MapReduce on large data sets.
What is Serverless Computing?
Covers its introduction, architecture, characteristics, commercial platforms, benefits and drawbacks, use cases and workloads, and API composition.
AWS Meetup - Nordstrom Data Lab and the AWS Cloud (NordstromDataLab)
The Nordstrom Data Lab is building out an API that powers product recommendations for our customers online and beyond. Recommendo, our flagship product, was built from the ground up using Node.js and AWS in a little over three months. Since launch in November 2013 we've served up over three billion recommendations and survived Black Friday and Cyber Monday without breaking a sweat. We'll be sharing our learnings from building and operating a high-traffic API on the AWS platform as a service, focusing on Node.js, Elastic Beanstalk, and DynamoDB. Additionally, we'll discuss some of the cultural challenges and opportunities presented when adopting the public cloud at a large corporate IT organization. In short, we believe there are tremendous advantages to be had for enterprises willing to make the leap to the cloud.
An Enterprise Resource Planning system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which drives enhanced productivity. Here is a detailed explanation of the ERP modules; going through the points will help you understand how the software is changing work dynamics.
More details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... (Shahin Sheidaei)
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore the factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach anything from simple to complex concepts.
Top 7 Unique WhatsApp API Benefits | Saudi Arabia (Yara Milbes)
Discover the transformative power of the WhatsApp API in our latest SlideShare presentation, "Top 7 Unique WhatsApp API Benefits." In today's fast-paced digital era, effective communication is crucial for both personal and professional success. Whether you're a small business looking to enhance customer interactions or an individual seeking seamless communication with loved ones, the WhatsApp API offers robust capabilities that can significantly elevate your experience.
In this presentation, we delve into the top 7 distinctive benefits of the WhatsApp API, provided by the leading WhatsApp API service provider in Saudi Arabia. Learn how to streamline customer support, automate notifications, leverage rich media messaging, run scalable marketing campaigns, integrate secure payments, synchronize with CRM systems, and ensure enhanced security and privacy.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Large Language Models and the End of Programming (Matt Welsh)
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam (takuyayamamoto1800)
These slides show a simulation example and how to compile the solver.
The Helmholtz equation can be solved with helmholtzFoam; the Helmholtz equation with uniformly dispersed bubbles can be simulated with helmholtzBubbleFoam.
Understanding Globus Data Transfers with NetSage (Globus)
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage has traditionally used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ... (Juraj Vysvader)
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my extensions saw 63K downloads (possibly powering tens of thousands of websites).
Providing Globus Services to Users of JASMIN for Environmental Data Analysis (Globus)
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Developing Distributed High-performance Computing Capabilities of an Open Sci... (Globus)
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
GraphSummit Paris - The art of the possible with Graph Technology (Neo4j)
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptx (rickgrimesss22)
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Quarkus Hidden and Forbidden Extensions (Max Andersen)
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... (Globus)
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on demand, capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resultant analysis (e.g. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
Globus Compute with IRI Workflows - GlobusWorld 2024 (Globus)
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of this work the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks; we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and serve as a tool to connect compute at different facilities.
Introducing Crescat - Event Management Software for Venues, Festivals and Eve... (Crescat)
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
5. Server(full?) - transition to cloud
- Dedicated server: inefficient; costly; difficult operations; control over data
- IaaS: costly with higher demands; dynamic scalability not seamless; easier creation of dedicated hosts
- PaaS: vendor and technology lock-in; difficult migration of projects that are not cloud-ready/well-written; flexible and scalable
6. Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity.
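To make this consumption-based pricing concrete, the bill for a function can be sketched as follows; the symbols and the worked numbers are illustrative, not any specific provider's tariff.

```latex
% cost = per-request fees + compute billed in GB-seconds
% N = invocations, t = duration per invocation (s), m = memory allocated (GB),
% p_req / p_gbs = unit prices
\[
  \mathrm{cost} = N \cdot p_{\mathrm{req}} + N \cdot t \cdot m \cdot p_{\mathrm{gbs}}
\]
% Example: 10^6 invocations of a 512 MB function running 200 ms each consume
% 10^6 * 0.2 s * 0.5 GB = 100,000 GB-seconds; an idle month costs nothing.
```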
9. Suitable for…
• Event-driven applications: messages from IoT devices, pay-per-click services
• Stateless applications: application APIs, mobile backend as a service (MBaaS)
• Scheduled events: parsing data at a certain time of day
10. Not suitable for…
• Long-running operations: need to be broken down
• Stateful applications: state must be stored separately
13. Pitfall 1: Vendor control and lock-in
• Limits on execution time (e.g. f(x) must finish in < 2 min)
• Monitoring, health checks defined by providers
14. Pitfall 2: Thin backend, thick clients
• Lots of logic needs to be handled by the client
• For multiple clients, it could mean a lot of repetition
15. Pitfall 3: Startup latency (in JVM)
• If there are longer times between executions or the calls are infrequent
16. Pitfall 4: Look ma, no operations!
…also, no server optimizations
19. Stateless, single purpose, event driven
Good in combination with other vendor-specific PaaS resources
Pay only for the total execution time of the function
Service providers supporting cloud functions: Google Cloud Functions, AWS Lambda, Azure Functions, IBM OpenWhisk, …
Tooling is not great: develop either in the browser or prepare for a lot of boilerplate
Honorary mention: Serverless Framework, Spring Cloud Functions
Topic transitions…
New technologies aren’t really new
New (old) technologies are being utilized for new, modern purposes
Cloud is a first-class citizen when it comes to developing new software
Many possibilities exist for utilizing this computing power, but not everything is suitable
Let’s look at some real life scenarios…
Typical web application (client/server) setup:
Client sends HTTP request to REST API
Application receives request through a server socket
Application processes the request and sends HTTP response with body (data)
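A minimal sketch of this classic server(full) model, using the JDK's built-in com.sun.net.httpserver package (the endpoint path and payload are made up for illustration):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ClassicServer {
    public static void main(String[] args) throws Exception {
        // The application owns a long-lived server socket on a port it manages itself.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/hello", exchange -> {
            // Process the request and send an HTTP response with a body (data).
            byte[] body = "{\"message\":\"hello\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // stays up even when no requests arrive
    }
}
```

The contrast with FaaS: this process must be provisioned, patched, and kept running around the clock, whether or not requests come in.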
In-house server hosting
Only feasible for big companies, e.g. U.S. Steel
IaaS
You can use service providers
Or you can develop your own infrastructure
PaaS
Also CaaS (containers) and SaaS
Connection between serverless and FaaS: serverless utilizes the FaaS model
Event sourcing: AWS Lambda can be used to process events, e.g. processing a Kinesis stream (the Lambda service polls the stream)
Showcase:
AWS Console
Create Lambda function
You can test the function directly
Upload new version or save it directly in browser
API Gateway, IAM Roles
A Kinesis stream receives data, the Lambda processes the stream and logs errors, and DynamoDB is used to store the errors; a minimal sketch follows below.
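A sketch of that showcase pipeline in Java, assuming the aws-lambda-java-events and AWS SDK v1 DynamoDB dependencies; the table name "stream-errors" and the ERROR-substring check are hypothetical stand-ins for the demo's actual logic:

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Invoked by the Lambda service with a batch of Kinesis records; matching
// records are stored in a DynamoDB table.
public class ErrorLoggingHandler implements RequestHandler<KinesisEvent, Void> {

    private final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

    @Override
    public Void handleRequest(KinesisEvent event, Context context) {
        for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
            // The runtime has already base64-decoded the payload.
            String payload = StandardCharsets.UTF_8
                    .decode(record.getKinesis().getData())
                    .toString();
            if (payload.contains("ERROR")) { // hypothetical matching rule
                Map<String, AttributeValue> item = new HashMap<>();
                item.put("id", new AttributeValue(UUID.randomUUID().toString()));
                item.put("payload", new AttributeValue(payload));
                dynamo.putItem("stream-errors", item); // hypothetical table name
            }
        }
        return null;
    }
}
```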
You develop your application not using specific features, but abstractions (e.g. Event sourcing, Function composition, CQRS)
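Spring Cloud Function (mentioned above) is one concrete take on this: business logic is written as plain java.util.function.Function beans, and the framework adapts them to HTTP endpoints, messaging, or a FaaS provider. A minimal sketch, with illustrative function names:

```java
import java.util.function.Function;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

// Each bean is a deployable unit of business logic; the framework decides
// how it is exposed (web endpoint, message listener, cloud function).
@SpringBootApplication
public class FunctionsApp {

    @Bean
    public Function<String, String> uppercase() {
        return String::toUpperCase;
    }

    @Bean
    public Function<String, String> reverse() {
        return s -> new StringBuilder(s).reverse().toString();
    }

    public static void main(String[] args) {
        SpringApplication.run(FunctionsApp.class, args);
    }
}
```

Functions can then be composed at deployment time (e.g. spring.cloud.function.definition=uppercase|reverse) without changing the code, which is exactly the kind of vendor-neutral abstraction meant here.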
User state (session) management
Shared data
This also applies to interpreted languages: the application is passivated after a certain period of inactivity. It may be back up within a few milliseconds, but you have to bear it in mind.
It is a black box to which you don't have any access
https://serverless.com/
Watch out for pricing when functions are used in combination with other services; you may end up paying a lot for the supporting services around the function.
Take into account: API Gateway, IAM access, S3 buckets, DynamoDB, etc.
Spring Cloud Functions - Dave Syer @ OpenSlava 2017