This document discusses using AWS Lambda for serverless computing on the cloud. It covers topics such as what AWS Lambda is, its motivations, under the hood details of how it works, integrations with other AWS services, limitations, logging, configuration management, security, error handling, monitoring, alerting, testing, deployment practices, performance considerations including cold starts, and examples of using AWS Lambda at OpsGenie for incident management and data replication.
My talk at Scala Bay Meetup at Netflix about Powering the Partner APIs with Scalatra and Netflix OSS. This talk was delivered on September 9th 2013, at 8 PM at Netflix, Los Gatos.
Pulsar Architectural Patterns for CI/CD Automation and Self-Service - Devin Bost, StreamNative
We examine real-world architectural patterns involving Apache Pulsar to automate the creation of function and pub/sub flows for improved operational scalability and ease of management. We’ll cover CI/CD automation patterns and reveal our innovative approach of leveraging streaming data to create a self-service platform that automates the provisioning of new users. We will also demonstrate the innovative approach of creating function flows through patterns and configuration, enabling non-developer users to create entire function flows simply by changing configurations. These patterns enable us to drive the automation of managing Pulsar to a whole new level. We also cover CI/CD for on-prem, GCP, and AWS users.
This is Part 2 of this presentation: https://www.youtube.com/watch?v=pmaCG...
In summary, we will cover:
CI/CD for on-prem, GCP, and AWS users
Automated creation of function flows by configuration
Automated provisioning of pub/sub users and topics
Architectural patterns and best practices that enable automation
Overstock has leveraged Pulsar as the backbone of a self-service data fabric, a unified data platform to enable users to publish and consume data across the company and integrate with other services. We utilized Pulsar to solve a data governance problem, and Pulsar has performed marvelously. To support our real-world production use cases, we have developed message flows, integrations, and architectural patterns to solve common use cases, maximize value, simplify ease-of-use, automate management, and unify company data and services around this new platform.
Securing your Pulsar Cluster with Vault - Chris Kellogg, StreamNative
Learn how to secure a Pulsar cluster with Hashicorp Vault and deploy it on Kubernetes. Vault provides a secure way to generate tokens and store sensitive data and Pulsar has a pluggable architecture for authentication, authorization and secret management. This talk will walk through how to create custom plugins for Vault, integrate them with Pulsar and then deploy a Pulsar cluster on Kubernetes.
[Demo session] Managed Kafka Service - Oracle Event Hub Service - Oracle Korea
Oracle Cloud offers Kafka as a managed service. In this meetup session, we introduce the convenience of the managed Kafka service and give a demo of it. Kafka not only holds a core position as infrastructure for MSA, big data, and blockchain, but also carries real significance as a key integration component of Oracle Cloud.
We introduce Kafka's role as an integration component of Oracle Cloud and the composition of its main services.
* This session is suitable for beginner, novice, and intermediate-level attendees alike.
Overcoming the Perils of Kafka Secret Sprawl (Tejal Adsul, Confluent) Kafka S... - confluent
Secrets are indisputably the biggest risk area in the authentication arena, and Apache Kafka is no exception. Kafka services are typically configured using properties files that contain plain-text secret configurations. Upon startup, these configurations are transmitted in clear text to different components and stored in the filesystem, internal topics, and logs, thus creating a secret sprawl.
This talk will deep dive into how we can eliminate this secret sprawl by adding Config Providers to integrate with centralized management systems such as Vault, Keywhiz, or AWS Secrets Manager.
We’ll cover
Security implications of clear text secrets and secret sprawl
Insecure parsing of secrets configurations in Kafka
Know-how about Kafka Config Providers
Centralized Management Systems
How to secure Kafka with CP and CMS
Trust but Verify ~ Demo
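The Config Provider mechanism discussed above can be illustrated with a minimal properties sketch. This uses Kafka's built-in FileConfigProvider; the file path and key name are hypothetical, and a Vault- or Keywhiz-backed provider would plug in the same way via a custom ConfigProvider implementation:

```properties
# Register a config provider (FileConfigProvider ships with Apache Kafka)
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# Reference the secret indirectly instead of embedding it in plain text.
# The path /etc/kafka/secrets.properties and key keystore.password are hypothetical.
ssl.keystore.password=${file:/etc/kafka/secrets.properties:keystore.password}
```

With this indirection, the plain-text value never appears in the properties file itself; it is resolved at startup from the external secrets store.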
With Apache Kafka 0.9, the community has introduced a number of features to make data streams secure. In this talk, we’ll explain the motivation for making these changes, discuss the design of Kafka security, and explain how to secure a Kafka cluster. We will cover common pitfalls in securing Kafka, and talk about ongoing security work.
Building a Messaging Solution for OVHcloud with Apache Pulsar - Pierre Zemb, StreamNative
OVHcloud is the biggest European cloud provider. From dedicated servers to Managed Kubernetes, from VMware®-based Hosted Private Cloud to OpenStack-based Public Cloud, we have over 1.4 million customers worldwide.
Internally, we have been running Apache Kafka for years, and despite all the expertise gained operating multiple clusters with millions of messages per second, we decided to shift and build the foundation of our 'topic-as-a-service' product, called ioStream, on Apache Pulsar.
In this talk, you will gain insight into why we decided to use Apache Pulsar instead of Apache Kafka as the core of ioStream. We will walk you through our journey with Apache Pulsar, from deployment to management, covering what worked and what did not.
Strata London 2018: Multi-everything with Apache Pulsar - Streamlio
Ivan Kelly offers an overview of Apache Pulsar, a durable, distributed messaging system underpinned by Apache BookKeeper that provides the enterprise features necessary to guarantee that your data is where it should be and only accessible by those who should have access. Ivan explores the features built into Pulsar that will help your organization stay in compliance with key requirements and regulations: multi-data-center replication, multi-tenancy, role-based access control, and end-to-end encryption. Ivan concludes by explaining why Pulsar's multi-data-center story will alleviate headaches for operations teams ensuring compliance with GDPR.
KSQL and Security: The Current State of Affairs (Victoria Xia, Confluent) Kaf... - confluent
As KSQL users move from development to production, security becomes an important consideration. Because KSQL is built on top of Kafka Streams, which in turn is built on top of Kafka Consumers and Producers, KSQL can leverage existing security functionality, including SSL encryption and SASL authentication in communications with Kafka brokers. However, authentication and authorization between KSQL servers and KSQL clients is a different story. As of December 2018, SSL for communication between KSQL clients and servers is enabled for the REST API, but not yet for the CLI. By April 2019, SSL will be supported in the KSQL CLI, and additional security functionality including SASL authentication, ACLs, audit logs, and RBAC will be in the works as well. This talk will cover the security options available for KSQL, including any new options added by April 2019, and will also include a preview of features to come. Audience members will leave with an understanding of what security features are currently available, how to configure them, current limitations, and upcoming features. The talk may also include common pitfalls and tips for debugging a KSQL security setup.
Ever wished you had a list of cheat codes to unleash the full power of AWS Lambda for your production workload? Come learn how to build a robust, scalable, and highly available serverless application using AWS Lambda. In this session, we discuss hacks and tricks for maximizing your AWS Lambda performance, such as leveraging container reuse, using the 500 MB scratch space and local cache, creating custom metrics for managing operations, aligning upstream and downstream services to scale along with Lambda, and many other workarounds and optimizations across your entire function lifecycle.
You also learn how Hearst converted its real-time clickstream analytics data pipeline from a server-based model to a serverless one. The infrastructure of the data pipeline relied on Amazon EC2 instances and cron jobs to shepherd data through the process. In 2016, Hearst converted its data pipeline architecture to a serverless process that relies on event triggers and the power of AWS Lambda. By moving from a time-based process to a trigger-based process, Hearst improved its pipeline latency times by 50%.
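The container-reuse and scratch-space tricks mentioned in this session can be sketched in a few lines. This is a minimal illustration, not AWS's reference code; the cache layout under /tmp and the event shape are assumptions:

```python
import json
import os
import time

# Initialization outside the handler runs once per container, so warm
# invocations reuse it (container reuse). Stand-in for a DB or SDK client.
EXPENSIVE_CLIENT = {"initialized_at": time.time()}

CACHE_DIR = "/tmp/cache"  # Lambda's writable scratch space survives warm starts

def handler(event, context):
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = event.get("key", "default")
    path = os.path.join(CACHE_DIR, key)
    if os.path.exists(path):  # warm invocation: reuse previously computed work
        with open(path) as f:
            return {"cached": True, "value": json.load(f)}
    value = {"computed_for": key}  # cold path: compute once, then cache
    with open(path, "w") as f:
        json.dump(value, f)
    return {"cached": False, "value": value}
```

Invoked twice with the same key, the second call is served from the /tmp cache for as long as the container stays warm.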
Building Out Your Kafka Developer CDC Ecosystem - confluent
Building Out Your Kafka Developer CDC Ecosystem, Neil Buesing, VP of Streaming Technologies for Object Partners (OPI)
Meetup Link: https://www.meetup.com/TwinCities-Apache-Kafka/events/272944023/
Stream-Native Processing with Pulsar FunctionsStreamlio
The Apache Pulsar messaging solution can perform lightweight, extensible processing on messages as they stream through the system. This presentation provides an overview of this new functionality.
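As a sketch of this lightweight processing model, a 'native'-style Pulsar function can be written as a plain Python callable that transforms each message payload; topic wiring and deployment flags are handled by Pulsar at deploy time and are omitted here (an assumption-level illustration, not a full SDK example):

```python
def process(input):
    # Per-message transformation applied as messages stream through Pulsar:
    # normalize whitespace and case. Pulsar routes the return value to the
    # function's output topic.
    return str(input).strip().lower()

# Local simulation of messages flowing through the function:
if __name__ == "__main__":
    for msg in ["  Hello ", "PULSAR"]:
        print(process(msg))
```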
(GAM402) Turbine: A Microservice Approach to 3 Billion Game Requests - Amazon Web Services
Turbine shares lessons learned from their new microservice game platform, which used Docker, Amazon EC2, Elastic Load Balancing, and Amazon ElastiCache to scale up as the game exceeded expectations. Learn about their Docker-based microservices architecture and how they integrated it with a legacy multiplatform game-traffic stack. Turbine shares how they gracefully degraded their services rather than going down and how they dealt with unpredictable client behavior. Hear how they resharded their live MongoDB clusters while the game was running. Finally, learn how they broke their game-event traffic into a separate Kafka-based analytics system, which handled the ingestion of over two billion events a day.
Big data event streaming is a very common part of any big data architecture. Of the available open-source big data streaming technologies, Apache Kafka stands out because of its real-time, distributed, and reliable characteristics, all made possible by the Kafka architecture. This talk highlights those features.
10 Lessons Learned from using Kafka in 1000 microservices - ScalaUA - Natan Silnitsky
Kafka is the bedrock of Wix’s distributed Mega Microservices system.
Over the years we have learned a lot about how to successfully scale our event-driven architecture to roughly 1400 mostly Scala microservices.
In this talk, you will learn about 10 key decisions and steps you can take in order to safely scale-up your Kafka-based system.
These include:
* How to increase dev velocity of event-driven style code
* How to optimize working with Kafka in a polyglot setting
* How to migrate from request-reply to event-driven
* How to tackle multi-DC environments
AWS re:Invent 2016: Development Workflow with Docker and Amazon ECS (CON302) - Amazon Web Services
Keeping consistent environments across your development, test, and production systems can be a complex task. Docker containers offer a way to develop and test your application in the same environment in which it runs in production. You can use tools such as the ECS CLI and Docker Compose for local testing of applications; Jenkins and AWS CodePipeline for building and workflow orchestration; Amazon EC2 Container Registry to store your container images; and Amazon EC2 Container Service to manage and scale containers. In this session, you will learn how to build containers into your development workflow and orchestrate container deployments using Amazon ECS. You will hear how Okta runs 30,000 tests per developer commit and releases 10,000 new lines of code each week to production with a CI system based on 100% AWS services. We'll also discuss how Okta uses ECS for parallelized testing in CI and for production microservices in a multi-region, always on cloud service.
Scaling Customer Engagement with Apache Pulsar - StreamNative
Iterable's platform is used by marketers to reach hundreds of millions of users every day, and those numbers are quickly growing. Iterable's infrastructure is built with pub-sub messaging at its core, so the reliability, scalability, and flexibility provided by that system are business critical.
In this talk we'll discuss why Iterable chose Pulsar as a pub-sub messaging system, as well as how Iterable is taking advantage of some of the more recently added features in Pulsar. We'll also talk about some of the challenges we encountered, where we think Pulsar can improve, and some contributions we've made to the open source community around Pulsar.
Show Me Kafka Tools That Will Increase My Productivity! (Stephane Maarek, Dat... - confluent
In the Apache Kafka world, there is such a great diversity of open source tools available (I counted over 50!) that it’s easy to get lost. Over the years I have dealt with Kafka, I have learned to particularly enjoy a few of them that save me a tremendous amount of time over performing manual tasks. I will be sharing my experience and doing live demos of my favorite Kafka tools, so that you too can hopefully increase your productivity and efficiency when managing and administering Kafka. Come learn about the latest and greatest tools for CLI, UI, Replication, Management, Security, Monitoring, and more!
Kafka Summit SF 2017 - Best Practices for Running Kafka on Docker Containers - confluent
Docker containers provide an ideal foundation for running Kafka-as-a-Service on-premises or in the public cloud. However, using Docker containers in production environments poses some challenges – including container management, scheduling, network configuration and security, and performance. In this session, we’ll share lessons learned from implementing Kafka-as-a-Service with Docker containers.
Presented at Kafka Summit SF 2017 by Nanda Vijaydev
DevOps Days Tel Aviv - Serverless Architecture - Antons Kranga
Slides from the Serverless Architecture with AWS workshop delivered in Tel Aviv in December 2016 and at XP Days in Kyiv in November. We go into detail about AWS Lambda and give a few implementation blueprints targeted at web applications.
Webinar: Serverless Architectures with AWS Lambda and MongoDB Atlas - MongoDB
It’s easier than ever to power serverless architectures with our managed MongoDB as a service, MongoDB Atlas. In this session, we will explore the rise of serverless architectures and how they’ve rapidly integrated into public and private cloud offerings.
The speaker spoke about the features and benefits of the AWS Lambda service and explained how to increase system performance by using AWS services.
This presentation by Mykhailo Brodskyi (Senior Software Engineer, Consultant, GlobalLogic, Kharkiv) was delivered at the GlobalLogic Kharkiv Java Conference 2018 on June 10, 2018.
GOTO Stockholm - AWS Lambda - Logic in the Cloud Without a Back-end - Ian Massingham
Slides from my session at Goto Stockholm where I talked about AWS Lambda and how it can be used to build reliable, scalable & low-cost applications, without servers for you to manage.
Special thanks to James Hall at Parallax for allowing me to talk about the awesome application that they built using AWS Lambda, Amazon API Gateway & Amazon DynamoDB :)
Alex Casalboni and Austen Collins discuss the evolution of Serverless. Learn about the exciting new trend that's redefining the cloud computing industry in this in-depth webinar designed to teach you the basics of serverless computing and design.
Infrastructure at Scale: Apache Kafka, Twitter Storm & Elastic Search (ARC303... - Amazon Web Services
This is a technical architect's case study of how Loggly has employed the latest social-media-scale technologies as the backbone of ingestion processing for our multi-tenant, geo-distributed, and real-time log management system. This presentation describes design details of how we built a second-generation system fully leveraging AWS services, including Amazon Route 53 DNS with heartbeat and latency-based routing, multi-region VPCs, Elastic Load Balancing, Amazon Relational Database Service, and a number of proactive and reactive approaches to scaling computational and indexing capacity.
The talk includes lessons learned in our first-generation release, validated by thousands of customers; speed bumps and the mistakes we made along the way; various data models and architectures previously considered; and success at scale: speeds, feeds, and an unmeltable log processing engine.
Utah Code Camp is a computer technology conference hosted annually by Utah Geek Events in Salt Lake City, UT. This presentation is an introduction to cloud computing and the Amazon AWS Cloud platform.
With AWS Lambda, you can easily build scalable microservices for mobile, web, and IoT applications or respond to events from other AWS services without managing infrastructure. In this session, you’ll see demonstrations and hear more about newly launched features. We’ll show you how to use Lambda to build web, mobile, or IoT backends and voice-enabled apps, and we'll show you how to extend both AWS and third party services by triggering Lambda functions. We’ll also provide productivity and performance tips for getting the most out of your Lambda functions and show how cloud native architectures use Lambda to eliminate “cold servers” and excess capacity without sacrificing scalability or responsiveness.
Apache Camel v3, Camel K and Camel Quarkus - Claus Ibsen
In this session, we will explore key challenges with function interactions and coordination, addressing these problems using Enterprise Integration Patterns (EIP) and modern approaches with the latest innovations from the Apache Camel community:
- Apache Camel 3
- Camel K
- Camel Quarkus
Apache Camel is the Swiss army knife of integration, and the most powerful integration framework. In this session you will hear about the latest features in the brand new 3rd generation.
Camel K is a lightweight integration platform that enables Enterprise Integration Patterns to be used natively on any Kubernetes cluster. When used in combination with Knative, a framework that adds serverless building blocks to Kubernetes, and the subatomic execution environment of Quarkus, Camel K can mix serverless features such as auto-scaling, scaling to zero, and event-based communication with the outstanding integration capabilities of Apache Camel.
We will show how Camel K works. We’ll also use examples to demonstrate how Camel K makes it easier to connect to cloud services or enterprise applications using some of the 300 components that Camel provides.
Cloud-Native Integration with Apache Camel on Kubernetes (Copenhagen October ... - Claus Ibsen
Cloud-native applications of the future will consist of hybrid workloads: stateful applications, batch jobs, microservices, and functions, wrapped as Linux containers and deployed via Kubernetes on any cloud.
In this session, we will explore key challenges with function interactions and coordination, addressing these problems using Enterprise Integration Patterns (EIP) and modern approaches with the latest innovations from the Apache Camel community:
- Apache Camel 3
- Camel K
- Camel Quarkus
Apache Camel is the Swiss army knife of integration, and the most powerful integration framework. In this session you will hear about the latest features in the brand new 3rd generation.
Camel K, is a lightweight integration platform that enables Enterprise Integration Patterns to be used natively on any Kubernetes cluster. When used in combination with Knative, a framework that adds serverless building blocks to Kubernetes, and the subatomic execution environment of Quarkus, Camel K can mix serverless features such as auto-scaling, scaling to zero, and event-based communication with the outstanding integration capabilities of Apache Camel.
We will show how Camel K works. We'll also use examples to demonstrate how Camel K makes it easier to connect to cloud services or enterprise applications using some of the 300 components that Camel provides.
게임을 위한 Cloud Native on AWS
IT 기술이 변화하며 클라우드를 보다 적극적으로 사용하는 게임사가 늘어나는 추세입니다. 게임 고객분들이 다양한 시각에서 AWS Cloud Service를 보다 효과적으로 잘 사용할 수 있는 방법을 소개합니다. 또한, 고객분들께서 개발에 집중하고 효율적으로 운영할 수 있도록 AWS가 어떠한 도움을 드리는지에 대해 말씀드리고자 합니다.
My Unsafe - Unsafe Interceptor, Native Memory Leak Tracker and Access Checker on the JVM
MySafe intercepts (instruments) sun.misc.Unsafe calls and keeps records of allocated memories. So it can give the allocated memory informations, detect the invalid memory accesses and find origins of native memory leaks.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptxrickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Top 7 Unique WhatsApp API Benefits | Saudi ArabiaYara Milbes
Discover the transformative power of the WhatsApp API in our latest SlideShare presentation, "Top 7 Unique WhatsApp API Benefits." In today's fast-paced digital era, effective communication is crucial for both personal and professional success. Whether you're a small business looking to enhance customer interactions or an individual seeking seamless communication with loved ones, the WhatsApp API offers robust capabilities that can significantly elevate your experience.
In this presentation, we delve into the top 7 distinctive benefits of the WhatsApp API, provided by the leading WhatsApp API service provider in Saudi Arabia. Learn how to streamline customer support, automate notifications, leverage rich media messaging, run scalable marketing campaigns, integrate secure payments, synchronize with CRM systems, and ensure enhanced security and privacy.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I didn't get rich from it but it did have 63K downloads (powered possible tens of thousands of websites).
Large Language Models and the End of ProgrammingMatt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Graspan: A Big Data System for Big Code AnalysisAftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
AI Pilot Review: The World’s First Virtual Assistant Marketing SuiteGoogle
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
2. Who Am I?
● Senior Software Engineer @ OpsGenie
● Co-organizer of Serverless Meetup Turkey
● Oracle Open-Source Contributor
● PhD. Cand. @ METU Computer Eng.
● 8+ years in software development
● Hard-core JVM ninja
● Actively working on Serverless and AWS Lambda
● Part-time Big Data researcher
● Building a new product: Thundra
6. “If your PaaS can efficiently start
instances in 20 ms that run for half
a second, then call it serverless.”
Adrian Cockcroft, VP Cloud Architecture Strategy at AWS
7. What is AWS Lambda?
- AWS’s FaaS (Function as a Service)
- Run code without provisioning or managing servers
- Supported invocation types:
- Request/response (sync)
- Event driven (async)
- Supported languages:
- Go
- C#
- Java
- Node.js
- Python
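Both invocation types call into the same handler signature; a minimal Python sketch (the event fields here are hypothetical, not a fixed Lambda schema):

```python
import json

def handler(event, context):
    # 'event' carries the invocation payload; 'context' exposes runtime
    # metadata such as the remaining execution time and the request id.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}
```

The same function serves a synchronous API Gateway request or an asynchronous event; only the caller's delivery semantics differ.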
9. Why AWS Lambda?
- PAYG - pay as you go
- Highly available
- Scale fast
- Horizontally
- Vertically
- Don’t manage servers
- Built-in integration with other AWS services
- Security
11. The Devil is in the Detail
- Container reuse
- Container freeze
- More memory => More CPU
- One execution per container at any time
- At-least-once delivery guarantee
- New container for
- new deployments
- configuration updates
- even for environment variable updates
- Container is destroyed on timeout
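Container reuse is the reason expensive initialization (SDK clients, connections) is usually done at module load time, outside the handler, so warm invocations skip it. A sketch with illustrative names:

```python
import time

# Module-level code runs once per container (at cold start) and is then
# reused across invocations while the container stays warm.
EXPENSIVE_INIT_COUNT = 0

def _create_client():
    global EXPENSIVE_INIT_COUNT
    EXPENSIVE_INIT_COUNT += 1
    return {"created_at": time.time()}  # stand-in for e.g. a DB connection

CLIENT = _create_client()  # executed at load time, outside the handler

def handler(event, context):
    # The handler reuses CLIENT instead of reconnecting on every call.
    return {"init_count": EXPENSIVE_INIT_COUNT}
```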
12. How to Limit Container?
- cgroup (Control Group)
- Engineers at Google started the work in 2006
- Merged into Linux kernel in January 2008
- Can limit
- CPU
- Memory
- Disk bandwidth
- Network bandwidth
13. CPU Throttling
- cgcreate
- cgcreate -g cpu:/cg1
- cgcreate -g cpu:/cg2
- cpu.cfs_quota_us / cpu.cfs_period_us
- how to configure cg1 to run 0.2 seconds out of every 1 second?
- cgset -r cpu.cfs_quota_us=200000 -r cpu.cfs_period_us=1000000 cg1
- cpu.shares
- how to configure 1:2 CPU usage ratio between cg1 and cg2?
- cgset -r cpu.shares=512 cg1
- cgset -r cpu.shares=1024 cg2
- why is cpu.shares not used by AWS Lambda?
17. Resource Limits
- Max execution time: 5 minutes
- Memory allocation range: 128 MB - 3 GB
- Ephemeral disk capacity ("/tmp" space): 512 MB
- Invoke request body payload size (sync invocation): 6 MB
- Invoke request body payload size (async invocation): 128 KB
- Number of file descriptors: 1024
- Number of processes and threads (combined total): 1024
18. Deployment Limits
- Function deployment package size (compressed .zip/.jar file): 50 MB
- Size of code/dependencies that you can zip into a deployment package (uncompressed .zip/.jar size): 250 MB
- Total size of all the deployment packages per region: 75 GB
- Total size of environment variables set: 4 KB
19. Execution Limits
- Account-level concurrent execution limit is 1000
- It is per region
- It is a soft limit
- Function-level concurrent execution limit
- It is reserved
- the value is deducted from the unreserved concurrency pool
- ENI limit is 350
- It is for Lambda functions in a VPC
- It is per region
- It is a soft limit
21. Writing Logs
- Logs are written to CloudWatch asynchronously
- Log group per function
- /aws/lambda/my-func
- Log stream per container under log group
- 2018/01/27/[$LATEST]f95da1aaf0384ed6ad642d8299f7503d
- How to log
- Standard output/error
- Lambda API
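Because standard output and error are captured into the function's log stream, plain logging calls are enough; emitting structured (JSON) lines makes later filtering easier. A minimal Python sketch:

```python
import json
import logging

# The Lambda runtime pre-configures the root logger; basicConfig here is
# only a fallback so the sketch also runs outside Lambda.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Anything written to stdout/stderr ends up in the function's
    # CloudWatch log stream under /aws/lambda/<function-name>.
    logger.info(json.dumps({"event_keys": sorted(event.keys())}))
    return "ok"
```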
23. Collecting Logs
- Subscribe to CloudWatch log groups
- Only one subscription per log group
- Filter by pattern
- Stream to AWS Lambda
- Stream to AWS Elasticsearch
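A Lambda function subscribed to a log group receives the records as base64-encoded, gzip-compressed JSON under `event["awslogs"]["data"]`. A sketch of the decoding step (the test payload is constructed by hand, not captured from CloudWatch):

```python
import base64
import gzip
import json

def decode_subscription_event(event):
    # CloudWatch Logs delivers subscription data as base64-encoded,
    # gzip-compressed JSON.
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    return json.loads(payload)

def handler(event, context):
    data = decode_subscription_event(event)
    for log_event in data["logEvents"]:
        print(data["logGroup"], log_event["message"])
    return len(data["logEvents"])
```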
25. Environment Variables
- No limit to the number of env. variables
- Max total size is 4 KB
- Must start with letters [a-zA-Z]
- Can only contain alphanumeric characters and “_” [a-zA-Z0-9_]
- KMS
- Encrypt at rest (default)
- Encrypt in transit
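The naming rules above amount to the regular expression `^[a-zA-Z][a-zA-Z0-9_]*$`; a small Python check:

```python
import re

# Names must start with a letter and may contain only letters,
# digits, and underscores.
NAME_PATTERN = re.compile(r"^[a-zA-Z][a-zA-Z0-9_]*$")

def is_valid_env_var_name(name):
    return bool(NAME_PATTERN.match(name))
```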
26. SSM (AWS Systems Manager Parameter Store)
- Centralized config management
- share between functions
- update once
- Fine-grained access to sensitive data via IAM
- Integrates with KMS out-of-the-box
- Records a history of changes
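A common pattern is to cache parameter lookups across warm invocations so each container fetches a value at most once per TTL. A hedged sketch with an injectable fetcher (a real one would call SSM, e.g. boto3's `get_parameter`; injection keeps the sketch runnable without AWS access):

```python
import time

class ParameterCache:
    """Cache parameter values across warm invocations, refreshing after a TTL."""

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch          # stand-in for a real SSM lookup
        self._ttl = ttl_seconds
        self._cache = {}             # name -> (value, fetched_at)

    def get(self, name):
        entry = self._cache.get(name)
        if entry and time.time() - entry[1] < self._ttl:
            return entry[0]          # still fresh: no remote call
        value = self._fetch(name)
        self._cache[name] = (value, time.time())
        return value
```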
28. VPC
- Define/select VPC and configure
- Subnets (recommended one subnet in each AZ)
- Security Groups
- To be able to access internet
- NAT Gateway
- Internet Gateway
- Route table configuration
- Be aware of ENI limit (default 350)
- Make sure subnets have a large enough IP address range for ENIs
29. Role
- Each Lambda function has an associated IAM role
- For accessing AWS resources
- grant the role the permissions your Lambda function needs
- for ex. permission for Lambda to put an item into a DynamoDB table
- For non-stream based event sources
- grant the event source permission to invoke the function
- for ex. permission for an S3 bucket to invoke Lambda on upload
- For stream based event sources
- grant AWS Lambda permissions for the relevant stream actions
- for ex. permission for Lambda to get Kinesis stream records when invoked
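As a sketch of the first case, an identity policy attached to the function's role might grant just the single DynamoDB action it needs (the account id, region, and table name below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-table"
    }
  ]
}
```

Granting only the actions and resources a function actually uses keeps the blast radius of a compromised function small.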
30. Others
- Inbound connections are blocked
- For outbound connections, only TCP/IP sockets are supported
- “ptrace” (debugging) system calls are blocked
- TCP port 25 is also blocked as an anti-spam measure
32. Retries
- For sync invocations (Lambda API call, …)
- client is responsible for retries
- For async invocations
- Non-Stream based events (S3, SNS, CloudWatch, …)
- retry a few times (2 or more) with delays
- if it still fails, put the event into the DLQ (if specified)
- Stream based events (Kinesis, DynamoDB streams)
- retry until succeeded or
- retry until data expires
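The async, non-stream behavior can be simulated to make the semantics concrete. In this sketch `invoke` and `send_to_dlq` are stand-ins for the real mechanics, and a real retry would also back off between attempts:

```python
def invoke_async_with_retries(invoke, send_to_dlq, max_attempts=3):
    """Mimic Lambda's async retry behavior for non-stream event sources:
    retry the invocation a few times, then hand the event to a DLQ."""
    for _attempt in range(max_attempts):
        try:
            return invoke()
        except Exception:
            pass  # swallow and retry; real retries are spaced out with delays
    send_to_dlq()  # all attempts failed: redirect the event to the DLQ
    return None
```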
33. DLQ - Dead Letter Queue
- Can be
- SNS topic
- SQS queue
- Requests are redirected if the invocation is
- Asynchronous and
- Event source is non-stream based (S3, SNS, …)
- Requires permission to access the DLQ resource
- Monitor “DeadLetterErrors” metrics
37. CloudWatch Metrics
- Following metrics are supported on a per-function basis:
- Invocations
- Errors
- Duration
- Dead Letter Errors
- Throttles
- Iterator Age
- Following metrics are supported across all functions:
- Concurrent Executions
- Unreserved Concurrent Executions
38. Distributed Tracing with AWS X-Ray
- Shows durations, responses and errors
- Segment for Lambda invocation
- Sub-Segments for
- initialization
- calls to external services
- custom ones
- Custom properties
- can be queried over “Annotation”
- can be stored on “Metadata” as raw
41. API Logging with CloudTrail
- CloudTrail can log
- function definition/configuration CRUD
- function invocations
- log entry contains information about
- who generated the request
- the requested action
- the action parameters
- ...
- CloudTrail logs can be published to
- S3
- SNS
42. Full Observability with Thundra
- Provides three pillars of observability:
- Trace
- Metric
- Log
- Zero overhead with async data publishing
- Has automated instrumentation and profiling support
- Integrated with AWS X-Ray
- www.thundra.io
46. Creating Alarm
- Create alarm on CloudWatch by metrics
- Following metrics are supported on a per-function basis:
- Duration
- Errors
- Invocations
- Throttles
- Following metrics are supported across all functions:
- Concurrent Executions
- Unreserved Concurrent Executions
- Notify through SNS
- E-Mail
- Lambda
- ...
48. Writing Test
- Unit Test
- do our objects work as expected by themselves?
- Integration Test
- do our objects work well together?
- Functional Test
- does the whole system work from end to end?
- Local Lambda development
- SAM Local
- LocalStack
- Cloud9
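At the unit level, a handler is just a function: build an event dict by hand and assert on the result, with no Lambda runtime or AWS services involved. A sketch with made-up handler logic:

```python
import json
import unittest

def handler(event, context):
    # A trivial handler under test: echoes the uppercased "name" field.
    return {"statusCode": 200,
            "body": json.dumps({"name": event["name"].upper()})}

class HandlerTest(unittest.TestCase):
    # The unit test invokes the handler directly with a hand-built event.
    def test_uppercases_name(self):
        result = handler({"name": "ada"}, None)
        self.assertEqual(200, result["statusCode"])
        self.assertEqual("ADA", json.loads(result["body"])["name"])
```

Integration and functional tests then layer real (or emulated, via SAM Local/LocalStack) event sources on top of the same handler.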
49. SAM Local
- Works with SAM template
- Simulates some AWS service events (not services)
- S3, Kinesis, DynamoDB, Cloudwatch, Scheduled Event, API GW
- Runs API Gateway locally
- Allows local debugging
- https://github.com/awslabs/aws-sam-local
50. LocalStack
- Spins up many core cloud APIs on your local machine
- Lambda, API GW, DynamoDB, Kinesis, Firehose, S3, SNS, SQS, ...
- Supports error injection
- ProvisionedThroughputExceededException, ...
- Can be run in Docker
- Integrated with some test frameworks
- JUnit for Java
- nosetests for Python
- https://github.com/localstack/localstack
53. Versioning
- Each deploy/upload is a new version
- Aliases map to versions
- There is an N:1 relation
- An alias can be mapped to only one version
- A version can be mapped by multiple aliases
- By default latest version (“$LATEST”) is invoked
- Shift traffic using aliases with weighted versions
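Weighted aliases can be pictured as a cumulative draw over the version weights; a hedged Python sketch of the routing decision (an illustration of the semantics, not how Lambda implements it internally):

```python
import random

def route_invocation(weights, rand=random.random):
    """Pick a function version the way a weighted alias shifts traffic.
    'weights' maps version -> fraction of traffic (weights sum to 1.0)."""
    r = rand()
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return version
    return version  # fall through to the last version on rounding error
```

For a canary rollout you might start with `{"1": 0.9, "2": 0.1}` and gradually move weight to the new version.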
64. What Affects Cold Start?
- Depends on language
- Java and C# have more cold start overhead
- Depends on code size
- Smaller artifact size = less cold start (not significantly)
- Depends on memory size
- More memory = less cold start
- Depends on network configuration
- VPC has more cold start overhead (because of ENI)
- SSL handshake adds more cold start overhead
- Depends on application and 3rd party libs
65. Cold start times by language + memory
read.acloud.guru/does-coding-language-memory-or-package-size-affect-cold-starts-of-aws-lambda-a15e26d12c76
66. Response times by language
read.acloud.guru/does-coding-language-memory-or-package-size-affect-cold-starts-of-aws-lambda-a15e26d12c76
Average response time
Maximum response time
67. Cold Start on JVM
- Loading and initializing
- Application classes
- Core JDK classes
- Security (SSL, encryption, …) related JDK classes
- Initializing 3rd party libraries/frameworks
- AWS SDK
- Spring, Jackson, ...
68. How to Startup Faster on JVM? [1]
- Enable CDS (Class Data Sharing)
- -Xshare:on
- Already enabled by AWS Lambda
- Enable AppCDS (Application Class Data Sharing)
- -XX:+UseAppCDS -XX:SharedArchiveFile=hello.jsa
- For OpenJDK only available at Java 9 :(
- Use AOT (Ahead of Time Compilation)
- Build custom runtime image with “jlink”
- Only available at Java 9 :(
69. How to Startup Faster on JVM? [2]
- Use Tiered Compilation
- Tiered compilation is disabled on AWS Lambda
- -XX:+TieredCompilation -XX:TieredStopAtLevel=1
- Disable bytecode verification
- -Xverify:none
- No classpath scan
- Prefer programmatic or XML configuration for Spring
- Prefer lightweight libraries if possible
- Spring => Guava, Dagger, ...
- Jackson => Gson, ...
70. Warmup
- Periodically send empty warmup messages
- So AWS Lambda treats the container as active
- Not a perfect solution for cold start
- AWS’s new experimental container pre-initializer
- How to keep multiple containers up?
- https://github.com/opsgenie/sirocco
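A warmup-aware handler typically short-circuits on the ping so the warm invocation stays cheap; a sketch assuming a hypothetical `{"warmup": true}` ping shape (real warmup libraries define their own marker):

```python
def do_real_work(event):
    return event.get("value", 0) * 2  # stand-in for the actual business logic

def handler(event, context):
    # A scheduled warmup ping returns immediately, before any real work,
    # keeping the container warm at near-zero cost.
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}
    return {"result": do_real_work(event)}
```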