The document discusses serverless cloud architecture patterns, including event-driven and messaging patterns like the saga orchestration pattern, resiliency patterns like storage first and circuit breaker, and queue patterns like priority queues. It provides examples of implementing these patterns in AWS using services like API Gateway, SQS, Step Functions and Kinesis. The talk introduces the patterns, covers their benefits and considerations, and shows how they can solve problems like handling unpredictable spikes in load and building resilient, scalable systems.
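Of the resiliency patterns named above, the circuit breaker is easy to illustrate outside any cloud SDK. The following is a minimal, framework-agnostic sketch, assuming placeholder thresholds and a caller-supplied downstream function; it is not code from the talk itself:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    failures, then rejects calls until `reset_timeout` seconds pass,
    after which one trial ("half-open") call is allowed through."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast instead of hammering a struggling dependency.
                raise RuntimeError("circuit open: call rejected")
            # Timeout elapsed: allow one trial call through (half-open).
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        else:
            # A success closes the circuit and resets the failure count.
            self.failures = 0
            self.opened_at = None
            return result
```

In a serverless setting the open/closed state would typically live in a shared store such as DynamoDB rather than in process memory, since each Lambda invocation may run in a fresh container.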
Cloud computing fundamentals with Microsoft Azure - Radoslav Gatev
The presentation tries to describe the fundamentals of the cloud and covers the following topics:
What is cloud?
Pros & Cons
Deployment models
Service Models
SLAs
Workloads
Microsoft Azure
Core Azure Services: Storage, App Service, SQL Database, Cosmos DB
DevOps
This document discusses big data analytics tools and technologies. It begins with an overview of big data challenges and available tools. It then discusses Packetloop, a company that provides big data security analytics using tools like Amazon EMR, Cassandra, and PostgreSQL on AWS. Next, it discusses how EMR and Redshift from AWS can be used as big data tools for tasks like batch processing, data warehousing, and live analytics. It concludes by discussing how Intel technologies can help power big data platforms by providing optimized processors, networking, and storage to enable analytics at scale.
This document discusses building an IoT-enabled smoker device using AWS services. The architecture includes an IoT device with sensors that collects cooking data and sends it to AWS. In the cloud, the data flows through several serverless services - a data service stores the data, a detection service checks for cooking thresholds, and a notification service alerts the user. The architecture was improved over time to use AWS Greengrass on the device and event-driven lambdas in the cloud. The final system reliably captures cooking data and notifies the user to ensure great BBQ.
VMworld 2013: VMware NSX: A Customer’s Perspective - VMworld
VMworld 2013
Taruna Gandhi, VMware
Jason Puig, Symantec
Richard Sillito, WestJet
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
This document provides an overview of cloud concepts including cloud native applications, infrastructure as code, automation, microservices, serverless computing, deployment methods, chaos engineering, and observability. Specifically, it discusses how cloud native applications are loosely coupled and scale independently, the benefits of modeling infrastructure as code and storing it in version control, and techniques for automating infrastructure provisioning, testing, and deployments. It also covers asynchronous communication, event-driven architectures, blue/green and canary deployments, and using chaos engineering experiments to test system reliability in production environments.
Taming the cost of your first cloud - CCCEU 2014 - Tim Mackey
Today everyone is talking about clouds, and a few are building them, but far fewer are operating successful clouds. In this session we'll examine a variety of paradigm shifts IT makes when moving from a traditional virtualization and management mindset to operating a successful cloud. For most organizations, without careful planning the hype of a cloud solution can quickly outstrip its capabilities, and pre-existing best practices can combine to create the worst possible cloud scenario: a cloud which isn't economical to operate, and which is more cumbersome to manage than a traditional virtualization farm.
Key topics covered include:
- Successful transition of operational and management paradigm
- How the VM density of clouds changes Ops
- What it means to monitor the network in a cloud environment, at hyper-dense virtualization levels
- Preventing storage costs from outpacing delivery costs
The document discusses CloudStack deployments at various organizations. It describes how Telia Latvija uses CloudStack to deliver advanced IaaS services and a state-of-the-art video platform. It provides details on LeaseWeb's CloudStack implementation across its global data centers. It also discusses how Education Networks of America (ENA) leverages CloudStack to provide comprehensive infrastructure as a service solutions to K-12 schools, higher education, and libraries across North America. Autodesk's enterprise cloud services are also highlighted, which are built on CloudStack and provide on-demand self-service infrastructure. Finally, cloud.ca describes its regional cloud which addresses the need for Canadian-owned cloud infrastructure.
Venture capitalist Matt Ocko’s 20-year track record of success in the startup world has given him unique insight into how AWS has changed the venture financing process. In this session, you’ll learn about industries susceptible to disruption by AWS-based startups, and where VCs are willing to take new risks on those startups, including the heavily-regulated medical, government, financial, and industrial sectors. Matt will talk about how new, supercomputing startups are now possible because of AWS technologies. Hear about how using AWS technologies can actually reduce risk – and reduce time to customer penetration – from a VC perspective, and how to go from ‘AWS to Series A’ in 5 easy pieces.
Real-Time Streaming: Move IMS Data to Your Cloud Data Warehouse - Precisely
With over 22,000 transactions processed every second, your mainframe IMS is a critical source of data for the cloud data warehouses that feed analytics, customer experience, or regulatory initiatives. However, extracting data from mainframe IMS can be time-consuming and costly, leading to the exclusion of IMS data from cloud data warehouses altogether – and leaving valuable insights unseen.
Never ignore or manually extract mainframe IMS data again. In this on-demand webcast, you will learn how Connect CDC enables your team to develop integrations quickly and easily between mainframe IMS and cloud data warehouses in the most cost-effective way possible.
This document provides an overview of moving applications to the cloud. It discusses various cloud opportunities including cost reduction, enterprise growth, and fast innovation. It also covers managing desktops and devices in the cloud, as well as popular applications that can be used in the cloud like email, conferencing software, and CRM. Finally, it summarizes several cloud platforms including Amazon Web Services, Microsoft Azure, Google Apps, and Amazon cloud services like EC2, S3, SQS, and RDS.
Webinar Slides: MySQL Data Protection: Medical SaaS Manages Sensitive HIPAA C... - Continuent
Cloud-Based Active/Passive Tungsten MySQL Clusters @ Modernizing Medicine
Modernizing Medicine, a Continuent customer since 2012, is a large Florida-based SaaS provider dealing with sensitive (PHI) medical data. ModMed offers electronic health record keeping, practice management, revenue cycle management, and data analytics for thousands of doctors.
Watch this webinar replay with Continuent CEO Eero Teerikorpi to learn about how ModMed dealt with a lack of high availability in AWS with the help of Continuent Tungsten. AWS EC2 instances, underlying storage, and the management interface are not highly available by default. Also hear about the benefits this customer was able to reap from our solutions including continuous operations, high availability, scalability, HIPAA Compliance, and better data protection.
AGENDA
- Continuent Introduction
- How to easily deploy MySQL Tungsten Clusters in AWS and recover from multi-zone/multi-region AWS outages
- Continuent Tungsten Solutions and Benefits
- Key Benefit Highlight: Continuous MySQL Operations with Data Protection
- Q&A
PRESENTER
Eero Teerikorpi - Founder and CEO, Continuent - is a 7-time serial entrepreneur who has more than 30 years of high-tech management and enterprise software experience. Eero has been in the MySQL marketplace virtually since day one, from the early 2000s. Eero has held top management positions at various cross-Atlantic entities (CEO at Alcom Corporation, President at Capslock, Executive Board Member at Esker S.A.) Eero started his career as a Product Manager at Apple Computer in Finland in the mid-80s. Eero also owns and manages a boutique NOET Vineyards producing high-quality dry-farmed Cabernet Sauvignon.
Eero is a former Navy officer and still an avid sailor on San Francisco Bay and around the world. Eero is a very active sportsman: a 4+ tennis player, a rookie golfer, a very careful mountain biker, and an experienced (40+ years) skier, both slalom and cross-country.
Viktor Petersson is the VP of Business Development at CloudSigma. The presentation discusses CloudSigma's philosophy of providing customers with sophisticated and customized cloud infrastructure options. It promotes the benefits of CloudSigma such as configurable server sizes, location flexibility, and dynamic pricing. The presentation also summarizes CloudSigma's product features like SSD storage, private VLANs, and utility billing. Finally, it describes how CloudSigma can provide a true hybrid cloud through integration with storage provider Zadara.
Decentralized cloud firewall framework with resources provisioning cost optim... - aish006
University - Visvesvaraya Technological University
College - Global Academy of Technology
IEEE paper - 2015
by - G AISHWARYA, ALOK KUMAR, GAURAV KUMAR MISHRA, KEDAR RAVINDRA KULKARNI
under the guidance of - Dr. LATHA C A
The Intellias CQRS Framework is a cutting-edge cloud-native framework for massive-scale event-driven microservice solutions.
The CQRS Framework was designed as part of the IntelliGrowth cloud platform for managing mission-critical business processes, by a team of top CoE architects and engineers.
Event Grid - quiet event to revolutionize Azure and more - Sean Feldman
The document discusses Microsoft's Event Grid service. In brief:
Event Grid is a fully managed event routing service that can handle billions of events per week to trigger workflows and functions. It uses a pub/sub model to allow event publishers to emit events to topics, which then causes matching subscriptions to receive the events. Event Grid is designed to be cloud native, serverless friendly, and handle large-scale event processing reliably and securely across Microsoft Azure and other cloud services and applications.
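The pub/sub model described above can be illustrated with a toy in-memory topic. This is purely illustrative and does not use the Azure SDK; the `eventType` field loosely mirrors the field of that name in Event Grid's event schema, and the handler functions are placeholders:

```python
class Topic:
    """Toy model of pub/sub routing: publishers emit events to a topic;
    each subscription carries an event-type filter and receives only
    the events that match it."""

    def __init__(self):
        self.subscriptions = []  # list of (event_type_filter, handler)

    def subscribe(self, event_type, handler):
        """Register a handler for events of the given type."""
        self.subscriptions.append((event_type, handler))

    def publish(self, event):
        """Deliver the event to every matching subscription; return
        the number of deliveries made."""
        delivered = 0
        for event_type, handler in self.subscriptions:
            if event["eventType"] == event_type:
                handler(event)
                delivered += 1
        return delivered
```

The key property this models is that the publisher never addresses subscribers directly: it emits to the topic, and the routing layer fans the event out to whichever subscriptions match.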
The document discusses Google Cloud Platform and its capabilities for building, storing, and analyzing IT infrastructure in the cloud. It highlights key services including Compute Engine, App Engine, Cloud Storage, Cloud Datastore, Cloud SQL, BigQuery, and Cloud Endpoints. The platform offers scalable, reliable and secure computing resources with options for infrastructure, platform and software services as a utility.
VMworld 2013: Symantec’s Real-World Experience with a VMware Software-Defined... - VMworld
VMworld 2013
Jeremiah Cornelius, VMware
Jason Puig, Symantec
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The document discusses how cloud computing provides businesses with more agility through faster deployment of applications and infrastructure. It outlines some of the key benefits of cloud such as reducing delivery time and costs for IT while avoiding vendor lock-in. The document advocates for a hybrid cloud approach using public cloud for flexibility combined with private and dedicated servers for workloads that require more customization or higher performance. It also emphasizes the need for a cultural shift towards a DevOps model of collaboration between development and operations teams.
Estimating the Total Costs of Your Cloud Analytics Platform - DATAVERSITY
Organizations today need a broad set of enterprise data cloud services with key data functionality to modernize applications and utilize machine learning. They need a platform designed to address multi-faceted needs by offering multi-function data management and analytics to solve the enterprise’s most pressing data and analytics challenges in a streamlined fashion. They need a worry-free experience with the architecture and its components.
A complete machine learning infrastructure cost for the first modern use case at a midsize to large enterprise will be anywhere from $2M to $14M. Get this data point as you take the next steps on your journey.
How to Build Multi-disciplinary Analytics Applications on a Shared Data Platform - Cloudera, Inc.
The document discusses building multi-disciplinary analytics applications on a shared data platform. It describes challenges with traditional fragmented approaches using multiple data silos and tools. A shared data platform with Cloudera SDX provides a common data experience across workloads through shared metadata, security, and governance services. This approach optimizes key design goals and provides business benefits like increased insights, agility, and decreased costs compared to siloed environments. An example application of predictive maintenance is given to improve fleet performance.
Transitioning to the Cloud: Implications for Reliability, Redundancy & Recove... - RightScale
This document summarizes a panel discussion on transitioning to the cloud and implications for reliability, redundancy, and recoverability. The panel discusses common cloud projects like web applications and backup. Managing complexity, automation, and portability between clouds are key challenges. RightScale and Zmanda present cloud management platforms to automate provisioning and backup across multiple cloud vendors. Customers need skills in cloud application architectures and automation to benefit from cloud offerings.
This document provides an overview of Google Cloud Fundamentals. It introduces Andrew Liaskovski as the teacher and covers various Google Cloud topics including migration, security, DevOps, big data, and disaster recovery services. It also discusses CloudZone's full service package including consulting, managed services, and professional services. The rest of the document focuses on specific Google Cloud products and services such as Compute Engine, App Engine, Container Engine, Cloud Storage, Cloud SQL, networking, big data, and machine learning.
Similar to Serverless cloud architecture patterns (20)
Building a serverless AI-powered translation service - Jimmy Dahlqvist
We'll craft a serverless, event-driven Slack bot that not only translates your text with accuracy but also breathes life into it with voice generation. Leveraging the power of the AWS cloud, we'll use services like AWS Step Functions, EventBridge, and Lambda together with the advanced AI capabilities of Amazon Translate and Polly. This session is not just a talk; it's a live, interactive experience where we'll build the solution right before your eyes.
Jimmy Dahlqvist gave a presentation on building a serverless AI-powered translation bot using AWS services. He discussed generative AI and how it can create new content using large foundation models trained on massive datasets. The presentation covered Amazon Translate for text translation, Amazon Polly for text-to-speech, and Amazon Comprehend for natural language processing. Dahlqvist also discussed services like Amazon API Gateway, Amazon EventBridge, AWS Step Functions and AWS Lambda that could be used to build a serverless architecture for an AI translation bot application on AWS. He concluded with an overview of the architecture for such a translation service using various AWS AI and serverless services.
The document discusses building an IoT-enabled smoker device using AWS services. It provides an overview of the architecture, which includes an IoT device with sensors that collects cooking data and sends it to AWS IoT Greengrass. The data is then processed by serverless AWS services including a data service to store the data in DynamoDB, a detection service using AWS Lambda to monitor for cooking thresholds, and a notification service to alert the user. The system was improved over time to use a more event-driven architecture and decouple the different services.
The document discusses different EventBridge patterns for routing events in AWS applications. It describes single-bus and multi-bus patterns that can be used within a single AWS account or across multiple accounts. The centralized single-bus pattern provides easy integration but creates a single point of failure, while the distributed multi-bus pattern avoids that but is more complex to design. The document also shares a client use case that takes a hybrid approach, using multiple buses within a single account for ingress, egress, and internal events.
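Regardless of how many buses are involved, EventBridge routes events to targets by matching rule patterns against event payloads: each pattern field lists the values it accepts, and nested objects match recursively. A simplified sketch of that matching (ignoring EventBridge's content filters such as prefix and numeric matching, which the real service also supports) might look like:

```python
def matches(pattern, event):
    """Simplified EventBridge-style rule matching: every key in the
    pattern must exist in the event, and the event's value must be one
    of the values listed for that key; nested dicts recurse."""
    for key, accepted in pattern.items():
        if key not in event:
            return False
        if isinstance(accepted, dict):
            # Nested pattern: the event value must be an object that
            # itself satisfies the sub-pattern.
            if not isinstance(event[key], dict) or not matches(accepted, event[key]):
                return False
        else:
            # Leaf pattern: a list of acceptable literal values.
            if event[key] not in accepted:
                return False
    return True
```

For example, the hypothetical pattern `{"source": ["orders"], "detail": {"status": ["created", "paid"]}}` would match an `orders` event whose detail status is `paid`, but not one whose status is `cancelled`.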
The document summarizes discussions from re:Invent 2021 about sustainability, serverless computing, community initiatives, and low-code/no-code tools. On sustainability, AWS is adding it as a new Well-Architected pillar and introducing tools like the carbon footprint calculator. Serverless options were expanded for services like S3, MSK, Redshift, and EMR. The community is adopting new forums like re:Post and improvements to the CDK. Low-code tools like Amplify Studio and SageMaker Canvas make app and model building more accessible. The takeaways note sustainability as a hot topic and continued investment in serverless.
CI/CD (continuous integration/continuous delivery) aims to automate software delivery through repeatable processes. It acts as both the first and last line of defense by integrating automated testing into development workflows and deploying to production only when tests pass. The document outlines how CI/CD enables pull request testing, merge testing, nightly testing, blue/green deployments, canary releases, and chaos engineering to catch issues early and validate changes before production.
The document discusses the history and principles of chaos engineering. It began in 2004 at Amazon and was further developed and popularized at Netflix in 2010-2012 when they created tools like Chaos Monkey and open sourced their Simian Army. Key aspects of chaos engineering discussed include defining the steady state of a system, monitoring key metrics, starting with small and reversible experiments, automating experiments to run often, and shifting mindsets to proactively address failures. The overall goal is to build confidence in a system's ability to withstand failures through experimentation.
Road to an asynchronous device registration APIJimmy Dahlqvist
The document describes the process of developing an asynchronous device registration API to address performance issues in the initial version. The first version resulted in multiple API calls and was impacted by cold starts and spikes from lambda throttling. Several improvements were then made, including making registration asynchronous without the client needing to know when it was completed, using DynamoDB for faster lookups instead of S3, and Elasticsearch for better searching instead of CloudSearch. This led to faster and more even load handling as well as a more robust backup system. However, a lesson was learned that combining low lambda concurrency with a low max receive count in the redrive policy can be problematic.
This document discusses GitOps and how it was implemented using an Alexa skill called jBot. Some key points:
- GitOps uses Git as the single source of truth for infrastructure changes and ensures an audit trail for all changes. Pull requests are at the heart of GitOps for reviewing and approving changes.
- With jBot, voice commands can be used to create pull requests, merge them, and deploy to environments like production. It uses AWS services like Step Functions, CodeBuild, and CloudFormation to automate the CI/CD pipelines.
- When a pull request is opened or closed, EventBridge triggers Step Functions workflows that comment on the PR, build the code, create temporary environments for testing
8. @jimmydahlqvist
Envelope Wrapper
• Wrap original message in an envelope
• Separation of information
• Use predefined keys
• Improved filtering and debugging
• Additional overhead
15. @jimmydahlqvist
Storage First Benefits
• Assured Data Durability
• Processing Flexibility
• Level the processing load
• High volume data ingestion
• De-duplication of data
16. @jimmydahlqvist
Storage First Things to consider
• Potential for increased latency
• Architectural complexity
• Need for robust storage solutions
• Maintaining data integrity
• Risk of over-optimization
20. @jimmydahlqvist
Circuit Breaker Benefits
• Avoid cascading failures
• Enhance system resilience
• Protect system resources
• Provide failover possibility
• Improve user experience
21. @jimmydahlqvist
Circuit Breaker Things to consider
• Need for configuration
• Risk of early circuit break
• Good observability required
• System complexity increase
• When to recover after a failure
I started working with cloud several years ago, I was part of a team that built a new system that was going to handle information coming from end users.
What was very important in this solution was that no messages coming from the end users could be lost, we needed to ensure we processed them all.
We decided to go for a serverless solution and used AWS API Gateway together with an SQS queue, and then processed the messages in an async way.
At that time I didn’t realize that this pattern actually had a name…..
This is what we are going to talk about here today; we will look at some of the patterns I use the most and that I think are the most essential.
It’s clearly an opinionated talk from that perspective, but by the end we should have put some names to them.
PIZZA EXAMPLES!
Before we start and deep dive into different patterns, we should establish some common ground.
And define some patterns, definitions and concepts that I will return to in several of the patterns
Event Producers are systems or services that create and publish events and commands. AWS services, clients, SaaS applications, and more can be producers.
An Event Router is a service or system that routes events and commands to consumers. This can be queues, event brokers, etc. There are several AWS services that can act as a message router, such as SQS, SNS, IoT Core, and EventBridge.
Event Consumers are the systems or services that react on, consume, specific events or commands and carry out work accordingly. Our consumers can be services implemented with AWS services, other SaaS services, API endpoints, and more.
Orchestration
A centralized control pattern where a single orchestrator (often a service or function) dictates the control flow, making decisions about which functions should be executed, in which order, and managing data flow between them.
AWS Step Functions!
Choreography:
A decentralized control pattern where each service or function knows what to do when an event occurs. There's no central authority directing traffic; rather, services interact in a loosely coupled manner based on events.
AWS EventBridge
Key Points:
Central Control: One service/function dictates the flow.
Predictable Flow: Control flow is predefined and can be visualized easily.
Tight Coupling: The orchestrator is often tightly coupled with services, knowing about their interfaces and data.
Decentralized Control: No single point dictates the flow; services/functions react to events.
Loose Coupling: Services are decoupled, only knowing about the events they produce or consume.
Scalable & Flexible: Easy to add or modify services without changing the entire system.
Self-Managed: Services handle their own failures and compensating actions based on events.
By using this pattern we wrap the original message in an envelope, that way we can….
We need to use predefined keys, as this gives both the producer and consumer….
Metadata – data pattern….. Very popular
Invented back in 2020 by Sheen Brisals at The LEGO Group….
The data key is the original message, the payload.
The metadata key gives us the possibility to add additional information ABOUT the message.
The similarities
Use JSON for all messages; this is an opinionated design, but it’s a good practice for a well-designed message system.
So don’t use plain text, XML, YAML, or Protobuf.
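As a minimal sketch of this envelope wrapper in Python (the key names and the helper function are illustrative, not from the talk):

```python
import json
import uuid
from datetime import datetime, timezone

def wrap(payload: dict, event_type: str, source: str) -> str:
    """Wrap the original payload in an envelope with predefined metadata keys."""
    envelope = {
        "metadata": {                       # information ABOUT the message
            "id": str(uuid.uuid4()),        # unique id, useful for de-duplication
            "type": event_type,             # predefined key, improves filtering/routing
            "source": source,
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
        "data": payload,                    # the original, untouched message
    }
    return json.dumps(envelope)

order = {"pizza": "margherita", "quantity": 2}
message = wrap(order, event_type="order.created", source="pizza-service")
```

The separation means consumers can filter and debug on the metadata keys without ever parsing the payload, at the cost of a little extra overhead per message.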
Now let’s move into the realm of Resiliency and some patterns that can help us.
Definition: The "Storage First" architecture pattern emphasizes putting persistent storage at the forefront of system design, ensuring data durability and availability before considering other components.
Key Principle: Designing systems around the notion that data storage is the central pillar, optimizing for data retention, retrieval, and resilience.
When to Use:
High data ingestion systems where data loss is critical.
Systems requiring consistent backup and failover capabilities.
Applications where real-time processing is secondary to data capture.
Data Durability: Ensuring data is stored safely reduces risks related to data loss.
Flexibility: Once data is stored, it can be processed, transformed, or analyzed in various ways without worrying about initial capture.
Scalability – level the processing load: By prioritizing storage, systems can efficiently handle large volumes of data without immediate processing.
In a high-volume event-driven system, slow consumers can slow down the producers if the consumers process the events synchronously. Instead, by storing the event immediately, reporting success, and then processing on their own time, the producers will not be slowed down.
By storing the data before processing, the consumers can also implement efficient message de-duplication for data that has been sent twice. In most event-driven architectures data will be delivered with an “at-least-once” approach.
Cost-Efficient: Optimizing for storage can lead to reduced costs in data retrieval and processing.
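The de-duplication idea above can be sketched with an in-memory "seen" set standing in for what would typically be a DynamoDB conditional write in production (the store choice here is an assumption for the sketch):

```python
# Stand-in for a durable de-duplication store (e.g. a DynamoDB
# conditional put in a real deployment).
seen_ids: set[str] = set()

def process_once(message: dict, handler) -> bool:
    """Process a message only if its envelope id has not been seen before.

    Returns True if the handler ran, False if the message was a
    duplicate delivery (expected under at-least-once delivery).
    """
    msg_id = message["metadata"]["id"]
    if msg_id in seen_ids:
        return False            # duplicate, skip silently
    seen_ids.add(msg_id)
    handler(message["data"])
    return True
```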
Latency: Prioritizing storage might increase the time it takes to process or access the data in real-time scenarios.
Complexity: Designing with storage in mind may lead to intricate architectures, especially when integrating with diverse processing systems.
Prerequisites: Requires robust and often expensive storage solutions to ensure data durability and high availability.
Data Integrity: Ensuring data stored is accurate and consistent can pose challenges, especially in high ingestion systems.
Potential for Over-Optimization: There's a risk of over-investing in storage without considering the balance of other architectural needs.
PIZZA ORDER!!
Definition: A design pattern used in software development to improve system stability and prevent cascading failures by detecting faults and halting system operations, much like an electrical circuit breaker.
Key Principle: The circuit breaker monitors requests to a service and "trips" (or opens) to stop sending requests to a failing service, giving it time to recover.
When to Use:
Microservices architectures where failures in one service might cascade to others.
Systems that rely on external services or APIs that might be unreliable.
Applications where preserving system functionality during partial failures is crucial.
System Stability: Reduces the risk of system-wide outages due to a single point of failure.
Resource Protection: Prevents resource exhaustion by halting requests to a failing component.
Enhanced User Experience: By avoiding system hang or timeouts, users receive quicker feedback even during failures.
Facilitates System Recovery: Provides failing components an opportunity to recover without being inundated with requests.
Predictable Failures: System components fail in a predictable manner, allowing for easier troubleshooting and maintenance.
Configuration Overhead: Proper thresholds and timeouts need to be set, which might require fine-tuning.
Risk of False Positives: Might trip during transient failures, causing unnecessary disruption.
Complexity: Introduces additional logic and monitoring into the system.
Dependency on Monitoring: Requires robust monitoring and alerting to function effectively.
Recovery Strategy: Deciding when and how to close (or reset) the circuit breaker can be challenging.
Get circuit status from DynamoDB.
If Closed: carry out the work and update the status.
If Open:
Check if sufficient time has passed for a retry.
If YES: carry out the work and update the status.
If NO: no retry.
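The flow above could be sketched in Python, with an in-memory dict standing in for the DynamoDB item (the store choice and the 30-second threshold are assumptions for the sketch):

```python
import time

# Stand-in for the DynamoDB item holding the circuit state.
circuit = {"status": "CLOSED", "opened_at": 0.0}
RETRY_AFTER = 30.0  # seconds to wait before probing an open circuit

def call_with_breaker(work, now=None):
    """Run `work` through the circuit breaker flow described above."""
    now = time.time() if now is None else now
    if circuit["status"] == "OPEN":
        if now - circuit["opened_at"] < RETRY_AFTER:
            raise RuntimeError("circuit open - no retry yet")
        # Enough time has passed: allow this one probe call through.
    try:
        result = work()
    except Exception:
        circuit["status"] = "OPEN"      # trip (or re-trip) the breaker
        circuit["opened_at"] = now
        raise
    circuit["status"] = "CLOSED"        # success closes the circuit again
    return result
```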
Now let’s look at a combination with Storage First where we like to process messages in a Queue using a Lambda Function.
I need to give credit to Christoph Gerkens, who created the first variant that I later modified.
In this:
The Lambda sends logs and metrics to CloudWatch -> in case of failures this triggers an alarm that invokes a Lambda function, which disables the Lambda integration on the queue.
When the alarm goes back to OK, this invokes a Step Functions workflow that polls a message from the queue and makes a test invoke with that message against the Lambda function doing the work; if this is a success, the integration is enabled again.
EventBridge on a schedule invokes the Step Functions workflow, which checks if the integration is enabled; if not, it does a test invoke.
This is a very famous quote….
And it’s very true, everything fails all the time and we need to be able to handle failures and retry.
PIZZA schedule delivery….
Our application needs to handle failures and retry the operations.
And we should not just retry….. We need to retry with an exponential backoff.
Meaning that we first retry after 1s, then 2s, 4s, 8s, and so on…..
This will:
Reduce system load and strain and let the failing component breathe…..
It creates a better user experience
We can save cost by not burning CPU cycles
There is a very interesting study done by AWS on this topic, that shows how retries cluster in a large distributed system.
So we should not just do retry with backoff, we should also add jitter to this.
Meaning that we add a random sleep to each backoff operation.
This will then avoid synchronized retries….
It will distribute the load on the failing component more evenly; a synchronized retry from many clients just after the component heals can cause another failure and outage.
It will increase the success rate since we distribute the retries.
And it’s a very adaptive way of handling retry
Now the same study with jitter added, we can see that the clustering is less frequent.
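A minimal sketch of exponential backoff with full jitter (the base delay and cap values are arbitrary assumptions, not from the talk):

```python
import random

def backoff_with_jitter(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff (1s, 2s, 4s, 8s, ...) with full jitter.

    Instead of sleeping exactly the exponential delay, sleep a random
    time between 0 and the capped delay, so many clients retrying at
    once do not hammer the failing component in lockstep.
    """
    exp = min(cap, base * (2 ** attempt))
    return random.uniform(0, exp)
```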
So how can we now implement retries in a smart way in serverless AWS?
First, let’s introduce a retry envelope, based on the envelope wrapper pattern.
Here we add metadata about retries so we can keep track of attempts, when it was last run etc.
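A retry envelope could look like this in a Python sketch (the key names and limits are illustrative):

```python
def make_retry_envelope(payload: dict) -> dict:
    """Wrap a message with retry metadata so attempts can be tracked."""
    return {
        "metadata": {
            "retry": {
                "attempt": 0,            # how many times we have tried
                "max_attempts": 5,       # upper limit before the DLQ
                "last_attempt_at": None, # timestamp of the latest try
            }
        },
        "data": payload,
    }

def record_attempt(envelope: dict, timestamp: str) -> dict:
    """Bump the attempt counter after a failed processing attempt."""
    retry = envelope["metadata"]["retry"]
    retry["attempt"] += 1
    retry["last_attempt_at"] = timestamp
    return envelope
```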
So how could an implementation in Step Functions then look?
We can utilize Step Functions’ built-in ability to catch errors. When the Lambda fails, we use the retry metadata to calculate a wait time based on that.
We can then wait that amount of time and try again.
If we reach our upper limit on the number of retries, we add the message to a DLQ for manual processing.
So if we look at a Step Functions visualization
Here is a setup where a Lambda gets invoked asynchronously by something.
Credit given to Luc…. who first wrote about this setup.
The Lambda fails -> its onFailure destination sends the event to SQS.
The SQS queue is polled by a Lambda event source.
The retry manager checks the ”Retry Metadata”, puts the message back on the queue, and sets the visibility timeout.
Next time the message is returned, the manager invokes the “Work Lambda” and the loop continues….
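The retry manager's decision could be sketched as a pure function; applying the returned timeout to the real message (via SQS `ChangeMessageVisibility`) is left out, and the key names and limits are assumptions carried over from the retry envelope idea:

```python
def next_action(envelope: dict, base: float = 1.0, cap: float = 900.0) -> tuple:
    """Decide what the retry manager should do with a failed message.

    Returns ("retry", visibility_timeout_seconds) while attempts remain,
    or ("dlq", 0.0) once the upper limit is reached. The timeout grows
    exponentially with the attempt count, capped at `cap` seconds.
    """
    retry = envelope["metadata"]["retry"]
    if retry["attempt"] >= retry["max_attempts"]:
        return ("dlq", 0.0)
    timeout = min(cap, base * (2 ** retry["attempt"]))
    return ("retry", timeout)
```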
That was a few resiliency patterns…
Now let us move over and look at some event-driven and messaging patterns….
A pattern I use very often is the Saga Pattern. Which is a way to manage long running transactions.
It’s possible to use both in an orchestration and choreography scenario, but we will focus on choreography.
In this pattern, each completed transaction will publish a domain event to inform on what just happened, so the next part in the saga can pick up.
With the Saga pattern, even if we're not using traditional ACID transactions, we can still ensure data is consistent across services.
As each service in the pattern is loosely coupled, we gain the flexibility to develop, deploy, and scale services independently.
If one transaction fails, the entire system doesn’t crash. Instead, compensating actions are triggered to rectify the inconsistency.
There might be additional complexity in the system, so it might be hard to track the transactions and where in the chain they are.
The system will be eventually consistent, since a transaction can be in the middle of everything.
We must make sure we use the envelope wrapper so we can add an event id to it and track the saga.
Testing might be hard, since it might require us to run the full saga for every test.
So by using EventBridge each service can publish events that other services can track…
So in this example it all starts with a Pizza Order being created…..
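As a sketch, each saga step could publish its domain event as an EventBridge `PutEvents` entry; the bus name, sources, and detail types below are made up for the pizza example, and the boto3 call that would actually send the entries is omitted:

```python
import json

def domain_event(source: str, detail_type: str, data: dict,
                 bus: str = "pizza-bus") -> dict:
    """Build a PutEvents entry for one saga step."""
    return {
        "Source": source,
        "DetailType": detail_type,
        "Detail": json.dumps(data),   # EventBridge expects a JSON string
        "EventBusName": bus,
    }

# Each completed local transaction publishes its domain event, e.g.:
order_created = domain_event("order-service", "OrderCreated", {"orderId": "42"})
payment_ok = domain_event("payment-service", "PaymentCompleted", {"orderId": "42"})
# On failure, a compensating event would be published instead:
payment_failed = domain_event("payment-service", "PaymentFailed", {"orderId": "42"})
```

Downstream services subscribe to the detail types they care about, which is what keeps the choreographed saga loosely coupled.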
The Data Enricher Pattern is about enhancing the value of data by combining it with other relevant data sources.
At its core, the Data Enricher Pattern aims to elevate the inherent value of raw data by integrating it with additional context. This enhancement transforms simple data into information, making it more actionable.
This richer dataset is a boon for decision-makers, offering a more comprehensive view of the situation.
One of the pattern's strengths is its automation, streamlining data processes and reducing manual errors.
Furthermore, it's a versatile pattern. Whether you're integrating data from APIs, databases, or other systems, the pattern facilitates this combination, offering a more holistic dataset.
Importantly, as your business grows and evolves, so can your data enrichment sources, ensuring your data remains relevant and valuable.
One of the top considerations when adopting the Data Enricher Pattern is maintaining the data's integrity and quality. The enrichment should add value, not distort or degrade the original data.
Reliability is another concern. If you're depending on third-party data sources, their availability and trustworthiness become crucial.
As you integrate diverse data sources, be prepared to handle complex integration scenarios, ensuring smooth data flow.
Also, while the pattern aims to provide enriched data, it's important to be aware of potential latency, especially if real-time data processing is essential.
Lastly, and perhaps most importantly, when dealing with sensitive or third-party data, uphold the highest standards of data privacy and security.
PIZZA membership…. Discounts…
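A minimal sketch of the enricher for the pizza membership example, with an in-memory dict standing in for whatever membership store or API a real system would query (all names are made up):

```python
# Stand-in membership store; in production this could be a DynamoDB
# table or an external API call.
memberships = {"cust-1": {"tier": "gold", "discount": 0.2}}

def enrich(event: dict) -> dict:
    """Combine a raw order event with membership data, turning plain
    data into something more actionable for downstream consumers."""
    customer = event["customer_id"]
    extra = memberships.get(customer, {"tier": "none", "discount": 0.0})
    return {**event, "membership": extra}
```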
Messages are processed based on their priority rather than their arrival order
High value messages processed first
We can use compute resources efficiently
We can improve the system responsiveness for high-priority customers. We can see it as customers with high member status getting processed first.
What we need to consider in this pattern is how we:
Define priority -> Must be clear
There is an increased complexity with more queues
Queue starvation can happen…. Low-priority messages never get processed.
In this case we can’t guarantee the order, and normal-prio and high-prio messages will be intermixed.
Instead we would need to do something like this…..
Where we check the priority of the message and whether there are high-priority messages in the queue or not.
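The selection logic just described can be sketched with two in-memory queues; a real deployment would use two SQS queues, so this only shows the ordering decision:

```python
from collections import deque

high = deque()    # high-priority queue
normal = deque()  # normal-priority queue

def next_message():
    """Drain the high-priority queue before touching the normal one."""
    if high:
        return high.popleft()
    if normal:
        return normal.popleft()
    return None
```

Note that this naive policy is exactly what causes queue starvation: as long as high-priority messages keep arriving, normal-priority ones are never picked.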
The next queue pattern that we need to talk about is queue-based load leveling….. This is when we use a queue in between a high-volume producer and a slow consumer, where the producer has high spikes.
That way we can level the load on the consumer and ensure it doesn’t get overwhelmed during the spikes.
Increased system stability
Handle spikes
Protect downstream services
What we need to consider when implementing this is that it will come with increased latency -> since messages during a spike will be in the queue longer.
For data integrity we need to consider that the system will be eventually consistent.
And backpressure? How do we handle the case when the queue keeps growing and the consumer can’t keep up?
There are two ways to handle it. Either we add more consumers, or we set a TTL on the messages and send old messages to a DLQ or discard them.
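The TTL option can be sketched as a simple age check at the consumer (the 300-second TTL is an arbitrary assumption; expired messages would go to a DLQ or be discarded):

```python
def should_process(enqueued_at: float, now: float, ttl: float = 300.0) -> bool:
    """Return True if the message is still fresh enough to process.

    Messages older than the TTL are dead-lettered or discarded so a
    backed-up queue does not feed the consumer stale work forever.
    """
    return (now - enqueued_at) <= ttl
```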
If you remember the very first story I told…. We actually used load leveling with storage first.
This scenario was from Sony and we processed messages from mobile phones. Every time a phone started up it sent us a message.
So when there was new software for the phones, we got huge spikes, since most people actually update their phones at the same time…..