This document discusses blue/green deployments on AWS. Blue/green deployments enable near-zero-downtime releases and fast rollbacks by shifting traffic between two identical environments that run different versions of the application (blue and green). The document describes how AWS services such as Amazon Route 53, Elastic Load Balancing, Auto Scaling, AWS Elastic Beanstalk, AWS OpsWorks, and AWS CloudFormation can enable blue/green techniques, and it addresses considerations for managing data across deployments.
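To make the traffic-shifting idea concrete, here is a minimal sketch of the DNS-based technique using Route 53 weighted record sets: two records with the same name point at the blue and green environments, and changing their relative weights moves traffic between versions. The domain, the ELB DNS names, and the `weighted_change_batch` helper are hypothetical illustrations, not part of the whitepaper; the `ChangeBatch` structure matches what the Route 53 `ChangeResourceRecordSets` API expects.

```python
# Hypothetical sketch: shift traffic between blue and green environments
# using Route 53 weighted CNAME records. Names below are illustrative.

def weighted_change_batch(domain, blue_target, green_target,
                          blue_weight, green_weight):
    """Build a Route 53 ChangeBatch that upserts weighted records
    for the blue and green environments under one domain name."""
    def record(set_id, target, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "CNAME",
                "SetIdentifier": set_id,   # distinguishes the weighted records
                "Weight": weight,          # relative share of DNS responses
                "TTL": 60,                 # short TTL so shifts take effect quickly
                "ResourceRecords": [{"Value": target}],
            },
        }
    return {
        "Comment": "blue/green traffic shift",
        "Changes": [
            record("blue", blue_target, blue_weight),
            record("green", green_target, green_weight),
        ],
    }

# Canary-style start: send roughly 10% of traffic to green.
batch = weighted_change_batch(
    "app.example.com",
    blue_target="blue-elb.example.com",    # hypothetical ELB endpoints
    green_target="green-elb.example.com",
    blue_weight=90, green_weight=10,
)

# With boto3, this batch would be applied via (not executed here):
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z...", ChangeBatch=batch)
```

Rolling the weights forward (90/10, 50/50, 0/100) completes the cutover; swapping them back is the rollback.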
Amazon Web Services – Blue/Green Deployments on AWS, July 2016
Contents
Abstract
Introduction
Blue/Green Deployment Methodology
Benefits of Blue/Green
Define the Environment Boundary
AWS Tools and Services Enabling Blue/Green Deployments
Amazon Route 53
Elastic Load Balancing
Auto Scaling
AWS Elastic Beanstalk
AWS OpsWorks
AWS CloudFormation
Amazon CloudWatch
Techniques
Update DNS Routing with Amazon Route 53
Swap the Auto Scaling Group Behind Elastic Load Balancer
Update Auto Scaling Group Launch Configurations
Swap the Environment of an Elastic Beanstalk Application
Clone a Stack in AWS OpsWorks and Update DNS
Best Practices for Managing Data Synchronization and Schema Changes
Decoupling Schema Changes from Code Changes
When Blue/Green Deployments Are Not Recommended
Conclusion
Contributors
Appendix
Comparison of Blue/Green Deployment Techniques
Document Revisions
Notes
Abstract
Blue/green deployment is a technique for releasing applications by shifting traffic
between two identical environments running different versions of the application.
Blue/green deployments can mitigate common risks associated with deploying
software, such as downtime and rollback capability. This paper provides an
overview of the blue/green deployment methodology and describes techniques
customers can implement using Amazon Web Services (AWS) services and tools.
This paper also addresses considerations around the data tier, which is an
important component of most applications.
Introduction
In a traditional approach to application deployment, you typically fix a failed
deployment by redeploying an older, stable version of the application.
Redeployment in traditional data centers is typically done on the same set of
resources due to the cost and effort of provisioning additional resources.
Although this approach works, it has many shortcomings. Rollback isn’t easy
because it’s implemented by redeployment of an older version from scratch. This
process takes time, making the application potentially unavailable for long
periods. Even in situations where the application is only impaired, a rollback is
required, which overwrites the faulty version. As a result, you have no
opportunity to debug the faulty application in place.
Applying the agility, scalability, utility-consumption, and automation
capabilities of the AWS platform shifts the paradigm of application
deployment and enables a better technique: blue/green deployment.
Blue/Green Deployment Methodology
Blue/green deployments provide near zero-downtime release and rollback
capabilities. The fundamental idea behind blue/green deployment is to shift
traffic between two identical environments that are running different versions of
your application. The blue environment represents the current application
version serving production traffic. In parallel, the green environment is staged
running a different version of your application. After the green environment is
ready and tested, production traffic is redirected from blue to green. If any
problems are identified, you can roll back by reverting traffic back to the blue
environment.
Figure 1: Basic blue/green example
Although blue/green deployment isn’t a new concept, you don’t commonly see it
used in traditional, on-premises hosted environments due to the cost and effort
required to provision additional resources. The advent of cloud computing
dramatically changes how easy and cost-effective it is to adopt the blue/green
approach to deploying software.
Benefits of Blue/Green
Traditionally, with in-place upgrades, it was difficult to validate your new
application version in a production deployment while also continuing to run your
old version of the application. Blue/green deployments provide a level of
isolation between your blue and green application environments, ensuring
that spinning up a parallel green environment does not affect the resources
underpinning your blue environment. This isolation reduces your deployment risk.
After you deploy the green environment, you have the opportunity to validate it.
You might do that with test traffic before sending production traffic to the green
environment, or by using a very small fraction of production traffic, to better
reflect real user traffic. This is called canary analysis or canary testing. If you
discover the green environment is not operating as expected, there is no impact
on the blue environment. You can route traffic back to it, minimizing impaired
operation or downtime, and limiting the blast radius of impact.
This ability to simply roll traffic back to the still-operating blue environment is a
key benefit of blue/green deployments. You can roll back to the blue environment
at any time during the deployment process. Impaired operation or downtime is
minimized because impact is limited to the window of time between green
environment issue detection and shift of traffic back to the blue environment.
Furthermore, impact is limited to the portion of traffic going to the green
environment, not all traffic. If the blast radius of deployment errors is reduced, so
is the overall deployment risk.
Blue/green deployments also fit well with continuous integration and continuous
deployment (CI/CD) workflows, in many cases limiting their complexity. Your
deployment automation would have to consider fewer dependencies on an
existing environment, state, or configuration. Your new green environment gets
launched onto an entirely new set of resources.
In AWS, blue/green deployments also provide cost optimization benefits. You’re
not tied to the same underlying resources. So if the performance envelope of the
application changes from one version to another, you simply launch the new
environment with optimized resources, whether that means fewer resources or
just different compute resources. You also don’t have to run an overprovisioned
architecture for an extended period of time. During the deployment, you can
scale out the green environment as more traffic gets sent to it and scale the blue
environment back in as it receives less traffic. Once the deployment succeeds, you
decommission the blue environment and stop paying for the resources it was
using.
Define the Environment Boundary
When planning for blue/green deployments, you have to think about your
environment boundary—where have things changed and what needs to be
deployed to make those changes live. The scope of your environment is
influenced by a number of factors, as described in the following table.
Factors                    Criteria
Application architecture   Dependencies, loosely/tightly coupled
Organizational             Speed and number of iterations
Risk and complexity        Blast radius and impact of failed deployment
People                     Expertise of teams
Process                    Testing/QA, rollback capability
Cost                       Operating budgets, additional resources

Table 1: Factors that affect environment boundary
For example, organizations operating applications that are based on the
microservices architecture pattern could have smaller environment boundaries
because of the loose coupling and well-defined interfaces between the individual
services. Organizations running legacy, monolithic apps can still leverage
blue/green deployments, but the environment scope can be wider and the testing
more extensive. Regardless of the environment boundary, you should leverage
automation wherever you can to streamline the process, reduce human error, and
control your costs.
AWS Tools and Services Enabling
Blue/Green Deployments
AWS provides a number of tools and services to help you automate and
streamline your deployments and infrastructure through the AWS API, which you
can leverage using the web console, CLI tools, SDKs, and IDEs.1 Because many
services are available in the AWS ecosystem, the following is not a complete list.
Instead, this list provides an overview of only the services we discuss in this
paper. You may find software solutions outside of AWS to help automate and
monitor your infrastructure and deployment, but this paper focuses on AWS
services.
Amazon Route 53
Amazon Route 53 is a highly available and scalable authoritative DNS service that
routes user requests for Internet-based resources to the appropriate destination.2
Amazon Route 53 runs on a global network of DNS servers providing customers
with added features, such as routing based on health checks, geography, and
latency. DNS is a classic approach to blue/green deployments, allowing
administrators to direct traffic by simply updating DNS records in the hosted
zone. Also, time to live (TTL) can be adjusted for resource records; this is
important for an effective DNS pattern because a shorter TTL allows record
changes to propagate faster to clients.
Elastic Load Balancing
Another common approach to routing traffic for a blue/green deployment is
through the use of load balancing technologies. Elastic Load Balancing
distributes incoming application traffic across designated Amazon Elastic
Compute Cloud (Amazon EC2) instances.3 Elastic Load Balancing scales in
response to incoming requests, performs health checking against Amazon EC2
resources, and naturally integrates with other AWS tools, such as Auto Scaling.
This makes it a great option for customers who want to increase application fault
tolerance.
Auto Scaling
Auto Scaling helps maintain application availability and lets customers scale EC2
capacity up or down automatically according to defined conditions.4 The
templates used to launch EC2 instances in an Auto Scaling group are called
launch configurations. You can attach different versions of launch configuration
to an Auto Scaling group to enable blue/green deployment. You can also
configure Auto Scaling for use with an Elastic Load Balancing load balancer. In
this configuration, Elastic Load Balancing balances the traffic across the EC2
instances running in an Auto Scaling group. You define termination policies in
Auto Scaling groups to determine which EC2 instances to remove during a
scaling action. As explained in the Auto Scaling Developer Guide, Auto Scaling
also allows instances to be placed in Standby state, instead of termination, which
helps with quick rollback when required.5 Both Auto Scaling's termination
policies and Standby state enable blue/green deployment.
AWS Elastic Beanstalk
AWS Elastic Beanstalk is a fast and simple way to get an application up and
running on AWS.6 It’s perfect for developers who want to deploy code without
worrying about managing the underlying infrastructure. Elastic Beanstalk
supports Auto Scaling and Elastic Load Balancing, both of which enable
blue/green deployment. Elastic Beanstalk makes it easy to run multiple versions
of your application and provides capabilities to swap the environment URLs,
facilitating blue/green deployment.
AWS OpsWorks
AWS OpsWorks is a configuration management service based on Chef that allows
customers to deploy and manage application stacks on AWS.7 Customers can
specify resource and application configuration, and deploy and monitor running
resources. OpsWorks simplifies cloning entire stacks when you’re preparing
blue/green environments.
AWS CloudFormation
AWS CloudFormation provides customers with the ability to describe the AWS
resources they need through JSON formatted templates.8 This service provides
very powerful automation capabilities for provisioning blue/green environments
and facilitating updates to switch traffic, whether through Route 53 DNS, Elastic
Load Balancing, etc. The service can be used as part of a larger infrastructure as
code strategy, where infrastructure is provisioned and managed using code and
software development techniques, such as version control and continuous
integration, in a manner similar to how application code is treated.
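As a minimal, illustrative sketch of such a template (the parameter and resource names here are invented, and a real template would describe the full web tier rather than a single instance), a green environment's web server could be declared as:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative green-environment web server (placeholder values)",
  "Parameters": {
    "AmiId": {
      "Type": "String",
      "Description": "AMI baked with the new application version"
    }
  },
  "Resources": {
    "GreenWebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": { "Ref": "AmiId" },
        "InstanceType": "t2.micro"
      }
    }
  }
}
```

Launching the green environment then becomes creating a new stack from the same template with a new AMI ID, which is what makes the infrastructure-as-code approach repeatable.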
Amazon CloudWatch
Amazon CloudWatch is a monitoring service for AWS Cloud resources and the
applications you run on AWS.9 CloudWatch can collect and track metrics, collect
and monitor log files, and set alarms. It provides system-wide visibility into
resource utilization, application performance, and operational health, which are
key to early detection of application health in blue/green deployments.
Techniques
The following techniques are examples of how you can implement blue/green on
AWS. While we highlight specific services in each technique, you may have other
services or tools to implement the same pattern. Choose the appropriate pattern
based on the existing architecture, the nature of the application, and goals for
software deployment in your organization. Experiment as much as possible to
gain experience for your environment and understand how the different
deployment risk factors interact with your specific workload.
Update DNS Routing with Amazon Route 53
DNS routing through record updates is a common approach to blue/green
deployments. DNS is the mechanism for switching traffic from the blue
environment to the green and vice versa, if rollback is necessary. This approach
works with a wide variety of environment configurations, as long as you can
express the endpoint into the environment as a DNS name or IP address.
In AWS, this technique applies to environments that are:
- Single instances with a public or Elastic IP address
- Groups of instances behind an Elastic Load Balancing load balancer or a third-party load balancer
- Instances in an Auto Scaling group with an Elastic Load Balancing load balancer as the front end
- Services running on an Amazon EC2 Container Service (Amazon ECS) cluster fronted by an Elastic Load Balancing load balancer
- Elastic Beanstalk environment web tiers
- Other configurations that expose an IP or DNS endpoint
Figure 2: Classic DNS pattern
Figure 2 shows how Amazon Route 53 manages the DNS hosted zone. By
updating the alias record,10 you can route traffic from the blue environment to the
green environment.
You can shift traffic all at once or you can do a weighted distribution. With
Amazon Route 53, you can define a percentage of traffic to go to the green
environment and gradually update the weights until the green environment
carries the full production traffic. A weighted distribution provides the ability to
perform canary analysis where a small percentage of production traffic is
introduced to a new environment. You can test the new code and monitor for
errors, limiting the blast radius if any issues are encountered. It also allows the
green environment to scale out to support the full production load if you’re using
Elastic Load Balancing, for example. Elastic Load Balancing automatically scales
its request-handling capacity to meet the inbound application traffic; the process
of scaling isn’t instant, so we recommend that you test, observe, and understand
your traffic patterns. Load balancers can also be pre-warmed (configured for
optimum capacity) through a support request.11
Figure 3: Classic DNS-weighted distribution
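As a sketch of what this weighted distribution looks like at the API level (the record name and endpoint values below are placeholders), the two weighted records can be expressed as the change batch that Route 53's ChangeResourceRecordSets API accepts:

```python
def weighted_change_batch(record_name, blue_dns, green_dns, green_weight, ttl=60):
    """Build a Route 53 change batch that splits traffic between two
    weighted CNAME records. green_weight is the share (0-100) of
    traffic to send to the green environment."""
    def record(set_id, target, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": weight,   # weights are relative across the record set
                "TTL": ttl,         # short TTL so weight changes propagate quickly
                "ResourceRecords": [{"Value": target}],
            },
        }
    return {"Changes": [
        record("blue", blue_dns, 100 - green_weight),
        record("green", green_dns, green_weight),
    ]}

# Send 10% of production traffic to the green environment (placeholder endpoints):
batch = weighted_change_batch(
    "www.example.com",
    "blue-elb.example.com",
    "green-elb.example.com",
    green_weight=10,
)
```

With boto3, such a batch would be passed to the Route 53 client's change_resource_record_sets call; rollback is the same call with green_weight set back to 0.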
If issues arise during the deployment, you achieve rollback by updating the DNS
record to shift traffic back to the blue environment. Although DNS routing is
simple to implement for blue/green, the question is how quickly can you
complete a rollback. DNS TTL determines how long clients cache query results.
However, with older clients and potentially misbehaving clients in the wild,
certain sessions may still be tied to the previous environment.
Although rollback can be challenging, this pattern certainly has the benefit of
enabling a granular transition at your own pace to allow for more substantial
testing and for scaling activities. To help manage costs, consider using Auto
Scaling for the EC2 instances to scale out the resources based on actual demand.
This works well with the gradual shift using Amazon Route 53 weighted
distribution. For a full cutover, be sure to tune your Auto Scaling policy to scale
as expected and remember that the new Elastic Load Balancing endpoint may
need time to scale up as well.
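The gradual shift and rollback logic can be modeled as a toy loop (in practice the health signal would come from CloudWatch alarms on the green environment, not a callback, and each step would update the Route 53 weights):

```python
def shift_traffic(steps, healthy):
    """Walk a schedule of green-traffic percentages, rolling back to
    0% (all traffic to blue) as soon as the health check fails.
    Returns the sequence of green weights applied."""
    history = []
    for green_pct in steps:
        history.append(green_pct)    # update the weighted DNS records
        if not healthy(green_pct):   # e.g. error-rate alarm on green fired
            history.append(0)        # rollback: shift all traffic back to blue
            break
    return history

# A healthy deployment ramps to full cutover; a failure mid-ramp rolls back:
ok = shift_traffic([10, 25, 50, 100], healthy=lambda g: True)
bad = shift_traffic([10, 25, 50, 100], healthy=lambda g: g < 50)
```

The schedule and the health predicate are the two knobs: smaller steps mean a smaller blast radius per step but a longer deployment window.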
Swap the Auto Scaling Group Behind Elastic Load
Balancer
If DNS complexities are prohibitive, consider using load balancing for traffic
management to your blue and green environments. This technique uses Auto
Scaling to manage the EC2 resources for your blue and green environments,
scaling up or down based on actual demand. You can also control the Auto
Scaling group size by updating your maximum desired instance counts for your
particular group.
Auto Scaling also integrates with Elastic Load Balancing, so any new instances
are automatically added to the load balancing pool if they pass the health checks
governed by the load balancer. Elastic Load Balancing tests the health of your
registered EC2 instances with a simple ping or a more sophisticated connection
attempt or request. Health checks occur at configurable intervals and have
defined thresholds to determine whether an instance is identified as healthy or
unhealthy. For example, you could have an Elastic Load Balancing health check
policy that pings port 80 every 20 seconds and, after passing a threshold of 10
successful pings, reports the instance as being InService. If enough ping requests
time out, then the instance is reported to be OutofService. Used in concert with
Auto Scaling, an instance that is OutofService could be replaced if the Auto
Scaling policy dictates. Conversely, for scale-down activities, the load balancer
removes the EC2 instance from the pool and drains current connections before
they terminate.
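The threshold behavior in the example above can be sketched as a toy state machine (the real health check also involves the probe interval and timeouts, which this ignores; thresholds are configurable on the load balancer):

```python
def health_state(results, healthy_threshold=10, unhealthy_threshold=2):
    """Track an instance's ELB-style health state from a sequence of
    ping results (True = success). The instance becomes InService after
    `healthy_threshold` consecutive successes and OutOfService after
    `unhealthy_threshold` consecutive failures."""
    state = "OutOfService"
    successes = failures = 0
    for ok in results:
        if ok:
            successes, failures = successes + 1, 0
            if successes >= healthy_threshold:
                state = "InService"
        else:
            failures, successes = failures + 1, 0
            if failures >= unhealthy_threshold:
                state = "OutOfService"
    return state
```

For example, ten consecutive successful pings bring a new green instance InService, while two consecutive timeouts would take it back OutOfService, at which point Auto Scaling may replace it.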
Figure 4: Swap Auto Scaling group pattern
Figure 4 shows the environment boundary reduced to the Auto Scaling group. A
blue group carries the production load while a green group is staged and
deployed with the new code. When it’s time to deploy, you simply attach the
green group to the existing load balancer to introduce traffic to the new
environment. For HTTP/HTTPS listeners, the load balancer favors the green
Auto Scaling group because it uses a least outstanding requests routing
algorithm, as explained in the Elastic Load Balancing Developer Guide.12 You can
control how much traffic is introduced by adjusting the size of your green group
up or down.
Figure 5: Blue Auto Scaling group nodes in standby and decommission
As you scale up the green Auto Scaling group, you can take blue Auto Scaling
group instances out of service by either terminating them or putting them in
Standby state, which is discussed in the Auto Scaling Developer Guide.13 Standby
is a good option because if you need to roll back to the blue environment, you
only have to put your blue server instances back in service and they're ready to
go.14 As soon as the green group is scaled up without issues, you can
decommission the blue group by adjusting the group size to zero. If you need to
roll back, detach the load balancer from the green group or reduce the group size
of the green group to zero.
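The attach, scale, standby, and rollback steps above can be sketched as a toy model (the group and load balancer objects here are invented stand-ins, not real AWS API calls):

```python
def deploy_green(lb, blue, green, target_size):
    """Attach the green group to the load balancer, scale it out, and
    park the blue instances in Standby for fast rollback."""
    lb.add(green["name"])              # green instances join the LB pool
    green["size"] = target_size        # scale green out to carry traffic
    blue["standby"] = blue["size"]     # blue instances go to Standby state
    blue["size"] = 0                   # not terminated yet, so rollback is quick

def rollback(lb, blue, green):
    """Detach green and bring the standby blue instances back in service."""
    lb.discard(green["name"])
    blue["size"], blue["standby"] = blue["standby"], 0
    green["size"] = 0

# Illustrative state: the blue group starts attached and serving traffic.
lb = {"blue-asg"}
blue = {"name": "blue-asg", "size": 4, "standby": 0}
green = {"name": "green-asg", "size": 0, "standby": 0}
deploy_green(lb, blue, green, target_size=4)
```

Only after the green group has run without issues would you decommission blue for real, by setting its group size to zero and releasing the standby instances.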
This pattern’s traffic management capabilities aren’t as granular as the classic
DNS, but you could still exercise control through the configuration of the Auto
Scaling groups. For example, you could have a larger fleet of smaller instances
with finer scaling policies, which would also help control costs of scaling. Because
the complexities of DNS are removed, the traffic shift itself is more expedient. In
addition, with an already warm load balancer, you can be confident that you’ll
have the capacity to support production load.
Update Auto Scaling Group Launch Configurations
Auto Scaling groups have their own launch configurations. A launch
configuration contains information like the Amazon Machine Image (AMI) ID,
instance type, key pair, one or more security groups, and a block device mapping.
You can associate only one launch configuration with an Auto Scaling group at a
time, and it can’t be modified after you create it. To change the launch
configuration associated with an Auto Scaling group, replace the existing launch
configuration with a new one. After a new launch configuration is in place, any
new instances that are launched use the new launch configuration parameters,
but existing instances are not affected. When Auto Scaling removes instances
(referred to as scaling in) from the group, the default termination policy is to
remove instances with the oldest launch configuration. However, you should
know that if the Availability Zones were unbalanced to begin with, then Auto
Scaling could remove an instance with a new launch configuration to balance the
zones. In such situations, you should have processes in place to compensate for
this effect.
To implement this technique, you start with an Auto Scaling group and Elastic
Load Balancing load balancer. The current launch configuration has the blue
environment.
Figure 6: Launch configuration update pattern
To deploy the new version of the application in the green environment, update
the Auto Scaling group with the new launch configuration, and then scale the
Auto Scaling group to twice its original size.
Figure 7: Scale up green launch configuration
Then, shrink the Auto Scaling group back to the original size. By default,
instances with the old launch configuration are removed first. You can also
leverage a group’s Standby state to temporarily remove instances from an Auto
Scaling group, as explained in the Auto Scaling Developer Guide.15 Having the
instance in Standby state helps in quick rollbacks, if required. As soon as you’re
confident about the newly deployed version of the application, you can
permanently remove instances in Standby state.
Figure 8: Scale down blue launch configuration
To perform a rollback, update the Auto Scaling group with the old launch
configuration. Then, do the preceding steps in reverse. Or if the instances are in
Standby state, bring them back online.
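The double-then-shrink procedure can be sketched as a toy simulation (instances are represented only by the launch configuration that created them; a real fleet would also be subject to the Availability Zone rebalancing caveat noted earlier):

```python
def deploy_new_launch_config(group, old_lc, new_lc):
    """Scale the group to twice its size under the new launch
    configuration, then shrink back. The default termination policy
    removes instances created from the oldest launch configuration
    first, leaving only new-version instances."""
    original_size = len(group)
    group = group + [new_lc] * original_size   # new instances use the new LC
    while len(group) > original_size:          # scale back in
        group.remove(old_lc)                   # oldest launch configuration first
    return group

# Three blue instances are replaced by three green ones:
fleet = deploy_new_launch_config(["blue-lc", "blue-lc", "blue-lc"],
                                 "blue-lc", "green-lc")
```

Rollback is the mirror image: reassociate the old launch configuration and repeat, or bring Standby instances back online.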
Swap the Environment of an Elastic Beanstalk
Application
Elastic Beanstalk enables quick and easy deployment and management of
applications without having to worry about the infrastructure that runs those
applications. To deploy an application using Elastic Beanstalk, upload an
application version in the form of an application bundle (for example, a Java .war
file or .zip file), and then provide some information about your application.
Based on application information, Elastic Beanstalk deploys the application in
the blue environment and provides a URL to access the environment (typically
for web server environments).
Elastic Beanstalk provides several deployment policies that you can configure to
use, ranging from policies that perform an in-place update on existing instances,
to immutable deployment using a set of new instances. Because Elastic Beanstalk
performs an in-place update when you update your application versions, your
application may become unavailable to users for a short period of time.
However, you can avoid this downtime by deploying the new version to a
separate environment. The existing environment’s configuration is copied and
used to launch the green environment with the new version of the application.
The new—green—environment will have its own URL. When it’s time to promote
the green environment to serve production traffic, you can use Elastic Beanstalk's
Swap Environment URLs feature, as explained in the AWS Elastic Beanstalk
Developer Guide.16
To implement this technique, you would use Elastic Beanstalk to spin up the blue
environment.
Figure 9: Elastic Beanstalk environment
Elastic Beanstalk provides an environment URL when the application is up and
running. Then, the green environment is spun up with its own environment URL.
At this time, two environments are up and running, but only the blue
environment is serving production traffic.
Figure 10: Prepare green Elastic Beanstalk environment
To promote the green environment to serve production traffic, you go to the
environment's dashboard in the Elastic Beanstalk console and choose Swap
Environment URL from the Actions menu. Elastic Beanstalk performs a DNS
switch, which typically takes a few minutes. Refer to the technique Update DNS
Routing with Amazon Route 53 for the factors to consider when performing a
DNS switch. When the DNS changes have propagated, you can terminate the blue
environment. To perform a rollback, invoke Swap Environment URL again.
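The swap-and-rollback behavior can be modeled in a few lines (a toy model: the real feature, exposed as the SwapEnvironmentCNAMEs API and the console action above, performs the exchange via DNS; the URLs below are placeholders):

```python
def swap_environment_urls(envs, a, b):
    """Exchange the CNAMEs of two environments, as Elastic Beanstalk's
    Swap Environment URLs does. Calling it twice restores the original
    assignment, which is exactly the rollback path."""
    envs[a], envs[b] = envs[b], envs[a]

envs = {
    "blue": "myapp.elasticbeanstalk.com",         # serving production
    "green": "myapp-green.elasticbeanstalk.com",  # staged new version
}
swap_environment_urls(envs, "blue", "green")  # promote green to production
swap_environment_urls(envs, "blue", "green")  # rollback: swap again
```

Because the swap is symmetric, no separate rollback mechanism is needed; the same operation undoes itself.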
Figure 11: Decommission blue Elastic Beanstalk environment
Clone a Stack in AWS OpsWorks and Update DNS
AWS OpsWorks has the concept of stacks, which are logical groupings of AWS
resources (EC2 instances, Amazon RDS, Elastic Load Balancing, and so on) that
have a common purpose and should be logically managed together. Stacks are
made of one or more layers. A layer represents a set of EC2 instances that serve a
particular purpose, such as serving applications or hosting a database server.
When a data store is part of the stack, you should be aware of certain data
management challenges. We discuss those in depth in the next section.
To implement this technique in AWS OpsWorks, bring up the blue
environment/stack with the current version of the application.17
Figure 12: AWS OpsWorks stack
Next, create the green environment/stack with the newer version of application.
At this point, the green environment is not receiving any traffic. If Elastic Load
Balancing needs to be pre-warmed,18 you can do that at this time.
Figure 13: Clone stack to create green environment
When it’s time to promote the green environment/stack into production, update
DNS records to point to the green environment/stack’s load balancer. You can
also do this DNS flip gradually by using the Amazon Route 53 weighted routing
policy. This technique involves updating DNS, so be aware of DNS issues
discussed in the technique in Update DNS Routing with Amazon Route 53.
Figure 14: Decommission blue stack
Best Practices for Managing Data
Synchronization and Schema Changes
Managing data synchronization across two distinct environments can be
complex, depending on the number of data stores in use, the intricacy of the data
model, and the data consistency requirements.
Both the blue and green environments need up-to-date data:
- The green environment needs up-to-date data access because it's becoming the new production environment.
- The blue environment needs up-to-date data in the event of a rollback, when production is then either shifted back to or kept on the blue environment.
Broadly, you accomplish this by having both the green and blue environments
share the same data stores. Unstructured data stores, such as Amazon Simple
Storage Service (Amazon S3) object storage, NoSQL databases, and shared file
systems are often easier to share between the two environments. Structured data
stores, such as relational database management systems (RDBMS), where the
data schema can diverge between the environments, typically require additional
considerations.
Decoupling Schema Changes from Code Changes
A general recommendation is to decouple schema changes from the code
changes. This way, the relational database is outside of the environment
boundary defined for the blue/green deployment and shared between the blue
and green environments. The two approaches for performing the schema changes
are often used in tandem:
- The schema is changed first, before the blue/green code deployment. Database updates must be backward compatible, so the old version of the application can still interact with the data.
- The schema is changed last, after the blue/green code deployment. Code changes in the new version of the application must be backward compatible with the old schema.
Schema modifications in the former approach are often additive. You add fields
to tables, new entities, and relationships. If needed, you can use triggers or
asynchronous processes to populate these new constructs with data based on
data changes performed by the old application version.
You need to follow coding best practices when developing applications to ensure
your application can tolerate the presence of additional fields in existing tables,
even if they are not used. When table row values are read and mapped into source
code structures (objects, array hashes, etc.), your code should ignore fields it can’t
map instead of causing application runtime errors.
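A minimal sketch of such tolerant mapping (the field names are invented for illustration):

```python
def map_row(row, fields):
    """Map a database row (as a dict) onto the structure this version of
    the application knows about. Columns added by a newer, additive
    schema migration are silently ignored rather than raising errors."""
    return {name: row[name] for name in fields if name in row}

# The additive migration added loyalty_tier; the old application,
# which only knows id and email, keeps working unchanged:
row = {"id": 7, "email": "user@example.com", "loyalty_tier": "gold"}
old_app_view = map_row(row, fields=("id", "email"))
```

The same principle applies to ORM mappings and serializers: configure them to skip unknown columns so the old application version survives the schema change.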
Schema modifications in the latter approach are often deletive. You remove
unneeded fields, entities, and relationships, or merge and consolidate them. By
this time, the old application version is no longer operational.
Figure 15: Decoupled schema and code changes
There’s an increased risk involved when managing schema changes in this way:
failures in the schema modification process can impact your production
environment. An additive change can bring down the old application if coding
best practices weren’t followed, and a deletive change can break the new
application version if it still has a dependency on a deleted field somewhere
in the code.
To mitigate risk appropriately, this pattern places a heavy emphasis on your
pre-deployment software lifecycle steps. Be sure to have a strong testing phase and
framework and a strong QA phase. Performing the deployment in a testing
environment can help identify these sorts of issues early, before the push to
production.
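One way to catch these issues early, per the advice above, is to run the old application version's data-access code against the candidate schema as an automated pre-deployment check. The following is a hypothetical sketch (SQLite stands in for the real database; all names are invented):

```python
import sqlite3

# Hypothetical pre-deployment check: exercise the OLD (blue) application's
# query against the NEW (green) schema to catch broken dependencies
# before the push to production.
def old_app_read(conn):
    # Statement as shipped in the currently deployed (blue) version.
    return conn.execute("SELECT id, total FROM orders").fetchall()

def build_new_schema(conn):
    # Candidate (green) schema: 'orders' gains a column but keeps old ones.
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, currency TEXT)"
    )

conn = sqlite3.connect(":memory:")
build_new_schema(conn)
conn.execute("INSERT INTO orders (total, currency) VALUES (9.99, 'USD')")

# The old read path must still succeed against the new schema.
rows = old_app_read(conn)
assert rows == [(1, 9.99)]
print("old application version is compatible with the new schema")
```

Wiring such checks into your test framework and QA phase turns the backward-compatibility requirement into something enforced automatically rather than verified by hand.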
When Blue/Green Deployments Are Not Recommended
As blue/green deployments become more popular, developers and companies are
constantly applying the methodology to new and innovative use cases. However,
there are some common use case patterns where applying this methodology, even
if possible, isn’t recommended.
These are cases where implementing blue/green deployment introduces too
much risk, whether due to workarounds or additional “moving parts” in the
deployment process. These complexities can introduce additional points of
failure, or opportunities for the process to break down, that may negate any risk
mitigation benefits blue/green deployments bring in the first place.
The following scenarios highlight patterns that may not be well suited for
blue/green deployments.
Are your schema changes too complex to decouple from the code
changes? Is sharing of data stores not feasible?
In some scenarios, sharing a data store isn’t desired or feasible: the schema
changes may be too complex to decouple from the code changes, or data locality
may introduce too much performance degradation, as when the blue and green
environments are in geographically disparate regions. All these situations
require a solution where the data store sits inside the deployment environment
boundary, tightly coupled to the blue and green applications, respectively.
This requires data changes to be synchronized—propagated from the blue
environment to the green one, and vice versa. The systems and processes to
accomplish this are generally complex and limited by the data consistency
requirements of your application. This means that during the deployment itself,
you have to also manage the reliability, scalability and performance of that
synchronization workload, adding risk to the deployment.
Does your application need to be “deployment aware”?
You have to use feature flags to control the behavior of the application during the
blue/green deployment. This is often a consideration in conjunction with the
inability to effectively decouple schema and code changes. Your application code
would execute additional or alternate subroutines during the deployment, to keep
data in sync, or perform other deployment-related duties. These routines are
enabled and turned off, as the case may be, during the deployment by using
configuration flags.
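A "deployment aware" routine of the kind described might look like the following hypothetical sketch. The flag name, environment-variable mechanism, and dual-write duty are all invented for illustration; the point is that application behavior changes during the deployment window, which is exactly the complexity the text cautions against:

```python
import os

# Hypothetical feature flag: during the blue/green cutover it enables an
# extra deployment-only routine (dual-writing to keep a second data store
# in sync); afterwards the flag is turned off again.
def save_order(order: dict, primary: list, secondary: list) -> None:
    primary.append(order)
    if os.environ.get("DUAL_WRITE_ENABLED") == "1":
        # Deployment-only duty: mirror the write to the other environment.
        secondary.append(order)

blue_store, green_store = [], []

os.environ["DUAL_WRITE_ENABLED"] = "1"   # flag flipped for the deployment
save_order({"id": 1}, blue_store, green_store)

os.environ["DUAL_WRITE_ENABLED"] = "0"   # deployment done, routine disabled
save_order({"id": 2}, blue_store, green_store)

print(len(blue_store), len(green_store))  # → 2 1
```

Every such conditional path is code that only ever runs during a deployment, which is hard to test under production-like conditions and is why this practice works against the immutable-infrastructure goal described below.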
This practice also introduces additional risk and complexity and typically isn’t
recommended with blue/green deployments. The goal of blue/green deployments
is to achieve immutable infrastructure, where you don’t make changes to your
application after it’s deployed, but redeploy altogether. That way, you ensure the
same code is operating in a production setting and in the deployment setting,
reducing overall risk factors.
Does your commercial off-the-shelf (COTS) application come with a
predefined update/upgrade process that isn’t blue/green deployment
friendly?
Many commercial software vendors provide their own update and upgrade
process for their applications that they have tested and validated for distribution.
While vendors are increasingly adopting the principles of immutable
infrastructure and automated deployment, not all software products have those
capabilities to date.
Working around the vendor’s recommended update and deployment practices to
try to implement or simulate a blue/green deployment process may introduce
unnecessary risk that can negate the benefits of this methodology.
Conclusion
Application deployment carries inherent risks. But the advent of cloud
computing, deployment and automation frameworks, and new techniques such as
blue/green deployments helps mitigate risks such as human error, process
failures, downtime, and difficult rollbacks. The AWS utility billing model and
breadth of automation tools make it much easier for customers to move fast and
implement blue/green deployments cost-effectively at scale.
Contributors
The following individuals and organizations contributed to this document:
George John, Solutions Architect, Amazon Web Services
Andy Mui, Solutions Architect, Amazon Web Services
Vlad Vlasceanu, Solutions Architect, Amazon Web Services
Appendix
Comparison of Blue/Green Deployment Techniques
The following table offers an overview and comparison of the different
blue/green deployment techniques discussed in this paper. For each technique,
the risk potential per category ranges from desirable lower risk to less
desirable higher risk.
Technique: Update DNS routing with Amazon Route 53
  Application Issues: Facilitates canary analysis
  Application Performance: Gradual switch, traffic split management
  People/Process Errors: Depends on automation framework; overall simple process
  Infrastructure Failures: Depends on automation framework
  Rollback: DNS TTL complexities (reaction time, flip/flop)
  Cost: Optimized via Auto Scaling

Technique: Swap the Auto Scaling group behind Elastic Load Balancer
  Application Issues: Facilitates canary analysis
  Application Performance: Less granular traffic split management, already warm load balancer
  People/Process Errors: Depends on automation framework
  Infrastructure Failures: Auto Scaling
  Rollback: No DNS complexities
  Cost: Optimized via Auto Scaling

Technique: Update Auto Scaling group launch configurations
  Application Issues: Detection of errors/issues in a heterogeneous fleet is complex
  Application Performance: Less granular traffic split, initial traffic load
  People/Process Errors: Depends on automation framework
  Infrastructure Failures: Auto Scaling
  Rollback: No DNS complexities
  Cost: Optimized via Auto Scaling, but initial scale-out overprovisions

Technique: Swap the environment of an Elastic Beanstalk application
  Application Issues: Ability to do canary analysis ahead of cutover, but not with production traffic
  Application Performance: Full cutover
  People/Process Errors: Simple, automated process
  Infrastructure Failures: Auto Scaling, CloudWatch monitoring, Elastic Beanstalk health reporting
  Rollback: DNS TTL complexities
  Cost: Optimized via Auto Scaling, but initial scale-out may overprovision

Technique: Clone a stack in OpsWorks and update DNS
  Application Issues: Facilitates canary analysis
  Application Performance: Gradual switch, traffic split management
  People/Process Errors: Highly automated
  Infrastructure Failures: Auto-healing capability
  Rollback: DNS TTL complexities
  Cost: Dual stack of resources
Document Revisions
June 2015: Initial publication
Notes
1 https://aws.amazon.com/tools/
2 https://aws.amazon.com/route53/
3 https://aws.amazon.com/elasticloadbalancing/
4 https://aws.amazon.com/autoscaling/
5 http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingEnteringAndExitingStandby.html
6 https://aws.amazon.com/elasticbeanstalk/
7 https://aws.amazon.com/opsworks/
8 https://aws.amazon.com/cloudformation/
9 https://aws.amazon.com/cloudwatch/
10 Alias records are specific to Amazon Route 53, offering extended capabilities to standard DNS. They act as pointers to other AWS resources such as Elastic Load Balancing endpoints or Amazon CloudFront distributions. You can read more about them at http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html.
11 For best practices for evaluating Elastic Load Balancing, see http://aws.amazon.com/articles/1636185810492479.
12 http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/how-elb-works.html
13 http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingEnteringAndExitingStandby.html
14 For additional information about the Auto Scaling group lifecycle, see http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html.
15 http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingEnteringAndExitingStandby.html
16 http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
17 Refer to the AWS OpsWorks web page at https://aws.amazon.com/opsworks/ for more details.
18 For more information on pre-warming with Elastic Load Balancing, see http://aws.amazon.com/articles/1636185810492479#pre-warming.