This document discusses efficient ways to manage environments in AWS using CloudFormation templates. It covers the key components of build, deploy, operate, and monitor. It provides guidance on using templates to configure environments, automating deployments with tools like Chef, implementing blue-green deployments, creating alarm stacks to monitor resources, and scaling infrastructure based on CloudWatch metrics. The overall aim is to achieve faster release cycles, predictability, and reliability when managing dynamic AWS infrastructure.
Efficiently Managing Environments: The Need
• Shorter infrastructure cycles.
• Better predictability of failures.
• Time savings on customer releases.
• Reliable, dynamic infrastructure.
Managing Environments: Key Components

Build
• Designing the right rules and policies.
• Configuring templates to scale seamlessly.

Deploy
• Integrating code deployment efficiently with the build.
• Picking the right configuration based on the environment.
• Achieving faster boot times.

Operate
• Blue-green deployment for high availability.
• A promotion plan for releases.
• A rollback plan for any failures.

Monitor
• The right set of CloudWatch monitoring.
• Different levels of alarms for failures.
• Auto-triggered actions after critical alarms.
Use of a Single Master Template (Build)
• Pass all the necessary inputs for all the stacks through a single stack function.
• Each output resource can itself be a CloudFormation template that creates the given stack.
• Create common stacks, such as alarms and security, which the other stacks re-use.
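A master template of this shape can wire the child stacks together as nested stacks, with common stacks feeding their outputs to the others. The fragment below is a minimal sketch, not the deck's actual template; the stack names and S3 template URLs are hypothetical:

```yaml
# Master-template sketch (resource names and TemplateURL values are hypothetical).
AWSTemplateFormatVersion: '2010-09-09'
Description: Master template that creates the common and application stacks.

Parameters:
  EnvironmentType:
    Type: String
    AllowedValues: [performance, production]

Resources:
  # Common stack re-used by the others (security groups, IAM roles, ...).
  SecurityStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/security.template
      Parameters:
        EnvironmentType: !Ref EnvironmentType

  # Application stack consumes the outputs of the common security stack.
  AppStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/app.template
      Parameters:
        EnvironmentType: !Ref EnvironmentType
        AppSecurityGroup: !GetAtt SecurityStack.Outputs.AppSecurityGroupId
```

Because the child templates are ordinary stacks, a single create/update/delete of the master drives the whole environment.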
Configure the Same Environment to Scale Differently (Build)
• Pass the environment type as an input parameter.
• Create a mapping for each environment type.
• Reference the mapping when creating the AWS resource.
• Example: an RDS instance can now be m4.large in the performance environment but m4.xlarge in production, without changing access rules or security policies.
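In template terms, the parameter-plus-mapping pattern looks roughly like this (parameter, mapping, and resource names are illustrative; note that RDS instance classes carry a `db.` prefix):

```yaml
# Sketch of environment-keyed sizing (names are illustrative).
AWSTemplateFormatVersion: '2010-09-09'

Parameters:
  EnvironmentType:
    Type: String
    AllowedValues: [performance, production]

Mappings:
  # One entry per environment type; only the size differs.
  EnvironmentConfig:
    performance:
      DBInstanceClass: db.m4.large
    production:
      DBInstanceClass: db.m4.xlarge

Resources:
  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      # Pick the instance class for the current environment.
      DBInstanceClass: !FindInMap [EnvironmentConfig, !Ref EnvironmentType, DBInstanceClass]
      Engine: mysql
      AllocatedStorage: '100'
```

Access rules and security policies live elsewhere in the template and never mention the size, so switching environments changes only the mapping lookup.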
Using User Data to Automate Code Deployments (Build)
1. Create the right configuration files with the AWS resource, e.g. the instance file, metadata files, etc.
2. Set the right role and permissions on the EC2 instance.
3. Copy the right versioned application and deployment builds from S3.
4. Source the instance data file.
5. Run the chef-client against the role given in the instance file.

User data should be set as part of the CloudFormation template:
http://answersforaws.com/episodes/4-user-data-cloud-init-cloudformation/
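The five steps above can be sketched in an instance's `UserData` property. This is an assumed layout, not the deck's actual script; the bucket name, file paths, and Chef role are hypothetical:

```yaml
# UserData sketch for an EC2 instance resource (paths and names are hypothetical).
WebServer:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref BaseAmiId
    InstanceType: m4.large
    IamInstanceProfile: !Ref DeployInstanceProfile   # step 2: role granting S3 read access
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        # Step 1: write the instance file that describes this node.
        echo "role=web_server" > /etc/instance-data
        echo "environment=${EnvironmentType}" >> /etc/instance-data
        # Step 3: copy the versioned application build from S3.
        aws s3 cp s3://my-builds/${BuildVersion}/app.tar.gz /opt/app/
        # Step 4: source the instance data file.
        source /etc/instance-data
        # Step 5: run chef-client with the role from the instance file
        # (${!role} is Fn::Sub escaping for the literal shell variable).
        chef-client -o "role[${!role}]"
```
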
Stack Functions (Build)
• All of the above CloudFormation features can be automated using stack functions:
  • create_environment
  • delete_environment
  • update_stack
  • validate_environment
• End-to-end environment create, update, and delete in a single step.
• Building CloudFormation dynamically: https://github.com/bazaarvoice/cloudformation-ruby-dsl
• Testing CloudFormation: https://github.com/stelligent/cfn_nag
Customizing AMIs (Deploy)
• Custom AMIs are built to reduce the boot time of an EC2 instance when scaling up.
• With Packer and automation, the ability to refresh environments can be scaled up.
• A polling job figures out when the latest base AMI is available.
• The Packer configuration points at the right repository for the custom AMI.
• A Packer script bakes the AMI whenever the polling job succeeds.
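On the CloudFormation side, the baked AMI can be handed to the Auto Scaling group's launch configuration as a parameter, so refreshing the image only requires a stack update. A minimal sketch, with assumed parameter and resource names:

```yaml
# Launch-configuration sketch consuming the Packer-baked AMI (names are assumptions).
Parameters:
  CustomAmiId:
    # The AMI baked by Packer; updated by the polling job on each refresh.
    Type: AWS::EC2::Image::Id

Resources:
  AppLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: !Ref CustomAmiId   # pre-baked image boots faster than configuring at launch
      InstanceType: m4.large

  AppAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchConfigurationName: !Ref AppLaunchConfig
      MinSize: '2'
      MaxSize: '6'
      AvailabilityZones: !GetAZs ''
```
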
Break Down Deployments into Logical Flows (Deploy)

Decide up front:
• All the cookbooks you are going to use.
• All the recipes you are going to use.
• Roles and their naming convention.
• The number of environments you are going to use.

Avoid one giant cookbook.
• We currently use 5 roles, each deriving from a base role, and each further divided into cookbooks.
Configuring Chef for Environments: Guiding Patterns (Deploy)
• It is vital to separate the deployment code for different environments.
• Keep deployment roles lightweight and independent of run-lists.
• Store run-lists as part of the default recipe of each cookbook.
• Use env_run_lists to separate run-lists for different environments.
• Use wrapper cookbooks to customize the settings of upstream cookbooks without forking.
Build Release/Rollback: Best Practices (Operate)
• Never promote or roll back between versions by changing DNS.
• Never register/deregister instances with/from the ELB.
• Use ELB health checks for service failures.
• Use the build system and S3 to keep track of the build version numbers on each of the ELBs:
  • Live file
  • Pre-Live file
  • Live-Prev file
Alarm Stack: Building the Right Trigger (Monitor)
• Alarm rules should be generic across all AWS resources in the environment.
• Each resource should have a unique alarm to help identify the exact failure.
• Warn the user before the actual failure so that corrective action can be taken.
• When nearing a failure, trigger auto-healing steps to avoid the failure.
Alarm Stack: Defining Levels of Alarms (Monitor)
• CloudWatch is used to monitor AWS resources such as EC2; alarms are triggered on unusual usage patterns such as high CPU or low memory.
• The alarm stack is created by a CloudFormation template consisting of SNS notifications. These SNS notifications are then hooked to third-party apps such as email and PagerDuty.
• The alarm stack consists of levels of alarms depending on the severity of the CloudWatch metric failure:
  • Warning alarm: just notify by email.
  • Critical alarm: PagerDuty call + auto-scaling event.
• Alarms + Lambda: https://medium.com/cohealo-engineering/how-set-up-a-slack-channel-to-be-an-aws-sns-subscriber-63b4d57ad3ea#.kcqs9cl8x

(Figure: database stack with alarms.)
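A two-level alarm stack of this shape can be sketched as follows. The topic names, thresholds, and referenced Auto Scaling group and scaling policy are assumptions, not the deck's actual values:

```yaml
# Two-level alarm sketch (names, thresholds, and referenced resources are assumptions).
Resources:
  # Warning notifications go to email; critical ones would be wired to PagerDuty.
  WarningTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - {Endpoint: ops@example.com, Protocol: email}
  CriticalTopic:
    Type: AWS::SNS::Topic

  # Warning alarm: notify early, before the resource is saturated.
  CpuWarningAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Dimensions:
        - {Name: AutoScalingGroupName, Value: !Ref AppAutoScalingGroup}
      Statistic: Average
      Period: 300
      EvaluationPeriods: 2
      Threshold: 70
      ComparisonOperator: GreaterThanThreshold
      AlarmActions: [!Ref WarningTopic]

  # Critical alarm: page the on-call and trigger the scale-up policy.
  CpuCriticalAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Dimensions:
        - {Name: AutoScalingGroupName, Value: !Ref AppAutoScalingGroup}
      Statistic: Average
      Period: 300
      EvaluationPeriods: 2
      Threshold: 90
      ComparisonOperator: GreaterThanThreshold
      AlarmActions: [!Ref CriticalTopic, !Ref ScaleUpPolicy]
```

Keeping both alarms on the same metric with different thresholds is what lets the warning level fire before the critical, auto-healing level.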
Alarm Stack: Scaling Based on Metrics (Monitor)
• Configure scaling policies based on metrics such as CPU utilization and ELB requests.
• The time it takes to scale up an instance should be taken into account.
• Use scheduled actions to change the scaling policies of ASGs for time-based changes in traffic.
• Scale up early, scale down slowly.
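"Scale up early, scale down slowly" and the scheduled-action bullet can be expressed roughly as below. Resource names, adjustment sizes, cooldowns, and the recurrence time are illustrative:

```yaml
# Scaling-policy sketch (names, adjustments, cooldowns, and schedule are illustrative).
Resources:
  # Scale up aggressively (add 2, short cooldown); scale down gently (remove 1, long cooldown).
  ScaleUpPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref AppAutoScalingGroup
      AdjustmentType: ChangeInCapacity
      ScalingAdjustment: 2
      Cooldown: '120'
  ScaleDownPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref AppAutoScalingGroup
      AdjustmentType: ChangeInCapacity
      ScalingAdjustment: -1
      Cooldown: '600'

  # Scheduled action: raise the floor ahead of the known daily traffic peak,
  # allowing for the time an instance needs to boot and deploy.
  MorningPeakAction:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref AppAutoScalingGroup
      MinSize: 4
      Recurrence: '0 8 * * *'   # 08:00 UTC every day
```
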