2. Questions for Your First Week on Amazon EC2
• What is Amazon EC2?
• Where do I start with EC2?
– What are the components of EC2?
– What are the big-picture cloud architecture patterns?
– What other Amazon Web Services should I use?
• How do I map my existing infrastructure architecture to EC2?
– How do I configure my environment for high availability?
– How do I manage my environment in the cloud?
– How do I monitor my environment in the cloud?
3. An Approach to Your First Week on Amazon EC2
• Leverage what you already know about web architectures
• Understand enough to get started with EC2
• Take an iterative approach
– Refactor and evolve
– Pay for what you use
• Understand and apply cloud best practices
– Capacity on demand
– Elasticity
– Design for failure
– Infrastructure automation
4. Day 1 – Identify and Deploy Application on EC2
[Architecture diagram: one EC2 instance (Linux, Apache, Ruby, MySQL) in a single Availability Zone within a Region. Security group rules: Source 0.0.0.0/0, Protocol HTTP, Port 80; Source 0.0.0.0/0, Protocol SSH, Port 22.]
5. Day 1 – Launching Your First EC2 Instance
1. Log in to the AWS Management Console and go to the Amazon EC2 console
2. Choose an Amazon Machine Image (AMI)
3. Choose an instance size
4. Create a key pair for SSH access
5. Create port-based security rules
6. Launch instance
7. Upload code
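The console steps above can also be scripted. A minimal AWS CLI sketch, assuming an AWS account with configured credentials; the key name, group name and AMI ID (`ami-xxxxxxxx`) are placeholders, not values from the deck:

```shell
# Create a key pair for SSH access and save the private key locally
aws ec2 create-key-pair --key-name ec2-key \
    --query 'KeyMaterial' --output text > ~/ec2.pem
chmod 400 ~/ec2.pem

# Create the "web" security group with the port rules from the slide
aws ec2 create-security-group --group-name web \
    --description "Web tier: HTTP and SSH"
aws ec2 authorize-security-group-ingress --group-name web \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name web \
    --protocol tcp --port 22 --cidr 0.0.0.0/0

# Launch one instance from the chosen AMI
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 \
    --instance-type t1.micro --key-name ec2-key --security-groups web
```

This cannot run without live AWS credentials, so treat it as a sketch of the flow rather than a copy-paste recipe.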
12. Day 1 – Application Tasks
[laptop]$ ssh -i ~/ec2.pem ec2-user@ec2-54-242-199-31.compute-1.amazonaws.com
__| __|_ )
_| ( / Amazon Linux AMI
___|___|___|
https://aws.amazon.com/amazon-linux-ami/2012.09-release-notes/
There are 13 security update(s) out of 24 total update(s) available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-10-40-203-29 ~]$ sudo yum -y -q update
[ec2-user@ip-10-40-203-29 ~]$ sudo yum -y -q install mysql-server ruby19
[ec2-user@ip-10-40-203-29 ~]$ sudo service mysqld start
Starting mysqld: [ OK ]
13. Day 1 → Day 2
Day 1 Recap
1. Created an AWS account
2. Identified an application for cloud deployment
3. Logged into the Amazon EC2 console
4. Chose an AMI
5. Launched an EC2 instance
6. Set up the application
Day 2 Considerations
• What options do we have for setting up a tiered architecture?
• How can we apply security to our instances?
• Are there options for serving static content?
• How can we capture our work efforts to make them repeatable?
14. Day 2 – Create a tiered architecture
[Architecture diagram: the instance from Day 1 in one Availability Zone, now with an AMI snapshot stored in Amazon S3 and static content served from an S3 bucket; users connect from the Internet over HTTP (80). The "web" EC2 security group allows HTTP (80) and SSH (22) from 0.0.0.0/0.]
15. Day 2 – Launching a Tiered Web Application
1. Snapshot EC2 Instance
– Stop MySQL
– Bundle New AMI
2. Create an Amazon Relational Database Service (RDS) Instance
– We’ll use MySQL
– Other options: Oracle, SQL Server
3. Configure App to Use RDS MySQL Database
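Step 2 can also be sketched with the AWS CLI (a hedged example: the instance class, storage size and credentials are illustrative placeholders, and it requires a configured AWS account):

```shell
# Create a MySQL RDS instance named "nonprod" (values are placeholders)
aws rds create-db-instance \
    --db-instance-identifier nonprod \
    --db-instance-class db.m1.small \
    --engine mysql \
    --allocated-storage 5 \
    --master-username root \
    --master-user-password 'ChangeMe123'

# Fetch the endpoint hostname to put in the application's database config
aws rds describe-db-instances --db-instance-identifier nonprod \
    --query 'DBInstances[0].Endpoint.Address' --output text
```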
20. Day 2 – Connect to RDS Database
[ec2-user@ip-10-40-203-29 ~]$ mysql -u root -p -D devdb \
  -h nonprod.ctjsifycx3sq.us-east-1.rds.amazonaws.com
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 268
Server version: 5.5.27-log Source distribution
Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
21. Day 2 – Connect to RDS Database (encrypted)
[ec2-user@ip-10-40-203-29 ~]$ wget https://rds.amazonaws.com/doc/mysql-ssl-ca-cert.pem
[ec2-user@ip-10-40-203-29 ~]$ mysql -u root -p -D devdb \
  -h nonprod.ctjsifycx3sq.us-east-1.rds.amazonaws.com \
  --ssl-ca=mysql-ssl-ca-cert.pem
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 269
Server version: 5.5.27-log Source distribution
Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
22. Day 2 → Day 3
Day 2 Recap
1. Took a snapshot of our AMI as a backup
2. Created an RDS MySQL database
3. Created and validated security groups
Day 3 Considerations
• What tools does AWS provide to monitor EC2 and RDS?
• How can we better monitor our environment (proactive vs. reactive)?
• How can we be notified when our servers hit certain thresholds?
23. Day 3 – Monitor Environment
[Architecture diagram: Amazon CloudWatch monitoring the environment, with an alarm that sends an email notification to the administrator; users reach the application from the Internet, with static content in the S3 bucket.]
24. Day 3 – Create CloudWatch Alarm
1. Select metric to monitor
– Database write latency is an accurate indicator of our application’s health
2. Define a threshold
– Write latency that exceeds 500ms typically requires some intervention on our part
3. Create a topic for our alarm and subscribe to the topic via email
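The three steps above might look roughly like this with the AWS CLI (the topic name, email address and DB identifier are placeholders; this requires a configured AWS account, so it is a sketch rather than a tested recipe):

```shell
# Create an SNS topic and subscribe an email address to it
TOPIC_ARN=$(aws sns create-topic --name rds-write-latency \
    --query 'TopicArn' --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" \
    --protocol email --notification-endpoint admin@example.com

# Alarm when average RDS write latency exceeds 500 ms (0.5 s)
aws cloudwatch put-metric-alarm \
    --alarm-name rds-write-latency-high \
    --namespace AWS/RDS --metric-name WriteLatency \
    --dimensions Name=DBInstanceIdentifier,Value=nonprod \
    --statistic Average --period 300 --evaluation-periods 1 \
    --comparison-operator GreaterThanThreshold --threshold 0.5 \
    --alarm-actions "$TOPIC_ARN"
```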
29. Day 3 → Day 4
Day 3 Recap
1. Identified CloudWatch metrics available for EC2 and RDS
2. Created a CloudWatch alarm
3. Set up the alarm to email on failure
4. Reviewed the CloudWatch dashboard
Day 4 Considerations
• What happens if our EC2 instance fails?
• What happens if an entire AZ is unavailable?
• How can we elastically scale based on increased/decreased traffic?
• What happens if our primary RDS instance fails?
30. Day 4 – Designing for High Availability
[Architecture diagram: an Auto Scaling group spanning two Availability Zones, an RDS DB standby in the second AZ, Amazon CloudWatch with an alarm, an S3 bucket, and users connecting from the Internet.]
31. Day 4 – Steps to High Availability
1. Create an Elastic Load Balancer (ELB)
– Balances traffic across multiple EC2 instances
– Enables running instances in multiple Availability Zones (AZs)
2. Configure Auto Scaling
– Automatically scale up if demand increases
– And scale down to save money
3. Set up RDS Multi-AZ
– Synchronous replication to a standby in another AZ
– Automatic failover if needed
– Also minimizes the backup window (backups run against the standby)
36. Day 4 – Configure Auto Scaling
1. Use the Amazon Machine Image (AMI) we created
2. Leverage multiple Availability Zones
– Distribute instances across two AZs
– Ensure at least two instances are up
3. Create an Auto Scaling trigger
– Same concept as the CloudWatch alarm from earlier
– Only now we're proactively taking action
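One possible AWS CLI sketch of the ELB and Auto Scaling setup described above (all names, the AMI ID and the Availability Zones are placeholders; it requires a live AWS account, so treat it as illustrative):

```shell
# Classic ELB listening on HTTP 80 in two Availability Zones
aws elb create-load-balancer --load-balancer-name web-elb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones us-east-1a us-east-1b

# Launch configuration based on the AMI we snapshotted earlier
aws autoscaling create-launch-configuration \
    --launch-configuration-name web-lc \
    --image-id ami-xxxxxxxx --instance-type t1.micro \
    --key-name ec2-key --security-groups web

# Auto Scaling group: at least two instances, spread over two AZs,
# registered behind the load balancer
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-lc \
    --availability-zones us-east-1a us-east-1b \
    --min-size 2 --max-size 4 \
    --load-balancer-names web-elb
```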
40. Day 4 – Set Up RDS Multi-AZ
[laptop]$ aws rds modify-db-instance \
  --db-instance-identifier nonprod \
  --multi-az --region us-east-1
Yep, that’s it.
No mouse required. :)
41. Day 4 → Day 5
Day 4 Recap
1. Spread our application across Availability Zones
2. Automated scaling across Availability Zones leveraging Auto Scaling
3. Implemented load balancing via AWS Elastic Load Balancing
4. Implemented a highly available database by applying RDS Multi-AZ
Day 5 Considerations
• How do we make use of a custom DNS domain for our load balancer?
• How can we configure accounts for other AWS users?
• How can we template and replicate our server environment?
42. Day 5 – DNS, Identity & Access Management, Deployment Automation
[Architecture diagram: the full environment - www.example.com in front of the load balancer, an Auto Scaling group across two Availability Zones, an RDS DB standby, Amazon CloudWatch with an alarm, an S3 bucket, AWS IAM users, and an AWS CloudFormation template/stack managed via the AWS Management Console.]
46. First Week on Amazon EC2
• Evolution from Day 1 → Day 5
– Single AMI → Tiered → Monitored → HA → DNS, IAM, Automation
• Cloud architecture best practices implemented in week 1 on EC2
– Proactive scaling – Auto scaling triggers
– Elasticity – EC2
– Design for failure – ELB, Auto scaling groups, Availability Zones
– Decouple your components – EC2, RDS
– Infrastructure automation – CloudFormation
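As a sketch of what "infrastructure automation" means in practice: once a template describing the environment exists (hand-written or generated by CloudFormer), the whole stack can be replicated with a single CLI call. The stack and template names here are placeholders, and the commands require a configured AWS account:

```shell
# Launch a copy of the environment from a CloudFormation template
aws cloudformation create-stack \
    --stack-name first-week-env \
    --template-body file://first-week-env.template.json

# Check progress until the stack reports CREATE_COMPLETE
aws cloudformation describe-stacks --stack-name first-week-env \
    --query 'Stacks[0].StackStatus'
```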
47. …and Beyond
• Moving beyond week 1 on EC2
– AWS Management Console is great but you have other options
• Command Line Interface
• API
– Other AWS Services
• ElastiCache, OpsWorks, Elastic Beanstalk, DynamoDB, SQS
– Operational Checklist
• http://media.amazonwebservices.com/AWS_Operational_Checklists.pdf
– Deployment Automation
• http://aws.amazon.com/cloudformation/aws-cloudformation-articles-and-tutorials/
– Links to whitepapers and architectures
• http://aws.amazon.com/whitepapers/
• http://aws.amazon.com/architecture/
51. One Observatory - Two sites
SKA Phase 1: ~10% of the full SKA, built 2016-2020
• Mid West WA site: survey facility at low and mid frequency
– MWA x 100
– 36-dish ASKAP + 60 more dishes with PAFs
– Good low frequency site
• Karoo RSA site: detailed deep-field facility at high frequency
– 64-dish MeerKAT + 190 more dishes
– Good high frequency site
52. ICRAR
What’s next
• 2012 - 2015 Pre-construction design
– ANZ lead involvement in 3 areas
– Industry Opportunities in design
– SKA project + construction staff build up
• 2016 - 2020 Phase 1 Construction 10%
– ANZ contracts (institutions + industry)
– SKA construction + operations staff build up
• 2020 - 2024 Phase 2 Construction 100%
– SKA 1 operations + SKA 2 construction
53. ICRAR
Spectral Line Datacube
• Aperture Arrays
– Assume 40,000 channels
– 28,000 x 28,000 x 40,000 x 4 bytes
– ≈ 125 TB
• Stokes parameters and Weighting Map
– Multiply by 5
– ≈ 625 TB
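The cube size above is easy to sanity-check. A quick shell calculation using the slide's numbers (28,000 x 28,000 pixels, 40,000 channels, 4 bytes per value):

```shell
# Spectral-line datacube size from the slide's numbers
cube_bytes=$((28000 * 28000 * 40000 * 4))
full_bytes=$((cube_bytes * 5))   # x5 for Stokes parameters + weighting map
echo "cube:  $((cube_bytes / 1000000000000)) TB"   # ~125 TB
echo "total: $((full_bytes / 1000000000000)) TB"   # ~627 TB, i.e. the ~625 TB quoted
```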
54. ICRAR
Surveys
• ∼1000 cubes to survey the whole sky, taken throughout the year.
• Ultra-deep surveys look at the same point again, and again, and again.
• The images are then stacked to produce a single cube.
55. ICRAR
Data sizes after a year
Science Case Raw data Line Cube Continuum Polarization
Neutral IGM in EOR 86EB 22TB 1.2TB
Galaxy evolution over cosmic time 11EB 7.7PB
Galaxy evolution in the nearby universe 8EB 1.3PB
Wide area HI emission 35EB 78PB
Deep extragalactic HI emission stacking 86EB 290TB
Wide field continuum observations 425EB 23.9PB
Wide field polarization observations 425EB 300PB
57. ICRAR
theSkyNet Pan-STARRS1 Optical
Galaxy Survey (POGS)
• Pixel-by-pixel spectral energy distribution fitting
– UV, Optical, IR, and Radio
– Local stellar mass surface density
– Star formation history
– Age
– Extinction
– Dust attenuation
• Start with ~100 million pixel SEDs
– Each pixel SED takes between 5 and 10 minutes
– It would take between 950 and 1,900 years on a single core
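The single-core estimate can be checked the same way, assuming 100 million pixel SEDs at 5-10 minutes each:

```shell
# Back-of-envelope check of the POGS numbers on a single core
pixels=100000000
min_years=$((pixels * 5 / 60 / 24 / 365))    # at 5 min per pixel SED
max_years=$((pixels * 10 / 60 / 24 / 365))   # at 10 min per pixel SED
echo "$min_years to $max_years years on a single core"
```

This lands on roughly 951 to 1902 years, consistent with the "between 950 and 1,900 years" on the slide.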
59. ICRAR
Fabric and Boto
from fabric.api import sudo, cd

# Set up NAGIOS
sudo('chkconfig nrpe on')
sudo('service nrpe start')

# Set up HDF5
with cd('/usr/local/src'):
    sudo('wget http://www.hdfgroup.org/ftp/lib-external/szip/2.1/src/szip-2.1.tar.gz')
    sudo('tar -xvzf szip-2.1.tar.gz')
    sudo('wget http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.10.tar.gz')
    sudo('tar -xvzf hdf5-1.8.10.tar.gz')
    sudo('rm *.gz')
60. ICRAR
Puppet
Does all the yum updates you need and creates directories and users.

package { 'httpd':
  ensure => installed,
}
package { 'httpd-devel':
  ensure => installed,
}
user { 'apache':
  ensure => present,
  groups => ['ec2-user'],
}
service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],
}
61. ICRAR
Scaling BOINC - AWS to the rescue
• You MUST think about scalability from the beginning
• Zooniverse - crashed due to load in the first 4 hours
• theSkyNet - crashed due to load in the first 6 hours
• theSkyNet POGS - crashed due to load from a BOINC challenge (after 6 months); was up and running again in 3 hours
62. ICRAR
Any Questions?
• I'm "hard of hearing" - I wear hearing aids, so please speak clearly; and ladies - I'm sorry, I don't hear higher frequencies very well at all.
• Contact me at kevin.vinsen@icrar.org
• http://www.theskynet.org
• http://23.23.126.96/pogs
63. Your First Week on Amazon Elastic Compute Cloud
A hands on approach to understanding Amazon EC2
James Bromberger
E: jameseb@amazon.com
T: @JamesBromberger
Editor's Notes
Your First Week With EC2 – Don Southard / Nate Wiger. Amazon Elastic Compute Cloud (Amazon EC2) provides resizable compute capacity in the cloud and is often the starting point for your first week using AWS. Understanding where to start with EC2 can be a challenge: the cloud reinforces old concepts of building web-scale architectures, but it also introduces new concepts that entirely change the way applications are built and deployed. This session will introduce these new concepts, along with the fundamentals of EC2, by employing an agile approach that is made possible by the cloud. Attendees will experience the reality of what a first week on EC2 looks like from the perspective of someone deploying an actual application on EC2. You will follow them as they progress from deploying their entire application on an EC2 AMI on day 1 to more advanced features and patterns available in EC2 by day 5. Throughout the process we'll identify cloud best practices that can be applied to your first week on EC2 and beyond.
The way I plan on presenting this is to attempt to articulate the questions customers typically have when starting with EC2/AWS. From our perspective (solutions architect) it's a tough thing to handle, in that it's a classic chicken-and-egg problem. In other words, should this person first understand all the individual components of EC2, or do they first need to understand the big-picture architecture patterns of the cloud? In my experience the best way to get started is to, in the infamous words of Nike, "Just Do It".
As noted in the previous slide, the approach we'll take in the presentation is to "Just Do It". We'll leverage what we already know about web-scale architectures. Additionally, we need to understand the basics of EC2 (starting on Day 1) to get started. Moving from Day 1 -> 5 we'll take an iterative approach to our first week on EC2; that is to say, we'll refactor and evolve our EC2 environment based on incrementally implementing additional features available to us. As we do that we'll naturally begin to implement cloud best practices that allow for proactive scaling, infrastructure automation, elasticity, designing for failure and decoupling our components.
This is day one, so we'll start basic. May want to point to a basic checklist of initial decisions we're going to make. For example: What's an AMI? Should I use an EBS- or instance-backed AMI? PIOPS? What's a Region? Ok, now that I know what a Region is, what Region am I going to use? What's an AZ? Ok, now that I know what an AZ is, which AZ should I choose and why? Is it possible to have an AMI that includes all the foundational packages and applications I need already installed (e.g. a LAMP/R stack)? If not, how do I get them installed? How do I access the AMI I've launched (instance)? How do I get my application on my instance? Optional topics: should we talk about security groups or glance over that till later? Yes, because default deny. Key point: our application is fully functioning on Day 1!
This is the first week, so we're going to follow the KISS principle while at the same time making informed decisions. This is a one-hour presentation, so we can't dissect every aspect of what we're doing; that said, a few key points for this slide are as follows: start with the Launch Instance Quick Start Wizard. Choose the Amazon Linux AMI; note this is an EBS-backed instance, which means, among other things, we can take snapshots of the instance. Finally, notice the region we've selected.
There are several families of instances in EC2 that align with the needs of your application. Within the instance families are different instance types to choose from. For this first day we'll select T1 Micro. After all, it's free!
Skipping this slide as it’s not super interesting and we need to move reasonably quickly.
Skipping this slide as it’s not super interesting and we need to move reasonably quickly.
Tagging is a form of metadata on an instance. There are numerous ways of using tags on instances, but initially we'll use tagging to organize our instances by the tier they reside in.
Key pairs allow for securely connecting to our instance after it launches. NOTE: you can only generate this key pair once.
Security groups act as a firewall at the instance level. They can be configured to control inbound traffic based on protocol, port, and source. In our case we'll start out by allowing inbound traffic from anywhere for both HTTP and SSH. Iterations later in our first week may warrant modifying these rules.
Skipping this slide as it’s not super interesting and we need to move reasonably quickly.
Now our instance is launched! Lots we could point out here, but a few key items to note: Instance - a unique ID for our instance that we'd leverage in areas such as running commands via the CLI. State - for EBS-backed instances the state can be Pending -> Running -> Stopped or Terminated. Public DNS - the external DNS name; unless we use a DNS service such as Route 53, this is the DNS name we'll use to connect to our instance.
Once launched, it works like any other Linux system: install packages, software, etc. Might mention we'll have to sftp our application artifacts over to the instance.
Lessons learned
Note this is essentially the same diagram from Day 1, but it's maturing as we leverage more of the features available with EC2. First we take a snapshot of our AMI/instance to ensure we can recover from failures or mistakes on our part. Next we split off the database from the web tier by leveraging RDS MySQL. Then we set security groups to ensure end users can only access our web tier via HTTP (80), and access to the web tier via SSH is limited by IP. Finally, we split off static content to S3 (debating whether this fits here???).
We've already loaded packages, application artifacts and configuration onto our instance. Before we make any changes we want to take a snapshot of our instance in case we have to roll back. We can do this by right-clicking on the instance and selecting "Create Image (EBS AMI)". Our snapshot is stored in S3. If we need to revert back to this snapshot we select it in AMIs and launch.
Tada! This is great for firing up additional instances for testing or to try out different instance sizes. We'll come back to this later on.
Two key fields in the Instance Details page include DB Instance Class and Multi-AZ Deployment.
Database backups are a key responsibility of operational DBAs. With RDS this operational burden is handled for us. We simply enable automated backups and choose our retention period for the backups. Enable Automated Backups - selecting "Yes" will enable automated backups. Backup Retention Period - the number of days automated backups are retained.
Not the actual screen capture, but it gives an idea of what we might do for some of these concepts. BTW - did this with QuickTime, so unless we really want to get fancy there's no need to buy any software. The question is how much specificity we really want to provide - it depends on the audience. If we chose to take this step we could show quick snippets of: creating an account; a quick overview of the Management Console; then transition to showing how to download and set up the CLI and run basic commands.
Go into RDS and grant access to the "web" security group. This is what we set up when launching our EC2 instance. Practically speaking, this ensures the RDS instance will only allow connections from instances in the "web" EC2 security group.
Can now verify connectivity from our EC2 instance by using the mysql command line
For Day 3 we realize we'd like to monitor our environment so we can be proactive in addressing any issues that may come up. Once we've set up the appropriate monitoring we decide we're ready to make the site available for a preview to a limited number of users. AWS monitoring option: what is CloudWatch? CloudWatch is a web service that enables you to monitor, manage, and publish various metrics, as well as configure alarm actions based on data from metrics. First we dig deeper into AWS monitoring options. Initial research indicates AWS provides basic monitoring of EC2 and RDS automatically: for EC2 we're given 10 pre-selected metrics, and for RDS there are 13. Of these basic metrics we decide the following are most applicable to monitoring our environment's health: instance and system status (EC2); CPU utilization (EC2 & RDS); FreeStorageSpace (RDS) - the amount of storage space available; ReadLatency (RDS) - the average amount of time taken per disk I/O operation; ReadIOPS (RDS) - the average number of disk I/O operations per second. Additionally, it's important to understand request latency, as our application/game experience will be highly dependent on request latency above 100 milliseconds. This isn't part of the basic metrics, so we decide to publish a custom metric: RequestLatency (EC2) - the average amount of time taken per request. For each of these metrics we set an alarm based on the metric hitting a threshold: CPU utilization (EC2 & RDS) - average CPU util > 80; FreeStorageSpace (RDS) - freeable space >= 1024 MB; ReadLatency (RDS) - average ReadLatency >= 0.01 second; ReadIOPS (RDS) - average ReadIOPS >= 100/second; RequestLatency (EC2) - average RequestLatency >= 0.05 second.
The first thing of note is the statistical sampling period setting. A shorter period makes for a more sensitive alarm, whereas a longer period smooths out brief spikes. Database write latency is an accurate indicator of our application's health, so we'll select it as our metric to monitor.
Write latency that exceeds 500ms typically requires some intervention on our part.
If the alarm is triggered we want to be notified via email. We’ll do this by creating a topic and subscribing to that topic via email. CloudWatch will manage all of this for us by leveraging other AWS services and the end result is we are notified via email when the alarm is triggered.
For Day 4 we're beginning to think about high availability, elasticity and scale. The first thing we notice is that the web tier is a single point of failure, as it consists of a single EC2 instance. There are a couple of options for addressing this in AWS: we could put an EIP on the instance, which would allow us to attach the EIP to another running instance if the instance fails unexpectedly (this requires some scripting and/or manual intervention); or we could implement an ELB with auto scaling, which only requires we launch and configure the ELB and auto scaling, versus scripting plus manual intervention. ELB with auto scaling: on day three we began to use CloudWatch to monitor our EC2 instance and send alarms when certain thresholds were exceeded. This was a good start, but if those thresholds were exceeded we still had to manually intervene. Now we'll leverage some of those same metrics to horizontally scale our EC2 fleet... automatically. Multi-AZ deployment (scaling across AZs): we are not only concerned with our EC2 instance becoming unavailable; we're also taking a more macro view and considering what happens if an entire AZ is unreachable, which would render any HA patterns within that AZ useless. In AWS you have the option of launching instances in multiple AZs within a Region. The AZs themselves are separate facilities engineered to be tolerant of faults in other AZs while being connected via low-latency, high-speed network connections. Given that, we decide to spread our instances across at least 2 AZs. Auto Scaling overview: Auto Scaling is designed to make using EC2 easier by automatically adjusting the size of your fleet of EC2 instances. Additionally, it will monitor the health of your EC2 instances (and AZs) and automatically terminate and re-launch instances.
Core to auto scaling are: the auto scaling group - defines min and max instances; launch configurations - define the characteristics of launched EC2 instances; triggers - rules for adding or subtracting servers; scaling options - manual, schedule, policy. Elastic Load Balancing: by leveraging auto scaling we now have some level of elasticity, scale and HA through horizontal scaling across AZs, but we don't have a single point of contact that can distribute incoming traffic to our fleet of instances. We could launch another EC2 instance, install a load balancer product on it, and configure it to distribute load to the fleet of EC2 instances that make up our web tier. An alternative is to leverage AWS Elastic Load Balancing. Like other load balancing products, ELB provides: a single point of contact for distributing load across a fleet of servers; encryption/decryption capabilities; monitoring of EC2 instance health, sending traffic only to healthy instances; sticky sessions; and the ability to associate your ELB with your domain name. Implementing the ELB: configure listeners and configure the health check. Implementing auto scaling in our environment will be done by defining scale-up and scale-down policies.
Steps include: setting up the CLI; creating a launch configuration (among other things, this is where you associate the ELB with the ASG); creating the auto scaling group; creating scale-up and scale-down auto scaling policies (this is where we define adjustments up or down for the auto scaling group, e.g. --adjustment=1 or --adjustment=-1); and creating the scale-up and scale-down alarms and associating them with the policies. Essentially we're writing the rule for the alarm: "if average CPU across servers in the AS group > 75% for 5 mins, scale the fleet up by 10%". Additionally, we may want to emphasize we can now replace our custom CloudWatch metric (RequestLatency) with the ELB's request latency or request count metrics. Multi-AZ RDS: now that we've addressed some of our HA issues in the web tier, we look at the database tier. As it stands we have a single master running RDS for MySQL deployed in a single AZ. If RDS in that AZ is unreachable, our application may not function. To address this we simply take advantage of the RDS for MySQL Multi-AZ deployment option: RDS Multi-AZ deployments will monitor our database and activate the standby RDS instance if the master is unreachable. Implementation steps: go to the AWS Management Console; select the RDS DB instance and go to Instance Actions -> Modify DB Instance; select Multi-AZ Deployment = Yes; Continue -> Modify to save the configuration.
DNS Name – we’ll make it pretty later
Let's try out the command line: create the launch config, then the launch group.
For Day 5 we're thinking about managing our environment. Before doing that we need to register our domain and go live with our application/game. Route 53: in order to go live we'll need to associate a previously registered domain name (example.com). Route 53 is a highly available and scalable Domain Name System (DNS) web service. In our case Route 53 allows us to associate our zone apex (example.com) with our ELB instance. Steps to configure Route 53: create a hosted zone and resource record sets; update your domain registrar to use the Route 53 name servers; create alias resource record sets for the ELB. Alias RRSs are a Route 53-specific extension to DNS functionality: instead of an IP address or domain name, an alias record set contains a pointer to our ELB. The advantage is that Route 53 automatically recognizes changes in the resource record sets the alias refers to. In our case, suppose the alias resource record set for example.com points to our Elastic Load Balancing load balancer at lb1-1234.us-east-1.elb.amazonaws.com. If the IP address of the load balancer changes, Route 53 will automatically reflect those changes in DNS answers for example.com without any changes to the hosted zone that contains resource record sets for example.com. Identity and Access Management: now that our environment is ready for release we want to share the management responsibility of the environment without sharing credentials. AWS provides IAM to control access to AWS and your account resources (e.g. EC2). With IAM you can create multiple IAM users under the umbrella of your AWS account. For our first week on EC2 we'll start by using IAM to create an Administrators group, create administrative users, and assign them to the Admins group.
Steps to configure IAM: create the Admins group; create one or more admin users with their own credentials; add the admin user(s) to the Admins group. Infrastructure deployment automation - CloudFormation: CloudFormation enables you to create and delete related AWS resources together as a unit called a stack. A stack is a collection of resources; in our case the resources include the AMIs we're using for the environment, the auto scaling group, and the RDS instance. We built our "stack" via the administration console and CLI, but now we want to repurpose our efforts for others to leverage. They could follow the steps we've taken over the first week, but it would be ideal if they could launch their own environment based on a template of ours. CFN has the concept of a template: a JSON file that describes the AWS infrastructure that makes up our environment (example.com). You can create a template by: creating one from scratch; leveraging an existing template and making changes to fit your needs; or using CloudFormer. CloudFormer is a tool that enables you to create AWS CFN templates from existing resources. In our case this is exactly what we need, so we'll go with that option. Steps to create a template via CloudFormer: launch CloudFormer; in the wizard select Use a Sample Template -> CloudFormer - create a template from your existing resources; select the existing resources that make up our environment.
Evolution from Day 1 -> 5: just recap where we started and where we ended up. Might emphasize some key EC2 features implemented in our first week - auto scaling, monitoring, RDS, Regions and AZs, identity and access management, and automation. Cloud architecture best practices: overlaying the cloud best practices we implemented as we iteratively built out our application environment in EC2. Moving beyond week 1: a lot could go here, but we may want to limit it to some obvious items for week 2 and beyond.