Getting Started with
AWS
Jungwon Seo

University of Stavanger
Jungwon Seo
In Norway

From South Korea

Master in CS

Bachelor in CS

3 start-ups

Many random works
Do you like to learn about
new technology?
My first reaction : Rejection
My second reaction : Worry
The most important thing is
that companies want us to know these:
Have you ever heard
about AWS?
When we run a service..
Server
Database Files
Application
When Slack runs a service..
What makes us use it
differently?
# of users = # of data + # of requests + # of operations …
Typical Questions
How can we scale up our service?

Buy a better server?

But until when?
Back to CS101
File Server
Files
We need to separate all the
components as much as possible
Application Server
Database Files
Application
DB Server
Database
Let’s start using AWS
This is the website that we will test
Database Files
Application
Target Architecture #0
EC2 : working as a web server, database and file storage
This is the first architecture that we will build.
I will assume that we already have an AWS account.
1. Select the region closest to your country
2. Find EC2
This is the main page of EC2
1. Click
1. Click
Choose the OS that you want to use.
I will use Ubuntu.
t2.micro is free but only for one instance.
If you run two or more, you will have to pay.
1. Choose
2. Click
We are skipping the detailed settings.
1. Click
For the first time, we need to make a key pair.
Keep it safe!
1. Select
2. Type the name
3. Download
4. Click
This is the result page.
1. Click to check
Now we can see the instance we have launched.
Let’s just think that one instance equals one server.
In AWS, we administer ports from the AWS console
- not from inside the server.
1. Click
2. Select (you can also check it from the instance’s details page)
3. Click
4. So far this is the only port that we can access remotely.
5. Click
Let’s add port 80 for HTTP.
1. Click
2. Select HTTP
3. Click
Now we have two open ports (22 and 80)!
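The same rule can also be added from the command line. A rough sketch using the AWS CLI (the security-group ID below is a placeholder; substitute your own):

```shell
# Open port 80 (HTTP) to the world on an existing security group.
# sg-0123456789abcdef0 is a placeholder group ID.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0
```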
Go back to the instance page.
1. Click
3. Check the public IP
2. Select
Now we will connect to the server using the “.pem” key and “ssh”.
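The connection step usually looks like this (the key file name and IP address are placeholders; Ubuntu AMIs log in as the `ubuntu` user):

```shell
# ssh refuses keys that are readable by others, so restrict permissions first.
chmod 400 mykey.pem
# Replace the IP with your instance's public IP.
ssh -i mykey.pem ubuntu@203.0.113.10
```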
Ok, this is the first screen that you will see.
I have already made several test codes in my git.
So, let’s use them.
git clone https://github.com/MuchasEstrellas/AWS.git
Move to ‘architecture0’ and run ‘start.sh’
run ‘sh start.sh’
It will install and update many things.
This screen should appear, if it worked correctly.
Now if you type the public IP,
you can check your website publicly.
Target Architecture #1
EC2 : working as a web server and file storage

RDS : working as a database
Files
Application
This time,
we need to use RDS as the database of our service.
1. Search and Click
Get Started Now!
1. Click
Choose MySQL and click next.
1. Click
2. Click
3. Click
Scroll down.
1. Go down
Enter the information.
I recommend that you use the same name and password.
uisaws123
1. Type in the following.
2. Click
Scroll down.
1. Go down
Make a database.
Again, I recommend that you use the same database name.
1. Enter the db name.
Let’s launch!
1. Click
Now, you can see your database.
1. Click
This is the RDS instance that we made.
If you scroll down,
you can see more detailed information.
A couple of minutes later, you can see the endpoint of this database.
This is the address you will use to access it.
Check whether the security group is open to everywhere.
(The rule 0.0.0.0/0 means everywhere.)
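If the security group is open, you can verify connectivity from the EC2 instance with the MySQL client (the endpoint is a placeholder; the user name is the example value from these slides):

```shell
# Connect to the RDS endpoint; you will be prompted for the password.
mysql -h mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com -u uisaws123 -p
```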
Move to ‘architecture1' directory.
Run ‘start.sh’ with the RDS endpoint.
Then ‘start.sh’ will automatically change the DB_HOST
in views.py.
views.py
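What start.sh does to views.py amounts to a simple text substitution. A minimal Python sketch of the idea, assuming DB_HOST is assigned as a single-quoted string:

```python
import re

def set_db_host(source: str, endpoint: str) -> str:
    """Replace the DB_HOST assignment in a views.py-style source string."""
    return re.sub(r"DB_HOST\s*=\s*'[^']*'", f"DB_HOST = '{endpoint}'", source)

views = "DB_HOST = 'localhost'\nDB_NAME = 'uisaws123'"
print(set_db_host(views, "mydb.example.rds.amazonaws.com"))
```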
You can also get an overview of your database in the RDS page.
No difference from the client side.
But this website is receiving the data from RDS.
Target Architecture #2
EC2 : working as a web server

RDS : working as a database

S3 : working as a file server
This time, we will separate the file server from EC2.
1. Search and Click
S3 uses a term called ‘bucket’.
Let’s just think of a bucket as a kind of folder in the cloud.
1. Click
The bucket name must be globally unique,
because S3 will build a URL from your bucket name.
1. Type in your bucket name.
2. Click
Just skip this at this time.
1. Click
Just skip this too at this time.
1. Click
Okay, let’s create.
1. Click
Okay, here is our new bucket.
However, there is an important thing that we have to know.
Programmatic access to S3 must be authorized.
Otherwise, anyone could upload and delete your files.
Let’s use IAM to access S3 programmatically.
1. Search and Click
There are many things you can tighten for security,
but let’s skip them for now.
1. Click
Let’s ignore these for the moment, but they matter when you use AWS for a real service.
1. Click
Let’s make a user that will have permission for S3.
1. Enter the user name
2. Click
3. Click
Remember to click “Programmatic access”
1. Click
You need to make a group first.
Users should be in a group.
1. Click
2. Type S3
3. Select S3FullAccess
4. Click
This group will contain the permission for S3.
Users who are in that group will have the same permission.
Select the group that we just made.
1. Click
Okay, create!
1. Click
It is really important to keep your key safe.
Mine is exposed now, but I have deleted it. Don’t try using mine!
1. Download
This is the ID. This is the password.
Now move to the ‘architecture2’ directory
and run the following command:

sh start.sh <RDS> <ACCESSKEY> <SECRET> <S3_REGION>
RDS endpoint
Access Key Secret Key Region
Then it will edit ‘views.py’ automatically.
But your website will not work properly.
We need to move all the static and media files (CSS, JS, images) to S3.
<collectstatic.py>
This file will help.
(like this)
Let’s run!
You have to activate the virtual environment.
Don’t forget to deactivate.
Now we can find the static folder in S3.
If you click it,
you can find an exact copy of the directory that we had in EC2.
Let’s check ‘style.css’
We can see the link.
It means now we can access this css file publicly.
Like this.
If you take a look at ‘views.py’, you can find a path for S3.
Before
After
One more thing! When a user uploads a picture,
it should be stored in S3.
Take a look at this code.
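The upload code itself appears only as a screenshot in the slides. As a rough sketch of the idea using boto3 (the helper names, the ‘media/’ key prefix, and the URL form are my assumptions; the real call also needs the IAM access keys configured):

```python
def media_key(filename: str) -> str:
    """Object key for an uploaded picture; the 'media/' prefix is an assumption."""
    return f"media/{filename}"

def public_url(bucket: str, key: str) -> str:
    """Public URL of an object in a bucket whose objects are public-read."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

def upload_picture(fileobj, bucket: str, filename: str) -> str:
    """Upload a user's picture to S3 and return its URL (needs IAM credentials)."""
    import boto3  # AWS SDK for Python; imported here so the helpers above work without it
    key = media_key(filename)
    boto3.client("s3").upload_fileobj(
        fileobj, bucket, key, ExtraArgs={"ACL": "public-read"})
    return public_url(bucket, key)
```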
Let’s test.
Okay, now you can find the picture that you just uploaded in S3 as well.
Great!
All the uploaded images are also from S3.
Target Architecture #3
EC2 : working as a web server

RDS : working as a database

S3 : working as a file server

Route53 : DNS
You have probably bought a domain at some point.
But how can we connect it to our service?
We need to change the domain’s name servers
to take control of the domain in AWS.
Let’s use ‘Route 53’ to deal with domains.
Get started now!
You can enter your domain.
In my case, uisprogramming.com.
Please use yours!
Okay, this is the default record sets.
Remember the NS values.
On the website where you bought the domain,
you can change the name servers to the ones
from the previous slide.
The GUI differs from site to site.
Just remember that you have to change the name servers!
Here are four name servers that I copied from AWS route53.
Okay, now it has changed, but it takes a while to be fully applied.
Just go get a drink and continue it tomorrow.
Okay, good morning!
We will test a simple case, which is connecting
'www.uisprogramming.com' to the public IP of your server.
It also takes a while, but not that long.
Go get some coffee.
Run ‘start.sh’ in ‘architecture3_4’ with adding your domain.
sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION>
<DOMAIN>
Now we can access our website with a domain.
Target Architecture #4
EC2 : working as a web server

RDS : working as a database

S3 : working as a file server

Route53 : DNS

ELB : Load Balancer
Go to the EC2 page and click Load Balancers.
1. Click
Let’s create!
1. Click
Choose Application Load Balancer.
Enter the name, and just set the HTTP port and choose all zones.
If you see this page, just move to next page.
A load balancer is also a kind of computer,
so we need to manage its ports with a security group.
1. Select
2. Edit if you need to
3. This time, open only port 80 (HTTP).
4. Click
This is the configuration between ELB and EC2.
2. Click
1. Enter the target name
Okay, so we will register an instance to this target group.
This group is very important for the future auto scaling.
1. Select
2. Click
3. Click
Good.
1. Click
Okay, so let’s check our Load Balancer.
This is the Load Balancer dashboard.
This is the target group that we made.
If you click the target group, you can see the targets (instances)
1. Select
2. Click
3. Here you can check whether your instance is working properly or not.
Now, we will point our Route53 record at ELB
(not at the public IP of EC2).
Move Move
One more!
Okay, stop.
We will replace the IP with the ELB address.
1. Select
2. Select
3. Choose
4. Click
The reason we connect Route53 to ELB is
that we can then replace our server without any downtime.
When you want to change your server, you just swap the
instance in the target group.
You no longer need to point Route53 at a new server IP.
One more thing, we can also add more instances to this ELB.
Let’s create one more instance like we did in architecture#0.
However, if you use two t2.micro instances, AWS will charge you.
If you don’t want to pay, then just read.
Open one more terminal,
and do the same job that we did in architecture#0.
After that, we need to add this instance to our target group.
1. Click
2. Choose
3. Click
4. Click
1. Select
2. Click
3. Click
Add the new instance to our registered targets.
Now, we can see two instances.
Let’s see if both work fine.
We will monitor it with the Nginx access log.
As you can see, there are many “ELB-HealthChecker” logs.
This is how ELB separates the healthy instances
from the unhealthy ones.
If you refresh your website, only one instance will get the request.
If you refresh it again, the other instance will get the request.
Elastic Load Balancer picks instances one by one
so that the traffic is distributed.
Target Architecture #5
EC2 : working as a web server

RDS : working as a database

S3 : working as a file server

Route53 : DNS

ELB : Load Balancer

AWS Certificate Manager : TLS/SSL certificate
+
Before we get started, we need to modify the security group of ELB.
1. Click
2. Select
3. Click
2. Click
1. Click
Let’s add an HTTPS port.
1. Click
2. Select 3. Check the values.
4. Click
Now we are opening two ports (80 and 443) to the world!
We also need to add an HTTPS listener to ELB.
1. Click
2. Select
3. Click
4. Click
Oops, we don’t have a TLS certificate.
Let’s make that first.
!!!!!!
Fortunately, AWS offers an easy way to create a certificate.
1. Go!
Personally, I think this is the best service of AWS.
1. Click
1. Enter the domain name.
I recommend that you use ‘*’ (a wildcard).
2. Click
Use your domain!!
Since we have already moved our domain to Route53,
DNS validation is easier than email validation.
1. Click
2. Click
1. Click
Okay, next
1. Click
Create a record in Route 53 to validate that you are the owner.
1. Click
It will automatically add the record to your Route53.
1. Click
Good, click continue!
It takes some time.
Okay, finally we got a certificate.
Back to the ELB dashboard,
Now, we can select the certificate.
1. Select
2. Click
Now we have two listeners for ELB.
If you type your domain with https, you can see that it works.
EC2 : working as a web server

RDS : working as a database

S3 : working as a file server

Route53 : DNS

ELB : Load Balancer

AWS Certificate Manager : TLS/SSL certificate 

Elastic Cache(Redis) : Session Storage
Target Architecture #6
+
Session storage
- Cookie : Not secure.

- File storage : Disk I/O, and cannot be shared with other instances.

- Database : Good, but adds disk I/O and an extra burden on the database.

- In-memory DB : Highly recommended!
Session data : small and frequently requested
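For a Django app like the one in these slides, pointing sessions at Redis is a small settings change. This fragment assumes the third-party django-redis package and uses a placeholder ElastiCache endpoint:

```python
# settings.py (fragment) -- store sessions in the cache, backed by Redis.
# The endpoint is a placeholder; django-redis is an assumed third-party package.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://my-redis.xxxxxx.cache.amazonaws.com:6379/0",
    }
}
```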
It’s better to make a security group for Redis first.
2. Click
1. Click
1. Fill in
2. Fill in like this.
3. Click
Redis uses port 6379.
Redis can be found in the service called ElastiCache.
1. Choose
We can also use Memcached but I want to use Redis.
1. Click
1. Click
Click Redis
2. Click
It’s important to choose t2.micro, otherwise you have to pay.
1. Select
2. Name it
3. Choose t2.micro(free)
4. Choose None
5. Go down
If this is your first time, you need to make a subnet group like below.
1. Select
2. Name it
3. Choose all
Select the security group that we just made.
1. Select
2. Click
Okay, it is launching
Done!
1. Click
We can check the endpoint so that we can connect from our service.
As seen in this code, we are using Redis as the session storage.
You can find a way to replace the session storage in whatever
server-side language you are using.
sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION>
<DOMAIN> <REDIS_ENDPOINT>
Everything up to here is just the tip of the iceberg.
The real beauty of AWS is auto scaling.
Let’s talk about it.
Example
Let’s say we have launched a mobile game.
Unfortunately, it became too popular.
So we have to add more servers to deal with more users.
The maximum number of concurrent users per hour is 40000.
The minimum number of concurrent users per hour is 1000.
One server can deal with 500 people.
What is the optimal number of servers?
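Working the numbers: at 500 users per server, the peak needs 80 servers while the quiet hours need only 2 — a 40x spread, which is exactly the gap the solutions below address.

```python
import math

per_server = 500
peak_users, low_users = 40_000, 1_000

peak_servers = math.ceil(peak_users / per_server)   # 80
low_servers = math.ceil(low_users / per_server)     # 2
print(peak_servers, low_servers)
```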
Solution #1 : Maximum number of servers
-> Too expensive, waste of servers for the non-busy time.
Solution #2 : Average number of servers
-> Sounds reasonable, but only government websites can do this,
because users will complain a lot.
Solution #3 : Use auto scaling!
->Yay!
But how does it work?
Shit! I can’t do this anymore.
I need some help!
Okay the users are
coming.
I’m sending but
can you handle
them?
Okay the users are
coming.
20 to Jungwon, 20 to Hans,
20 to Simon,
20 to Luca… God dammit! He is
not healthy yet.
Hans : Healthy Simon : Healthy Jungwon : Healthy Luca : Unhealthy
Wait! I’m
coming!!
Phew, thank you
guys!
Okay, they are
starting to go to bed.
Okay, Hans, you can go,
Simon can go.
Luca can go.
Ok, now I can
handle it alone.
Hans : Deleting Simon : Deleting Jungwon : Healthy Luca : Deleting
See you!
Snakkes! Ha det~! (Norwegian: “See you! Bye!”)
Important!
From this slide on, AWS will charge you.
If you do not want to pay then just read.
EC2 : working as a web server

RDS : working as a database

S3 : working as a file server

Route53 : DNS

ELB : Load Balancer

AWS Certificate Manager : TLS/SSL certificate 

Elastic Cache(Redis) : Session Storage

Auto Scaling Group : auto scaling !
Target Architecture #7
+
Auto scaling group
Move to the EC2 dashboard.
Move to Auto Scaling Groups
1. Click
Let’s make an auto scaling group.
1. Click
As you can see, we have to make a launch configuration first.
1. Click
That is, we need to define the initial settings for
the instance that will be launched.
1. Click
Choose t2.micro.
1. Select
2. Click
This is the most important part, especially ‘User data’.
Let’s talk more about this.
There are two ways to set up an auto scaling group.
#1. Copy an image (AMI) of your original instance and let it be used
for the auto scaling group.
AMI
Pros:
- Short launch time (until becoming healthy!)
- No need to install ‘things’
(they are already there)

Cons:
- Difficult to apply new updates.
(Doing so requires making another new image.)
Auto Scaling Group
#2. Use a plain instance image and let it be used
for the auto scaling group.
Pure
Instance
Image
Pros:
- Easy to apply new updates
(because the instance installs ‘things’ when it launches)

Cons:
- Slow launch time.
(Because of the installations!)
Auto Scaling Group
Both ways are fine, but I prefer option #2.
That’s why I made a ‘start.sh’ file.
So here is the scenario.
1. Launch the plain instance
2. Clone the service from git
3. Install software and run the service
Then how can we make a new instance automatically
act like the previous slide shows?
That’s why we need to set ‘user data’.
Let’s take a look at the user data.
Let’s take a look at the user data.
#!/bin/bash
cd /home/ubuntu
git clone https://github.com/MuchasEstrellas/AWS.git
cd /home/ubuntu/AWS/architecture6_7
sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION> <REDIS_ENDPOINT>

1. Move to the home directory.
2. Clone the service from git.
3. Move to the directory that will be run.
4. Run the service.
It’s important to know that the user running the commands above is ‘root’.
(Keep this in mind, in case you face a path or permission problem.)
Okay here we are again.
1. Enter the name
2. Set the initial commands that should be run
right after the instance has launched
3. Next!
1. Next!
Just skip this part.
Add port 80.
3. Next!
1. Click 2. Add
Okay, now we are ready.
1. Click
Just use the same key that we made earlier.
1. Click
Okay, now we have to make an actual group.
1. Enter the name
2. Set the number of instances you want to begin with.
3. Add all subnets.
4. Check
5. Add the target group that we made while launching ELB.
6. Both are fine for this test case, but it’s recommended to use ELB
as long as you are not using the classic load balancer.
7. Set it to 10sec (but you can choose how long you want).
8. Click
Let’s go for the first option this time. We will add scaling policies later.
1. Next
1. Check
We can set the notification, but I will skip.
1. Next
Skip.
1. Click
Let’s finish it.
1. Click
Okay, done!
One instance is launching now.
Before we test it, we need to take a brief look at the AWS console.
If you check the target group under the load balancer section,
you can see the new instance.
You can also check the new instance in the instances section.
So, how can we make this auto scaling work?
When should they be scaled out and scaled in?
There are several metrics that we can use.
For example, the average amount of network “in” and “out”,
or the average CPU utilization.
For this case we will use “average CPU utilization”.
This is very simple.
If the average CPU utilization is over 50% then add one more instance.
If the average CPU utilization is below 40% then remove one instance.
We can also set the range of the number of instances.
(e.g. Min : 1, Max : 5)
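The two rules plus the min/max range can be sketched as a tiny decision function (thresholds and bounds taken from the slides; real scaling also involves cooldowns and evaluation periods):

```python
def scaling_decision(avg_cpu: float, instances: int,
                     minimum: int = 1, maximum: int = 5) -> int:
    """Return the instance count after one evaluation of the two alarms."""
    if avg_cpu >= 50 and instances < maximum:
        return instances + 1          # scale out
    if avg_cpu <= 40 and instances > minimum:
        return instances - 1          # scale in
    return instances                  # 40-50% is the "do nothing" band

print(scaling_decision(80.0, 1))  # 2
print(scaling_decision(30.0, 3))  # 2
print(scaling_decision(45.0, 2))  # 2
```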
Let’s see how it works.
We will use a new service called “CloudWatch”.
This service is mainly used for monitoring your AWS services.
However, we can also set alarms or events based on our own criteria.
For auto scaling, we will use “Alarm”.
1. Click
2. Click
1. Search your auto scaling group
2. Choose CPUUtilization.
Let’s choose “CPU Utilization” of “auto scaling group” 

that we have made.
1. Click Next.
And click next.
Okay, let’s set the first alarm.
This is about when to add an instance.
1. Put any name.
2. Put any desc.
3. when it’s >= 50%
4. How many datapoints should be detected out of all the data points.
5. Literally, monitoring period and the way to calculate.
6. We will set the action later. (But you can set it now)
7. Create!
So, now we have one alarm that reacts based on CPU utilization.
- If CPU reaches 50%, the state will be ALARM.
- If it’s under 50%, the state will be OK.
- If there is not enough monitoring data, the state will be INSUFFICIENT_DATA.
Create one more for reducing the number of instances.
This time the condition is <= 40%.
Create!
Now we have two alarms that can be used for auto scaling.
So let’s set the Scaling Policy using the alarms that we have set.
Go to the autoscaling group dashboard.
2. Click!
3. Click!
1. Select!
We need to use our alarm, so click the box below.
1. Click!
Let’s get started with the case 1: adding the instance.
1. Name it whatever you want.
2. Choose the one that we have made.
3. Action is adding 1 instance.
4. This is for adding new instances efficiently.
(Practically, 10 seconds is too short, but for the test case I will go for 10 sec.)
5. Click!
Okay, it’s set.
Let’s add a removing policy in the same way as the adding policy.
1. Click!
1. Action is removing 1 instance.
The other steps are the same, except the action part.
2. Click!
Okay, now we have two policies.
But we need to set one more thing: the range of the number of instances!
(The minimum and maximum number of instances.)
1. Click
2. Click
1. Scroll Down
2. Set min 1
3. Set max 5
You can set any min and max number of instances
based on your budget!
4. This is also about the time
before the next scaling action.
5. Scroll up and click save button.
Let’s test
To test and monitor, we will install two tools:
“htop” and “stress”.
The left screen shows htop.
The right screen shows the command for the stress test.
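On Ubuntu both tools install from apt, and a stress run along these lines pegs the CPU for long enough for CloudWatch to notice (duration is an arbitrary choice):

```shell
sudo apt-get update
sudo apt-get install -y htop stress
# Peg one CPU core for 10 minutes; watch it in htop in another terminal.
stress --cpu 1 --timeout 600
```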
Okay, now the CPU utilization of this instance has reached 100%.
If you check CloudWatch, you can see the “high CPU” alarm is
on. It may not react quickly; be patient.
Now you can see two instances in your auto scaling group.
As you can see in the target group, there are two instances.
One is in the “initial” status.
It’s not healthy yet, which means the load balancer hasn’t received
a proper response from that instance yet.
Now it’s healthy; it probably finished installing things.
You can also check in the instance dashboard.
Now there are three instances, because scaling is based on the average CPU
utilization.
It may switch back and forth between 2 and 3,
because (100% + a% + b%) / 3 <= 40%
but (100% + a%) / 2 >= 50%.
That’s why the cooldown time is important!
However, let’s ignore it this time.
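A concrete instance of the flapping, assuming the stressed instance sits at 100% and the two new ones are nearly idle (the idle numbers are illustrative):

```python
loads = [100, 10, 10]             # stressed instance plus two mostly idle ones
avg_three = sum(loads) / 3        # 40.0 -> at the scale-in threshold, remove one
avg_two = sum(loads[:2]) / 2      # 55.0 -> above the scale-out threshold, add one again
print(avg_three, avg_two)
```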
Now the stress test is over.
Let’s check if the number of instances will be reduced.
Okay, the low CPU alarm has fired.
The desired number of instances is 2.
However, the current number of instances is 3.
Yeah, it’s shutting down now.
After several minutes, now there is only one instance.
What about the bottleneck?
Okay the users are
coming.
20 to Jungwon, 20 to Hans,
20 to Simon,
20 to Luca
Hans : Healthy Simon : Healthy Jungwon : Healthy Luca : Healthy
Phew, thank you
guys!
Ferdinand
Really?
Meanwhile…..
Okay the users are
coming.
20 to Jungwon, 20 to Hans,
20 to Simon,
20 to Luca
Hans : Healthy Simon : Healthy Jungwon : Healthy Luca : Healthy
Phew, thank you
guys!
Ferdinand
Really?
We also need to consider the database
because it is shared by all the instances.
“But we are sharing S3 and Redis too!”
S3 is fine.
Redis is also fine because it is NoSQL.
(That means it is easy to scale out.)
-From AWS
-From Wikipedia
As long as we are using an RDBMS
as the main database,
there are 3 options.
#1 : AWS Aurora
I think this is the best.
However, I'm kind of afraid to migrate the
entire database to the new database.
-From AWS
#2 : Use EC2 and build multi-master manually.
This is what Slack is doing.
If you can do this, this is a good solution.
However, it sounds very difficult to me.
#3 : Separate the Read and Write Database.
I think I can do this.
Let’s try to make a read-replica first.
Move to the RDS page.
Find the RDS instance that we made before.
1. Click!
Go to the instance.
1. Click!
Let’s make a replica.
1. Click!
Click the “Create read replica”
1. Click!
To avoid confusion, give it a different name and create it.
1. Click!
2. Go down and 

Create
Now we can see two db instances.
I will just write two endpoints in my Python code.
For read queries I will use the first URL.
For write queries I will use the second URL.
1. Read replica db!
1. Master db!
It is not the best way, but I just want you to understand the logic.
If you use another server-side framework, there is probably a convenient
way to separate the host addresses.
<For the read query>
<For the write query>
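The read/write split in the slides is done by hand per query. A naive routing helper captures the logic (the host names are placeholders, and a real app would typically use its framework’s database-router support instead):

```python
READ_HOST = "replica.xxxxxx.rds.amazonaws.com"    # placeholder endpoints
WRITE_HOST = "master.xxxxxx.rds.amazonaws.com"

def pick_host(sql: str) -> str:
    """Send SELECTs to the read replica and everything else to the master."""
    return READ_HOST if sql.lstrip().upper().startswith("SELECT") else WRITE_HOST

print(pick_host("SELECT * FROM posts"))    # -> read host
print(pick_host("INSERT INTO posts ..."))  # -> write host
```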
Still, it doesn’t sound that cool.
With a read replica,
every time something is written to the master database,
it has to be written to the read replica as well.
So for the databases themselves, it is not that different.
Write Read
Write
Master Slave
What’s the
difference for me?
There are several things that you can think about.
Write operations occur less often than read operations.
One solution to improve RDS performance is
buying a better instance for the read replica.
Read operations are often complicated.
(JOIN, UNION, ORDER BY, GROUP BY,
and the amount of data involved.)
OR!!!!
Make many read replicas.
Write
25%Read
Write
Master
Slave1
25%Read
Slave2
25%Read
Slave3
25%Read
Slave4
Unfortunately, we can’t use a load
balancer or auto scaling for read replicas.
However, we can use Route 53 to
distribute the requests.
EC2 : working as a web server

RDS1 : working as a master database

S3 : working as a file server

Route53 : DNS

ELB : Load Balancer

AWS Certificate Manager : TLS/SSL certificate 

Elastic Cache(Redis) : Session Storage

RDS2 : working as a slave database

+
Target Architecture #8
Let’s make one more read-replica.
Name this instance.
Okay, now we have three.
Move to the Route 53 page.
Move!
Move!
Let’s create one more Record set for the database.
I will name it “db.uisprogramming.com”.
The destination will be one read-replica’s endpoint.
1. Enter “db”.
2. Select CNAME
3. Select No.
4. Set it to 0.
5. Copy and paste one of the 

RDS read replica’s endpoint.
6. Choose Weighted.
7. Set it to 0.
8. Name it to read1.
9. Create!
Make one more record set with a different read replica.
More about Weighted Routing policy
Now we need to change the url for the read query.
I just sent a heavy query through the “new URL” to test;
as you can see, replica2’s CPU is at 22%.
Now, do you want to use AWS?
Thank you!

Amazon web services : Layman Introduction
 
Back-end (Flask_AWS)
Back-end (Flask_AWS)Back-end (Flask_AWS)
Back-end (Flask_AWS)
 
Why Scale Matters and How the Cloud is Really Different (at scale)
Why Scale Matters and How the Cloud is Really Different (at scale)Why Scale Matters and How the Cloud is Really Different (at scale)
Why Scale Matters and How the Cloud is Really Different (at scale)
 
AWS Interview Questions And Answers | AWS Solution Architect Interview Questi...
AWS Interview Questions And Answers | AWS Solution Architect Interview Questi...AWS Interview Questions And Answers | AWS Solution Architect Interview Questi...
AWS Interview Questions And Answers | AWS Solution Architect Interview Questi...
 
AWS CodeDeploy
AWS CodeDeploy AWS CodeDeploy
AWS CodeDeploy
 
Rails in the Cloud
Rails in the CloudRails in the Cloud
Rails in the Cloud
 
Scaling on AWS for the First 10 Million Users
Scaling on AWS for the First 10 Million UsersScaling on AWS for the First 10 Million Users
Scaling on AWS for the First 10 Million Users
 
How to copy multiple files from local to aws s3 bucket using aws cli
How to copy multiple files from local to aws s3 bucket using aws cliHow to copy multiple files from local to aws s3 bucket using aws cli
How to copy multiple files from local to aws s3 bucket using aws cli
 
Serverless and Kubernetes Workshop on IBM Cloud
Serverless and Kubernetes Workshop on IBM CloudServerless and Kubernetes Workshop on IBM Cloud
Serverless and Kubernetes Workshop on IBM Cloud
 
What is Node.js? (ICON UK)
What is Node.js? (ICON UK)What is Node.js? (ICON UK)
What is Node.js? (ICON UK)
 
Aws whitepaper-single-sign-on-integrating-aws-open-ldap-and-shibboleth
Aws whitepaper-single-sign-on-integrating-aws-open-ldap-and-shibbolethAws whitepaper-single-sign-on-integrating-aws-open-ldap-and-shibboleth
Aws whitepaper-single-sign-on-integrating-aws-open-ldap-and-shibboleth
 
ASP.NET Core and Docker
ASP.NET Core and DockerASP.NET Core and Docker
ASP.NET Core and Docker
 
Aws vs azure bakeoff
Aws vs azure bakeoffAws vs azure bakeoff
Aws vs azure bakeoff
 
Rstudio in aws 16 9
Rstudio in aws 16 9Rstudio in aws 16 9
Rstudio in aws 16 9
 

Recently uploaded

Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Scott Andery
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterMydbops
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfpanagenda
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
Manual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditManual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditSkynet Technologies
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfIngrid Airi González
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesThousandEyes
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality AssuranceInflectra
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfNeo4j
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 

Recently uploaded (20)

Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
Manual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditManual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance Audit
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdf
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 

Getting started with AWS

  • 1. Getting Started with AWS Jungwon Seo University of Stavanger
  • 2. Jungwon Seo In Norway From South Korea Master in CS Bachelor in CS 3 start-ups Many random works
  • 3. Do you like to learn about new technology?
  • 4. My first reaction : Rejection
  • 6. The most important thing is that companies want us to know these:
  • 7. Have you ever heard about AWS?
  • 8. When we run a service.. Server Database Files Application
  • 9. When Slack runs a service..
  • 10. What makes us use it differently?
  • 11. # of data + # of requests, + # of operations …. # of users =
  • 12. Typical Questions How can we scale up our service? Buy a better server? But until when?
  • 14. File Server Files We need to separate all the components as much as possible Application Server Database Files Application DB Server Database
  • 16. This is the website that we will test
  • 17. Database Files Application Target Architecture #0 EC2 : working as a web server, database and file storage This is the first architecture that we will build.
  • 18. I will assume that we already have an AWS account. 1. Select the region closest to your country 2. Find EC2
  • 19. This is the main page of EC2 1. Click
  • 20. 1. Click Choose the OS that you want to use. I will use Ubuntu.
  • 21. t2.micro is free but only for one instance. If you run two or more, you will have to pay. 1. Choose 2. Click
  • 22. We are skipping the detailed settings. 1. Click
  • 23. For the first time, we need to make a key pair. Keep it safe! 1. Select 2. Type the name 3. Download 4. Click
  • 24. This is the result page. 1. Click to check
  • 25. Now we can see the instance we have launched. Let’s just think that one instance equals one server.
  • 26. In AWS, we administer ports in the AWS console - not on the server. 1. Click 2. Select (you can also check it from the instance’s details page) 3. Click 4. So far this is the only port that we can access remotely. 5. Click
  • 27. Let’s add port 80 for HTTP. 1. Click 2. Select HTTP 3. Click
  • 28. Now we have two open ports (22 and 80)!
  • 29. Go back to the instance page. 1. Click 3. Check the public IP 2. Select
  • 30. Now we will connect to the server using “.pem” and “ssh”
  • 31. Ok, this is the first screen that you will see.
  • 32. I have already made several test codes in my git. So, let’s use them. git clone https://github.com/MuchasEstrellas/AWS.git
  • 33. Move to the ‘architecture0’ directory and run ‘start.sh’: sh start.sh
  • 34. It will install and update many things. This screen should appear, if it worked correctly.
  • 35. Now if you type the public IP, you can check your website publicly.
  • 36. Target Architecture #1 EC2 : working as a web server and file storage RDS : working as a database Files Application
  • 37. This time, we need to use RDS as the database of our service. 1. Search and Click
  • 39. Choose MySQL and click next. 1. Click 2. Click 3. Click
  • 41. Enter the information. I recommend that you use the same name and password. uisaws123 1. Type in the following. 2. Click
  • 43. Make a database. Again, I recommend that you use the same database name. 1. Enter the db name.
  • 45. Now, you can see your database. 1. Click
  • 46. This is the RDS instance that we made.
  • 47. If you scroll down, you can see more detailed information.
  • 48. A couple of minutes later, you can see the endpoint of this database; this is the address you will connect to. Check whether the security group is open to everywhere. (The rule 0.0.0.0/0 means anywhere.)
  • 49. Move to ‘architecture1' directory. Type ‘start.sh’ with RDS endpoint.
  • 50. Then the ‘start.sh’ file will automatically change the DB_HOST in views.py
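As a side note, the substitution that start.sh performs can be pictured with a small Python sketch. The placeholder value, function name, and endpoint below are assumptions for illustration, not the repo’s actual code:

```python
# Hypothetical sketch of what start.sh does: swap a DB_HOST
# placeholder in views.py for the real RDS endpoint.
def set_db_host(source: str, endpoint: str) -> str:
    """Replace the assumed placeholder host with the RDS endpoint."""
    return source.replace("DB_HOST = 'localhost'", f"DB_HOST = '{endpoint}'")

views = "DB_HOST = 'localhost'\nDB_NAME = 'uisaws123'\n"
patched = set_db_host(views, "mydb.abc123.eu-west-1.rds.amazonaws.com")
print(patched.splitlines()[0])
```

The real script in the repo may well use sed instead; the point is only that the file is rewritten in place with the endpoint you pass in.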
  • 51. You can also get an overview of your database in the RDS page.
  • 52. No difference from the client side. 
 But this website is receiving the data from RDS.
  • 53. Target Architecture #2 EC2 : working as a web server RDS : working as a database S3 : working as a file server
  • 54. This time, we will separate the file server from EC2. 1. Search and Click
  • 55. There is a term called a ‘bucket’. Let’s just think of S3 as a kind of folder in the cloud. 1. Click
  • 56. The bucket name must be globally unique, because S3 builds a URL from your bucket name. 1. Type in your bucket name. 2. Click
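Because the bucket name becomes part of a DNS name, no two buckets can share one. A tiny sketch of the virtual-hosted-style URL that S3 builds (the bucket name and key below are made up):

```python
def s3_object_url(bucket: str, region: str, key: str) -> str:
    # Virtual-hosted-style S3 URL: the bucket name is part of the
    # hostname, which is why bucket names must be globally unique.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

print(s3_object_url("uis-aws-demo", "eu-west-1", "static/css/main.css"))
# https://uis-aws-demo.s3.eu-west-1.amazonaws.com/static/css/main.css
```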
  • 57. Just skip this at this time. 1. Click
  • 58. Just skip this too at this time. 1. Click
  • 60. Okay, here is our new bucket. However, there is one important thing to know: programmatic access to S3 must be authorized. Otherwise, anyone can upload and delete your files.
  • 61. Let’s use IAM to access S3 programmatically. 1. Search and Click
  • 62. There are many things to fix for your security, but let’s skip them for now. 1. Click Ignore these for the moment, but they are important when you use AWS for a real service.
  • 63. 1. Click Let’s make a user who will have permission for S3.
  • 64. 1. Enter the user name 2. Click 3. Click Remember to click “Programmatic access”
  • 65. 1. Click You need to make a group first. Users should be in a group.
  • 66. 1. Click 2. Type S3 3. Select S3FullAccess 4. Click This group will contain the permissions for S3. Users who are in that group will have the same permissions.
  • 67. Select the group that we just made. 1. Click
  • 69. It is really important to keep your key safe. Mine is exposed now, but I have deleted it. Don’t try using mine! 1. Download This is the ID This is the password
  • 70. Now move to the ‘architecture2’ directory and run the following command: sh start.sh <RDS> <ACCESSKEY> <SECRET> <S3_REGION> (RDS endpoint, access key, secret key, region). Then it will edit ‘views.py’ automatically.
  • 71. But your website will not work properly yet. We need to move all the static and media files (CSS, JS, images) to S3. <collectstatic.py> This file will help.
  • 72. Let’s run! You have to activate the virtual environment. Don’t forget to deactivate.
  • 73. Now we can find the static folder in S3.
  • 74. If you click it, you can find an exact copy of the directory that we had in EC2.
  • 76. We can see the link. It means now we can access this css file publicly.
  • 78. If you take a look at ‘views.py’, you can find a path for S3. Before After
  • 79. One more thing! When a user uploads a picture, it should be stored in S3. Take a look at this code.
  • 81. Okay, now you can find the picture that you just uploaded in S3 as well.
  • 83. All the uploaded images are also from S3.
  • 84. Target Architecture #3 EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS
  • 85. You have probably bought a domain before. But how can we connect it to our service?
  • 86. We need to change the ‘name servers’ to hand control of this domain to AWS.
  • 87. Let’s use ‘Route 53’ to deal with domains.
  • 89.
  • 90.
  • 91. You can enter your domain.
  • 92. In my case, uisprogramming.com. Please use yours!
  • 93. Okay, this is the default record sets. Remember the NS values.
  • 94. On the website where you bought your domain, you can change the name servers to the ones from the previous slide.
  • 95. This GUI can differ from site to site. Just remember that you have to change the name servers!
  • 96. Here are four name servers that I copied from AWS route53.
  • 97. Okay, now it has changed, but it takes a while to be applied completely. Just go get a drink and continue it tomorrow.
  • 98. Okay, good morning! We will test a simple case, which is connecting 'www.uisprogramming.com' to the public IP of your server.
  • 99. It also takes a while, but not that long. Go get some coffee.
  • 100. Run ‘start.sh’ in ‘architecture3_4’ with adding your domain. sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION> <DOMAIN>
  • 101. Now we can access our website with a domain.
  • 102. Target Architecture #4 EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS ELB : Load Balancer
  • 103. Go to the EC2 page and click Load Balancers. 1. Click
  • 106. Enter the name, and just set the HTTP port and choose all zones.
  • 107. If you see this page, just move to next page.
  • 108. A Load Balancer is also a kind of computer, so we need to manage its port settings in a security group. 1. Select 2. Edit if you need to 3. This time, open only port 80 (HTTP). 4. Click
  • 109. This is the configuration between ELB and EC2. 2. Click 1. Enter the target name
  • 110. Okay, so we will register an instance to this target group. This group is very important for the future auto scaling. 1. Select 2. Click 3. Click
  • 112. Okay, so let’s check our Load Balancer.
  • 113. This is the Load Balancer dashboard.
  • 114. This is the target group that we made.
  • 115. If you click the target group, you can see the targets (instances) 1. Select 2. Click 3. Here you can check whether your instance is working properly or not.
  • 116. Now, we will redirect our Route53 record to ELB (not to the public IP of EC2)
  • 119. Okay, stop. We will replace the IP with the ELB. 1. Select 2. Select 3. Choose 4. Click
  • 120. The reason that we have to connect ELB to Route53 is that we can easily replace our server without any downtime. When you want to change your server, you just need to change your instance from the target group. You don’t need to reconnect your server IP to Route53 anymore.
  • 121. One more thing, we can also add more instances to this ELB. Let’s create one more instance like we did in architecture#0. However, if you use two t2.micro instances, AWS will charge you. If you don’t want to pay, then just read.
  • 122. Open one more terminal and do the same job that we did in architecture#0
  • 123. After that, we need to add this instance to our target group. 1. Click 2. Choose 3. Click 4. Click
  • 124. 1. Select 2. Click 3. Click Add the new instance to our registered targets.
  • 125. Now, we can see two instances.
  • 126. Let’s see if both work fine. We will monitor it with the Nginx access log.
  • 127. As you can see, there are many “ELB-HealthChecker” logs. This is how ELB separates healthy instances from unhealthy ones.
  • 128. If you refresh your website, only one instance will get the request.
  • 129. If you refresh it again, the other instance will get the request.
  • 130. The Elastic Load Balancer will pick the instances one by one so that the traffic is distributed.
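The round-robin-plus-health-check behaviour described in the last few slides can be sketched in a few lines of Python. This is a toy model for intuition, not how ELB is actually implemented, and the instance names are made up:

```python
from itertools import cycle

class MiniLoadBalancer:
    """Toy round-robin balancer with a health filter, mimicking
    how ELB skips instances that fail its health check."""

    def __init__(self, instances):
        self.health = {i: True for i in instances}
        self._ring = cycle(instances)

    def mark(self, instance, healthy):
        # In real ELB this is the result of the HealthChecker probes.
        self.health[instance] = healthy

    def route(self):
        # Try each instance at most once per request.
        for _ in range(len(self.health)):
            candidate = next(self._ring)
            if self.health[candidate]:
                return candidate
        raise RuntimeError("no healthy instances")

lb = MiniLoadBalancer(["i-hans", "i-simon"])
print([lb.route() for _ in range(4)])  # alternates between the two
lb.mark("i-simon", False)
print([lb.route() for _ in range(2)])  # only the healthy one now
```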
  • 131. Target Architecture #5 EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS ELB : Load Balancer AWS Certificate Manager : TLS/SSL certificate +
  • 132. Before we get started, we need to modify the security group of ELB. 1. Click 2. Select 3. Click
  • 133. 2. Click 1. Click Let’s add an HTTPS port.
  • 134. 1. Click 2. Select 3. Check the values. 4. Click Now we are opening two ports (80, 443) to the world!
  • 135. We also need to add an HTTPS listener to ELB. 1. Click 2. Select 3. Click 4. Click
  • 136. Oops, we don’t have a TLS certificate. Let’s make that first.
  • 137. Fortunately, AWS offers an easy way to create a certificate. 1. Go!
  • 138. Personally, I think this is the best service of AWS. 1. Click
  • 139. 1. Enter the domain name. I recommend that you use ‘*’. 2. Click Use your domain!!
  • 140. Since we have already moved our domain to Route53, DNS validation is easier than email validation. 1. Click 2. Click
  • 142. 1. Click Create a record in Route 53 to validate that you are the owner.
  • 143. 1. Click It will automatically add the record to your Route53.
  • 144. 1. Click Good, click continue!
  • 145. It takes some time.
  • 146. Okay, finally we got a certificate.
  • 147. Back to the ELB dashboard, Now, we can select the certificate. 1. Select 2. Click
  • 148. Now we have two listeners for ELB.
  • 149. If you type your domain with https, you can see that it works.
  • 150. EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS ELB : Load Balancer AWS Certificate Manager : TLS/SSL certificate ElastiCache (Redis) : Session Storage Target Architecture #6 +
  • 151. Session storage - Cookie : Not secure. - File storage : Disk I/O + cannot be shared with other instances. - Database : Good, but Disk I/O + additional burden on the database. - In-memory DB : Highly recommended! Session data : small and frequently requested
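To see why an in-memory store fits session data, here is a minimal Python stand-in for a Redis-like session store. It is only a sketch of the idea (small values, fast lookups, per-session expiry); the class, names, and TTL default are invented for illustration:

```python
import time

class InMemorySessionStore:
    """Minimal stand-in for Redis as a session store."""

    def __init__(self):
        self._data = {}

    def set(self, session_id, value, ttl_seconds=3600):
        # Store the value together with its expiry timestamp.
        self._data[session_id] = (value, time.time() + ttl_seconds)

    def get(self, session_id):
        item = self._data.get(session_id)
        if item is None:
            return None
        value, expires = item
        if time.time() > expires:
            # Expired sessions disappear, like Redis key TTLs.
            del self._data[session_id]
            return None
        return value

store = InMemorySessionStore()
store.set("abc123", {"user": "jungwon"})
print(store.get("abc123"))
```

Unlike this toy, Redis lives outside the web server, so every instance behind the load balancer sees the same sessions; that is the property the next slides rely on.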
  • 152. It’s better to make a security group for Redis first. 2. Click 1. Click
  • 153. 1. Fill in 2. Fill in like this. 3. Click Redis uses port 6379.
  • 154. Redis can be found in the service called ElastiCache. 1. Choose
  • 155. We can also use Memcached, but I want to use Redis. 1. Click
  • 157. It’s important to choose t2.micro, otherwise you have to pay. 1. Select 2. Name it 3. Choose t2.micro(free) 4. Choose None 5. Go down
  • 158. If it is the first time, you need to make a subnet group like below. 1. Select 2. Name it 3. Choose all
  • 159. Select the security group that we just made. 1. Select 2. Click
  • 160. Okay, it is launching
  • 162. We can check the endpoint so that we can connect from our service.
  • 163. As seen in this code, we are using Redis as a session storage. You can find the way to replace the session storage for any kind of server side languages that you are using. sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION> <DOMAIN> <REDIS_ENDPOINT>
  • 164. Up until here is the tip of the iceberg.
  • 165. The real beauty of AWS is auto scaling.
  • 167. Example Let’s say we have launched a mobile game. Unfortunately, it became too popular. So we have to add more servers to deal with more users. The maximum number of concurrent users per hour is 40000. The minimum number of concurrent users per hour is 1000. One server can deal with 500 people. What is the optimal number of servers?
  • 168. Solution #1 : Maximum number of servers -> Too expensive, a waste of servers during non-busy times. Solution #2 : Average number of servers -> Sounds reasonable, but only government websites can do this because users will complain a lot. Solution #3 : Use auto scaling! -> Yay!
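The arithmetic behind the example can be checked quickly (500 users per server is the slide’s own assumption):

```python
import math

def servers_needed(concurrent_users: int, capacity_per_server: int) -> int:
    # Round up: a half-loaded server still has to exist.
    return math.ceil(concurrent_users / capacity_per_server)

# Peak hour needs 80 servers; the quietest hour needs only 2.
# That 78-server gap is exactly what auto scaling closes.
print(servers_needed(40000, 500))  # 80
print(servers_needed(1000, 500))   # 2
```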
  • 169. But how does it work?
  • 170. Shit! I can’t do this anymore. I need some help! Okay the users are coming. I’m sending but can you handle them?
  • 171. Okay the users are coming. 20 to Jungwon, 20 to Hans, 20 to Simon, 20 to Luca… God damn it! He is not healthy yet. Hans : Healthy Simon : Healthy Jungwon : Healthy Luca : Unhealthy Wait! I’m coming!! Phew, thank you guys!
  • 172. Okay, they are starting to go to bed. Okay, Hans, you can go, Simon can go. Luca can go. Ok, now I can handle it alone. Hans : Deleting Simon : Deleting Jungwon : Healthy Luca : Deleting See you! Snakkes! Ha det~!
  • 173. Important! From this slide AWS will charge you. If you do not want to pay then just read.
  • 174. EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS ELB : Load Balancer AWS Certificate Manager : TLS/SSL certificate ElastiCache (Redis) : Session Storage Auto Scaling Group : auto scaling ! Target Architecture #7 + Auto scaling group
  • 175. Move to the EC2 dashboard.
  • 176. Move to Auto Scaling Groups 1. Click
  • 177. Let’s make an auto scaling group. 1. Click
  • 178. As you can see, we have to make a launch configuration first. 1. Click
  • 179. It means, we need to set the initial setting for the instance. (The instance that will be launched) 1. Click
  • 181. This is the most important part, especially ‘User data’. Let’s talk more about this.
  • 182. There are two ways to make an ‘auto scaling group’.
  • 183. #1. Copy an image (AMI) of your original instance and let it be used for the auto scaling group. AMI Pros: - Short launch time (until becoming healthy!) - No need to install ‘things’ (they are already there) Cons: - Difficult to apply new updates. (That requires making another new image.) Auto Scaling Group
  • 184. #2. Use a pure instance image and let it be used for the auto scaling group. Pure Instance Image Pros: - Easy to apply new updates (because when it launches it will install ‘things’). Cons: - Slow launch time (because of the installations!). Auto Scaling Group
  • 185. Both ways are fine, but I prefer option #2. That’s why I made a ‘start.sh’ file.
  • 186. So here is the scenario. 1. Launch the pure instance 2. Clone the service from git 3. Install software and run the service
  • 187. Then how can we set it up so that it automatically acts as shown on the previous slide? That’s why we need to set ‘user data’.
  • 188. Let’s take a look at the user data. #!/bin/bash cd /home/ubuntu git clone https://github.com/MuchasEstrellas/AWS.git cd /home/ubuntu/AWS/architecture6_7 sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION> <REDIS_ENDPOINT> 1. move to the directory that contains the website 2. clone it from git. 3. move to the directory that will be run. 4. run the service. It’s important to know that the user running the commands above is ‘root’. (Keep this in mind, in case you face a path or permission problem.)
  • 189. Okay here we are again. 1. Enter the name 2. Set the initial command that should be run right after the instance has launched 3. Next!
  • 190. 1. Next! Just skip this part.
  • 191. Add port 80. 1. Click 2. Add 3. Next!
  • 192. Okay, now we are ready. 1. Click
  • 193. Just use the same key that we made earlier. 1. Click
  • 194. Okay, now we have to make the actual group. 1. Enter the name 2. Set the number of instances you want to begin with. 3. Add all subnets. 4. Check 5. Add the target group that we made while launching ELB. 6. Both are fine for this test case, but it’s recommended to use ELB as long as you are not using a classic load balancer. 7. Set it to 10 sec (but you can choose how long you want). 8. Click
 • 195. Let’s go with the first option this time. We will add scaling policies later. 1. Check 2. Next
  • 196. We can set the notification, but I will skip. 1. Next
  • 200. One instance is launching now. Before we test it, we need to take a brief look at the AWS console.
  • 201. If you check the target group under the load balancer section, you can see the new instance.
  • 202. You can also check the new instance in the instances section.
  • 203. So, how can we make this auto scaling work? When should they be scaled out and scaled in?
 • 204. There are several metrics that we can use, for example the average amount of network “in” and “out” 
or the average CPU utilization. For this case we will use “average CPU utilization”.
 • 205. This is very simple. If the average CPU utilization is over 50%, then add one more instance. If the average CPU utilization is below 40%, then remove one instance. We can also set the range of the number of instances
(e.g. Min: 1, Max: 5). Let’s see how it works.
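The policy above can be sketched in a few lines of Python. This is only a simplified simulation of the decision logic, not AWS’s actual implementation; the thresholds and instance limits match the slide.

```python
# Simplified sketch of the scaling policy described above:
# scale out when average CPU >= 50%, scale in when <= 40%,
# always staying within [min_instances, max_instances].
def desired_instances(current, avg_cpu,
                      min_instances=1, max_instances=5,
                      scale_out_at=50, scale_in_at=40):
    if avg_cpu >= scale_out_at:
        current += 1          # add one instance
    elif avg_cpu <= scale_in_at:
        current -= 1          # remove one instance
    # never go below the minimum or above the maximum
    return max(min_instances, min(current, max_instances))

print(desired_instances(1, 80))  # high load -> 2
print(desired_instances(3, 30))  # low load  -> 2
print(desired_instances(5, 90))  # capped    -> 5
```

Note how the min/max range is just a clamp applied after the add/remove decision.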
 • 206. We will use a new service called “CloudWatch”.
 • 207. Basically this service is mainly used for monitoring our AWS services. However, we can also set alarms or events based on our own thresholds.
 • 208. For the auto scaling, we will use “Alarms”. 1. Click 2. Click
 • 209. 1. Search for your auto scaling group 2. Choose CPUUtilization. Let’s choose the “CPUUtilization” metric of the auto scaling group 
that we have made.
  • 210. 1. Click Next. And click next.
 • 211. Okay, let’s set the first alarm. This is about when to add an instance. 1. Put any name. 2. Put any description. 3. When it’s >= 50% 4. How many data points out of the evaluation period must breach the threshold. 5. Literally, the monitoring period and the way to aggregate the metric. 6. We will set the action later. (But you can set it now.) 7. Create!
 • 212. So, now we have one alarm that reacts based on CPU utilization. - If CPU reaches 50%, the state will be ALARM. - If it’s under 50%, the state will be OK. - If there is not enough monitoring data, the state will be INSUFFICIENT_DATA.
 • 213. Create one more alarm for reducing the number of instances. This time the condition is <= 40%. Create!
  • 214. Now we have two alarms that can be used for auto scaling.
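Roughly, the two alarms evaluate like the sketch below. This is a simplification: real CloudWatch alarms also support M-out-of-N data points and configurable missing-data handling, which this toy version ignores.

```python
# Toy model of the three CloudWatch alarm states from slide 212.
# An alarm fires only when ALL datapoints in the evaluation window
# breach the threshold; no data at all means INSUFFICIENT_DATA.
def alarm_state(datapoints, threshold, comparison=">="):
    if not datapoints:
        return "INSUFFICIENT_DATA"
    if comparison == ">=":
        breaching = all(d >= threshold for d in datapoints)  # high-CPU alarm
    else:
        breaching = all(d <= threshold for d in datapoints)  # low-CPU alarm
    return "ALARM" if breaching else "OK"

print(alarm_state([60, 70, 55], 50))        # high-CPU alarm fires
print(alarm_state([35, 20], 40, "<="))      # low-CPU alarm fires
print(alarm_state([], 50))                  # no data yet
```

The add-instance alarm is the `>=` case with threshold 50; the remove-instance alarm is the `<=` case with threshold 40.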
  • 215. So let’s set the Scaling Policy using the alarms that we have set. Go to the autoscaling group dashboard. 2. Click! 3. Click! 1. Select!
  • 216. We need to use our alarm, so click the box below. 1. Click!
 • 217. Let’s get started with case 1: adding an instance. 1. Name it whatever you want. 2. Choose the alarm that we have made. 3. The action is adding 1 instance. 4. This is the wait time between scaling actions, so new instances are added efficiently.
(Practically 10 seconds is too short, but for the test case I will go with 10 sec.) 5. Click!
  • 218. Okay, it’s set.
 Let’s add a removing policy in the same way as the adding policy. 1. Click!
  • 219. 1. Action is removing 1 instance. The other steps are the same, except the action part. 2. Click!
  • 220. Okay, now we have two policies.
 • 221. But we need to set one more thing: the range of the number of instances! Minimum and maximum number of instances. 1. Click 2. Click
 • 222. 1. Scroll down 2. Set min to 1 3. Set max to 5 You can set any min and max number of instances 
based on your budget! 4. This is also the wait time 
before the next scaling action 5. Scroll up and click the save button.
 • 224. To test and monitor, we will install two tools: “htop” and “stress”.
 • 225. The left screen shows htop. The right screen shows the command for the stress test.
 • 226. Okay, now the CPU utilization of this instance has reached 100%.
 • 227. If you check CloudWatch, you can see the “higher cpu” alarm is on. It may not react quickly, so be patient.
  • 228. Now you can see two instances in your auto scaling group.
 • 229. As you can see in the target group, there are two instances in the group.
One is still in the “initial” status.
 • 230. It’s not healthy yet, which means the load balancer hasn’t received a proper response from that instance yet.
 • 231. Now it’s healthy; it has probably finished installing things.
 • 232. You can also check it in the instance dashboard.
 • 233. Now there are three instances, because the policy is based on average CPU utilization. It may switch back and forth between 2 and 3 
because (100% + a% + b%) / 3 <= 40% 
but (100% + a%) / 2 >= 50%. That’s why the cooldown time is important! However, let’s ignore it this time.
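The flapping can be checked with simple arithmetic. Here a and b are hypothetical CPU percentages for the two nearly idle instances (the slide leaves them unspecified; 10% is just an example):

```python
# One instance is pinned at 100% by the stress test; the others
# are nearly idle (assume a = b = 10%). The 3-instance average
# triggers scale-in, and the resulting 2-instance average
# immediately triggers scale-out again -> flapping.
a, b = 10, 10                  # hypothetical idle-instance CPU %
avg3 = (100 + a + b) / 3       # average across 3 instances
avg2 = (100 + a) / 2           # average across the remaining 2
print(avg3, avg2)              # prints: 40.0 55.0
assert avg3 <= 40              # scale-in condition met  -> drop to 2
assert avg2 >= 50              # scale-out condition met -> back to 3
```

A longer cooldown gives the averages time to settle before the next scaling action, which is exactly why the slide calls the cooldown important.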
  • 234. Now the stress test is over. Let’s check if the number of instances will be reduced.
 • 235. Okay, the low-CPU alarm has fired.
  • 236. The desired number of instances is 2. However, the current number of instances is 3.
  • 237. Yeah, it’s shutting down now.
  • 238. After several minutes, now there is only one instance.
 • 239. What about the bottleneck?
 • 240. Okay, the users are coming: 20 to Jungwon, 20 to Hans, 20 to Simon, 20 to Luca. Jungwon: Healthy. Hans: Healthy. Simon: Healthy. Luca: Healthy. “Phew, thank you guys!” Meanwhile….. Ferdinand: “Really?”
  • 242. We also need to consider the database because it is shared by all the instances.
  • 243. “But we are sharing S3 and Redis too!”
 • 244. S3 is fine. Redis is also fine, because it is NoSQL. (That means it is easy to scale out.) -From AWS -From Wikipedia
 • 245. As long as we are using an RDBMS 
as the main database, there are 3 options.
 • 246. #1 : AWS Aurora I think this is the best option. However, I'm kind of afraid to migrate the entire database to a new one. -From AWS
  • 247. #2 : Use EC2 and build multi-master manually. This is what Slack is doing. If you can do this, this is a good solution. However, it sounds very difficult to me.
  • 248. #3 : Separate the Read and Write Database. I think I can do this.
  • 249. Let’s try to make a read-replica first.
  • 250. Move to the RDS page.
  • 251. Find the RDS instance that we made before. 1. Click!
  • 252. Go to the instance. 1. Click!
  • 253. Let’s make a replica. 1. Click!
  • 254. Click the “Create read replica” 1. Click!
 • 255. To avoid confusion, give it a different name and create it. 1. Click! 2. Go down and 
create
  • 256. Now we can see two db instances.
 • 257. I will just write both endpoints in my Python code. For the read query I will use the first URL. For the write query I will use the second URL. 1. Read replica DB! 2. Master DB!
 • 258. It is not the best way, but I just want you to understand the logic. If you use another server-side framework, there is probably a more convenient way to separate the host addresses. <For the read query>
  • 259. <For the write query>
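The read/write split described above can be sketched as follows. The host names are placeholders, not real endpoints, and the SELECT-prefix check is a deliberately naive stand-in for whatever query routing your framework provides:

```python
# Minimal sketch of application-level read/write splitting.
# Both endpoint values below are hypothetical placeholders.
MASTER_HOST = "master.example-rds.amazonaws.com"
REPLICA_HOST = "replica.example-rds.amazonaws.com"

def pick_host(query):
    """Route SELECT queries to the read replica, everything else
    (INSERT/UPDATE/DELETE/DDL) to the master."""
    is_read = query.lstrip().upper().startswith("SELECT")
    return REPLICA_HOST if is_read else MASTER_HOST

print(pick_host("SELECT * FROM users"))          # -> replica
print(pick_host("INSERT INTO users VALUES (1)")) # -> master
```

In practice, most server-side frameworks offer a cleaner hook for this (for example, per-operation database routing), so prefer that over string inspection.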
 • 260. Still, it doesn’t sound that cool. With a read replica, every time something is written to the master database, it has to be written to the read replica as well. So from the databases’ point of view, the load is not that different. (Diagram: the master gets the writes; the slave gets the reads plus the replicated writes. Slave: “What’s the difference for me?”)
 • 261. There are several things to consider. Write operations occur less often than read operations. One way to improve RDS performance is to buy a better instance class for the read replica. Read operations are often complicated (Join, Union, Order By, Group By, 
and the amount of data).
  • 262. OR!!!!
 • 263. Make many read replicas. (Diagram: the master gets the writes; Slave1, Slave2, Slave3, and Slave4 each get 25% of the reads.)
 • 264. Unfortunately, we can’t use a load balancer or auto scaling with RDS read replicas.
  • 265. However, we can use Route 53 to distribute the requests.
 • 266. EC2 : working as a web server RDS1 : working as the master database S3 : working as a file server Route53 : DNS ELB : load balancer AWS Certificate Manager : TLS/SSL certificate ElastiCache (Redis) : session storage RDS2 : working as a slave database + Target Architecture #8
  • 267. Let’s make one more read-replica.
  • 269. Okay, now we have three.
  • 270. Move to the Route 53 page.
  • 271. Move!
  • 272. Move!
  • 273. Let’s create one more Record set for the database.
 • 274. I will name it “db.uisprogramming.com”. The destination will be one read replica’s endpoint. 1. Enter “db”. 2. Select CNAME. 3. Select No. 4. Set it to 0. 5. Copy and paste one of the 
RDS read replicas’ endpoints. 6. Choose Weighted. 7. Set it to 0. 8. Name it read1. 9. Create!
  • 275. Make one more record set with a different read replica. More about Weighted Routing policy
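Weighted routing returns each record with probability weight / (sum of all weights); with equal weights, the two replicas split traffic evenly. A tiny simulation of that behavior (the record IDs are the hypothetical `read1`/`read2` from the slides, and weight 1 is used here for simplicity rather than the 0 entered in the console):

```python
import random

# Simulate Route 53 weighted routing: each record is chosen
# with probability weight / sum(all weights). Equal weights
# mean an even split across the read replicas.
records = {"read1": 1, "read2": 1}   # record ID -> weight (hypothetical)

def resolve(records, rng=random):
    names = list(records)
    weights = [records[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
hits = [resolve(records, rng) for _ in range(1000)]
print(hits.count("read1"), hits.count("read2"))  # roughly 500 / 500
```

Note that DNS-level balancing like this is per-lookup and subject to client-side caching (hence the TTL of 0 on the slide), so the split is only approximate in practice.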
  • 276. Now we need to change the url for the read query.
 • 277. I sent a heavy query through the “new url” to test; 
as you can see, replica2’s CPU is at 22%.
  • 278. Now, do you want to use AWS?