25. Now we can see the instance we have launched.
For now, think of one instance as one server.
26. In AWS, we manage ports from the AWS console (through security groups)
- not on the server itself.
1. Click
2. Select (you can also check it from the instance’s details page)
3. Click
4. So far this is the only port that we can access remotely.
5. Click
27. Let’s add port 80 for HTTP.
1. Click
2. Select HTTP
3. Click
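For reference, the same rule can also be added from the command line. This is a sketch that assumes the AWS CLI is installed and configured; the security group ID is a placeholder you would replace with your own.

```shell
# Hypothetical security group ID - replace with yours.
# Allows inbound HTTP (port 80) from everywhere (0.0.0.0/0),
# mirroring the rule added in the console above.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
```

This is a configuration fragment for illustration only; the console steps above achieve the same thing.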
47. If you scroll down,
you can see more detailed information.
48. A couple of minutes later, you can see the endpoint of this database.
This is the address you will use to access it.
Check whether the security group is open to everywhere.
(The rule 0.0.0.0/0 means everywhere.)
60. Okay, here is our new bucket.
However, there is an important thing we have to know.
Programmatic access to S3 must be authorized.
Otherwise, anyone could upload and delete your files.
61. Let’s use IAM to access S3 programmatically.
1. Search and Click
62. There are many security settings we could tighten here,
but let’s skip them this time.
1. Click
Let’s ignore these for now, but they are important when you use AWS for a real service.
66. 1. Click
2. Type S3
3. Select AmazonS3FullAccess
4. Click
This group will contain the permissions for S3.
Users in that group will have the same permissions.
69. It is really important to keep your key safe.
Mine is exposed now, but I have deleted it. Don’t try using mine!
1. Download
The Access Key ID is the ID; the Secret Access Key is the password.
70. Now move to the ‘architecture2’ directory
and run the following command:
sh start.sh <RDS> <ACCESSKEY> <SECRET> <S3_REGION>
(<RDS> is the RDS endpoint, followed by the Access Key, Secret Key, and Region.)
Then it will edit ‘views.py’ automatically.
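As an aside, here is a minimal sketch of the kind of substitution ‘start.sh’ presumably performs. The file name, placeholders, and values below are illustrative, not taken from the actual script.

```shell
# Create a tiny stand-in for views.py with placeholder values.
cat > views_sample.py <<'EOF'
RDS_HOST = "<RDS>"
AWS_ACCESS_KEY = "<ACCESSKEY>"
EOF

# Substitute real values in place, as a sed-based start.sh likely does.
# (Hypothetical endpoint and key - replace with your own.)
sed -i 's/<RDS>/mydb.ap-northeast-2.rds.amazonaws.com/' views_sample.py
sed -i 's/<ACCESSKEY>/AKIAEXAMPLEKEY/' views_sample.py

cat views_sample.py
```

After running it, the placeholders in the sample file are replaced by the concrete values.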
71. However, your website will not work properly yet.
We need to move all the static and media files (CSS, JS, images) to S3.
<collectstatic.py>
This file will help.
(like this)
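For context, a typical Django configuration that serves static and media files from S3 looks roughly like this. It is a sketch assuming the django-storages and boto3 packages; the bucket name and region are placeholders, not values from this tutorial.

```python
# settings.py fragment (sketch) - requires 'django-storages' and 'boto3'.
AWS_STORAGE_BUCKET_NAME = "my-example-bucket"   # placeholder bucket
AWS_S3_REGION_NAME = "ap-northeast-2"           # placeholder region
AWS_S3_CUSTOM_DOMAIN = f"{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com"

# Serve collected static files and uploaded media from S3.
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

STATIC_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/static/"
MEDIA_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/media/"
```

With settings like these, Django’s `collectstatic` uploads to the bucket instead of the local disk.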
72. Let’s run!
You have to activate the virtual environment first.
Don’t forget to deactivate it afterwards.
106. Enter the name, and just set the HTTP port and choose all zones.
107. If you see this page, just move on to the next page.
108. A load balancer is also a kind of computer,
so we need to manage its ports from a security group as well.
1. Select
2. Edit if you need to
3. This time, open only port 80 (HTTP).
4. Click
109. This is the configuration between the ELB and EC2.
1. Enter the target name
2. Click
110. Okay, so we will register an instance with this target group.
This group is very important for the future auto scaling.
1. Select
2. Click
3. Click
115. If you click the target group, you can see the targets (instances)
1. Select
2. Click
3. Here you can check whether your instance is working properly or not.
116. Now, we will redirect our Route53 to ELB
(not to the public IP of EC2)
119. Okay, stop.
We will replace the IP with the ELB.
1. Select
2. Select
3. Choose
4. Click
120. The reason we connect the ELB to Route53 is
that it lets us replace our server without any downtime.
When you want to change your server, you just need to swap the
instance in the target group.
You no longer need to repoint your server IP in Route53.
121. One more thing, we can also add more instances to this ELB.
Let’s create one more instance like we did in architecture#0.
However, if you use two t2.micro instances, AWS will charge you.
If you don’t want to pay, then just read.
122. Open one more terminal
and do the same setup that we did in architecture #0.
123. After that, we need to add this instance to our target group.
1. Click
2. Choose
3. Click
4. Click
131. Target Architecture #5
EC2 : working as a web server
RDS : working as a database
S3 : working as a file server
Route53 : DNS
ELB : Load Balancer
AWS Certificate Manager : TLS/SSL certificate
132. Before we get started, we need to modify the security group of ELB.
1. Click
2. Select
3. Click
149. If you type your domain with https, you can see that it works.
150. EC2 : working as a web server
RDS : working as a database
S3 : working as a file server
Route53 : DNS
ELB : Load Balancer
AWS Certificate Manager : TLS/SSL certificate
ElastiCache (Redis) : Session Storage
Target Architecture #6
151. Session storage
- Cookie : Not secure.
- File storage : Disk I/O + cannot be shared with other instances.
- Database : Good, but Disk I/O + additional burden on the database.
- In-memory DB : Highly recommended!
Session data : small and frequently requested
152. It’s better to make a security group for Redis first.
1. Click
2. Click
153. 1. Fill in
2. Fill in like this.
3. Click
Redis uses port 6379.
154. Redis can be found in the service called ElastiCache.
1. Choose
155. We could also use Memcached, but I want to use Redis.
1. Click
162. We can check the endpoint so that we can connect from our service.
163. As seen in this code, we are using Redis as a session storage.
You can find a way to replace the session storage in whatever
server-side language you are using.
sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION> <DOMAIN> <REDIS_ENDPOINT>
167. Example
Let’s say we have launched a mobile game.
Unfortunately, it became too popular.
So we have to add more servers to deal with more users.
The maximum number of concurrent users per hour is 40000.
The minimum number of concurrent users per hour is 1000.
One server can deal with 500 people.
What is the optimal number of servers?
168. Solution #1 : Maximum number of servers
-> Too expensive; a waste of servers during non-busy hours.
Solution #2 : Average number of servers
-> Sounds reasonable, but only government websites can do this
because users will complain a lot.
Solution #3 : Use auto scaling!
-> Yay!
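The arithmetic behind the example, as a quick sketch:

```python
import math

USERS_MAX = 40_000   # peak concurrent users per hour
USERS_MIN = 1_000    # off-peak concurrent users per hour
PER_SERVER = 500     # users one server can handle

servers_for_peak = math.ceil(USERS_MAX / PER_SERVER)    # servers needed at peak
servers_for_trough = math.ceil(USERS_MIN / PER_SERVER)  # servers needed off-peak

print(servers_for_peak, servers_for_trough)  # 80 2
```

So the "right" number of servers swings between 2 and 80 over a single day, which is exactly why a fixed count (solutions #1 and #2) fails and auto scaling wins.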
170. Shit! I can’t do this anymore.
I need some help!
Okay the users are
coming.
I’m sending but
can you handle
them?
171. Okay the users are
coming.
20 to Jungwon, 20 to Hans,
20 to Simon,
20 to Luca… Dammit! He is
not healthy yet.
Hans : Healthy Simon : Healthy Jungwon : Healthy Luca : Unhealthy
Wait! I’m
coming!!
Phew, thank you
guys!
172. Okay, they are
starting to go to bed.
Okay, Hans, you can go,
Simon can go.
Luca can go.
Ok, now I can
handle it alone.
Hans : Deleting Simon : Deleting Jungwon : Healthy Luca : Deleting
See you!
Snakkes! Ha det~! (“See you! Bye!”)
174. EC2 : working as a web server
RDS : working as a database
S3 : working as a file server
Route53 : DNS
ELB : Load Balancer
AWS Certificate Manager : TLS/SSL certificate
ElastiCache (Redis) : Session Storage
Auto Scaling Group : auto scaling!
Target Architecture #7
183. #1. Copy an image (AMI) of your original instance and use it
for the auto scaling group.
Pros:
- Short launch time (until becoming healthy!)
- No need to install ‘things’
(they are already there)
Cons:
- Difficult to apply new updates.
(Updating requires building a new image.)
184. #2. Use a pure instance image and let it be used
for the auto scaling group.
Pros:
- Easy to apply new updates
(Because it installs the ‘things’ when it launches.)
Cons:
- Slow launch time.
(Because of installations!)
185. Both ways are fine, but I prefer option #2.
That’s why I made a ‘start.sh’ file.
186. So here is the scenario.
1. Launch
the pure instance
2. Clone
the service from git
3. Install software and
run the service
187. Then how can we make the instance automatically
act as shown on the previous slide?
That’s where ‘user data’ comes in.
188. Let’s take a look at the user data.
#!/bin/bash
cd /home/ubuntu
git clone https://github.com/MuchasEstrellas/AWS.git
cd /home/ubuntu/AWS/architecture6_7
sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION> <REDIS_ENDPOINT>
1. move to the directory that contains the website
2. clone it from git.
3. move to the directory that will be run.
4. run the service.
It’s important to know that
the user running the commands above is ‘root’.
(Keep this in mind in case you face a path or permission problem.)
189. Okay here we are again.
1. Enter the name
2. Set the initial command that should be run
right after the instance has launched
3. Next!
194. Okay, now we have to make an actual group.
1. Enter the name
2. Set the number of instances you want to begin with.
3. Add all subnets.
4. Check
5. Add the target group that we made while launching ELB.
6. Both are fine for this test case, but it’s recommended to use ELB
as long as you are not using a classic load balancer.
7. Set it to 10 seconds (but you can choose how long you want).
8. Click
195. Let’s go for the first option this time. We will add scaling policies later.
1. Check
2. Next
196. We can set up notifications, but I will skip this.
1. Next
200. One instance is launching now.
Before we test it, we need to take a brief look at the AWS console.
201. If you check the target group under the load balancer section,
you can see the new instance.
202. You can also check the new instance in the instances section.
203. So, how can we make this auto scaling work?
When should they be scaled out and scaled in?
204. There are several metrics we can use.
For example, the average amount of network “in” and “out”,
or the average CPU utilization.
In this case we will use average CPU utilization.
205. This is very simple.
If the average CPU utilization is over 50% then add one more instance.
If the average CPU utilization is below 40% then remove one instance.
We can also set the range of the number of instances.
(e.g. Min: 1, Max: 5)
Let’s see how it works.
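The policy just described can be sketched as a simple decision function. The thresholds and the 1-to-5 range mirror the slide; this is an illustration of the logic, not how AWS implements it.

```python
def scaling_decision(avg_cpu: float, instances: int,
                     minimum: int = 1, maximum: int = 5) -> int:
    """Return the desired instance count after one evaluation (sketch)."""
    if avg_cpu >= 50 and instances < maximum:
        return instances + 1   # scale out
    if avg_cpu <= 40 and instances > minimum:
        return instances - 1   # scale in
    return instances           # between the thresholds, or at a bound

print(scaling_decision(55.0, 2))  # 3
print(scaling_decision(30.0, 3))  # 2
print(scaling_decision(30.0, 1))  # 1 (already at the minimum)
```

In real AWS terms, the two `if` branches correspond to the two CloudWatch alarms and scaling policies we are about to create.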
206. We will use a new service called “CloudWatch”.
207. This service is mainly used for monitoring our AWS services.
However, we can also set alarms or events based on our own criteria.
211. Okay, let’s set the first alarm.
This is about when to add an instance.
1. Put any name.
2. Put any desc.
3. when it’s >= 50%
4. How many data points must breach the threshold out of the evaluation periods.
5. The monitoring period and the statistic used to calculate it.
6. We will set the action later. (But you can set it now)
7. Create!
212. So, now we have one alarm that reacts based on CPU utilization.
- If the CPU reaches 50%, the state will be ALARM.
- If it’s under 50%, the state will be OK.
- If there is not enough monitoring data, the state will be INSUFFICIENT_DATA.
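Those three states can be modeled roughly like this (a sketch of the observable behavior, not the CloudWatch implementation):

```python
def alarm_state(datapoints: list[float], threshold: float = 50.0) -> str:
    """Evaluate a 'GreaterThanOrEqualToThreshold' CPU alarm (sketch)."""
    if not datapoints:
        return "INSUFFICIENT_DATA"   # no monitoring data yet
    average = sum(datapoints) / len(datapoints)
    return "ALARM" if average >= threshold else "OK"

print(alarm_state([60.0, 55.0]))  # ALARM
print(alarm_state([30.0, 35.0]))  # OK
print(alarm_state([]))            # INSUFFICIENT_DATA
```

The second alarm (the <= 40% one on the next slide) is the mirror image: it goes to ALARM when the average falls to the threshold or below.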
213. Create one more alarm for reducing the number of instances.
This time the condition is <= 40%.
Create!
214. Now we have two alarms that can be used for auto scaling.
215. So let’s set the scaling policy using the alarms that we have set.
Go to the auto scaling group dashboard.
1. Select!
2. Click!
3. Click!
216. We need to use our alarm, so click the box below.
1. Click!
217. Let’s start with case 1: adding an instance.
1. Name it whatever you want.
2. Choose the one that we have made.
3. Action is adding 1 instance.
4. This is for adding new instances efficiently.
(In practice 10 seconds is too short, but for this test case I will go with 10 seconds.)
5. Click!
218. Okay, it’s set.
Let’s add a removing policy in the same way as the adding policy.
1. Click!
219. 1. Action is removing 1 instance.
The other steps are the same, except the action part.
2. Click!
221. But we need to set one more thing: the range of the number of instances!
That is, the minimum and maximum number of instances.
1. Click
2. Click
222. 1. Scroll Down
2. Set min 1
3. Set max 5
You can set any minimum and maximum number of instances
based on your budget!
4. This is also about the time to wait
before the next scaling action.
5. Scroll up and click save button.
233. Now there are three instances, because scaling is based on average CPU
utilization.
It may switch back and forth between 2 and 3,
because (100% + a% + b%) / 3 <= 40%
but (100% + a%) / 2 >= 50%.
That’s why the cooldown time is important!
However, let’s ignore it this time.
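To see the oscillation concretely, suppose one instance is saturated at 100% while the others are nearly idle, say a = b = 10% (hypothetical numbers):

```python
a = b = 10  # CPU % of the lightly loaded instances (hypothetical)

# With three instances, the average is dragged down by the idle ones:
avg_with_three = (100 + a + b) / 3  # 40.0 -> <= 40%, scale-in alarm fires

# After scaling in to two instances, the average jumps back up:
avg_with_two = (100 + a) / 2        # 55.0 -> >= 50%, scale-out alarm fires

print(avg_with_three, avg_with_two)  # 40.0 55.0
```

Each scaling action immediately triggers the opposite alarm, so without a cooldown the group would flap between 2 and 3 instances forever.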
234. Now the stress test is over.
Let’s check if the number of instances will be reduced.
240. Okay the users are
coming.
20 to Jungwon, 20 to Hans,
20 to Simon,
20 to Luca
Hans : Healthy Simon : Healthy Jungwon : Healthy Luca : Healthy
Phew, thank you
guys!
Ferdinand: Really?
Meanwhile…..
242. We also need to consider the database
because it is shared by all the instances.
244. S3 is fine.
Redis is also fine because it is NoSQL.
(It means it is easy to scale out.)
245. As long as we are using an RDBMS
as the main database,
there are three options.
246. #1 : AWS Aurora
I think this is the best.
However, I’m kind of afraid of migrating the
entire database to a new one.
247. #2 : Use EC2 and build multi-master manually.
This is what Slack is doing.
If you can do this, this is a good solution.
However, it sounds very difficult to me.
248. #3 : Separate the Read and Write Database.
I think I can do this.
257. I will just write two endpoints in my Python code.
For read queries I will use the first URL.
For write queries I will use the second URL.
1. Read replica DB!
2. Master DB!
258. It is not the best way, but I just want you to understand the logic.
If you use another server-side framework, there is probably a convenient
way to separate the host addresses.
<For the read query>
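A minimal sketch of the idea, with hypothetical hostnames standing in for the real endpoints (a real app would open separate DB connections using these hosts):

```python
# Hypothetical endpoints - substitute your master and read-replica hosts.
WRITE_ENDPOINT = "master.example.rds.amazonaws.com"
READ_ENDPOINT = "replica.example.rds.amazonaws.com"

def pick_endpoint(sql: str) -> str:
    """Route read-only queries to the replica, everything else to the master."""
    if sql.lstrip().lower().startswith("select"):
        return READ_ENDPOINT
    return WRITE_ENDPOINT

print(pick_endpoint("SELECT * FROM posts"))           # replica.example.rds.amazonaws.com
print(pick_endpoint("INSERT INTO posts VALUES (1)"))  # master.example.rds.amazonaws.com
```

Frameworks usually offer a cleaner mechanism for this (e.g. Django’s database routers), but the routing decision is the same.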
260. Still, it doesn’t sound that cool.
Every time something is written to the master database,
the read replica has to apply the same write as well.
That means, from the replica’s point of view, the load is not that different.
(Master handles Write and replicates to the Slave, which handles Read.
“What’s the difference for me?”)
261. There are several things to consider here.
Write operations occur less often than read operations.
So one way to improve RDS performance is
to buy a better instance type for the read replica.
Also, read operations are often the complicated ones
(JOIN, UNION, ORDER BY, GROUP BY,
and large amounts of data).
266. EC2 : working as a web server
RDS1 : working as a master database
S3 : working as a file server
Route53 : DNS
ELB : Load Balancer
AWS Certificate Manager : TLS/SSL certificate
ElastiCache (Redis) : Session Storage
RDS2 : working as a slave database
Target Architecture #8
274. I will name it “db.uisprogramming.com”.
The destination will be one read-replica’s endpoint.
1. Enter “db”.
2. Select CNAME
3. Select No.
4. Set it to 0.
5. Copy and paste one of the
RDS read replica’s endpoint.
6. Choose Weighted.
7. Set it to 0.
8. Name it read1.
9. Create!
275. Make one more record set with a different read replica.
More about Weighted Routing policy
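To get a feel for weighted routing, here is a toy model. When every weighted record has weight 0, Route53 distributes traffic evenly across them, which this sketch imitates; the record names are the ones from the slides.

```python
import random

def route_weighted(records: list[tuple[str, int]]) -> str:
    """Pick one record name proportionally to its weight (toy model).

    An all-zero weight set is treated as equal weights, matching
    Route53's documented behavior for weighted records.
    """
    names = [name for name, _ in records]
    weights = [weight for _, weight in records]
    if not any(weights):
        weights = [1] * len(records)  # all-zero -> distribute evenly
    return random.choices(names, weights=weights, k=1)[0]

# Two read replicas behind one name, both with weight 0 (as on the slides):
records = [("read1", 0), ("read2", 0)]
print(route_weighted(records))  # either "read1" or "read2"
```

With non-zero weights, traffic splits in proportion; e.g. weights 3 and 1 would send roughly 75% of lookups to the first replica.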
276. Now we need to change the URL for the read query.
277. I sent a heavy test query through the “new URL”;
as you can see, replica2’s CPU is at 22%.