
Getting started with AWS

Basic usage of AWS.
GitHub: https://github.com/thejungwon/AWS

  1. 1. Getting Started with AWS Jungwon Seo University of Stavanger
  2. 2. Jungwon Seo In Norway From South Korea Master in CS Bachelor in CS 3 start-ups Many random projects
  3. 3. Do you like to learn about new technology?
  4. 4. My first reaction : Rejection
  5. 5. My second reaction : Worry
  6. 6. The most important thing is that companies want us to know these:
  7. 7. Have you ever heard about AWS?
  8. 8. When we run a service.. Server Database Files Application
  9. 9. When Slack runs a service..
  10. 10. What makes us use it differently?
  11. 11. # of users = # of data + # of requests + # of operations + …
  12. 12. Typical Questions How can we scale up our service? Buy a better server? But until when?
  13. 13. Back to CS101
  14. 14. We need to separate all the components as much as possible: Application Server (Application), DB Server (Database), File Server (Files)
  15. 15. Let’s start using AWS
  16. 16. This is the website that we will test
  17. 17. Target Architecture #0 EC2 : working as a web server, database, and file storage. This is the first architecture that we will build.
  18. 18. I will assume that you already have an AWS account. 1. Select the region closest to your country 2. Find EC2
  19. 19. This is the main page of EC2 1. Click
  20. 20. 1. Click Choose the OS that you want to use. I will use Ubuntu.
  21. 21. t2.micro is free but only for one instance. If you run two or more, you will have to pay. 1. Choose 2. Click
  22. 22. We are skipping the detailed settings. 1. Click
  23. 23. For the first time, we need to make a key pair. Keep it safe! 1. Select 2. Type the name 3. Download 4. Click
  24. 24. This is the result page. 1. Click to check
  25. 25. Now we can see the instance we have launched. Let’s just think that one instance equals one server.
  26. 26. In AWS, we manage ports in the AWS console - not on the server. 1. Click 2. Select (you can also check it from the instance’s details page) 3. Click 4. So far this is the only port that we can access remotely. 5. Click
  27. 27. Let’s add port 80 for HTTP. 1. Click 2. Select HTTP 3. Click
  28. 28. Now we have two open ports (22 and 80)!
  29. 29. Go back to the instance page. 1. Click 2. Select 3. Check the public IP
  30. 30. Now we will connect to the server using the “.pem” key and “ssh”.
  31. 31. Ok, this is the first screen that you will see.
  32. 32. I have already put some test code in my GitHub repository, so let’s use it. git clone https://github.com/MuchasEstrellas/AWS.git
  33. 33. Move to the ‘architecture0’ directory and run ‘sh start.sh’
  34. 34. It will install and update many things. This screen should appear if it worked correctly.
  35. 35. Now if you type the public IP into a browser, you can reach your website publicly.
  36. 36. Target Architecture #1 EC2 : working as a web server and file storage RDS : working as a database
  37. 37. This time, we need to use RDS as the database of our service. 1. Search and Click
  38. 38. Get Started Now! 1. Click
  39. 39. Choose MySQL and click next. 1. Click 2. Click 3. Click
  40. 40. Scroll down. 1. Go down
  41. 41. Enter the information. I recommend that you use the same name and password. uisaws123 1. Type in the following. 2. Click
  42. 42. Scroll down. 1. Go down
  43. 43. Make a database. Again, I recommend that you use the same database name. 1. Enter the db name.
  44. 44. Let’s launch! 1. Click
  45. 45. Now, you can see your database. 1. Click
  46. 46. This is the RDS instance that we made.
  47. 47. If you scroll down, you can see more detailed information.
  48. 48. A couple of minutes later, you can see the endpoint of this database; this is the address you will use to connect. Check whether the security group is open to everywhere (the rule 0.0.0.0/0 means everywhere).
  49. 49. Move to the ‘architecture1’ directory and run ‘sh start.sh’ with the RDS endpoint.
  50. 50. Then the ‘start.sh’ script will automatically change the DB_HOST in views.py.
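For reference, the database setting that ‘start.sh’ rewrites might look roughly like the sketch below. This is only an illustration, assuming pymysql and placeholder names; the actual variables in the repository’s views.py may differ.

    # Hypothetical sketch of the DB settings that start.sh rewrites (placeholder values).
    import pymysql

    DB_HOST = "mydb.xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com"  # RDS endpoint instead of localhost
    DB_USER = "uisaws123"
    DB_PASSWORD = "uisaws123"
    DB_NAME = "uisaws123"

    def get_connection():
        # Each request opens a connection to RDS rather than to a local MySQL server.
        return pymysql.connect(host=DB_HOST, user=DB_USER, password=DB_PASSWORD,
                               db=DB_NAME, charset="utf8mb4")

The point is simply that the application no longer talks to a database on the same EC2 instance; it talks to the RDS endpoint over the network.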
  51. 51. You can also get an overview of your database in the RDS page.
  52. 52. No difference from the client side, but this website is now receiving its data from RDS.
  53. 53. Target Architecture #2 EC2 : working as a web server RDS : working as a database S3 : working as a file server
  54. 54. This time, we will separate the file server from EC2. 1. Search and Click
  55. 55. There is a term called ‘bucket’. Let’s just think of an S3 bucket as a kind of folder in the cloud. 1. Click
  56. 56. The bucket name must be globally unique, because S3 will make a URL with your bucket name. 1. Type in your bucket name. 2. Click
  57. 57. Just skip this at this time. 1. Click
  58. 58. Just skip this too at this time. 1. Click
  59. 59. Okay, let’s create. 1. Click
  60. 60. Okay, here is our new bucket. However, there is one important thing to know: programmatic access to S3 must be authorized. Otherwise, anyone could upload and delete your files.
  61. 61. Let’s use IAM to access S3 programmatically. 1. Search and Click
  62. 62. There are many security recommendations here, but let’s skip them for now. 1. Click They do matter, though, when you use AWS for a real service.
  63. 63. 1. Click Let’s make a user that will have permission for S3.
  64. 64. 1. Enter the user name 2. Click 3. Click Remember to click “Programmatic access”
  65. 65. 1. Click You need to make a group first. Users should be in a group.
  66. 66. 1. Click 2. Type S3 3. Select S3FullAccess 4. Click This group will contain the permission for S3. Users who are in that group will have the same permission.
  67. 67. Select the group that we just made. 1. Click
  68. 68. Okay, create! 1. Click
  69. 69. It is really important to keep your key safe. Mine is exposed here, but I have already deleted it. Don’t try using mine! 1. Download The access key ID works like an ID; the secret access key works like a password.
  70. 70. Now move to the ‘architecture2’ directory and run the following command: sh start.sh <RDS> <ACCESSKEY> <SECRET> <S3_REGION> (RDS endpoint, access key, secret key, region). It will then edit ‘views.py’ automatically.
  71. 71. But your website will not work properly yet. We need to move all the static and media files (CSS, JS, images) to S3. <collectstatic.py> This file will help (a sketch of the idea follows below).
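The repository’s collectstatic.py is not reproduced here, but a minimal sketch of the idea, assuming boto3, a placeholder bucket name, and a local ‘static’ directory, could look like this:

    # Upload every file under ./static to S3, keeping the same relative paths.
    import os
    import boto3

    BUCKET = "my-uisaws-bucket"   # placeholder bucket name
    STATIC_DIR = "static"

    s3 = boto3.client("s3")

    for root, _, files in os.walk(STATIC_DIR):
        for name in files:
            local_path = os.path.join(root, name)
            key = os.path.relpath(local_path).replace(os.sep, "/")  # e.g. static/css/style.css
            # public-read lets the browser fetch the files directly from S3
            # (newer buckets may block public ACLs by default).
            s3.upload_file(local_path, BUCKET, key, ExtraArgs={"ACL": "public-read"})
            print("uploaded", key)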
  72. 72. Let’s run! You have to activate the virtual environment. Don’t forget to deactivate.
  73. 73. Now we can find the static folder in S3.
  74. 74. If you click it, you can find an exact copy of the directory that we had in EC2.
  75. 75. Let’s check ‘style.css’
  76. 76. We can see the link, which means we can now access this CSS file publicly.
  77. 77. Like this.
  78. 78. If you take a look at ‘views.py’, you can find a path for S3. Before After
  79. 79. One more thing! When a user uploads a picture, it should be stored in S3. Take a look at this code.
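The actual upload code is in the repository; a rough sketch of the idea, assuming boto3, a placeholder bucket, and a hypothetical save_upload helper, might look like this:

    # Store an uploaded picture in S3 instead of on the EC2 disk.
    import uuid
    import boto3

    BUCKET = "my-uisaws-bucket"  # placeholder bucket name

    s3 = boto3.client("s3")

    def save_upload(file_obj, filename):
        # Unique key under media/ so uploads never collide.
        key = "media/{}-{}".format(uuid.uuid4().hex, filename)
        s3.upload_fileobj(file_obj, BUCKET, key, ExtraArgs={"ACL": "public-read"})
        # The app can then store this URL (or just the key) in RDS.
        return "https://{}.s3.amazonaws.com/{}".format(BUCKET, key)

Because the files live in S3, any EC2 instance behind the load balancer can serve the same images.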
  80. 80. Let’s test.
  81. 81. Okay, now you can find the picture that you just uploaded in S3 as well.
  82. 82. Great!
  83. 83. All the uploaded images are also from S3.
  84. 84. Target Architecture #3 EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS
  85. 85. You have probably bought a domain at some point. But how can we connect it to our service?
  86. 86. We need to change the ‘name servers’ so that AWS has control of this domain.
  87. 87. Let’s use ‘Route 53’ to deal with domains.
  88. 88. Get started now!
  89. 89. You can enter your domain.
  90. 90. In my case, uisprogramming.com. Please use yours!
  91. 91. Okay, these are the default record sets. Remember the NS values.
  92. 92. On the website where you bought the domain, you can change the name servers to the ones from the previous slide.
  93. 93. This GUI differs from site to site. Just remember that you have to change the name servers!
  94. 94. Here are the four name servers that I copied from AWS Route 53.
  95. 95. Okay, now it has changed, but it takes a while to propagate completely. Just go get a drink and continue tomorrow.
  96. 96. Okay, good morning! We will test a simple case, which is connecting 'www.uisprogramming.com' to the public IP of your server.
  97. 97. It also takes a while, but not that long. Go get some coffee.
  98. 98. Run ‘start.sh’ in ‘architecture3_4’, adding your domain: sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION> <DOMAIN>
  99. 99. Now we can access our website with a domain.
  100. 100. Target Architecture #4 EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS ELB : Load Balancer
  101. 101. Go to the EC2 page and click Load Balancers. 1. Click
  102. 102. Let’s create! 1. Click
  103. 103. Choose Application Load Balancer.
  104. 104. Enter the name, set the HTTP port, and choose all availability zones.
  105. 105. If you see this page, just move on to the next page.
  106. 106. A load balancer is also a kind of computer, so we need to manage its port settings with a security group. 1. Select 2. Edit if you need to 3. This time, open only port 80 (HTTP). 4. Click
  107. 107. This is the configuration between ELB and EC2. 1. Enter the target name 2. Click
  108. 108. Okay, so we will register an instance in this target group. This group is very important for auto scaling later. 1. Select 2. Click 3. Click
  109. 109. Good. 1. Click
  110. 110. Okay, so let’s check our Load Balancer.
  111. 111. This is the Load Balancer dashboard.
  112. 112. This is the target group that we made.
  113. 113. If you click the target group, you can see the targets (instances) 1. Select 2. Click 3. Here you can check whether your instance is working properly or not.
  114. 114. Now, we will point our Route53 record to the ELB (not to the public IP of EC2).
  115. 115. Move Move
  116. 116. One more!
  117. 117. Okay, stop here. We will replace the IP with the ELB. 1. Select 2. Select 3. Choose 4. Click
  118. 118. The reason we connect the ELB to Route53 is that we can then replace our server without any downtime. When you want to change your server, you just swap the instance in the target group; you no longer need to re-point your server IP in Route53.
  119. 119. One more thing: we can also add more instances to this ELB. Let’s create one more instance like we did in architecture #0. However, if you use two t2.micro instances, AWS will charge you. If you don’t want to pay, then just read along.
  120. 120. Open one more terminal and do the same setup that we did in architecture #0.
  121. 121. After that, we need to add this instance to our target group. 1. Click 2. Choose 3. Click 4. Click
  122. 122. 1. Select 2. Click 3. Click Add the new instance to our registered targets.
  123. 123. Now, we can see two instances.
  124. 124. Let’s see if both work fine. We will monitor them with the Nginx access logs.
  125. 125. As you can see, there are many “ELB-HealthChecker” entries in the logs. This is how the ELB separates the healthy instances from the unhealthy ones.
  126. 126. If you refresh your website, only one instance will get the request.
  127. 127. If you refresh it again, the other instance will get the request.
  128. 128. The Elastic Load Balancer picks the instances one by one, so the traffic is distributed between them.
  129. 129. Target Architecture #5 EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS ELB : Load Balancer AWS Certificate Manager : TLS/SSL certificate +
  130. 130. Before we get started, we need to modify the security group of ELB. 1. Click 2. Select 3. Click
  131. 131. Let’s add an HTTPS port. 1. Click 2. Click
  132. 132. 1. Click 2. Select 3. Check the values. 4. Click Now we are opening two ports (80, 443) to the world!
  133. 133. We also need to add an HTTPS listener to ELB. 1. Click 2. Select 3. Click 4. Click
  134. 134. Oops, we don’t have a TLS certificate. Let’s make one first.
  135. 135. Fortunately, AWS offers an easy way to create a certificate. 1. Go!
  136. 136. Personally, I think this is the best service of AWS. 1. Click
  137. 137. 1. Enter the domain name. I recommend using a wildcard (‘*’). 2. Click Use your own domain!
  138. 138. Since we have already moved our domain to Route53, DNS validation is easier than email validation. 1. Click 2. Click
  139. 139. 1. Click Okay, next
  140. 140. 1. Click Create a record in Route 53 to validate that you are the owner.
  141. 141. 1. Click It will automatically add the record to your Route53.
  142. 142. 1. Click Good, click continue!
  143. 143. It takes some time.
  144. 144. Okay, finally we got a certificate.
  145. 145. Back on the ELB dashboard, we can now select the certificate. 1. Select 2. Click
  146. 146. Now we have two listeners for ELB.
  147. 147. If you type your domain with https, you can see that it works.
  148. 148. EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS ELB : Load Balancer AWS Certificate Manager : TLS/SSL certificate ElastiCache (Redis) : Session Storage Target Architecture #6 +
  149. 149. Session storage - Cookie : Not secure. - File storage : Disk I/O + cannot be shared with other instances. - Database : Good, but Disk I/O + additional burden on the database. - In-memory DB : Highly recommended! Session data is small and frequently requested.
  150. 150. It’s better to make a security group for Redis first. 1. Click 2. Click
  151. 151. 1. Fill in 2. Fill in like this. 3. Click Redis will use port 6379.
  152. 152. Redis can be found in the service called ElastiCache. 1. Choose
  153. 153. We could also use Memcached, but I want to use Redis. 1. Click
  154. 154. 1. Click Redis 2. Click
  155. 155. It’s important to choose t2.micro, otherwise you have to pay. 1. Select 2. Name it 3. Choose t2.micro(free) 4. Choose None 5. Go down
  156. 156. If it is the first time, you need to make a subnet group like below. 1. Select 2. Name it 3. Choose all
  157. 157. Select the security group that we just made. 1. Select 2. Click
  158. 158. Okay, it is launching
  159. 159. Done! 1. Click
  160. 160. We can check the endpoint so that we can connect from our service.
  161. 161. As seen in this code, we are using Redis as the session storage. Whatever server-side language you use, you can find a way to point its session storage at Redis. sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION> <DOMAIN> <REDIS_ENDPOINT>
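The repository’s code is not reproduced here; as one illustration of the idea, a Flask app with the Flask-Session extension could point its sessions at the ElastiCache endpoint roughly like this (the endpoint and route are placeholders, and the actual project may do this differently):

    import redis
    from flask import Flask, session
    from flask_session import Session

    app = Flask(__name__)
    app.config["SESSION_TYPE"] = "redis"
    # Placeholder ElastiCache endpoint; Redis listens on port 6379.
    app.config["SESSION_REDIS"] = redis.Redis(host="my-redis.xxxxxx.cache.amazonaws.com", port=6379)
    Session(app)

    @app.route("/visit")
    def visit():
        # The counter lives in Redis, so every instance behind the ELB sees the same session.
        session["count"] = session.get("count", 0) + 1
        return "visits in this session: {}".format(session["count"])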
  162. 162. Everything up to this point is just the tip of the iceberg.
  163. 163. The real beauty of AWS is auto scaling.
  164. 164. Let’s talk about it.
  165. 165. Example Let’s say we have launched a mobile game. Unfortunately, it became too popular, so we have to add more servers to deal with more users. The maximum number of concurrent users per hour is 40000. The minimum number of concurrent users per hour is 1000. One server can deal with 500 people. What is the optimal number of servers?
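(For scale: 40000 / 500 = 80 servers would be needed at the peak, but 1000 / 500 = 2 servers are enough during the quiet hours, so any fixed number of servers is either wasteful or too small.)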
  166. 166. Solution #1 : Maximum number of servers -> Too expensive; it wastes servers during the non-busy hours. Solution #2 : Average number of servers -> Sounds reasonable, but only government websites can get away with this, because users will complain a lot. Solution #3 : Use auto scaling! -> Yay!
  167. 167. But how does it work?
  168. 168. Shit! I can’t do this anymore. I need some help! Okay the users are coming. I’m sending but can you handle them?
  169. 169. Okay the users are coming. 20 to Jungwon, 20 to Hans, 20 to Simon, 20 to Luca… God damn it! He is not healthy yet. Hans : Healthy Simon : Healthy Jungwon : Healthy Luca : Unhealthy Wait! I’m coming!! Phew, thank you guys!
  170. 170. Okay, they are starting to go to bed. Okay, Hans, you can go, Simon can go. Luca can go. Ok, now I can handle it alone. Hans : Deleting Simon : Deleting Jungwon : Healthy Luca : Deleting See you! Snakkes! Ha det~!
  171. 171. Important! From this slide on, AWS will charge you. If you do not want to pay, then just read along.
  172. 172. EC2 : working as a web server RDS : working as a database S3 : working as a file server Route53 : DNS ELB : Load Balancer AWS Certificate Manager : TLS/SSL certificate ElastiCache (Redis) : Session Storage Auto Scaling Group : auto scaling! Target Architecture #7 + Auto scaling group
  173. 173. Move to the EC2 dashboard.
  174. 174. Move to Auto Scaling Groups 1. Click
  175. 175. Let’s make an auto scaling group. 1. Click
  176. 176. As you can see, we have to make a launch configuration first. 1. Click
  177. 177. That means we need to define the initial settings for the instances that will be launched. 1. Click
  178. 178. Choose t2.micro. 1. Select 2. Click
  179. 179. This is the most important part, especially ‘User data’. Let’s talk more about this.
  180. 180. There are two ways to set up an ‘auto scaling group’.
  181. 181. #1. Copy an image (AMI) of your original instance and let the auto scaling group use it. Pros: - Short launch time (until becoming healthy!) - No need to install ‘things’ (they are already there). Cons: - Difficult to apply new updates (you have to bake another new image).
  182. 182. #2. Use a pure instance image and let the auto scaling group use it. Pros: - Easy to apply new updates (because each instance installs ‘things’ when it launches). Cons: - Slow launch time (because of the installations!).
  183. 183. Both ways are fine, but I prefer option #2. That’s why I made a ‘start.sh’ file.
  184. 184. So here is the scenario: 1. Launch the pure instance 2. Clone the service from git 3. Install the software and run the service
  185. 185. So how can we set it up so that it automatically acts as shown on the previous slide? That’s why we need to set the ‘user data’.
  186. 186. Let’s take a look at the user data. #!/bin/bash cd /home/ubuntu git clone https://github.com/MuchasEstrellas/AWS.git cd /home/ubuntu/AWS/architecture6_7 sh start.sh <RDS_ENDPOINT> <ACCESS_KEY> <SECRET_KEY> <S3_REGION> <REDIS_ENDPOINT> 1. Move to the directory that will contain the website. 2. Clone the service from git. 3. Move to the directory that will be run. 4. Run the service. It’s important to know that the user running the commands above is ‘root’. (Keep this in mind, in case you face a path or permission problem.)
  187. 187. Okay, here we are again. 1. Enter the name 2. Set the initial commands that should run right after the instance has launched 3. Next!
  188. 188. 1. Next! Just skip this part.
  189. 189. Add port 80. 1. Click 2. Add 3. Next!
  190. 190. Okay, now we are ready. 1. Click
  191. 191. Just use the same key that we made earlier. 1. Click
  192. 192. Okay, now we have to make an actual group. 1. Enter the name 2. Set the number of instances you want to begin with. 3. Add all subnets. 4. Check 5. Add the target group that we made while launching the ELB. 6. Both are fine for this test case, but it’s recommended to use the ELB health check as long as you are not using a Classic Load Balancer. 7. Set it to 10 seconds (but you can choose how long you want). 8. Click
  193. 193. Let’s go for the first option this time. We will add scaling policies later. 1. Check 2. Next
  194. 194. We can set the notification, but I will skip. 1. Next
  195. 195. Skip. 1. Click
  196. 196. Let’s finish it. 1. Click
  197. 197. Okay, done!
  198. 198. One instance is launching now. Before we test it, we need to take a brief look at the AWS console.
  199. 199. If you check the target group under the load balancer section, you can see the new instance.
  200. 200. You can also check the new instance in the instances section.
  201. 201. So, how can we make this auto scaling work? When should instances be scaled out and scaled in?
  202. 202. There are several metrics that we can use, for example the average amount of network “in” and “out”, or the average CPU utilization. For this case we will use “average CPU utilization”.
  203. 203. This is very simple. If the average CPU utilization is over 50%, then add one more instance. If the average CPU utilization is below 40%, then remove one instance. We can also set the range of the number of instances (e.g. Min : 1, Max : 5). Let’s see how it works.
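The next slides create these alarms in the console; the same thing could also be scripted. A minimal sketch with boto3, assuming a placeholder auto scaling group name ('my-asg') and leaving the scaling actions to be attached later, just like the slides do:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def cpu_alarm(name, comparison, threshold):
        cloudwatch.put_metric_alarm(
            AlarmName=name,
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],  # placeholder
            Statistic="Average",
            Period=60,             # evaluate one-minute averages
            EvaluationPeriods=2,   # two consecutive periods must breach the threshold
            Threshold=threshold,
            ComparisonOperator=comparison,
        )

    cpu_alarm("high-cpu", "GreaterThanOrEqualToThreshold", 50.0)  # used to scale out
    cpu_alarm("low-cpu", "LessThanOrEqualToThreshold", 40.0)      # used to scale in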
  204. 204. We will use a new service called “CloudWatch”.
  205. 205. Basically, this service is mainly used for monitoring our AWS services, but we can also set alarms or events based on our own criteria.
  206. 206. For the auto scaling, we will use an “Alarm”. 1. Click 2. Click
  207. 207. Let’s choose the “CPUUtilization” metric of the auto scaling group that we have made. 1. Search your auto scaling group 2. Choose CPUUtilization.
  208. 208. 1. Click Next, then click Next again.
  209. 209. Okay, let’s set the first alarm. This one decides when to add an instance. 1. Put any name. 2. Put any description. 3. When it’s >= 50%. 4. How many data points out of the evaluation window must breach the threshold. 5. The monitoring period and the statistic used to calculate it. 6. We will set the action later (but you can set it now). 7. Create!
  210. 210. So, now we have one alarm that reacts based on CPU utilization. - If CPU reaches 50%, the state will be ALARM. - If it’s under 50%, the state will be OK. - If there is not enough monitoring data, the state will be INSUFFICIENT_DATA.
  211. 211. Create one more for reducing the number of instances. This time it is <= 40%. Create!
  212. 212. Now we have two alarms that can be used for auto scaling.
  213. 213. So let’s set the scaling policies using the alarms that we have created. Go to the auto scaling group dashboard. 1. Select! 2. Click! 3. Click!
  214. 214. We need to use our alarm, so click the box below. 1. Click!
  215. 215. Let’s start with case 1: adding an instance. 1. Name it whatever you want. 2. Choose the alarm that we have made. 3. The action is adding 1 instance. 4. This is for adding new instances efficiently (in practice 10 seconds is too short, but for the test case I will go with 10 seconds). 5. Click!
  216. 216. Okay, it’s set. Let’s add a removal policy in the same way as the adding policy. 1. Click!
  217. 217. 1. The action is removing 1 instance. The other steps are the same, except for the action part. 2. Click!
  218. 218. Okay, now we have two policies.
  219. 219. But we need to set one more thing: the range of the number of instances, i.e. the minimum and maximum number of instances. 1. Click 2. Click
  220. 220. 1. Scroll down 2. Set min to 1 3. Set max to 5 (you can pick any min and max based on your budget!) 4. This is also about the cooldown time before the next scaling action 5. Scroll up and click the save button.
  221. 221. Let’s test
  222. 222. To test and monitor, we will install two tools: “htop” and “stress”.
  223. 223. The left screen shows htop. The right screen shows the command for the stress test.
  224. 224. Okay, now the CPU utilization of this instance has reached 100%.
  225. 225. If you check CloudWatch, you can see the “higher cpu” alarm is on. It may not react quickly; be patient.
  226. 226. Now you can see two instances in your auto scaling group.
  227. 227. As you can see in the target group, there are two instances in the group. One is still in the initial status.
  228. 228. It’s not healthy yet, which means the load balancer hasn’t received a proper response from that instance yet.
  229. 229. Now it’s healthy; it has probably finished installing everything.
  230. 230. You can also check it in the instances dashboard.
  231. 231. Now there are three instances, because scaling is based on the average CPU utilization. It may switch back and forth between 2 and 3, because (100% + a% + b%) / 3 <= 40% but (100% + a%) / 2 >= 50%. That’s why the cooldown time is important! However, let’s ignore it this time.
  232. 232. Now the stress test is over. Let’s check if the number of instances will be reduced.
  233. 233. Okay, the low CPU alarm has fired.
  234. 234. The desired number of instances is 2. However, the current number of instances is 3.
  235. 235. Yeah, it’s shutting down now.
  236. 236. After several minutes, now there is only one instance.
  237. 237. What about the bottleneck?
  238. 238. Okay the users are coming. 20 to Jungwon, 20 to Hans, 20 to Simon, 20 to Luca Hans : Healthy Simon : Healthy Jungwon : Healthy Luca : Healthy Phew, thank you guys! Ferdinand Really? Meanwhile…..
  239. 239. Okay the users are coming. 20 to Jungwon, 20 to Hans, 20 to Simon, 20 to Luca Hans : Healthy Simon : Healthy Jungwon : Healthy Luca : Healthy Phew, thank you guys! Ferdinand Really?
  240. 240. We also need to consider the database because it is shared by all the instances.
  241. 241. “But we are sharing S3 and Redis too!”
  242. 242. S3 is fine. Redis is also fine because it is NoSQL (which means it is easy to scale out). -From AWS -From Wikipedia
  243. 243. As long as we are using an RDBMS as the main database, there are 3 options.
  244. 244. #1 : AWS Aurora I think this is the best. However, I'm kind of afraid to migrate the entire database to the new database. -From AWS
  245. 245. #2 : Use EC2 and build multi-master manually. This is what Slack is doing. If you can do this, this is a good solution. However, it sounds very difficult to me.
  246. 246. #3 : Separate the Read and Write Database. I think I can do this.
  247. 247. Let’s try to make a read-replica first.
  248. 248. Move to the RDS page.
  249. 249. Find the RDS instance that we made before. 1. Click!
  250. 250. Go to the instance. 1. Click!
  251. 251. Let’s make a replica. 1. Click!
  252. 252. Click the “Create read replica” 1. Click!
  253. 253. To avoid confusion, give it a different name and create it. 1. Click! 2. Go down and create
  254. 254. Now we can see two DB instances.
  255. 255. I will just write two endpoints in my Python code. For read queries I will use the first URL; for write queries I will use the second URL. 1. Read replica DB! 2. Master DB!
  256. 256. It is not the best way, but I just want you to understand the logic. If you use another server-side framework, there is probably a more convenient way to separate the host addresses. <For the read query>
  257. 257. <For the write query>
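The screenshots are not reproduced here; a minimal sketch of the same read/write split, assuming pymysql, placeholder endpoints and credentials, and a hypothetical posts table:

    import pymysql

    READ_HOST = "mydb-read.xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com"   # read replica endpoint (placeholder)
    WRITE_HOST = "mydb.xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com"       # master endpoint (placeholder)

    def connect(host):
        return pymysql.connect(host=host, user="uisaws123", password="uisaws123",
                               db="uisaws123", charset="utf8mb4")

    def fetch_posts():
        # Read queries go to the replica.
        conn = connect(READ_HOST)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT id, title FROM posts ORDER BY id DESC")
                return cur.fetchall()
        finally:
            conn.close()

    def create_post(title):
        # Write queries go to the master; the replica then picks up the change automatically.
        conn = connect(WRITE_HOST)
        try:
            with conn.cursor() as cur:
                cur.execute("INSERT INTO posts (title) VALUES (%s)", (title,))
            conn.commit()
        finally:
            conn.close()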
  258. 258. Still, it doesn’t sound that great. With a read replica, every time something is written to the master database, the same write has to be applied to the read replica as well, so from the replica’s point of view the load is not that different. Master: Write + Read. Slave: Write. “What’s the difference for me?”
  259. 259. There are several things to consider. Write operations occur less often than read operations, and read operations are often the complicated ones (JOIN, UNION, ORDER BY, GROUP BY, and a large amount of data). So one way to improve RDS performance is to buy a better instance for the read replica.
  260. 260. OR!!!!
  261. 261. Make many read replicas. Master: Write. Slave1: 25% of reads. Slave2: 25% of reads. Slave3: 25% of reads. Slave4: 25% of reads.
  262. 262. Unfortunately, we can’t use a load balancer or auto scaling for the read replicas.
  263. 263. However, we can use Route 53 to distribute the requests.
  264. 264. EC2 : working as a web server RDS1 : working as a master database S3 : working as a file server Route53 : DNS ELB : Load Balancer AWS Certificate Manager : TLS/SSL certificate ElastiCache (Redis) : Session Storage RDS2 : working as a slave database + Target Architecture #8
  265. 265. Let’s make one more read-replica.
  266. 266. Name this instance.
  267. 267. Okay, now we have three.
  268. 268. Move to the Route 53 page.
  269. 269. Move!
  270. 270. Move!
  271. 271. Let’s create one more record set for the database.
  272. 272. I will name it “db.uisprogramming.com”. The destination will be one read replica’s endpoint. 1. Enter “db”. 2. Select CNAME 3. Select No. 4. Set it to 0. 5. Copy and paste one of the RDS read replicas’ endpoints. 6. Choose Weighted. 7. Set it to 0. 8. Name it read1. 9. Create!
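The same weighted record could also be created programmatically. A minimal sketch with boto3, assuming a placeholder hosted zone ID and replica endpoint, and mirroring the values chosen in this slide:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="ZXXXXXXXXXXXXX",  # placeholder hosted zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "db.uisprogramming.com",
                    "Type": "CNAME",
                    "SetIdentifier": "read1",   # unique name per weighted record
                    "Weight": 0,
                    "TTL": 0,
                    "ResourceRecords": [
                        {"Value": "mydb-read.xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com"}  # placeholder
                    ],
                },
            }]
        },
    )

A second call with SetIdentifier "read2" and the other replica's endpoint creates the record set described on the next slide.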
  273. 273. Make one more record set with a different read replica. More about Weighted Routing policy
  274. 274. Now we need to change the URL for the read queries.
  275. 275. I just sent a heavy test query through the new URL; as you can see, replica2’s CPU is at 22%.
  276. 276. Now, do you want to use AWS?
  277. 277. Thank you!
