Scalable Web Architecture and Distributed Systems

The Architecture of Open Source Applications
- Scalable Web Architecture and Distributed Systems


  1. Scalable Web Architecture and Distributed Systems / The Architecture of Open Source Applications II, Ver 1.0 / People Who Dream of Being Architects (cafe.naver.com/architect1) / Hyun Soomyung, soomong.net, #soomong
  2. Scalable? At a primitive level it is just connecting users with remote resources via the internet; the part that makes it scalable is that the resources, or access to those resources, are distributed across multiple servers.
  3. Key principles: the basis for decisions in designing web architecture. Availability, Scalability, Performance, Manageability, Reliability, Cost. These principles can be at odds with one another: high scalability (more servers) vs. low manageability (you have to operate additional servers); high scalability (more servers) vs. high cost (the price of the servers).
  4. Key principles. Availability: the uptime of a website; constantly available and resilient to failure. Performance: the speed of the website; fast responses and low latency, consistently. Reliability: a request always returns the same data.
  5. Key principles. Scalability: how much traffic can it handle; how easy is it to add more storage. Manageability: easy to operate; ease of diagnosing problems. Cost: how much for hardware, software, deployment, and maintenance; the cost to build, operate, and train.
  6. What are the right pieces? How do these pieces fit together? What are the right tradeoffs?
  7. Example: an image hosting application. Users can upload images, and images can be requested via the web or an API.
  8. Other important aspects: no limit to the number of images; low latency for downloads; if an image is uploaded, it should be there; easy to maintain; cost-effective.
  9. Let's assume that this application has two key parts: upload (write) and query (read).
  10. Problem: longer writes will impact the time it takes to read the images (since the two functions will be competing for shared resources), and the write function will almost always be slower than reads. Problem: the web server has an upper limit on the number of connections (reads are OK, but writes are the bottleneck).
  11. Services: split out reads and writes of images into their own services.
  12. Service-Oriented Architecture (SOA): it allows each piece to scale independently of the others, much like OOP. Rather than having all requests to upload and retrieve images processed by the same server, break these two functions out into their own services.
  13. This allows us to scale each of them independently (we always do more reading than writing). Flickr solves this read/write issue by distributing users across different shards such that each shard can handle a set number of users. Problem: what happens when a failure occurs?
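To make the idea concrete, here is a minimal sketch (hypothetical class names, with plain dicts standing in for real storage) of reads and writes split into separate services, each user's data living on one shard in the spirit of the Flickr approach described above:

```python
# Hypothetical sketch: separate read and write services, users mapped to shards.
NUM_SHARDS = 4
shards = [{} for _ in range(NUM_SHARDS)]      # each dict stands in for one shard's storage

def shard_for(user_id):
    """Each shard handles a fixed subset of users (the Flickr-style split)."""
    return shards[hash(user_id) % NUM_SHARDS]

class ImageWriteService:
    """Handles uploads; slower, so it gets its own pool of servers."""
    def __init__(self, storage):
        self.storage = storage

    def put(self, image_name, data):
        self.storage[image_name] = data

class ImageReadService:
    """Serves downloads; scaled out independently of the write path."""
    def __init__(self, storage):
        self.storage = storage

    def get(self, image_name):
        return self.storage.get(image_name)

# Usage: uploads and downloads go to different services, both backed by
# whichever shard owns this user's data.
writer = ImageWriteService(shard_for("alice"))
reader = ImageReadService(shard_for("alice"))
writer.put("cat.jpg", b"...")
assert reader.get("cat.jpg") == b"..."
```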
  14. Redundancy: backups and copies. If one fails, the system can fail over to the healthy copy.
  15. Shared-nothing architecture: each node is able to operate independently; there is no central "brain"; new nodes can be added without special conditions or knowledge; such systems are much more resilient to failure. Problem: there may be very large data sets that are unable to fit on a single server.
  16. Partitions: two choices, scale vertically or horizontally.
  17. To scale vertically means adding more resources to an individual server, e.g., adding more hard drives to a single server. To scale horizontally means adding more nodes, e.g., adding more servers; it should be included as an intrinsic design principle of the system architecture. The approach is to break up your services into partitions, or shards, e.g., by geographic boundaries or by non-paying users vs. paying users. An image's name could be formed from a consistent hashing scheme mapped across servers.
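A minimal sketch of the consistent hashing idea mentioned above (server names are invented): each image name hashes to a point on a ring and is stored on the next server clockwise, so adding or removing a server only remaps the keys nearest to it.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps keys (e.g. image names) onto a fixed set of servers."""
    def __init__(self, servers, replicas=100):
        self.ring = []                                   # sorted (hash, server) points
        for server in servers:
            for i in range(replicas):                    # virtual nodes smooth the distribution
                point = self._hash(f"{server}#{i}")
                bisect.insort(self.ring, (point, server))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        """First ring point at or after the key's hash, wrapping around."""
        idx = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["img-server-1", "img-server-2", "img-server-3"])
print(ring.server_for("cat.jpg"))   # the same image name always maps to the same server
```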
  18. Problem: data locality; not having the data locally means a costly fetch of the required information across the network. Problem: inconsistency; race conditions could occur.
  19. The building blocks of fast and scalable data access. The hard part: scaling access to the data. As applications grow, there are two main challenges: scaling access to the app server and to the database. In a highly scalable application design, the app (or web) server is typically minimized and often embodies a shared-nothing architecture. This makes the app server layer of the system horizontally scalable. As a result of this design, the heavy lifting is pushed down the stack to the database server and supporting services; it's at this layer where the real scaling and performance challenges come into play.
  20. Let's assume you have many terabytes of data and you want to allow users to access small portions of that data at random. This is particularly challenging because it can be very costly to load TBs of data into memory; it directly translates to disk IO.
  21. To make data access a lot faster: caches, proxies, indexes, load balancers.
  22. Caches. Principle: recently requested data is likely to be requested again, like short-term memory. Caches can exist at all levels in an architecture, but are often found at the level nearest to the front end. There are a couple of places you can insert a cache.
  23. 1. Insert a cache on your request layer node. Each time a request is made to the service, the node will quickly return local, cached data if it exists. If it is not in the cache, the request node will query the data from disk.
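As a rough sketch of that flow (dicts stand in for both the cache and the disk store; the names are hypothetical):

```python
class RequestNode:
    """Request-layer node with a local cache in front of slower disk storage."""
    def __init__(self, disk_store):
        self.disk_store = disk_store
        self.cache = {}                      # local, in-memory cache

    def get(self, key):
        if key in self.cache:                # cache hit: fast path
            return self.cache[key]
        value = self.disk_store[key]         # cache miss: fall back to disk
        self.cache[key] = value              # remember it for next time
        return value

node = RequestNode(disk_store={"cat.jpg": b"..."})
node.get("cat.jpg")   # miss: reads from "disk" and fills the cache
node.get("cat.jpg")   # hit: served from memory
```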
  24. 2. The request layer is expanded to multiple nodes. Problem: if your load balancer randomly distributes requests across the nodes, the same request will go to different nodes, thus increasing cache misses. There are two choices for overcoming this hurdle.
  25. 3. Global caches. All the nodes use the same single cache space.
  26. Each of the request nodes queries the cache in the same way it would a local one. It can get a bit complicated because it is very easy to overwhelm a single cache as the number of clients and requests increases, but it is very effective in some architectures. There are two common forms of global cache (who is responsible for retrieval?).
  27. 3'. Global caches
  28. Most applications tend to use the first type, where the cache itself is responsible for retrieval. However, there are some cases where the second implementation makes more sense: if the cache is being used for very large files, a low cache hit percentage would cause the cache buffer to become overwhelmed with cache misses.
  29. 4. Distributed caches. Each of the nodes owns part of the cached data.
  30. If a refrigerator acts as a cache for the grocery store, a distributed cache is like putting your food in several convenient locations for retrieving snacks from, without a trip to the store. Typically the cache is divided up using a consistent hashing function. Advantage: the increased cache space that can be had just by adding nodes to the request pool. Disadvantage: remedying a missing node. Some distributed caches get around this by storing multiple copies of the data on different nodes; however, you can imagine how this logic can get complicated quickly, especially when you add or remove nodes from the request layer. Problem: what to do when the data is not in the cache?
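A toy sketch of a distributed cache along those lines (a real system would use consistent hashing as noted above; plain modulo hashing and in-process dicts are used here only to keep the example short), including the multiple-copies workaround for a missing node:

```python
import hashlib

class DistributedCache:
    """Keyspace split across cache nodes; each key is stored on `copies` nodes."""
    def __init__(self, num_nodes, copies=2):
        self.nodes = [{} for _ in range(num_nodes)]   # each dict stands in for one cache node
        self.copies = copies

    def _owners(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return [(h + i) % len(self.nodes) for i in range(self.copies)]

    def set(self, key, value):
        for n in self._owners(key):
            self.nodes[n][key] = value

    def get(self, key):
        for n in self._owners(key):          # try each replica in turn
            if key in self.nodes[n]:
                return self.nodes[n][key]
        return None                          # not cached: the caller must fetch from the origin

cache = DistributedCache(num_nodes=4)
cache.set("cat.jpg", b"...")
cache.get("cat.jpg")
```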
  31. A popular open source cache is Memcached.
  32. Proxies: used to filter requests, log requests, or sometimes transform requests.
  33. Collapsed forwarding. One way to use a proxy to speed up data access is to collapse the same (or similar) requests together into one request, and then return the single result to the requesting clients.
  34. There is some cost associated with this design, since each request can have slightly higher latency, and some requests may be slightly delayed to be grouped with similar ones. But it will improve performance in high-load situations, particularly when the same data is requested over and over.
  35. We can also collapse requests for data that is spatially close together in the origin store (stored consecutively on disk). We can set up our proxy to recognize the spatial locality of the individual requests, collapsing them into a single request and returning only "bigB", greatly minimizing the reads from the data origin.
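One way collapsed forwarding might be implemented, sketched with Python threads (the names are invented; `origin_fetch` stands in for the slow read against the data origin): the first request for a key becomes the leader and performs the single origin read, and any concurrent requests for the same key simply wait for that result.

```python
import threading

class CollapsingProxy:
    """Collapses concurrent identical requests into a single origin read."""
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch      # slow call to the data origin
        self.in_flight = {}                   # key -> (event, result holder)
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            if key in self.in_flight:         # someone is already fetching this key
                event, holder = self.in_flight[key]
                leader = False
            else:
                event, holder = threading.Event(), {}
                self.in_flight[key] = (event, holder)
                leader = True
        if leader:
            holder["value"] = self.origin_fetch(key)   # the one real read
            with self.lock:
                del self.in_flight[key]
            event.set()                       # wake everyone who piggybacked
        else:
            event.wait()                      # small extra latency, but no extra origin read
        return holder["value"]
```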
  36. It is best to put the cache in front of the proxy, for the same reason that it is best to let the faster runners start first in a crowded marathon race. Examples: Squid, Varnish.
  37. Indexes: used to find the correct physical location of the desired data. In the case of data sets that are many TBs in size but with very small payloads (e.g., 1 KB), indexes are a necessity for optimizing data access.
  38. Often there are many layers of indexes that serve as a map, moving you from one location to the next, and so forth, until you get the specific piece of data you want. Indexes can also be used to create several different views of the same data.
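A minimal sketch of such an index (the file name and record layout are made up): a map from each key to the byte offset of its record in one large data file, so a lookup seeks straight to the payload instead of scanning terabytes.

```python
index = {}

# Build the index once by scanning the data file (assumed format: "key,payload\n").
with open("images.dat", "rb") as f:
    offset = 0
    for line in f:
        key = line.split(b",", 1)[0]
        index[key] = offset                  # remember where this record starts
        offset += len(line)

def lookup(key):
    """Seek directly to the record's physical location instead of scanning."""
    with open("images.dat", "rb") as f:
        f.seek(index[key])
        return f.readline()
```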
  39. Load balancers: used to distribute load across a set of nodes responsible for servicing requests. Their main purpose is to handle a lot of simultaneous connections and route those connections to one of the request nodes, allowing the system to scale to service more requests by just adding nodes.
  40. There are many different algorithms for servicing requests. Load balancers can be implemented as software or hardware appliances. A popular open source software load balancer is HAProxy.
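As an illustration of the simplest of those algorithms, round robin (node names are invented; real balancers such as HAProxy also track node health and connection counts):

```python
import itertools

class RoundRobinBalancer:
    """Hands each incoming request to the next node in the pool."""
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)   # endless rotation over the pool

    def route(self, request):
        node = next(self._cycle)
        return node, request                   # the chosen node services the request

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
for i in range(5):
    print(lb.route(f"GET /image/{i}"))         # node-a, node-b, node-c, node-a, ...
```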
  41. Multiple load balancers. Like proxies, some load balancers can also route a request differently depending on the type of request it is. (Technically these are also known as reverse proxies.)
  42. Queues: effective management of writes. Data may have to be written to several places on different servers or indexes, or the system could just be under high load. In the cases where writes, or any task for that matter, may take a long time, achieving performance and availability requires building asynchrony into the system.
  43. When the server receives more requests than it can handle, each client is forced to wait for the other clients' requests to complete before a response can be generated. This kind of synchronous behavior can severely degrade client performance.
  44. Solving this problem effectively requires abstraction between the client's request and the actual work performed to service it. When clients submit task requests to a queue they are no longer forced to wait for the results; instead they need only an acknowledgement that the request was properly received.
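A bare-bones sketch of that idea using Python's in-process queue (a production system would use one of the brokers listed on the next slide): the client gets an acknowledgement immediately, and a background worker performs the slow write later.

```python
import queue
import threading
import time

tasks = queue.Queue()

def submit(task):
    """Client-facing call: enqueue the work and acknowledge receipt right away."""
    tasks.put(task)
    return "accepted"

def worker():
    """Background consumer that drains the queue and does the slow work."""
    while True:
        task = tasks.get()
        time.sleep(0.1)                  # stand-in for a slow, multi-server write
        print("completed:", task)
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()
print(submit("write image cat.jpg"))     # returns immediately with an ack
tasks.join()                             # demo only: wait for the background work to finish
```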
  45. There are quite a few open source queues like RabbitMQ, ActiveMQ, and BeanstalkD, but some also use services like Zookeeper, or even data stores like Redis.
  46. Yes, this is barely scratching the surface. :)
  47. References: The Architecture of Open Source Applications, http://www.aosabook.org/en/distsys.html; https://www.lib.uwo.ca/blogs/education/phoneflickr.jpg
  48. Thank you
