Scalable Web Architecture and Distributed Systems
The Architecture of Open Source Applications II, Ver 1.0
People Who Dream of Becoming Architects (cafe.naver.com/architect1)
Hyun Soo-myung, soomong.net #soomong
Scalable? At a primitive level, it is just connecting users with remote resources via the internet. The part that makes it scalable is that the resources, or access to those resources, are distributed across multiple servers.
Key principles: the basis for decisions in designing web architecture.
- Availability
- Scalability
- Performance
- Manageability
- Reliability
- Cost
These can be at odds with one another:
- high scalability (more servers) vs. low manageability (you have to operate the additional servers)
- high scalability (more servers) vs. high cost (the price of the servers)
Key principles
- Availability: the uptime of a website; constantly available and resilient to failure.
- Performance: the speed of the website; fast responses and low latency, delivered consistently.
- Reliability: a request for data always returns the same data.
Key principles
- Scalability: how much traffic can it handle? How easy is it to add more storage?
- Manageability: how easy is it to operate? How easy is it to diagnose problems?
- Cost: hardware, software, deployment, and maintenance; the cost to build, operate, and train.
What are the right pieces? How do these pieces fit together? What are the right tradeoffs?
Example: an image hosting application. Users can upload images, and images can be requested via a web page or an API.
Other important aspects:
- no limit to the number of images
- low latency for downloads
- if an image is uploaded, it should always be there
- easy to maintain
- cost-effective
Let's assume that this application has two key parts:
- upload (write)
- query (read)
Problem: longer writes will impact the time it takes to read the images (since the two functions will be competing for shared resources). The write function will almost always be slower than reads.
Problem: a web server has an upper limit on the number of connections (reads are fine, but writes become the bottleneck).
Services: split the reads and writes of images out into their own services.
Service-Oriented Architecture (SOA): it allows each piece to scale independently of the others, much as OOP separates concerns. Today, all requests to upload and retrieve images are processed by the same server; instead, break these two functions out into their own services.
This allows us to scale each of them independently (systems always do more reading than writing). Flickr solves this read/write issue by distributing users across different shards, such that each shard can handle a set number of users.
Problem: what happens when a failure occurs?
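The user-sharding idea above can be sketched in a few lines. This is a hypothetical simplification, not Flickr's actual scheme; the shard count and naming are assumptions, and real systems keep a lookup table rather than a bare modulo so shards can be rebalanced.

```python
# Hypothetical sketch: deterministically assign each user to a shard,
# so every shard handles a bounded set of users.
NUM_SHARDS = 4  # assumed value for illustration

def shard_for_user(user_id: int) -> str:
    """Map a user to one shard; the same user always lands on the same shard."""
    return f"shard-{user_id % NUM_SHARDS}"
```

All of a user's images then live on (and are served from) that user's shard.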
Redundancy: keep backups and copies. If one fails, the system can fail over to the healthy copy.
Shared-nothing architecture:
- each node is able to operate independently
- there is no central brain
- new nodes can be added without special conditions or knowledge
- such systems are much more resilient to failure
Problem: there may be very large data sets that are unable to fit on a single server.
Partitions: two choices, scale vertically or scale horizontally.
To scale vertically means adding more resources to an individual server:
- adding more hard drives to a single server
To scale horizontally means adding more nodes:
- adding more servers
- it should be included as an intrinsic design principle of the system architecture
- break up your services into partitions, or shards
- (for example, by geographic boundaries, or non-paying users vs. paying users)
An image's name could be formed from a consistent hashing scheme mapped across servers.
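A minimal sketch of the consistent hashing idea mentioned above, assuming hypothetical server names. Each server appears many times on a hash ring (virtual nodes) so keys spread evenly; a key belongs to the first virtual node clockwise from its hash, which is why adding a server only remaps roughly 1/N of the keys.

```python
import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    """Stable hash (md5) so the mapping survives process restarts."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Map image names onto servers via a ring of virtual nodes."""

    def __init__(self, servers, vnodes=100):
        # Place each server at many points on the ring for even spread.
        self._ring = sorted((_hash(f"{s}#{i}"), s)
                            for s in servers for i in range(vnodes))
        self._points = [h for h, _ in self._ring]

    def server_for(self, image_name: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect_right(self._points, _hash(image_name)) % len(self._ring)
        return self._ring[idx][1]
```

The same technique reappears later in the distributed-cache discussion.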
Problem: data locality. Without it, the system must perform a costly fetch of the required information across the network.
Problem: inconsistency. Race conditions could occur.
The Building Blocks of Fast and Scalable Data Access
The hard part: scaling access to the data. As systems grow, there are two main challenges: scaling access to the app server and to the database. In a highly scalable application design, the app (or web) server is typically minimized and often embodies a shared-nothing architecture. This makes the app server layer of the system horizontally scalable. As a result of this design, the heavy lifting is pushed down the stack to the database server and supporting services; it's at this layer where the real scaling and performance challenges come into play.
Let's assume you have many terabytes of data and you want to allow users to access small portions of that data at random. This is particularly challenging because it can be very costly to load TBs of data into memory; random access that misses memory translates directly into disk I/O.
To make data access a lot faster:
- Caches
- Proxies
- Indexes
- Load balancers
Caches
Principle: recently requested data is likely to be requested again. A cache is like short-term memory. Caches can exist at all levels in an architecture, but are often found at the level nearest to the front end. There are a couple of places you can insert a cache.
1. Insert a cache on your request layer node. Each time a request is made to the service, the node will quickly return local, cached data if it exists. If it is not in the cache, the request node will query the data from disk.
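The read-through behavior just described can be sketched as follows; the class and attribute names are hypothetical, and a dict stands in for both the disk and the cache.

```python
class RequestNode:
    """Hypothetical request-layer node with a local read-through cache."""

    def __init__(self, disk):
        self._disk = disk   # stand-in for the slow backing store (disk)
        self._cache = {}    # local short-term memory
        self.misses = 0

    def get(self, key):
        if key in self._cache:   # cache hit: fast local return
            return self._cache[key]
        self.misses += 1         # cache miss: fall through to disk
        value = self._disk[key]
        self._cache[key] = value
        return value
```

After the first fetch, repeated requests for the same key never touch the disk again.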
2. The request layer is expanded to multiple nodes.
Problem: if your load balancer randomly distributes requests across the nodes, the same request will go to different nodes, thus increasing cache misses. There are two choices for overcoming this hurdle.
3. Global caches: all the nodes use the same single cache space.
Each of the request nodes queries the cache in the same way it would a local one. It can get a bit complicated, because it is very easy to overwhelm a single cache as the number of clients and requests increases. But it is very effective in some architectures. There are two common forms of global cache, distinguished by who is responsible for retrieval on a cache miss.
Most systems tend to use the first type. However, there are some cases where the second implementation makes more sense: if the cache is being used for very large files, a low cache-hit percentage would cause the cache buffer to become overwhelmed with cache misses.
4. Distributed caches: each node owns part of the cached data.
If a refrigerator acts as a cache to the grocery store, a distributed cache is like putting your food in several convenient locations for retrieving snacks, without a trip to the store. Typically the cache is divided up using a consistent hashing function.
Advantage: the cache space can be increased just by adding nodes to the request pool.
Disadvantage: remedying a missing node. Some distributed caches get around this by storing multiple copies of the data on different nodes; however, you can imagine how this logic can get complicated quickly, especially when you add or remove nodes from the request layer.
Problem: what to do when the data is not in the cache?
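A toy sketch of the multiple-copies workaround, under assumed simplifications: nodes are plain dicts, placement is by hash (real systems would use consistent hashing as noted above), and each entry is written to two successive nodes so losing one node does not lose the value.

```python
class DistributedCache:
    """Hypothetical partitioned cache storing each entry on `copies` nodes."""

    def __init__(self, num_nodes, copies=2):
        self.nodes = [dict() for _ in range(num_nodes)]
        self.copies = copies

    def _node_ids(self, key):
        # Primary node by hash, replicas on the following nodes.
        start = hash(key) % len(self.nodes)
        return [(start + i) % len(self.nodes) for i in range(self.copies)]

    def put(self, key, value):
        for nid in self._node_ids(key):
            self.nodes[nid][key] = value

    def get(self, key):
        for nid in self._node_ids(key):
            if key in self.nodes[nid]:
                return self.nodes[nid][key]
        return None  # cache miss: the caller must fetch from the origin
```

If the primary copy disappears, `get` falls through to the surviving replica.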
Proxies: to filter requests, log requests, or sometimes transform requests.
Collapsed forwarding: one way to use a proxy to speed up data access is to collapse the same (or similar) requests together into one request, and then return the single result to the requesting clients.
There is some cost associated with this design, since each request can have slightly higher latency, and some requests may be slightly delayed to be grouped with similar ones. But it will improve performance in high-load situations, particularly when the same data is requested over and over.
Another use is to collapse requests for data that is spatially close together in the origin store (stored consecutively on disk). We can set up our proxy to recognize the spatial locality of the individual requests, collapsing them into a single request and returning only bigB, greatly minimizing the reads from the data origin.
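A minimal sketch of collapsed forwarding for identical requests (the simpler of the two cases above; the class name and locking scheme are assumptions). Concurrent requests for the same key elect one leader that performs the single origin fetch; the other requests wait on an event and reuse its result.

```python
import threading

class CollapsingProxy:
    """Hypothetical proxy: concurrent identical requests share one fetch."""

    def __init__(self, origin_fetch):
        self._fetch = origin_fetch   # callable that hits the data origin
        self._inflight = {}          # key -> Event for the in-progress fetch
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            event = self._inflight.get(key)
            if event is None:                 # first arrival becomes leader
                event = threading.Event()
                event.result = None
                self._inflight[key] = event
                leader = True
            else:                             # duplicate: piggyback
                leader = False
        if leader:
            event.result = self._fetch(key)   # the single fetch to the origin
            with self._lock:
                del self._inflight[key]
            event.set()
            return event.result
        event.wait()                          # followers reuse the result
        return event.result
```

This is where the slight extra latency mentioned above comes from: followers block until the leader's fetch completes.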
It is best to put the cache in front of the proxy, for the same reason that it is best to let the faster runners start first in a crowded marathon race. Examples: Squid, Varnish.
Indexes: to find the correct physical location of the desired data. In the case of data sets that are many TBs in size, but with very small payloads (e.g., 1 KB), indexes are a necessity for optimizing data access.
There are often many layers of indexes that serve as a map, moving you from one location to the next, and so forth, until you get the specific piece of data you want. Indexes can also be used to create several different views of the same data.
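The layered-map idea can be illustrated with two tiny in-memory indexes; the data and dictionary names are invented for the example. The first layer maps a word to the books containing it, the second maps each (book, word) pair to a page, so a lookup walks from one layer to the next.

```python
# Layer 1: word -> books that contain it (hypothetical data).
word_to_books = {"scalability": ["book-A", "book-B"], "caching": ["book-B"]}

# Layer 2: (book, word) -> page number within that book.
book_page = {("book-A", "scalability"): 12,
             ("book-B", "scalability"): 7,
             ("book-B", "caching"): 3}

def locate(word):
    """Walk both index layers to find every (book, page) location of a word."""
    return [(book, book_page[(book, word)])
            for book in word_to_books.get(word, [])]
```

Without the first layer, finding a word would mean scanning every book; each layer narrows the search before any data is read.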
Load balancers: to distribute load across a set of nodes responsible for servicing requests. Their main purpose is to handle a lot of simultaneous connections and route those connections to one of the request nodes, allowing the system to scale to service more requests by just adding nodes.
There are many different routing algorithms (random, round robin, and so on). Load balancers can be implemented as software or hardware appliances. A popular open source software load balancer is HAProxy.
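As a sketch of the simplest routing algorithm, round robin, assuming hypothetical node names:

```python
import itertools

class RoundRobinBalancer:
    """Hypothetical load balancer cycling requests across request nodes."""

    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)  # endless rotation over nodes

    def route(self, request):
        node = next(self._cycle)  # pick the next node in rotation
        return node, request
```

Real balancers like HAProxy add health checks, weights, and connection counting on top of a policy like this one.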
Multiple load balancers: like proxies, some load balancers can also route a request differently depending on the type of request it is. (Technically these are also known as reverse proxies.)
Queues: for effective management of writes. Data may have to be written to several places on different servers or indexes, or the system could just be under high load. In cases where writes, or any task for that matter, may take a long time, achieving performance and availability requires building asynchrony into the system.
When the server receives more requests than it can handle, each client is forced to wait for the other clients' requests to complete before a response can be generated. This kind of synchronous behavior can severely degrade client performance.
Solving this problem effectively requires abstraction between the client's request and the actual work performed to service it. When clients submit task requests to a queue, they are no longer forced to wait for the results; instead they need only an acknowledgement that the request was properly received.
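A minimal sketch of that pattern using the standard library, with invented function names: `submit` acknowledges immediately, while a background worker drains the queue and performs the slow work asynchronously.

```python
import queue
import threading

tasks = queue.Queue()   # the buffer between clients and the slow work
completed = []          # stand-in for the effects of the actual writes

def worker():
    """Drain the queue; a None task is the shutdown sentinel."""
    while True:
        task = tasks.get()
        if task is None:
            break
        completed.append(task)  # stand-in for the slow write itself
        tasks.task_done()

def submit(task):
    """Enqueue and return immediately; the client does not wait for the work."""
    tasks.put(task)
    return "accepted"   # the only thing the client needs: an acknowledgement
```

Production systems replace this in-process queue with a broker such as RabbitMQ so the queue survives restarts and is shared across servers.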
There are quite a few open source queues, like RabbitMQ, ActiveMQ, and BeanstalkD, but some systems also use services like ZooKeeper, or even data stores like Redis.