So, why is scalability important? We know that we need to deal with failures: our product needs to function even when hardware fails. But just as importantly, it needs to function when the product is highly successful! Designing the application for scalability allows us to deal with the success of our product gracefully.
Since data is at the heart of all these products, the ones we design now and the ones we will design in future, and since the amount of data available to us is exploding, we need to make sure that our product handles this exploding data well. And that's where Hadoop really shines.
What sets Hadoop apart from traditional paradigms and frameworks is that Hadoop makes it deceptively simple to write these big-data applications. All the programmer has to do is specify a couple of sequential computations, and Hadoop takes care of the rest of the messy parallel execution details. So, while developing Hadoop applications, we sometimes tend to forget that we are writing a parallel application. But, believe me, if one forgets that this application is a parallel program, it will bite us in the end. So, when thinking about scalability of Hadoop applications, we need to think about scalability of parallel programs.
There are two ways of thinking about scalability of parallel programs. One: when we have more computational resources at hand, we should be able to process proportionately bigger data. Two: when we are dealing with fixed-size data, we should be able to process that data in a proportionately shorter amount of time. When the speedup we get is proportional to the number of computational resources we throw at the application, it is called linear scalability.
These two definitions of linear scalability correspond to two goals of scalability. The first is to reduce latency by minimizing program execution time. That way, data can be processed quickly, and we can do something else with the resources available.
The second goal is to be able to process more and more data with additional machines, without having to rewrite our application. So, to reason about scalability towards both these goals, all we need to know is…
The three equations. And since I don’t have much to say on this slide, I hope you like my choice of three cute little lion cubs here
The first equation is from 1967, by Gene Amdahl, who was at IBM at the time. What this law tells you is that if alpha is the fraction of sequential computation in your code, then you can never achieve a speedup beyond one divided by alpha. So, if 5% of your code has to execute sequentially, you will never speed up beyond 20, no matter how many machines you add.
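To make that arithmetic concrete, here is a tiny Python sketch of the law (my own illustration, not part of the talk):

```python
def amdahl_speedup(alpha, n):
    """Speedup on n processors when a fraction alpha of the work is sequential."""
    return 1.0 / (alpha + (1.0 - alpha) / n)

# With 5% sequential work, speedup saturates near 1/alpha = 20,
# no matter how many machines we add.
print(round(amdahl_speedup(0.05, 100), 1))    # 16.8
print(round(amdahl_speedup(0.05, 10000), 1))  # 20.0
```

Note how going from 100 to 10,000 machines buys almost nothing once the sequential fraction dominates.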
In real parallel applications, of course, there are multiple phases, and each phase can use a different number of computational resources. For those, Amdahl's law can be restated as…
This. And in case you start wondering how or why this relates to Hadoop, …
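Written out (my reconstruction from the slide's definitions: computation C split into parts C1..CN, where part Ci takes a fraction f_i of the sequential execution time and can be sped up by a factor S_i):

```latex
\text{Speedup} \;=\; \frac{1}{\sum_{i=1}^{N} f_i / S_i},
\qquad \sum_{i=1}^{N} f_i = 1
```

With a single parallel part of fraction 1 - alpha and speedup N, this collapses back to the classic form of the law.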
Here is the graphical depiction of the typical map reduce application. Note the sequential parts of computation that happen in job client, startup task and clean up task. Also note that map phase and reduce phase typically have different amount of parallelism in them.
The second equation to know is Little's Law. What this law tells us is that the total number of occupants in a system is the product of their arrival rate and the amount of time they spend there. So, if a university accepts 1000 students every year, and every student spends 4 years at the university on average, then at any time there are 4000 students at the university. In the case of Hadoop, we have to look at it the other way around: we have a fixed number of tasks that can be running at any time within the cluster, so we have to check whether our task arrival rate is consistent with the amount of time each task takes.
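The law is just L = lambda * W; here it is in Python, applied first to the university example from the talk and then read "the other way around" for a cluster (the slot count and task duration below are hypothetical numbers of mine, purely for illustration):

```python
def littles_law_occupancy(arrival_rate, time_in_system):
    """L = lambda * W: average number of occupants in the system."""
    return arrival_rate * time_in_system

# The university example: 1000 students/year, 4 years each.
print(littles_law_occupancy(1000, 4))  # 4000

# Hadoop reading: occupancy L is fixed (the cluster's task slots),
# so lambda = L / W bounds the sustainable task arrival rate.
slots, seconds_per_task = 4000, 40
print(slots / seconds_per_task)  # 100.0 task completions/second, at most
```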
The third equation one has to know is the data transfer cost model. Every message you send from one machine to another has a fixed cost, which we call the latency of the message, alpha, plus a per-byte cost determined by the bandwidth of the channel, beta.
To put it concretely, for a gigabit Ethernet NIC, the latency is about 300 microseconds, and the bandwidth we can expect for the real payload is about 100 megabytes per second. So, if you have to transfer 1 MB between two machines connected by gigabit NICs, and you choose to send it as 100 messages of 10 KB each, it will take you 40 milliseconds. But if you club these messages together and send them as 10 messages of 100 KB each, it will cost you substantially less: about 13 milliseconds.
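The model is simply time = n * (alpha + size / beta); a quick Python check of the two numbers above, using the gigabit figures from the talk:

```python
ALPHA = 300e-6  # per-message latency: 300 microseconds
BETA = 100e6    # payload bandwidth: 100 MB/s

def transfer_time(n_messages, bytes_per_message):
    """Total time for n messages: n * (alpha + size / bandwidth)."""
    return n_messages * (ALPHA + bytes_per_message / BETA)

# Moving 1 MB as 100 x 10 KB versus 10 x 100 KB:
print(round(transfer_time(100, 10_000) * 1000))  # 40 ms
print(round(transfer_time(10, 100_000) * 1000))  # 13 ms
```

The per-byte cost is identical in both cases; the 27 ms difference is pure per-message latency, which is exactly why batching wins.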
One thing to note, though: these alphas and betas are very runtime-dependent. A common mistake is to assume they are constant. When a reduce task starts fetching map output, it has to contact the tasktracker and request the map output. The tasktracker has to get scheduled with a context switch, and the time that takes depends on what else is happening on that machine. With several tasks executing, the tasktracker gets a very small slice of time, and that may result in latencies of several milliseconds. Of course, the bandwidth is divided among however many data transfers are in flight at the time, so even beta is not constant.
So, with these three equations as our tools, let us analyse the causes of sublinear scalability of Hadoop applications.
The first, of course, is sequential bottlenecks. In multi-phase parallel applications, the amount of time spent in low-parallelism phases reduces scalability. The most common mistake is to use a single reducer, which unfortunately is the default setting in Hadoop. So, make sure that you have reasoned about it well and chosen an appropriate number of reducers, by setting the mapred.reduce.tasks config variable or by using the PARALLEL directive in Pig. Another common mistake is choosing the same number of reducers no matter how much data is being handled or how much computation the reducers perform. In my opinion, the default_parallel directive is a mistake in Pig.
The second most common cause of sublinear scalability in Hadoop is load imbalance, which results from unequal partitioning of the work to be done in parallel. The imbalance can be characterized as the maximum amount of work done in a partition minus the minimum amount of work in a partition. Even if one tries to partition the work equally, the computational resources at hand may have different performance. What's worse, that performance is heavily dependent on the state of the resource, and often degrades over time: when a disk gets full, its performance decreases.
A common way to deal with this problem is over-partitioning. If you have N worker nodes, choose a number of partitions M that is a lot bigger than N, about 5 to 10 times bigger. Hadoop uses adaptive scheduling: when a worker is free, it requests new work. This means that the load imbalance inherent in the application or the hardware evens out. Each worker processes some chunk of the available partitions, so the maximum and minimum loads tend towards the average, because you have many more partitions than machines.
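A small simulation makes the effect visible. The worker speeds below are hypothetical (think of the slide's empty disk at 100 MB/s versus a 75%-full disk at 25 MB/s), and the scheduler is the greedy "free worker takes the next partition" rule described above:

```python
import heapq

def relative_imbalance(num_partitions, worker_speeds):
    """Adaptive scheduling of equal-size partitions: whichever worker
    frees up first takes the next one. Returns (max - min) / max of
    the workers' finish times, i.e. the fraction of time wasted idle."""
    heap = [(0.0, w) for w in range(len(worker_speeds))]
    heapq.heapify(heap)
    for _ in range(num_partitions):
        t, w = heapq.heappop(heap)
        heapq.heappush(heap, (t + 1.0 / worker_speeds[w], w))
    times = [t for t, _ in heap]
    return (max(times) - min(times)) / max(times)

speeds = [1.0, 1.0, 0.5, 2.0]         # heterogeneous workers: one slow, one fast
print(relative_imbalance(4, speeds))   # 0.75: one partition per worker
print(relative_imbalance(40, speeds))  # 0.15: 10x over-partitioning
```

With one partition per worker, the fast machine idles while the slow one grinds; with 10x over-partitioning, the fast machine simply processes more chunks and the finish times converge.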
A common problem we see in Hadoop applications is the amount of work that must be done on the critical path of an application. If you express the computation as a directed acyclic graph, the critical path is the longest path in that graph. In Hadoop it rears its ugly head as one task that executes much longer than the rest; make sure you use speculative execution to schedule multiple redundant attempts for such a task. In the framework itself, care has been taken to shorten this critical path as much as possible. If you have a graph where each node is a MapReduce job, make sure that everything not really needed on the critical path gets done asynchronously.
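The critical path of such a job graph is just the longest path through the DAG. Here is a sketch that computes it over a hypothetical four-job workflow (the job names and durations are mine, for illustration):

```python
from collections import defaultdict

def critical_path_length(durations, edges):
    """Longest path through a DAG of jobs.
    durations: {job: run time}; edges: (upstream, downstream) pairs."""
    deps = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        deps[u].append(v)
        indeg[v] += 1
    # Kahn's algorithm, tracking the latest possible finish time per job.
    finish = {j: d for j, d in durations.items()}
    ready = [j for j in durations if indeg[j] == 0]
    while ready:
        u = ready.pop()
        for v in deps[u]:
            finish[v] = max(finish[v], finish[u] + durations[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(finish.values())

jobs = {"extract": 10, "transform": 30, "report": 5, "cleanup": 20}
print(critical_path_length(jobs,
      [("extract", "transform"), ("transform", "report"),
       ("extract", "cleanup")]))  # 45
```

Note that "cleanup" finishes at time 30, well off the 45-unit critical path; that is exactly the kind of work that can safely run asynchronously.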
Of course, when we convert a sequential algorithm to work in parallel, we incur some algorithmic overhead. (If not, we could have just taken the efficient parallel algorithm and emulated it sequentially.) Sometimes we need to explore solutions to our problem in parallel, which would count as wasted work in a sequential algorithm. Try to minimize such wastage. Sometimes having a central coordinator helps, since it knows whether a given computation has already been performed by someone else. When you need to share partial results among other pieces of the computation, you need to keep them available for quick lookup; consider using small, fast key-value stores for this.
Fortunately, there is exactly one synchronization step in a Hadoop job: the shuffle stage. Unfortunately, it uses the slowest system components, namely the network and the disk. So, all your focus should be on speeding up the shuffle stage. If you can avoid the shuffle stage altogether, that's best. But if you cannot, then at least try to reduce the time it takes. Use combiners to partially aggregate the intermediate results and reduce the data sizes, and definitely try to compress the intermediate outputs. One can achieve significant computation-communication overlap in this stage by launching reduce tasks before all the map tasks finish, so that while some map tasks are still executing, the reduce tasks can fetch the outputs of the finished ones.
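To see what a combiner buys you, here is a word-count sketch in plain Python (not Hadoop API code; the map and combine functions just mimic what the framework does on the map side):

```python
from collections import Counter

def map_phase(lines):
    """Emit one (word, 1) record per occurrence: this is what would
    cross the network if no combiner were configured."""
    return [(w, 1) for line in lines for w in line.split()]

def combine(records):
    """Map-side partial aggregation: one record per distinct key."""
    agg = Counter()
    for word, count in records:
        agg[word] += count
    return list(agg.items())

lines = ["the quick brown fox", "the lazy dog", "the end"]
raw = map_phase(lines)
combined = combine(raw)
print(len(raw), len(combined))  # 9 2  -- no: see test; prints record counts
```

Nine raw records shrink to seven, one per distinct word, and the savings grow with how often keys repeat within a map task's input. Because the combine step is associative and commutative, the reducer computes the same final counts either way.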
The message cost model of a fixed latency plus a per-byte bandwidth cost also applies to task scheduling. Task launching overheads are significant: the tasktracker has to ask for the next task, the jobtracker and the scheduler have to do some work to determine which task to hand over, and the tasktracker has to launch the task and clean up after it's done. To amortize this overhead, the task needs to run for some time and process some data. The amount of data each task processes is the best knob for controlling task granularity: try combining multiple small files into a single split, and try increasing the DFS block size to maximize split sizes. If you are performing multiple computations per key-value pair in the data, try combining them into a single task as much as possible. Most high-level frameworks such as Pig already do this for you. So, moving on to the Hadoop best practices…
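The amortization argument is the same alpha-beta formula again, with the scheduling overhead playing the role of alpha. A back-of-the-envelope sketch (the 2-second overhead and 50 MB/s processing rate are hypothetical figures of mine):

```python
def task_efficiency(data_mb, overhead_s=2.0, rate_mb_s=50.0):
    """Fraction of a task's wall time spent on useful work, given a
    fixed per-task scheduling/launch/cleanup overhead."""
    work_s = data_mb / rate_mb_s
    return work_s / (work_s + overhead_s)

# Tiny splits burn almost the whole slot on overhead:
print(round(task_efficiency(1), 3))    # 0.01
print(round(task_efficiency(512), 3))  # 0.837
```

A 1 MB split does useful work about 1% of the time; a 512 MB split pushes that above 80%, which is why bigger splits and coalesced small files matter so much.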
Try using high-level languages such as Pig as much as possible. Try to maximize the data processed per task (but keep enough partitions to have adequate parallelism and avoid load imbalance). Make sure that you are not hitting the disk frequently, by tuning the in-memory buffer sizes. The capacity scheduler allows a task to occupy more than one task slot; please avoid that, since it wastes a lot of resources. Try to reduce the intermediate shuffle size by using combiners and doing local partial aggregation.
Use compression on the intermediate data; it speeds up the shuffle stage. Make sure you compress the output data as well, since it improves the write speed of the reduce tasks. When you are using "secondary inputs", use the distributed cache to distribute them to individual nodes, and make sure that the secondary inputs are much smaller than the primary map inputs. If you are storing per-job state using counters and the like, hand it over to the tasktracker as counters; do not make calls directly to the jobtracker. Also, since the namenode is a singleton, it becomes a sequential bottleneck when called frequently from tasks. Make sure you don't do that.
Scaling Hadoop Applications Milind Bhandarkar Distributed Data Systems Engineer [email@example.com] [twitter: @techmilind] http://www.flickr.com/photos/theplanetdotcom/4878812549/
About Me http://www.linkedin.com/in/milindb Founding member of Hadoop team at Yahoo! [2005-2010] Contributor to Apache Hadoop & Pig since version 0.1 Built and Led Grid Solutions Team at Yahoo! [2007-2010] Center for Development of Advanced Computing (C-DAC), National Center for Supercomputing Applications (NCSA), Center for Simulation of Advanced Rockets, Siebel Systems, Pathscale Inc. (acquired by QLogic), Yahoo!, and LinkedIn http://www.flickr.com/photos/tmartin/32010732/
Outline Scalability of Applications Causes of Sublinear Scalability Best practices Q & A http://www.flickr.com/photos/86798786@N00/2202278303/
Importance of Scaling Zynga’s Cityville: 0 to 100 Million users in 43 days ! (http://venturebeat.com/2011/01/14/zyngas-cityville-grows-to-100-million-users-in-43-days/) Facebook in 2009 : From 150 Million users to 350 Million ! (http://www.facebook.com/press/info.php?timeline) LinkedIn in 2010: Adding a member per second ! (http://money.cnn.com/2010/11/17/technology/linkedin_web2/index.htm) Public WWW size: 26M in 1998, 1B in 2000, 1T in 2008 (http://googleblog.blogspot.com/2008/07/we-knew-web-was-big.html) Scalability allows dealing with success gracefully ! http://www.flickr.com/photos/palmdiscipline/2784992768/
Explosion of Data (Data-Rich Computing theme proposal. J. Campbell, et al, 2007) http://www.flickr.com/photos/28634332@N05/4128229979/
Hadoop Simplifies building parallel applications Programmer specifies Record-level and Group-level sequential computation Framework handles partitioning, scheduling, dispatch, execution, communication, failure handling, monitoring, reporting etc. User provides hints to control parallel execution
Scalability of Parallel Programs If one node processes k MB/s, then N nodes should process (k*N) MB/s If some fixed amount of data is processed in T minutes on one node, then N nodes should process the same data in (T/N) minutes Linear Scalability http://www-03.ibm.com/innovation/us/watson/
Reduce Latency Minimize program execution time http://www.flickr.com/photos/adhe55/2560649325/
Increase Throughput Maximize data processed per unit time http://www.flickr.com/photos/mikebaird/3898801499/
Three Equations http://www.flickr.com/photos/ucumari/321343385/
Amdahl’s Law http://www.computer.org/portal/web/awards/amdahl
Multi-Phase Computations If computation C is split into N different parts, C1..CN If partial computation Ci can be speeded up by a factor of Si http://www.computer.org/portal/web/awards/amdahl
Little’s Law http://sloancf.mit.edu/photo/facstaff/little.gif
Message Cost Model http://www.flickr.com/photos/mattimattila/4485665105/
Message Granularity For Gigabit Ethernet α = 300 μS β = 100 MB/s 100 Messages of 10KB each = 40 ms 10 Messages of 100 KB each = 13 ms http://www.flickr.com/photos/philmciver/982442098/
Alpha-Beta Common Mistake: Assuming that α is constant Scheduling latency for responder Jobtracker/Tasktracker time slice inversely proportional to number of concurrent tasks Common Mistake: Assuming that β is constant Network congestion TCP incast
Causes of Sublinear Scalability http://www.flickr.com/photos/sabertasche2/2609405532/
Sequential Bottlenecks Mistake: Single reducer (default) mapred.reduce.tasks Parallel directive Mistake: Constant number of reducers independent of data size default_parallel http://www.flickr.com/photos/bogenfreund/556656621/
Load Imbalance Unequal Partitioning of Work Imbalance = Max Partition Size – Min Partition Size Heterogeneous workers Example: Disk Bandwidth Empty Disk: 100 MB/s 75% Full disk: 25 MB/s http://www.flickr.com/photos/kelpatolog/176424797/
Over-Partitioning For N workers, choose M partitions, M >> N Adaptive Scheduling, each worker chooses the next unscheduled partition Load Imbalance = Max(Sum(Wk)) – Min(Sum(Wk)) For sufficiently large M, both Max and Min tend to Average(Wk), thus reducing load imbalance http://www.flickr.com/photos/infidelic/3238637031/
Critical Paths Maximum distance between start and end nodes Long tail in Map task executions Enable Speculative Execution Overlap communication and computation Asynchronous notifications Example: deleting intermediate outputs after job completion http://www.flickr.com/photos/cdevers/4619306252/
Algorithmic Overheads Repeating computation in place of adding communication Parallel exploration of solution space results in wastage of computations Need for a coordinator Share partial, unmaterialized results Consider memory-based small replicated K-V stores with eventual consistency http://www.flickr.com/photos/sbprzd/140903299/
Synchronization Shuffle stage in Hadoop, M*R transfers Slowest system components involved: Network, Disk Can you eliminate Reduce stage ? Use combiners ? Compress intermediate output ? Launch reduce tasks before all maps finish http://www.flickr.com/photos/susan_d_p/2098849477/
Task Granularity Amortize task scheduling and launching overheads Centralized scheduler Out-of-band Tasktracker heartbeat reduces latency, but May overwhelm the scheduler Increase task granularity Combine multiple small files Maximize split sizes Combine computations per datum http://www.flickr.com/photos/philmciver/982442098/
Hadoop Best Practices Use higher-level languages (e.g. Pig) Coalesce small files into larger ones & use bigger HDFS block size Tune buffer sizes for tasks to avoid spill to disk Consume only one slot per task Use combiner if local aggregation reduces size http://www.flickr.com/photos/yaffamedia/1388280476/
Hadoop Best Practices Use compression everywhere CPU-efficient for intermediate data Disk-efficient for output data Use Distributed Cache to distribute small side-files (<< than input split size) Minimize number of NameNode/JobTracker RPCs from tasks http://www.flickr.com/photos/yaffamedia/1388280476/