Hadoop as a batch processing system
•Hadoop was designed mainly for running large batch jobs such as web indexing and log mining.
•Users submitted jobs to a queue, and the cluster ran them in order.
•Soon, another use case became attractive:
–Sharing a MapReduce cluster between multiple users.
The benefits of sharing
•With all the data in one place, users can run queries that they might never have been able to execute otherwise.
•Costs go down because overall utilization is higher than it would be with a separate Hadoop cluster for each group.
•However, sharing requires support from the Hadoop job scheduler to
–provide guaranteed capacity to production jobs and
–good response time to interactive jobs while allocating resources fairly between users.
Approaches to Sharing
•FIFO: In FIFO scheduling, the JobTracker pulls jobs from a work queue, oldest job first. This scheduler has no concept of job priority or size.
•Fair: Assign resources to jobs such that, on average over time, each job gets an equal share of the available resources. Jobs that need less time can therefore access the CPU and finish intermixed with jobs that need more time. This allows some interactivity among Hadoop jobs and makes the cluster more responsive to the variety of job types submitted (a small ordering sketch follows this list).
•Capacity: In capacity scheduling, instead of pools, several queues are created, each with a configurable number of map and reduce slots. Each queue is also assigned a guaranteed capacity (the overall capacity of the cluster is the sum of each queue's capacity). Queues are monitored; if a queue is not consuming its allocated capacity, this excess capacity can be temporarily allocated to other queues.
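A toy sketch contrasting which job the FIFO and Fair policies would pick next. This is not Hadoop's scheduler code; the Job fields and example jobs are invented for illustration.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class SchedulingOrder {
        // Toy job record: only the fields the two policies care about.
        static class Job {
            String name;
            long submitTime;    // when the job was submitted
            int runningTasks;   // slots the job currently holds
            Job(String name, long submitTime, int runningTasks) {
                this.name = name; this.submitTime = submitTime; this.runningTasks = runningTasks;
            }
        }

        public static void main(String[] args) {
            List<Job> jobs = new ArrayList<>();
            jobs.add(new Job("nightly-index-build", 100, 40)); // big, old batch job
            jobs.add(new Job("ad-hoc-query", 300, 2));         // small, recent job

            // FIFO: oldest submission first, regardless of size or priority.
            jobs.sort(Comparator.comparingLong((Job j) -> j.submitTime));
            System.out.println("FIFO runs next: " + jobs.get(0).name);

            // Fair (simplified): favor the job furthest below an equal share,
            // i.e. the one holding the fewest slots right now.
            jobs.sort(Comparator.comparingInt((Job j) -> j.runningTasks));
            System.out.println("Fair runs next: " + jobs.get(0).name);
        }
    }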
Fair Scheduler Basics
•Group jobs into “pools”
•Assign each pool a guaranteed minimum share
•Divide excess capacity evenly between pools
•A job's pool is determined from a configurable job property
–Default in 0.20: user.name (one pool per user)
•Pools have properties:
–Minimum map slots
–Minimum reduce slots
–Limit on # of running jobs
Example Pool Allocations
[Diagram: an entire cluster of 100 slots divided among pools, one pool with min share = 40 and another with min share = 30.]
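A small worked version of the numbers in the diagram above (a 100-slot cluster, one pool with min share 40 and one with min share 30): grant each pool its minimum, then split the leftover slots evenly. The pool names are made up, and this is a simplified sketch rather than the scheduler's actual allocation code.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class PoolShares {
        public static void main(String[] args) {
            int clusterSlots = 100;

            // Pool name -> guaranteed minimum share, in slots.
            Map<String, Integer> minShare = new LinkedHashMap<>();
            minShare.put("production", 40);
            minShare.put("research", 30);

            int guaranteed = 0;
            for (int m : minShare.values()) guaranteed += m;
            int excess = clusterSlots - guaranteed;   // 100 - 70 = 30 spare slots
            int bonus = excess / minShare.size();     // split evenly: 15 per pool

            for (Map.Entry<String, Integer> e : minShare.entrySet()) {
                System.out.printf("%s gets %d slots (min %d + %d excess)%n",
                        e.getKey(), e.getValue() + bonus, e.getValue(), bonus);
            }
            // production gets 55 slots, research gets 45 slots
        }
    }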
•Split each pool’s min share among its jobs
•Split each pool’s total share among its jobs
•When a slot needs to be assigned:
–If there is any job below its min share, schedule it
–Else schedule the job that we’ve been most unfair to, based on its “deficit” (see the sketch after this list)
•Weights for unequal sharing:
–Job weights based on priority (each level = 2x)
–Job weights based on size
•Limits on the number of running jobs can also be set per pool
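A condensed sketch of the slot-assignment rule above (min share first, then biggest deficit). The class and field names are invented for illustration; “deficit” here simply stands for the accumulated gap between the share a job was owed and the share it actually received.

    import java.util.Comparator;
    import java.util.List;

    public class SlotAssignment {
        static class RunningJob {
            String name;
            int runningSlots;  // slots the job currently holds
            int minShare;      // the job's share of its pool's guaranteed minimum
            double deficit;    // accumulated (owed share - received share) over time
        }

        // Decide which job gets the next free slot.
        static RunningJob pickJob(List<RunningJob> jobs) {
            // 1. Any job still below its guaranteed minimum share goes first.
            for (RunningJob j : jobs) {
                if (j.runningSlots < j.minShare) {
                    return j;
                }
            }
            // 2. Otherwise pick the job we have been most unfair to so far.
            return jobs.stream()
                       .max(Comparator.comparingDouble((RunningJob j) -> j.deficit))
                       .orElse(null);
        }
    }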
Installing the Fair Scheduler
•Place it on the classpath:
–cp build/contrib/fairscheduler/*.jar lib
•Configuration file (mapred-site.xml):
–Contains scheduler options and a pointer to the pools file
•Pools file (pools.xml)
–Contains min share allocations and limits on pools
–Reloaded every 15 seconds at runtime
•mapred.fairscheduler.assignmultiple option: assign both a map and a reduce task on each heartbeat; improves ramp-up speed and throughput; recommended setting: true
•Pool name property: which JobConf property sets what pool a job is in
-Default: user.name (one pool per user)
-Can make up your own, e.g. “pool.name”, and pass it in the JobConf with conf.set(“pool.name”, “mypool”) (see the sketch below)
-Make pool.name default to user.name, so jobs that don’t set a pool still fall back to one pool per user
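A short job-submission sketch of the custom-property case above. It assumes the cluster has been configured to read the pool name from a property called pool.name; the job and pool names are placeholders.

    import org.apache.hadoop.mapred.JobConf;

    public class SubmitToPool {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            conf.setJobName("log-mining");  // example job name

            // Tell the Fair Scheduler which pool this job belongs to.
            // "pool.name" is whatever property the cluster was told to look at;
            // if the job never sets it, the scheduler falls back to the default
            // (user.name), giving one pool per user.
            conf.set("pool.name", "production");

            // ... set mapper, reducer, input/output paths, then submit as usual ...
        }
    }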
Issues with Fair Scheduler
•Goals:
–Fine-grained sharing at the level of map & reduce tasks
–Predictable response times and user isolation
•The problem:
–For efficiency, tasks must run near their input data
–Strictly following any job queuing policy hurts locality: the job picked by the policy may not have data on the free nodes
•The fix:
–Relax the queuing policy for a limited time to achieve locality (delay scheduling, detailed below)
Delay Scheduling Details
•Scan jobs in order given by queuing policy, picking first that is permitted to launch a task
•Jobs must wait before being permitted to launch non-local tasks
–If wait < T1, only allow node-local tasks
–If T1 < wait < T2, also allow rack-local tasks
–If wait > T2, also allow off-rack tasks
•Increase a job’s time waited whenever it is skipped (see the sketch below)
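A simplified sketch of the loop described above. The thresholds, heartbeat interval, and helper method are assumptions made for illustration; the real scheduler tracks locality and wait times with more bookkeeping than shown here.

    import java.util.List;

    public class DelayScheduling {
        enum Locality { NODE_LOCAL, RACK_LOCAL, OFF_RACK }

        static final long T1 = 5_000;                 // assumed threshold, ms
        static final long T2 = 10_000;                // assumed threshold, ms
        static final long HEARTBEAT_INTERVAL = 3_000; // assumed heartbeat period, ms

        abstract static class PendingJob {
            long waited;  // how long this job has been skipped so far, ms
            // Does the job have a pending task whose data is at least this local
            // to the given node? (Stub: real code consults the job's input splits.)
            abstract boolean hasTaskAtLevel(String node, Locality allowed);
        }

        // Called when 'node' reports a free slot; jobs arrive already sorted
        // by the queuing policy (fair share, FIFO, ...).
        static PendingJob pickJob(List<PendingJob> jobsInPolicyOrder, String node) {
            for (PendingJob job : jobsInPolicyOrder) {
                Locality allowed;
                if (job.waited < T1)      allowed = Locality.NODE_LOCAL; // local data only
                else if (job.waited < T2) allowed = Locality.RACK_LOCAL; // same rack is OK
                else                      allowed = Locality.OFF_RACK;   // anything goes

                if (job.hasTaskAtLevel(node, allowed)) {
                    job.waited = 0;                // launched a task: reset its wait
                    return job;
                }
                job.waited += HEARTBEAT_INTERVAL;  // skipped: let it wait longer
            }
            return null;  // leave the slot idle this heartbeat
        }
    }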
Capacity Scheduler
•Organizes jobs into queues
•Queue shares are set as percentages of the cluster
•FIFO scheduling within each queue
•Queues are monitored; if a queue is not consuming its allocated capacity, this excess capacity can be temporarily allocated to other queues (sketched below)
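A rough sketch of that queue-selection logic, under the assumption that the next queue is the non-empty one furthest below its guaranteed capacity; the class shape is invented for the example and is not the Capacity Scheduler's real code.

    import java.util.ArrayDeque;
    import java.util.List;
    import java.util.Queue;

    public class CapacityPick {
        static class CapacityQueue {
            String name;
            double capacityShare;   // guaranteed fraction of the cluster, e.g. 0.4
            int usedSlots;          // slots currently held by this queue's jobs
            Queue<String> jobs = new ArrayDeque<>();  // job IDs in submission order

            // Ratio of used slots to guaranteed slots; < 1.0 means under-served.
            double utilization(int clusterSlots) {
                return usedSlots / (capacityShare * clusterSlots);
            }
        }

        static String nextJob(List<CapacityQueue> queues, int clusterSlots) {
            CapacityQueue best = null;
            for (CapacityQueue q : queues) {
                if (q.jobs.isEmpty()) continue;  // nothing waiting in this queue
                if (best == null || q.utilization(clusterSlots) < best.utilization(clusterSlots)) {
                    best = q;                    // the most under-served non-empty queue
                }
            }
            // A queue already over its capacity can still win if every other queue
            // is idle -- this is how spare capacity gets temporarily borrowed.
            return best == null ? null : best.jobs.peek();  // FIFO within the queue
        }
    }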