Parallelism-Aware Batch Scheduling
1. Presented by: Ahmed Ibrahim Fayed. Presented to: Dr. Ahmed Morgan.
2. Outline: Introduction, Motivation, Parallelism-Aware Batch Scheduling, Experimental Results.
3. In modern processors, which are multi-core (or multithreaded), the concurrently executing threads share the DRAM system, and different threads running on different cores can delay each other through resource contention. As the number of on-chip cores increases, the pressure on the DRAM system increases, as does the interference among threads sharing the system.
4. Uncontrolled inter-thread interference in DRAM scheduling results in major problems: the DRAM controller can unfairly prioritize some threads while starving more important threads for long time periods as they wait to access memory, and overall system performance suffers.
5. We propose a new approach to providing fair and high-performance DRAM scheduling. Our scheduling algorithm, called parallelism-aware batch scheduling (PAR-BS), is based on two new key ideas: request batching and parallelism-aware DRAM scheduling.
6. DRAM requests are very long latency operations that greatly impact the performance of modern processors. When a load instruction misses in the last-level on-chip cache and needs to access DRAM, the processor cannot commit that (and any subsequent) instruction, because instructions are committed in program order to support precise exceptions.
7. The processor stalls until the miss is serviced by DRAM. Current processors try to reduce the performance loss due to a DRAM access by servicing other DRAM accesses in parallel with it.
8. These techniques strive to overlap the latency of future DRAM accesses with the current access so that the processor does not need to stall (long) for future DRAM accesses. Instead, at an abstract level, the processor stalls once for all overlapped accesses rather than stalling once for each access in a serialized fashion.
9. In a single-threaded, single-core system, a thread has exclusive access to the DRAM banks, so its concurrent DRAM accesses are serviced in parallel as long as they are not to the same bank.
10. Request 1's (Req1) latency is hidden by the latency of Request 0 (Req0), effectively exposing only a single bank access latency to the thread's processing core. Once Req0 is serviced, the core can commit Load 0 and thus enable the decode/execution of future instructions. When Load 1 becomes the oldest instruction in the window, its miss has already been serviced, and therefore the processor can continue computation without stalling.
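To make the overlap concrete, here is a minimal sketch (my own illustration, not code from the slides) of why two outstanding requests to different banks expose roughly one bank access latency, while two requests to the same bank serialize. The BANK_LATENCY constant and bank names are assumptions chosen for illustration.

```python
BANK_LATENCY = 50  # hypothetical cycles per bank access

def exposed_latency(banks):
    """Service one request per listed bank: requests to distinct banks
    proceed in parallel; requests that collide on a bank queue up."""
    busy_until = {}  # bank -> cycle at which the bank becomes free
    finish = 0
    for bank in banks:
        start = busy_until.get(bank, 0)
        busy_until[bank] = start + BANK_LATENCY
        finish = max(finish, busy_until[bank])
    return finish

print(exposed_latency(["Bank 0", "Bank 1"]))  # 50: Req1 hidden behind Req0
print(exposed_latency(["Bank 0", "Bank 0"]))  # 100: same bank, serialized
```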
11. If multiple threads are generating memory requests concurrently (e.g., in a CMP system), modern DRAM controllers schedule the outstanding requests in a way that completely ignores the inherent memory-level parallelism of threads. Instead, current DRAM controllers exclusively seek to maximize the DRAM data throughput, i.e., the number of DRAM requests serviced per second. As we show in this paper, blindly maximizing the DRAM data throughput does not minimize a thread's stall time (which directly correlates with system throughput).
12. The example in Figure 2 illustrates how parallelism-unawareness can result in suboptimal CMP system throughput and increased stall times. We assume two cores, each running a single thread, Thread 0 (T0) and Thread 1 (T1). Each thread has two concurrent DRAM requests caused by consecutive independent load misses (Load 0 and Load 1), and the requests go to two different DRAM banks.
13. With a conventional parallelism-unaware DRAM scheduler, the requests can be serviced in their arrival order, shown in Figure 2. First, T0's request to Bank 0 is serviced in parallel with T1's request to Bank 1. Later, T1's request to Bank 0 is serviced in parallel with T0's request to Bank 1. This service order serializes each thread's concurrent requests and therefore exposes two bank access latencies to each core.
14. As shown in the execution timeline, instead of stalling once (i.e., for one bank access latency) for the two requests, both cores stall twice. Core 0 first stalls for Load 0, and shortly thereafter also for Load 1. Core 1 stalls for its Load 0 for two bank access latencies.
15. In contrast, a parallelism-aware scheduler services each thread's concurrent requests in parallel, resulting in the service order and execution timeline shown in Figure 2. The scheduler preserves bank-parallelism by first scheduling both of T0's requests in parallel, and then T1's requests. This enables Core 0 to execute faster (shown as "Saved cycles" in the figure) as it stalls for only one bank access latency. Core 1's stall time remains unchanged: although its second request (T1-Req1) is serviced later than with a conventional scheduler, T1-Req0 still hides T1-Req1's latency.
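The saved cycles can be reproduced with a small simulation. The sketch below is my reconstruction of the Figure 2 scenario under a simplified model (one fixed latency L per access, each bank a FIFO served in the scheduler's chosen order); L and the request tuples are illustrative assumptions, not values from the paper.

```python
L = 50  # hypothetical bank access latency in cycles

def finish_times(schedule):
    """schedule: (thread, bank) pairs in service order. A request starts
    when its bank is free; returns each thread's last-request completion
    time, a proxy for how long its core stalls."""
    bank_free, done = {}, {}
    for thread, bank in schedule:
        bank_free[bank] = bank_free.get(bank, 0) + L
        done[thread] = max(done.get(thread, 0), bank_free[bank])
    return done

# Conventional (arrival order): each thread's two requests land in
# different rounds, exposing two bank latencies to each core.
conventional = [("T0", "B0"), ("T1", "B1"), ("T1", "B0"), ("T0", "B1")]
# Parallelism-aware: both of T0's requests first, then both of T1's.
par_aware = [("T0", "B0"), ("T0", "B1"), ("T1", "B0"), ("T1", "B1")]

print(finish_times(conventional))  # {'T0': 100, 'T1': 100}
print(finish_times(par_aware))     # {'T0': 50, 'T1': 100}
```

Under the parallelism-aware order, T0 finishes after one bank latency instead of two, while T1's finish time is unchanged, matching the "Saved cycles" in the figure.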
16. Based on the observation that inter-thread interference destroys the bank-level parallelism of the threads running concurrently on a CMP and therefore degrades system throughput, we incorporate parallelism-awareness into the design of our fair and high-performance memory access scheduler.
17. Our PAR-BS controller is based on two key principles. The first principle is parallelism-awareness. To preserve a thread's bank-level parallelism, a DRAM controller must service a thread's requests (to different banks) back to back (that is, one right after another, without any interfering requests from other threads). This way, each thread's request service latencies overlap.
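A minimal sketch of this principle, assuming a shortest-job-first style thread ranking (the slides do not spell out the exact heuristic, so the ranking key below is an assumption): within a batch, all of one thread's requests are emitted back to back so their bank access latencies can overlap.

```python
from collections import defaultdict

def rank_threads(batch):
    """batch: (thread, bank) pairs. Rank lighter threads first: smaller
    maximum per-bank load, then fewer total requests (an assumed,
    shortest-job-first style key, not necessarily PAR-BS's exact rule)."""
    per_bank = defaultdict(lambda: defaultdict(int))
    total = defaultdict(int)
    for thread, bank in batch:
        per_bank[thread][bank] += 1
        total[thread] += 1
    return sorted(total, key=lambda t: (max(per_bank[t].values()), total[t]))

def parallelism_aware_order(batch):
    """Group the batch by thread rank so each thread's requests are
    serviced back to back, preserving its bank-level parallelism."""
    order = []
    for thread in rank_threads(batch):
        order += [req for req in batch if req[0] == thread]
    return order

batch = [("T0", "B0"), ("T1", "B1"), ("T1", "B0"), ("T0", "B1")]
print(parallelism_aware_order(batch))
# [('T0', 'B0'), ('T0', 'B1'), ('T1', 'B1'), ('T1', 'B0')]
```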
18. The second principle is request batching. If performed greedily, servicing requests from a thread back to back could cause unfairness and even request starvation. To prevent this, PAR-BS groups a fixed number of the oldest requests from each thread into a batch and services the requests from the current batch before all other requests.
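A minimal sketch of batch formation, assuming a hypothetical per-thread cap (MARKING_CAP, a name I introduce here) on how many of a thread's oldest requests get marked; once formed, marked requests are serviced before all unmarked ones, which bounds how long any thread can be starved.

```python
from collections import defaultdict

MARKING_CAP = 5  # hypothetical per-thread limit on marked requests

def form_batch(queue):
    """queue: outstanding requests as (arrival_time, thread, bank).
    Marks up to MARKING_CAP of each thread's oldest requests as the
    current batch; everything else waits for the next batch."""
    taken = defaultdict(int)
    batch, remainder = [], []
    for req in sorted(queue):  # oldest first
        _, thread, _ = req
        if taken[thread] < MARKING_CAP:
            taken[thread] += 1
            batch.append(req)
        else:
            remainder.append(req)
    return batch, remainder

queue = [(t, "T0", "B0") for t in range(8)] + [(9, "T1", "B1")]
batch, rest = form_batch(queue)
print(len(batch), len(rest))  # 6 3: five of T0's oldest plus T1's request
```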
19. Questions!
