DIOS: Dynamic Instrumentation for (not so) Outstanding Scheduling
Blake Sutton & Chris Sosa
Presentation for an OS class on DIOS, our scheduling system, which used real-time attributes gathered from the hardware to change scheduling behavior.

Speaker notes:
  • Long-running, short-running, memory-intensive, CPU-bound… we don’t know what kind of jobs to expect. So how can the scheduler put them where they should be if it doesn’t know these things? Transition: Wouldn’t it be nice if the scheduler could just “handle it” – without the user having to specify the characteristics of their jobs in advance?
  • Our approach to this problem is DIOS – an adaptive distributed scheduler. Describe diagram. Transition: So you must be thinking…wait, how are you going to just “gather application-specific info”?
  • The answer is – we’ll write a tool with Pin, a dynamic instrumentation framework. Describe diagram – how it’s a mini VM. Describe points for inserting instrumentation and the tradeoffs – routine level, instruction level.
  • So we’ve established that Pin is a tool for what we want to do – dynamically instrument applications. But what code do we want to insert? What are we looking to get from our Pintool? Since we are trying to detect and avoid memory contention between processes, it makes sense to study the memory behavior of the applications. To this end, we chose three things to start with (describe them). The figure to the side shows how the Pintool fits into our overall plan – it collects information for each application and reports the results to Hare, the local scheduler. Then Hare, which is also monitoring the memory subsystem of the local machine, reports to Rhino, and Rhino decides what to do.
  • Considering our motivation, it was important to evaluate DIOS on a somewhat realistic workload. Since most long-running jobs on clusters seem to be scientific applications, we wanted to use real scientific benchmarks. Describe benchmarks. To evaluate the scheduler, we measured the total runtime from… Then, to evaluate our Pintool, we measured the overhead of running each application under it, and we tracked the information we collected over time to see whether we could correlate it with interesting behavior or with differences between programs.
  • Potential for improvement – we saw this in our baseline, using a simple policy to react to the presence of memory contention. We might be able to get even better results on long-running jobs with better information about the running processes (like the information we could get from dynamic instrumentation!).
  • But on the other hand, there’s the bad. Although our scheduler works perfectly well with the Pintool, we discovered that the overhead introduced by Pin is just too high. Some of our overhead results are below – we show the time to run each application natively, with Pin (no Pintool), with a tool that only counts instructions, and with each of our three metrics. Our original plan for the overhead problem was to instrument only when we needed to – for example, when the scheduler decided the machine was performing badly. Then the relatively high cost of the analysis wouldn’t have much impact overall. However, we were unable to get the performance gains we hoped for: Pin doesn’t offer the ability to completely attach to and detach from a running program, only to attach, and when we tried to add and remove instrumentation dynamically we lost the gains from code caching. So while this idea could work with another system or with a newer Pin, we couldn’t manage to bring the overhead down.
  • But on the bright side, at least it collected some interesting information. Note how similar the patterns of LU and heatedplate are – talk about how that’s probably because they are tightly looped and very repetitive, whereas Ocean is clearly performing a more irregular and complex analysis with some distinct phases in it. There is also the possibility of using the variation in a metric like this to “predict the predictability” – to separate applications that are better left alone from those that common heuristics can probably handle safely.
  • So – the future of DIOS.
  • Questions?
  • Kind of...but no comprehensive solution.

Slide 1: DIOS: Dynamic Instrumentation for (not so) Outstanding Scheduling (Blake Sutton & Chris Sosa)

Slide 2: Motivation
  • Scheduling jobs on a group of machines
    • Cluster
    • Distributed operating system
  • Don’t know what to expect at submission time!
  • Memory contention
  • Migrate processes away to a better place...

Slide 3: Approach: Adaptive Distributed Scheduler
  • Monitor machines and processes to motivate migration decisions.
  • Gather application-specific info and feed it to local schedulers.
  • The global scheduler collects the local schedulers’ observations and uses information on all machines and all applications to make decisions.
    • Migrate? Which one? Where?
    • Pause? Which one? How long?
Slide 4: Dynamic Instrumentation with Pin
  • Insert new code into apps on the fly
    • No recompile
    • Operates on a copy
    • Code cache
  • Our Pintool
    • Routine-level
    • Instruction-level
Slide 5: Application-Specific Information
  • Want to capture memory behavior over time
  • We gathered:
    • Ratio of malloc to free calls
    • Wall-clock time to execute 10,000,000 insns
    • Number of memory ops in the last 2,000,000 insns
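As a rough illustration of how a Pintool can gather metrics like these, here is a minimal sketch using the standard Pin C++ API: routine-level instrumentation supplies the malloc/free ratio, and instruction-level instrumentation supplies the instruction and memory-op counts. The window handling, variable names, and stderr reporting are illustrative assumptions, not the actual DIOS tool (which, for example, used a separate 2,000,000-instruction window for memory ops and reported to Hare rather than printing).

```cpp
// Hypothetical sketch of a Pintool gathering these metrics (not the actual DIOS tool).
#include "pin.H"
#include <sys/time.h>
#include <iostream>

static UINT64 insCount = 0;      // instructions in the current window
static UINT64 memOps = 0;        // memory operands seen in the current window
static UINT64 mallocCalls = 0;
static UINT64 freeCalls = 0;
static double lastReport = 0.0;  // wall-clock time of the previous report

static double Now() {
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

// Analysis routines: run inline with the application.
static VOID OnMalloc() { mallocCalls++; }
static VOID OnFree()   { freeCalls++; }

static VOID OnIns(UINT32 memOperands) {
    insCount++;
    memOps += memOperands;
    if (insCount == 10000000) {  // report once per 10,000,000 instructions
        double now = Now();
        std::cerr << "window wall-clock: " << (now - lastReport) << " s"
                  << "  mem ops: " << memOps
                  << "  malloc/free: " << mallocCalls << "/" << freeCalls << std::endl;
        lastReport = now;
        insCount = 0;
        memOps = 0;  // the real tool kept a separate 2,000,000-insn window for mem ops
    }
}

// Routine-level instrumentation: hook malloc and free by name (simplistic matching).
static VOID Routine(RTN rtn, VOID *v) {
    if (RTN_Name(rtn) == "malloc") {
        RTN_Open(rtn);
        RTN_InsertCall(rtn, IPOINT_BEFORE, (AFUNPTR)OnMalloc, IARG_END);
        RTN_Close(rtn);
    } else if (RTN_Name(rtn) == "free") {
        RTN_Open(rtn);
        RTN_InsertCall(rtn, IPOINT_BEFORE, (AFUNPTR)OnFree, IARG_END);
        RTN_Close(rtn);
    }
}

// Instruction-level instrumentation: count every instruction and its memory operands.
static VOID Instruction(INS ins, VOID *v) {
    INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)OnIns,
                   IARG_UINT32, INS_MemoryOperandCount(ins), IARG_END);
}

int main(int argc, char *argv[]) {
    PIN_InitSymbols();                       // needed to resolve routine names
    if (PIN_Init(argc, argv)) return 1;
    lastReport = Now();
    RTN_AddInstrumentFunction(Routine, 0);
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_StartProgram();                      // never returns
    return 0;
}
```

The split between instrumentation callbacks (decide what to insert, run once per routine or instruction) and analysis routines (run every time the inserted code executes) is what makes the instruction-level counters so expensive relative to native execution.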
Slide 6: Evaluation
  • Distributed scheduler
    • Rhino on realitytv16, Hare on realitytv13-16
    • Watches % memory free and restarts the youngest job
    • heatedplate with modified parameters
    • Baseline: queue balancing
  • Pintool
    • 2 applications from SPLASH-2
    • heatedplate
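The local policy in the second bullet fits in a few lines. The sketch below is a hypothetical rendering of "watch % memory free, restart the youngest job"; the Job struct, readFreeMemoryFraction(), restartElsewhere(), and the 10% threshold are illustrative stand-ins, not Hare's or Rhino's real interfaces.

```cpp
// Hypothetical sketch of the "restart the youngest job when memory is scarce" policy.
#include <algorithm>
#include <vector>
#include <ctime>

struct Job {
    int    pid;
    time_t startTime;   // when the job began running on this node
};

// Stand-in source of the node's free-memory fraction (e.g. parsed from /proc/meminfo).
double readFreeMemoryFraction();

// Stand-in action: stop the job here and resubmit it on another node.
void restartElsewhere(const Job &job);

// One pass of the local policy: if free memory drops below a threshold,
// pick the youngest job (the one started most recently) and restart it elsewhere.
void localSchedulerTick(std::vector<Job> &jobs, double freeMemThreshold = 0.10) {
    if (jobs.empty() || readFreeMemoryFraction() >= freeMemThreshold) return;

    auto youngest = std::max_element(
        jobs.begin(), jobs.end(),
        [](const Job &a, const Job &b) { return a.startTime < b.startTime; });

    restartElsewhere(*youngest);
    jobs.erase(youngest);
}
```

Restarting the youngest job sacrifices the least accumulated work, which is why it makes a sensible default when no application-specific information is available.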
Slide 7: The Good
  • Potential for improvement
  • Lower total runtime with simple policy
  • Restart youngest
Slide 8: The Bad
  • Overhead from the Pintool is too high to realize gains
    • Pin isn’t designed for on-the-fly analysis
    • Couldn’t attach / detach
    • Code caching can’t save it
  • Overhead (runtime normalized to native):

      application    native   pin    count only   malloc/free   # mems   latency
      lu             1.00     1.25   6.27         14.51         7.90     7.64
      ocean          1.00     1.48   2.87         7.84          6.04     5.81
      heatedplate    1.00     1.88   2.65         5.43          7.45     7.26
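The on-the-fly scheme described in the notes would look roughly like the sketch below: analysis calls are only inserted while a flag is set, and flipping the flag calls Pin's PIN_RemoveInstrumentation() so the change takes effect. That call discards the translated traces in the code cache, which is exactly where the hoped-for gains went. The flag, the SetMonitoring() helper, and how the scheduler would trigger it are assumptions for illustration, not what DIOS shipped.

```cpp
// Hypothetical sketch of on-demand instrumentation: analysis code is only inserted
// while `monitoring` is set; toggling forces Pin to re-translate everything.
#include "pin.H"

static volatile bool monitoring = false;   // is the expensive analysis active?
static UINT64 memOps = 0;

static VOID CountMemOp() { memOps++; }

// Instrumentation callback: only add analysis calls while monitoring is on.
static VOID Instruction(INS ins, VOID *v) {
    if (!monitoring) return;               // uninstrumented path: near-native code
    if (INS_IsMemoryRead(ins) || INS_IsMemoryWrite(ins)) {
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)CountMemOp, IARG_END);
    }
}

// Would be called when the local scheduler requests (or stops requesting) data.
// Removing instrumentation flushes Pin's cached, translated code, so every toggle
// pays the full re-JIT cost again - the reason the code-cache gains were lost.
static VOID SetMonitoring(bool on) {
    monitoring = on;
    PIN_RemoveInstrumentation();
}

int main(int argc, char *argv[]) {
    if (PIN_Init(argc, argv)) return 1;
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_StartProgram();                    // never returns
    return 0;
}
```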
Slide 9: The “Interesting”
  • The Pintool does capture intriguing info…
Slide 10: Conclusion: the Future of DIOS
  • Overhead is prohibitive – for now
    • Add attach / detach
    • Lighter instrumentation framework
  • But instrumentation can capture aspects of application-specific behavior!
  • Marty was right.
  • Find out the final answer: 9am 5/9, MEC215.
Slide 11: ¿Preguntas? (Questions?)
Slide 12: Wait… hasn’t this been solved?
  • Condor
    • popular user-space distributed scheduler
    • process migration
    • tries to keep queues balanced
      • but jobs have different behavior, both over time and from each other
  • LSF (Load Sharing Facility)
    • monitors the system, moves processes around based on what they need
    • must input static job information (requires profiling, etc., beforehand)
      • what if something about your job isn’t captured by your input?
      • what if you end up giving it margins that are too large? too small?
      • unnecessary inefficiencies?
      • it’s not exactly hassle-free...
  • Hardware feedback
    • PAPI
    • still not very portable (invasive kernel patch to install)
  • Wouldn’t it be nice if the scheduler could just... “do the right thing”?
