Lp seminar
Transcript

  • 1. On-the-Fly Garbage Collection Using Sliding Views Erez Petrank Technion – Israel Institute of Technology Joint work with Yossi Levanoni, Hezi Azatchi, and Harel Paz
  • 2. Garbage Collection
    • User allocates space dynamically; the garbage collector automatically frees the space when it is “no longer needed”.
    • Usually “no longer needed” = unreachable by a path of pointers from program local references (roots).
    • Programmer does not have to decide when to free an object. (No memory leaks, no dereferencing of freed objects.)
    • Built into Java, C#.
  • 3. Garbage Collection: Two Classic Approaches. Reference counting [Collins 1960]: keep a reference count for each object, reclaim objects with count 0. Tracing [McCarthy 1960]: trace reachable objects, reclaim objects not traced. Traditional wisdom: tracing is good, reference counting is problematic.
  • 4. What (was) Bad about RC ?
    • Does not reclaim cycles
    • A heavy overhead on pointer modifications.
    • Traditional belief: “Cannot be used efficiently with parallel processing”
    [Figure: two objects A and B forming a pointer cycle that reference counting cannot reclaim.]
  • 5. What’s Good about RC ?
    • Reference Counting work is proportional to work on creations and modifications.
      • Can tracing deal with tomorrow’s huge heaps?
    • Reference counting has good locality.
    • The Challenge:
      • RC overhead on pointer modification seems too expensive.
      • RC seems impossible to “parallelize”.
  • 6. Garbage Collection Today
    • Today’s advanced environments:
      • multiprocessors + large memories
    [Diagram: dealing with multiprocessors vs. single-threaded, stop-the-world collection.]
  • 7. Garbage Collection Today
    • Today’s advanced environments:
      • multiprocessors + large memories
    [Diagram: dealing with multiprocessors via concurrent collection and parallel collection.]
  • 8. Terminology (stop-the-world, parallel, concurrent, on-the-fly) [Diagram: program and GC activity over time for stop-the-world, parallel (STW), concurrent, and on-the-fly collection.]
  • 9. Benefits & Costs (informal) [Diagram: pause times of roughly 200ms, 20ms, and 2ms and a throughput loss of 10-20% across stop-the-world, parallel (STW), concurrent, and on-the-fly collection.]
  • 10. This Talk
    • Introduction: RC and Tracing, Coping with SMP’s.
    • RC introduction and parallelization problem.
    • Main focus: a novel concurrent reference counting algorithm (suitable for Java).
    • The concurrent collector is made on-the-fly based on “sliding views”.
    • Extensions:
      • cycle collection, mark and sweep, generations, age-oriented.
    • Implementation and measurements on Jikes.
      • Extremely short pauses, good throughput.
  • 11. Basic Reference Counting
    • Each object has an RC field, new objects get o.rc:=1.
    • When p that points to o1 is modified to point to o2, execute: o2.rc++, o1.rc--.
    • If o1.rc == 0 then:
      • Delete o1.
      • Decrement o.rc for every child o of o1.
      • Recursively delete objects whose rc is decremented to 0.
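    As a concrete (hypothetical) illustration of the slide above, here is a minimal C sketch of basic reference counting: the update rule o2.rc++, o1.rc-- and the recursive deletion when a count reaches 0. The Object layout is an assumption for the example, not the paper's representation.

        #include <stddef.h>
        #include <stdlib.h>

        /* Hypothetical object layout: a reference count plus outgoing pointers. */
        typedef struct Object {
            int rc;
            size_t nchildren;
            struct Object **children;
        } Object;

        static void rc_dec(Object *o);

        /* "When p that points to o1 is modified to point to o2": o2.rc++, o1.rc--. */
        static void rc_update(Object **p, Object *o2) {
            Object *o1 = *p;
            if (o2) o2->rc++;
            *p = o2;
            if (o1) rc_dec(o1);
        }

        /* If the count drops to 0: delete the object, decrement its children,
           and recursively delete children whose count also drops to 0. */
        static void rc_dec(Object *o) {
            if (--o->rc == 0) {
                for (size_t i = 0; i < o->nchildren; i++)
                    if (o->children[i]) rc_dec(o->children[i]);
                free(o);   /* stands in for the allocator's reclaim routine */
            }
        }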
  • 12. An Important Term:
    • A write barrier is a piece of code executed with each pointer update.
    • “p ← o2” implies: Read p; (see o1); p ← o2; o2.rc++; o1.rc--;
  • 13. Deferred Reference Counting
    • Problem: overhead on updating program variables (locals) is too high.
    • Solution [Deutsch & Bobrow 76]:
      • Don’t update rc for local variables (roots).
      • “Once in a while”: collect all objects with o.rc=0 that are not referenced from local variables.
    • Deferred RC reduces overhead by 80%. Used in most modern RC systems.
    • Still, “heap” write barrier is too costly.
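    A minimal sketch of the deferred-RC idea in C, assuming a Deutsch-Bobrow-style zero-count table (ZCT); referenced_from_roots() and reclaim() are hypothetical runtime hooks, not part of the paper's system.

        #include <stddef.h>

        typedef struct Object { int rc; } Object;

        #define ZCT_MAX 1024
        static Object *zct[ZCT_MAX];   /* objects whose rc dropped to 0 */
        static size_t  zct_len;

        extern int  referenced_from_roots(Object *o);   /* assumed root scan */
        extern void reclaim(Object *o);                 /* assumed deallocator */

        /* Heap-slot writes still maintain counts, but freeing is deferred. */
        void heap_update(Object **slot, Object *new_obj) {
            Object *old = *slot;
            if (new_obj) new_obj->rc++;
            *slot = new_obj;
            if (old && --old->rc == 0 && zct_len < ZCT_MAX)
                zct[zct_len++] = old;   /* a local variable may still reach it */
        }

        /* "Once in a while": reclaim ZCT entries not referenced from any root. */
        void process_zct(void) {
            size_t kept = 0;
            for (size_t i = 0; i < zct_len; i++) {
                Object *o = zct[i];
                if (o->rc == 0 && !referenced_from_roots(o))
                    reclaim(o);
                else
                    zct[kept++] = o;
            }
            zct_len = kept;
        }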
  • 14. Multithreaded RC?
    • Traditional wisdom: write barrier must be synchronized!
  • 15. Multithreaded RC?
    • Problem 1: ref-count updates must be atomic.
    • Fortunately, this can be easily solved: each thread logs the required updates in a local buffer and the collector applies all the updates during GC (as a single thread).
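    A rough C sketch of the local-buffer idea: each thread appends (+1/-1, object) records to a thread-local log instead of touching shared counters, and the collector replays the logs alone. The names RcRecord and apply_log are illustrative; note that, as the next slide shows, buffering alone does not resolve the confusion caused by racing pointer updates.

        #include <stddef.h>

        typedef struct Object { int rc; } Object;
        typedef struct { Object *obj; int delta; } RcRecord;

        #define LOG_MAX 4096
        static _Thread_local RcRecord rc_log[LOG_MAX];   /* per-thread buffer */
        static _Thread_local size_t   rc_log_len;

        /* Mutator side: record the update, do not touch the shared counter. */
        static void log_rc(Object *o, int delta) {
            if (o && rc_log_len < LOG_MAX)
                rc_log[rc_log_len++] = (RcRecord){ o, delta };
        }

        void rc_write(Object **slot, Object *new_obj) {
            log_rc(new_obj, +1);
            log_rc(*slot, -1);
            *slot = new_obj;
        }

        /* Collector side (single-threaded): apply one thread's buffered updates. */
        void apply_log(RcRecord *log, size_t len) {
            for (size_t i = 0; i < len; i++)
                log[i].obj->rc += log[i].delta;
        }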
  • 16. Multithreaded RC?
    • Problem 1: ref-count updates must be atomic.
    • Problem 2: parallel updates confuse the counters. Suppose A.next initially points to B:
      Thread 1: Read A.next; (see B); A.next ← C; B.rc--; C.rc++
      Thread 2: Read A.next; (see B); A.next ← D; B.rc--; D.rc++
      B.rc is decremented twice although only one pointer to B existed, and both C.rc and D.rc are incremented although only one of them remains referenced.
  • 17. Known Multithreaded RC
    • [DeTreville 1990, Bacon et al 2001]:
      • Cmp & swp for each pointer modification.
      • Thread records its updates in a buffer.
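    To make the cost concrete, here is a C11 sketch (not the original systems' code) of a prior-work-style barrier that pays a compare-and-swap on every pointer store and records the update in a per-thread buffer; record_update is a hypothetical helper.

        #include <stdatomic.h>

        typedef struct Object { int rc; } Object;

        extern void record_update(Object *old_obj, Object *new_obj);  /* assumed log */

        /* Every pointer store uses CAS so the displaced old value is captured
           consistently even when several threads race on the same slot. */
        void cas_write(_Atomic(Object *) *slot, Object *new_obj) {
            Object *old_obj = atomic_load(slot);
            while (!atomic_compare_exchange_weak(slot, &old_obj, new_obj)) {
                /* old_obj was refreshed with the winning value; retry with it. */
            }
            record_update(old_obj, new_obj);
        }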
  • 18. To Summarize Problems…
    • Write barrier overhead is high.
      • Even with deferred RC.
    • Using RC with multithreading seems to bear high synchronization cost.
      • Lock or “compare & swap” with each pointer update.
  • 19. Reducing RC Overhead:
    • We start by looking at the “parent’s point of view”.
    • We are counting rc for the child, but rc changes when a parent’s pointer is modified.
  • 20. An Observation
    • Consider a pointer p that takes the following values between GCs: O0, O1, O2, …, On.
    • All RC algorithms perform 2n operations:
    • O0.rc--; O1.rc++; O1.rc--; O2.rc++; O2.rc--; …; On.rc++;
    • But only two operations are needed:
      • O0.rc-- and On.rc++
  • 21. Use of Observation
    • Garbage Collection: For each modified slot p:
    • Read p to get On, read the records to get O0.
    • O0.rc--, On.rc++
    Timeline: only the first modification of each pointer is logged between collections:
      p ← O1; (record p’s previous value O0)
      p ← O2; (do nothing)
      …
      p ← On; (do nothing)
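    The collector-side step can be sketched in a few lines of C; SlotRecord and the buffer layout are illustrative assumptions. Each logged slot contributes exactly two counter updates, no matter how many times it changed between collections.

        typedef struct Object { int rc; } Object;

        /* Hypothetical log entry: the modified slot and its value O0 as recorded
           at the first modification since the previous collection. */
        typedef struct { Object **slot; Object *old_value; } SlotRecord;

        void process_logged_slot(const SlotRecord *r) {
            Object *o0 = r->old_value;   /* previous value, read from the record */
            Object *on = *r->slot;       /* current value, read from the heap */
            if (o0) o0->rc--;            /* O0.rc-- */
            if (on) on->rc++;            /* On.rc++ */
        }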
  • 22. Some Technical Remarks
    • When a pointer is first modified, it is marked “dirty” and its previous value is logged.
    • We actually log each object that gets modified (and not just a single pointer).
      • Reason 1: we don’t want a dirty bit per pointer.
      • Reason 2: object’s pointers tend to be modified together.
    • Only non-null pointer fields are logged.
    • New objects are “born dirty”.
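    A small sketch of the per-object variant of the logging, under an assumed object layout (dirty flag, fixed number of pointer fields) and a hypothetical buffer_append helper: the first modification of any pointer field logs the whole object, and objects that are born dirty are never logged at all.

        #include <stddef.h>

        typedef struct Object {
            int dirty;                  /* new objects are created with dirty = 1 */
            size_t nfields;
            struct Object *fields[4];   /* outgoing pointer slots */
        } Object;

        typedef struct { Object *obj; Object *old_fields[4]; } ObjectRecord;

        extern void buffer_append(const ObjectRecord *rec);  /* assumed thread-local log */

        void log_object_once(Object *o) {
            if (o->dirty) return;                     /* already logged this cycle */
            ObjectRecord rec = { .obj = o };
            for (size_t i = 0; i < o->nfields && i < 4; i++)
                rec.old_fields[i] = o->fields[i];     /* null entries are skipped later */
            buffer_append(&rec);
            o->dirty = 1;
        }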
  • 23. Effects of Optimization
    • RC work significantly reduced:
      • The number of logging & counter updates is reduced by a factor of 100-1000 for typical Java benchmarks!
  • 24. Elimination of RC Updates
    Benchmark    No. of stores    No. of “first” stores    Ratio of “first” stores
    Mpegaudio    5,517,795        51                       1/108192
    Jess         26,258,107       27,333                   1/961
    Javac        22,042,028       535,296                  1/41
    Jack         135,174,775      1,546                    1/87435
    Db           33,124,780       30,696                   1/1079
    Compress     64,905           51                       1/1273
    jbb          71,011,357       264,115                  1/269
  • 25. Effects of Optimization
    • RC work significantly reduced:
      • The number of logging & counter updates is reduced by a factor of 100-1000 for typical Java benchmarks!
    • Write barrier overhead dramatically reduced.
      • The vast majority of the write barriers run a single “if”.
    • Last but not least: the task has changed! We need to record the first update.
  • 26. Reducing Synch. Overhead
    • Our second contribution:
    • A carefully designed write barrier (and an observation) does not require any sync. operation.
  • 27. The write barrier
        Update(Object **slot, Object *new) {
            Object *old = *slot
            if (!IsDirty(slot)) {
                log(slot, old)
                SetDirty(slot)
            }
            *slot = new
        }
    • Observation: if two threads invoke the write barrier in parallel, and both log an old value, then both record the same old value.
  • 28. Running the Write Barrier Concurrently
    Thread 1:
        Update(Object **slot, Object *new) {
            Object *old = *slot
            if (!IsDirty(slot)) {
                /* If we got here, Thread 2 has not */
                /* yet set the dirty bit and thus has */
                /* not yet modified the slot. */
                log(slot, old)
                SetDirty(slot)
            }
            *slot = new
        }
    Thread 2:
        Update(Object **slot, Object *new) {
            Object *old = *slot
            if (!IsDirty(slot)) {
                /* If we got here, Thread 1 has not */
                /* yet set the dirty bit and thus has */
                /* not yet modified the slot. */
                log(slot, old)
                SetDirty(slot)
            }
            *slot = new
        }
  • 29. Concurrent Algorithm:
    • Use write barrier with program threads.
    • To collect:
      • Stop all threads
      • Scan roots (local variables)
      • get the buffers with modified slots
      • Clear all dirty bits.
      • Resume threads
      • For each modified slot:
        • decrement rc for old value (written in buffer),
        • increment rc for current value (“read heap”),
      • Reclaim non-local objects with rc 0 (a sketch of this cycle follows below).
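    The collection cycle above can be sketched as follows; all the runtime hooks (stop_all_threads, scan_roots, take_buffers, and so on) are assumptions standing in for the real Jikes mechanisms, and the sketch glosses over the detail from slide 31 that slots modified after the threads resume must be read from the new buffers rather than from the heap.

        #include <stddef.h>

        typedef struct Object { int rc; } Object;
        typedef struct { Object **slot; Object *old_value; } SlotRecord;

        extern void   stop_all_threads(void);
        extern void   resume_all_threads(void);
        extern void   scan_roots(void);                 /* local variables of all threads */
        extern size_t take_buffers(SlotRecord **out);   /* modified-slot records */
        extern void   clear_all_dirty_bits(void);
        extern void   reclaim_if_dead(Object *o);       /* frees rc==0, non-local objects */

        void collect(void) {
            SlotRecord *log;
            size_t n;

            stop_all_threads();
            scan_roots();
            n = take_buffers(&log);
            clear_all_dirty_bits();
            resume_all_threads();          /* the program runs again from here on */

            for (size_t i = 0; i < n; i++) {
                Object *old_val = log[i].old_value;   /* value written in the buffer */
                Object *cur_val = *log[i].slot;       /* current value ("read heap") */
                if (old_val) old_val->rc--;
                if (cur_val) cur_val->rc++;
            }
            for (size_t i = 0; i < n; i++)
                if (log[i].old_value) reclaim_if_dead(log[i].old_value);
        }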
  • 30. Timeline: stop threads → scan roots; get buffers; erase dirty bits → resume threads → decrement the values recorded in the buffers → increment the “current” values → collect dead objects.
  • 31. Timeline (note): unmodified current values are found in the heap; values modified since then are found in the new buffers.
  • 32. Concurrent Algorithm:
    • Use write barrier with program threads.
    • To collect:
      • Stop all threads
      • Scan roots (local variables)
      • get the buffers with modified slots
      • Clear all dirty bits.
      • Resume threads
      • For each modified slot:
        • decrease rc for old value (written in buffer),
        • increase rc for current value (“read heap”),
      • Reclaim non-local objects with rc 0.
    Goal 1: clear dirty bits during the program run. Goal 2: stop one thread at a time.
  • 33. The Sliding Views “Framework”
    • Develop a concurrent algorithm
      • There is a short time in which all the threads are stopped simultaneously to perform some task.
    • Avoid stopping the threads together. Instead, stop one thread at a time.
    • Tricky part: “fix” the problems created by this modification.
    • Idea borrowed from the Distributed Computing community [Lamport].
  • 34. Graphically [Figure: a snapshot reads all heap addresses at a single time t; a sliding view reads each heap address at some time within an interval from t1 to t2.]
  • 35. Fixing Correctness
    • The way to do this in our algorithm is to use snooping:
      • While collecting the roots, record objects that get a new pointer.
      • Do not reclaim these objects.
    • No further details here; a rough sketch of the idea follows.
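    This is an illustration of the snooping idea only, not the paper's actual mechanism: the write barrier of slide 27 is extended so that, while the collector gathers roots (snoop_on set), any object installed into a heap slot is marked and excluded from reclamation in this cycle. IsDirty, SetDirty, and log_slot are the assumed helpers from the earlier sketch.

        #include <stdatomic.h>
        #include <stdbool.h>

        typedef struct Object { int rc; bool snooped; } Object;

        static atomic_bool snoop_on;   /* set by the collector during root collection */

        extern bool IsDirty(Object **slot);
        extern void SetDirty(Object **slot);
        extern void log_slot(Object **slot, Object *old_value);

        void update(Object **slot, Object *new_obj) {
            Object *old_obj = *slot;
            if (!IsDirty(slot)) {
                log_slot(slot, old_obj);
                SetDirty(slot);
            }
            if (new_obj && atomic_load(&snoop_on))
                new_obj->snooped = true;   /* do not reclaim it in this cycle */
            *slot = new_obj;
        }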
  • 36. Cycles Collection
    • Our initial solution: use a tracing algorithm infrequently.
      • More about this tracing collector and about cycle collectors later…
  • 37. Performance Measurements
    • Implementation for Java on the Jikes Research JVM
    • Compared collectors:
      • Jikes parallel stop-the-world ( STW )
      • Jikes concurrent RC ( Jikes concurrent )
    • Benchmarks:
      • SPECjbb2000: a server benchmark --- simulates business-like transactions.
      • SPECjvm98: a client benchmark suite --- mostly single-threaded benchmarks
  • 38. Pause Times vs. STW
  • 39. Pause Times vs. Jikes Concurrent
  • 40. SPECjbb2000 Throughput
  • 41. SPECjvm98 Throughput
  • 42. SPECjbb2000 Throughput
  • 43. A Glimpse into Subsequent Work: SPECjbb2000 Throughput
  • 44. Subsequent Work
    • Cycle Collection [CC’05]
    • A Mark and Sweep Collector [OOPSLA’03]
    • A Generational Collector [CC’03]
    • An Age-Oriented Collector [CC’05]
  • 45. Related Work
    • It’s not clear where to start…
    • RC, concurrent, generational, etc…
    • Some more relevant work was mentioned.
  • 46. Conclusions
    • A Study of Concurrent Garbage Collection with a Focus on RC.
    • Novel techniques obtaining short pauses and high efficiency.
    • The best approach: age-oriented collection with concurrent RC for old and concurrent tracing for young.
    • Implementation and measurements on Jikes demonstrate non-obtrusiveness and high efficiency.
  • 47. Project Building Blocks
    • A novel reference counting algorithm.
    • State-of-the-art cycle collection.
    • Generational RC (for old) and tracing (for young)
    • A concurrent tracing collector.
    • An age-oriented collector: fitting generations with concurrent collectors.