Deep dumpster diving 2010
  • Introduction: This is my first time doing this and I’m sure I’ll muck things up somewhere along the way. I don’t consider myself an expert on this subject; I’ve simply done enough investigation in this area to have some interesting insights.
  • Why should I care? After all, one of the reasons you chose a managed environment is to forget about memory management. But you pay a price for garbage collection, and bad algorithms make the garbage collector work harder.
  • Take for example a simple program. You need to count the number of words in a document, and you don’t want to invest the time to write a complex algorithm, so you take the shortcut: you do a string.Split() and then get the length of the array.
  • Now run the program and monitor % Time in Garbage Collection. Expect to see somewhere between 20% and 40% of the time spent in GC. While this program runs I’m going to open Performance Monitor and add one counter: % Time in GC. This number should be as low as possible, and anything over 20% can cause severe performance problems. For this program we can see that it is spending about 20 to 40% of its time just trying to manage memory, so you could give it a 20 to 40% performance boost just by fixing how it uses memory. Here’s a graph from an earlier run; note that there were 2 peaks that reached a little over 70%. I’ll discuss more efficient alternatives later, but for now it is sufficient to understand that the problem is that the program creates a lot of little objects and then throws them away. That makes a lot of work for the garbage collector, and we see it reflected in the % time in GC. So at this point you may be thinking “that’s not a big deal, my program doesn’t need to be efficient. I’m not doing anything that really requires the speed.” Sure, not every program needs to be lightning fast. But before you dismiss it, let’s look at another reason you should care about .Net memory management.
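  • As a taste of what a more efficient alternative might look like, here is a sketch (my own illustration, not code from the talk) that counts words by scanning characters instead of calling string.Split(), so no substring objects are allocated at all:

```csharp
using System;

public static class WordCounter
{
    // Counts word boundaries without allocating an array of substrings.
    // Note: string.Split() with no arguments also counts the empty strings
    // between consecutive whitespace characters, so its Length can differ
    // slightly from this count on messy input.
    public static int CountWords(string text)
    {
        int count = 0;
        bool inWord = false;
        foreach (char c in text)
        {
            if (char.IsWhiteSpace(c))
            {
                inWord = false;
            }
            else if (!inWord)
            {
                inWord = true;
                count++;  // first character of a new word
            }
        }
        return count;
    }
}
```

Because this creates no intermediate objects, the garbage collector has essentially nothing to do for this operation.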
  • Let’s say I’m a guy who thinks there just aren’t enough clocks. Despite my watch, my cell phone, the one in the task bar, and the standard Windows clock, I need just one more, so I’m going to write my own. Let’s just flesh out the basic framework:

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Runtime.CompilerServices;

    public class MyClass
    {
        public static void RunSnippet()
        {
            Timer tmr = new Timer(M, null, 0, 1000);
            Thread.Sleep(20);
            Console.ReadLine();
        }

        static void M(object state)
        {
            Console.WriteLine("M - " + DateTime.Now);
        }
    }

    The first thing I’m going to need is something to wake up my program so it can update the clock display. So I create a timer and have it display something so I know it’s working. Here’s the timer; it calls M starting at 0 and then every 1000 milliseconds. We run the program and we can see that this part works. OK, now I’m ready to start building some of the graphical part, so I write some code that will allocate space for the really cool graphics I’m going to be using:

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Runtime.CompilerServices;

    public class MyClass
    {
        static Byte[] bytes;

        public static void RunSnippet()
        {
            Timer tmr = new Timer(M, null, 0, 1000);
            Thread.Sleep(20);
            for(int i = 0; i < 1000; i++)
            {
                bytes = new Byte[2000];
            }
            Console.ReadLine();
        }

        static void M(object state)
        {
            Console.WriteLine("M - " + DateTime.Now);
        }
    }

    Now we run it again and… what the hell happened? It must be that crappy software that Microsoft puts out. It couldn’t be anything I’ve done wrong!!! We’ve all seen this at some point.
You write some code, it is working great, you add one simple thing, and suddenly it breaks for no apparent reason. Obviously I can’t tell you what happened for you, but I can tell you that in this case it’s because we didn’t think about the garbage collector. Now I think I have the attention of the rest of you who just want to forget about the GC and go about your business. Well, I’m not going to tell you yet… it’s a free seminar… you get what you pay for!!
  • Unmanaged memory management. To distract you from the example I’m going to dive into memory management. Before we jump into .Net memory management, let’s quickly cover unmanaged memory management (I know… it’s a bit of an oxymoron). What I’m referring to here is essentially malloc()/free() (for C) or new()/delete() (for C++). I don’t want to go into too many details here because the actual implementation varies depending on the system your application is running on, but the basic idea is that your program has some address space that looks something like this:
  • The stack is where local variables are allocated and the heap is where malloc and new allocate space. As I mentioned different systems do this differently but generally speaking all systems have to do something like the following.
  • As programs request memory the system allocates it from the heap space and as the programs release it open spots appear in the heap.
  • As more applications request memory, the system tries to reuse these open spots. If a request is too large for these spots it has to use another spot.
  • Eventually this area becomes fragmented and the system has to search through a list of free space to find enough space for the memory requested.
  • The more fragmented memory becomes, the longer this search can take.
  • .Net reserves a contiguous region of memory that it will use for the managed heap. The framework also maintains a pointer (NextObjPtr) to the location where the next object will be allocated.
  • The application creates some objects. If an object will fit, the location NextObjPtr points to is used for the new object, the constructor of the new object is called, and the address of the object is returned to the calling program. NextObjPtr is then advanced beyond the new object to where the next object will be created in the heap. This mechanism has some obvious advantages over conventional memory management: it is very fast to allocate memory, since there is no need to search for an open block. As long as the object will fit, the runtime already knows where to put it. This means that a managed application can outperform (in at least some cases) an unmanaged native application. Now the application is done with some objects. It seems logical at this point that the garbage collector would run to reclaim the unused space, but that isn’t the case. The runtime doesn’t do anything with these objects; it just continues to allocate new objects from the reserved memory area. At some point the runtime will inevitably be asked to create an object that is larger than the remaining space in the reserved memory. This is when garbage collection is triggered.
  • So a garbage collection has been triggered and there are a number of objects that are no longer being used and can be reclaimed, but the developer didn’t do anything to indicate which ones. How does the collector figure this out? Every application has a set of roots that is maintained by the just-in-time (JIT) compiler and the Common Language Runtime (CLR). Roots identify storage locations that refer to objects on the managed heap (or that are set to null). They include global and static object pointers, local variables and parameters, and CPU registers pointing to objects in the managed heap. GC Phase #1: Mark. When the garbage collector begins, it assumes everything in the heap is garbage. It starts walking the roots, building a graph of all objects reachable from the roots. If the GC attempts to add an object already present in the graph, it stops walking down that path. This improves performance significantly and prevents infinite loops when it encounters circular references. Once all the roots have been checked, the garbage collector’s graph contains the set of all objects reachable from the application’s roots; anything not in the graph is considered garbage.
  • GC Phase #2: Compact. This operation moves all live objects to the bottom of the heap, leaving free space at the top. The garbage collector walks through the heap linearly, looking for contiguous blocks of garbage objects (now considered free space). It then shifts the non-garbage objects down in memory, removing all of the gaps in the heap. Since moving the objects invalidates all pointers to them, the GC modifies the pointers so they point to the objects’ new locations. After the heap is compacted and the pointers are fixed up, the next-object pointer is positioned just after the last non-garbage object, where new objects can be added. Finalization: this scheme implicitly tracks the lifetime of the objects created by an application, and it works very well for managed objects. The task becomes more difficult when the application starts dealing with unmanaged resources like files, windows or network connections. These unmanaged resources must be explicitly released once the application has finished using them.
  • The framework provides the Object.Finalize method. This is a method that the garbage collector runs on the object to clean up its unmanaged resources before reclaiming the memory used by the object. By default the Finalize method does nothing and must be overridden if explicit cleanup is required. Some of you who come from a C++ background may be tempted to think of these as just a different name for destructors, particularly if you look at documentation that shows the C# shortcut syntax for implementing finalizers. They have similar functionality, but the semantics are very different. For example, C++ destructors run as soon as an object goes out of scope, whereas a Finalize method is called whenever the garbage collector gets around to cleaning up the object.
  • Finalizers complicate the job of garbage collection because they add some extra steps that must be performed before freeing an object. When objects that have a Finalize method are allocated on the heap, a pointer to the object is added to an internal structure called the finalization queue. In the example here, when the system created the objects C, E, F, and I it detected they had Finalize methods and added them to the finalization queue.
  • Now a garbage collection occurs and objects B, E, G, H, and I are determined to be garbage; but before it just throws them away it checks the finalization queue to see if any pointers point to the garbage objects.
  • When a pointer is found, it is removed from the finalization queue and appended to another internal data structure called the FReachable queue, making the object no longer a part of the garbage. Each pointer in the FReachable queue now identifies an object that is ready to have its Finalize method called.
  • Who can tell me what is going on with my code? Did someone say the timer dropped out of scope? It’s a tempting thought, but it isn’t accurate; the timer is very much in scope because we are blocking on the ReadLine(). Because the timer got garbage collected? That sounds plausible, but surely it has to be that buggy Microsoft code… then why does it still work if I make this change? In fact there are 2 different reasons. In the first case it was a garbage collection: allocating the memory forced a collection, and when the GC looked at the timer it saw that nothing else referenced it, so it threw it away.
  • In the second case the compiler looked at the loop and saw that the byte array isn’t referenced outside the loop, so it optimized the array out. We can prove this by simply referencing the array outside the loop, and we see that it breaks again. This is essentially what I did by making it static in my original example.
  • Even more interesting is that it didn’t optimize out the loop. Just the array. If we put something in the loop we can see that the loop runs but the timer continues to run because we haven’t allocated any more space (because the array was optimized out). I think we have clearly established that scope has little to do with garbage collection and we have covered what the GC has to do to manage memory. Now back to our first sample that was spending too much time doing garbage collection. Let’s see if we can understand why it was spending all this time. In order to do this we need some visibility into what the program is doing and what effect this will have on the garbage collector. If we were just working through some logic errors we would use the debugger to see what is going on. Unfortunately the debugger is not well suited to the task of analyzing memory utilization but there is a free tool called the CLR Profiler that is much better suited.
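  • For completeness, the usual fix for the disappearing timer is simply to keep a reference to it reachable past the point where we block. A minimal sketch, applying the framework’s GC.KeepAlive to the earlier demo code:

```csharp
using System;
using System.Threading;

public class MyClass
{
    public static void RunSnippet()
    {
        Timer tmr = new Timer(M, null, 0, 1000);
        Console.ReadLine();
        // Referencing tmr here tells the JIT the variable is live for the
        // whole method, so the GC will not collect the timer while we wait.
        GC.KeepAlive(tmr);
    }

    static void M(object state)
    {
        Console.WriteLine("M - " + DateTime.Now);
    }
}
```

Since Timer also implements IDisposable, wrapping it in a using block accomplishes the same thing while cleaning the timer up properly.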
  • Demo IV - Analyzing what is wrong (CLR Profiler) Let’s go back to our first program. When running under the profiler the application will run much slower and I want to compare apples to apples as I make further modifications so I’m going to make a few modifications. First, I only want one iteration and I’m taking out the timer since I don’t care about speed while I’m profiling.
  • Allocated bytes is the total amount of memory allocated by the application. Relocated bytes is the amount of memory that survived at least one collection and needed to be moved. Final heap size is self-explanatory. Objects finalized is the number of objects that had their finalizer run. Critical objects finalized is a subset of objects finalized; critical objects have their finalizer marked to indicate that it is very important the finalizer runs, because the object references important system resources. Below the summary at the top we also see some statistics on the garbage collections done for each of the generations, and to the right of this we can see the sizes of each of the generations. These are actually averages taken over the life of the run and may not reflect the situation at the end of the run. No doubt some of you are looking at the last item in that section, titled “large object heap bytes,” and wondering what the heck that is. Remember I told you I was lying to you? This is another area I need to come clean about: when a program attempts to allocate an object larger than 85,000 bytes, the framework considers it a large object and allocates it from a separate heap called the large object heap. Objects in this heap are never compacted because it would be very costly to move the memory around. The size indicated here is the size of the objects allocated from the large object heap. Let’s start by looking at a histogram of the objects allocated.
  • This window shows a histogram, by size, of all the objects allocated during the profiling run, with a sorted list of the most-allocated types. From this information we can see that the program allocated a total of 6,578,038 bytes. The file it parsed is only 389,967 bytes, so this is roughly 16x the size of the file. From this view we can also see that 65% of this space is consumed by System.String and another 23.8% by System.Int32 arrays. If we move the mouse over the bars of the histogram we can see more details; for instance, this red bar shows that 1.7MB is consumed by 67,555 instances of strings that average 24 bytes in size. If we scroll the histogram so we can see the larger objects on the right, we can see the Int32 array. It is a single array that is 1.5MB in size, and there is also 1 string that is 1MB in size.
  • The Int32 array is a bit of a mystery, since we didn’t use any ints in the program at all. If we select that item from the list on the right, it highlights the items in the histogram. We can then right-click on this item and select “Show Who Allocated”. This shows us the following window:
  • The item we are looking at is on the far right, and moving to the left we can see that it was allocated by String.Split, which was called by RunSnippet, which was run by Main. If we looked inside String.Split we would see that this array is something it creates internally that we have no control over. Now let’s do the same and see where the strings are coming from: GetStringForStringBuilder and StringBuilder.Append.
  • Now, before we move on, let’s go back to the summary and look at some more information. Remember that relocated bytes are objects that survived a garbage collection and had to be moved. Since this causes additional work for the garbage collector we should take a look at that, and the last thing is that we saw a lot of space on the large object heap. We could get a histogram of the relocated objects, but if we look at “objects by address” it will conveniently show us both the large object heap and the generation that the other objects are in. Here we can see that a lot of the strings are in generation 1 (meaning that they survived a collection) and that the Int32 array and the 1MB string are in the large object heap. We can also see that there is a blank spot in the large object heap where something was created and later thrown away. The space was not reclaimed because, as I pointed out, the LOH is never compacted. From this information it looks like we may be able to reduce the GC work by reducing the number of objects that survive to gen 1 and reducing the number and size of the strings that are created. Let’s start by looking at the finalized objects. When we discussed objects with finalizers that are no longer needed, I pointed out that these objects die, are resurrected, and then die again. This means they are the most expensive of the garbage-collected objects, and even though there aren’t a lot of them we should pay attention to them. Let’s go back to the summary screen and select a histogram of the finalized objects.
  • We can see that 3 types have been finalized. The largest of these is a file stream, and if we look at who allocated it we can see it came from the StreamReader we are using to read the text file. Since this object is under our control, perhaps there is a way we can avoid running the finalizer.
  • IDisposable. As it turns out, Microsoft created an interface called IDisposable for exactly this purpose. On the surface the IDisposable interface is very simple: if the object implements IDisposable, you call the Dispose method to tell it to clean up. If you do some searching on the internet you will quickly find there is a recommended design pattern associated with this interface. The way it works is that your class implements 3 methods: Dispose(), Dispose(bool), and a finalizer. The Dispose() method calls Dispose(bool), passing true; this causes the resources to be freed. Since the resources are released, there is no reason to run the Finalize method, so Dispose() also calls GC.SuppressFinalize() to remove the object from the finalization queue. If for some reason the programmer forgets to call Dispose, the class will still be cleaned up by the finalizer, which calls Dispose(bool) with false. If the object has not yet been disposed the cleanup logic will run, but cleanup of any managed objects is skipped. It is important to skip the managed-object cleanup when running in the finalizer, because those objects may not exist anymore and will throw exceptions that can cause a memory leak.
  • Using. There is one more thing we need to cover before returning to the demo: the using statement. We have all seen it used for namespace inclusion at the top of a program, like this:
  • However, there is another form of this command that is used specifically with the IDisposable pattern. This form looks like this and is used to force the Dispose method to be called on the object that is constructed.
  • Demo V – Adding using to StreamReader. Now that we understand IDisposable and the using statement, let’s make some modifications to our program. First we’ll throw a “using” in for the StreamReader, and then see if we can cut down on the size of the strings that are allocated. Rather than pull the entire text into a single string, we’ll try reading it line by line. That avoids creating one huge string and reduces the number of strings created by the split. Now when we run this, right away we see about a 10 millisecond drop in the time to process (it was around 45 milliseconds and is now around 35).
  • Furthermore, if we bring up perfmon again, we see a dramatic drop in the percent time spent in GC. It is now down to 0 to 3%, which is pretty good.
  • Now if we take out the timer and the loop, make it a single pass, and profile it, we see the following summary. We know that % time in GC went down significantly, so we know this is using memory more efficiently. In addition, total memory allocated, final heap size, and large object heap all went down. But it also seems a bit counter-intuitive, because relocated bytes, gen 0, and gen 2 heap sizes went up. As I mentioned, these numbers are averages, so I don’t think we can draw too many concrete conclusions from them; the better indicators lie in the other counters. First, we can see that there were no gen 1 collections in the re-factored code. This is a definite win, because a gen 1 collection must consider a lot more objects. It is also a win that the total bytes is about 1 MB less and that the final size is almost 4 MB less: the garbage collector was able to free up more memory when it needed it, which reduced the overall pressure on memory. It may have moved more bytes, but it was only 5K, and because it wasn’t using the large object heap and the objects were smaller, it was able to confine its operation to the normal generational heap, which is much more efficient. As a result of the changes it avoided gen 1 and gen 2 collections. Avoiding the gen 2 collection is particularly beneficial, since a gen 2 collection involves collecting the large object heap. Lastly, not only was the amount of space in the large object heap significantly smaller, we never had to collect any of it. The net result was that the GC was able to collect in smaller chunks, which saved time trying to reclaim the harder-to-get memory.
  • Anyone who has done any WinForms development is familiar with this diagram: it’s a circular reference. Forms have a Controls collection, and each control has a reference back to the parent. Remember, we talked about how the garbage collector traces all references, and when it finds objects that are not referenced by anything it reclaims them. This arrangement is not a problem as long as there is nothing referencing one of the objects in the loop.
  • Rooted objects or singletons. Problems can arise when you start adding references to rooted objects (singletons or statics). A common pattern is to create a singleton to hold preferences, and that singleton implements IPropertyModified, which controls and forms subscribe to (the Observer pattern).
  • If you aren’t careful to unsubscribe from the events when windows are closed, the reference will keep the window and all its controls alive. The same holds true when there are references from a static member. .Net 4.0 has a design pattern called the Weak Event Pattern that was specifically created to address this problem. However, I am not going to go into it in this session because I believe the pattern is somewhat dubious: in an observer pattern the publisher shouldn’t really care about the lifecycle of the listener. If you unsubscribe from the events like you are supposed to, you won’t have a problem. There are cases where it makes sense, though, and if you really need it I have a reference to it at the end of the deck.
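  • A minimal sketch of the unsubscribe discipline (the form and event names here are assumed for illustration, building on the Preferences singleton from the slides):

```csharp
using System;
using System.Windows.Forms;

public class MyForm : Form
{
    public MyForm()
    {
        // Subscribing stores a reference to this form inside the
        // long-lived Preferences singleton.
        Preferences.GetPrefs().PrefsChanged += OnPrefsChanged;
    }

    protected override void OnFormClosed(FormClosedEventArgs e)
    {
        // Without this line the rooted singleton keeps the closed form
        // (and its entire control tree) reachable forever.
        Preferences.GetPrefs().PrefsChanged -= OnPrefsChanged;
        base.OnFormClosed(e);
    }

    void OnPrefsChanged(object sender, EventArgs e)
    {
        // Refresh the UI from the new preferences.
    }
}
```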
  • Another situation is lists, hashtables, arrays, etc. These also are not a problem by themselves.
  • It’s not necessary at all, but it doesn’t hurt to clear the contents of any large hash tables or lists. If for some reason there is a rooted reference to the list, then at least its contents are empty and there are no links to follow, so the GC has less work to do.
  • One place it is particularly important to think about this is type initializers. These are particularly problematic because once the class is referenced the data is allocated, even if it is never used, and it can never be reclaimed unless you provide a mechanism to clear the dictionary or tear down the app domain. If you use this technique a lot, you can end up consuming a lot of resources that are infrequently used and just take up space. You might be better off using a cache, because if there is memory pressure a cache can be trimmed.
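  • One way to realize that caching idea, sketched with a WeakReference (names assumed; a production cache would likely be more sophisticated): the table is built lazily, and if the GC reclaims it under memory pressure we simply rebuild it on next use.

```csharp
using System;
using System.Collections.Generic;

public class Bar { }

public class Foo
{
    // Unlike a type initializer, this holds the lookup table only weakly,
    // so the GC is free to reclaim it when memory is tight.
    static WeakReference _bars;

    public static Dictionary<string, Bar> GetBars()
    {
        Dictionary<string, Bar> bars =
            _bars == null ? null : (Dictionary<string, Bar>)_bars.Target;
        if (bars == null)
        {
            // Either first use or the table was collected: rebuild it.
            bars = new Dictionary<string, Bar>();
            bars.Add("EndUp", new Bar());  // ...populate as before
            _bars = new WeakReference(bars);
        }
        return bars;
    }
}
```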
    1. Deep Dumpster Diving: A close look at .Net garbage collection
       Ronn Black, October 2010
    2. Why should I care?
    3. Demo 1 (Word Count)

       using System;
       using System.Collections.Generic;
       using System.Diagnostics;
       using System.IO;

       public class MyClass
       {
           public static void RunSnippet()
           {
               while(true)
               {
                   Stopwatch watch = new Stopwatch();
                   watch.Start();
                   StreamReader sr = new StreamReader(@"C:\Users\Ronn\Documents\My Code Snippets\Garbage Collection\catcher.txt");
                   string text = sr.ReadToEnd();
                   int wordCount = text.Split().Length;
                   Console.WriteLine("{0} Words", wordCount);
                   watch.Stop();
                   Console.WriteLine(watch.ElapsedMilliseconds + " Milliseconds");
               }
           }
       }
    4. (screenshot: Performance Monitor showing % Time in GC)
    5. Demo 2

       using System;
       using System.Collections.Generic;
       using System.Threading;
       using System.Runtime.CompilerServices;

       public class MyClass
       {
           static Byte[] bytes;

           public static void RunSnippet()
           {
               Timer tmr = new Timer(M, null, 0, 1000);
               Thread.Sleep(20);
               for(int i = 0; i < 1000; i++)
                   bytes = new Byte[2000];
               Console.ReadLine();
           }

           static void M(object state)
           {
               Console.WriteLine("M - " + DateTime.Now);
           }
       }
    6. Unmanaged Memory Management
    7. (diagram: process address space, from Low Address to Hi Address: Reserved, Heap, Unused Area, Stack with Stack Pointer, Args & Variables)
    8.-11. (diagrams: heap allocation, release of open spots, and fragmentation)
    12.-15. (diagrams: managed heap with NextObjPtr and roots)
    16. Finalizers

       ~MyClass() { /* Do work here… */ }   =   MyClass.Finalize() { /* Do work here… */ }
    17.-19. (diagrams: heap objects A through I, with the Finalization Queue and Freachable Queue before, during, and after a collection)
    20. Optimizations: Generations
        • Newly created objects tend to have short lives.
        • The older an object is, the longer it will survive.
        • Group objects by age and collect younger objects more frequently than older objects.
        • All objects added to the heap are in generation 0.
        • When an object survives the first garbage collection it is promoted to generation 1.
        • When garbage collection is triggered, survivors from generation 1 are promoted to generation 2 and generation 0 survivors are promoted to gen 1.
        • As objects "mature", they are moved to the next older generation until they reach gen 2.
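    The promotion rules above can be observed directly with GC.GetGeneration (a small sketch of my own, not from the deck):

```csharp
using System;

public class GenerationDemo
{
    public static void Main()
    {
        object o = new object();
        // Freshly allocated objects start in generation 0.
        Console.WriteLine(GC.GetGeneration(o));
        GC.Collect();
        // Having survived one collection, o is typically promoted to gen 1.
        Console.WriteLine(GC.GetGeneration(o));
        GC.Collect();
        // After surviving a second collection it typically reaches gen 2
        // and stays there.
        Console.WriteLine(GC.GetGeneration(o));
    }
}
```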
    21. Demo 3 – WTF??

       using System;
       using System.Collections.Generic;
       using System.Threading;
       using System.Runtime.CompilerServices;

       public class MyClass
       {
           public static void RunSnippet()
           {
               Byte[] bytes;
               Timer tmr = new Timer(M, null, 0, 1000);
               Thread.Sleep(20);
               for(int i = 0; i < 1000; i++)
                   bytes = new Byte[2000];
               Console.ReadLine();
           }

           static void M(object state)
           {
               Console.WriteLine("M - " + DateTime.Now);
           }
       }
    22. Demo 3 – WTF??

       using System;
       using System.Collections.Generic;
       using System.Threading;
       using System.Runtime.CompilerServices;

       public class MyClass
       {
           static Byte[] bytes;

           public static void RunSnippet()
           {
               Timer tmr = new Timer(M, null, 0, 1000);
               Thread.Sleep(20);
               for(int i = 0; i < 1000; i++)
                   bytes = new Byte[2000];
               Console.WriteLine(bytes.Length);
               Console.ReadLine();
           }

           static void M(object state)
           {
               Console.WriteLine("M - " + DateTime.Now);
           }
       }
    23. Demo 3 – WTF??

       using System;
       using System.Collections.Generic;
       using System.Threading;
       using System.Runtime.CompilerServices;

       public class MyClass
       {
           public static void RunSnippet()
           {
               Byte[] bytes;
               Timer tmr = new Timer(M, null, 0, 1000);
               Thread.Sleep(20);
               for(int i = 0; i < 1000; i++)
               {
                   bytes = new Byte[2000];
                   Console.WriteLine("XX");
               }
               Console.ReadLine();
           }

           static void M(object state)
           {
               Console.WriteLine("M - " + DateTime.Now);
           }
       }
    24. Demo 4 (CLR Profile Word Count)

       using System;
       using System.Collections.Generic;
       using System.Diagnostics;
       using System.IO;

       public class MyClass
       {
           public static void RunSnippet()
           {
               while(true)
               {
                   Stopwatch watch = new Stopwatch();
                   watch.Start();
                   StreamReader sr = new StreamReader(@"C:\Users\Ronn\Documents\My Code Snippets\Garbage Collection\catcher.txt");
                   string text = sr.ReadToEnd();
                   int wordCount = text.Split().Length;
                   Console.WriteLine("{0} Words", wordCount);
                   watch.Stop();
                   Console.WriteLine(watch.ElapsedMilliseconds + " Milliseconds");
               }
           }
       }
    25.-30. (CLR Profiler screenshots: heap summary, allocation histogram, allocation graphs, objects by address, and finalized-objects histogram)
    31. IDisposable

       [ComVisible(true)]
       public interface IDisposable
       {
           void Dispose();
       }

       public class MyClass : IDisposable
       {
           public void Dispose()
           {
               Dispose(true);
               GC.SuppressFinalize(this);
           }

           protected virtual void Dispose(bool disposing)
           {
               if (!disposed)
               {
                   if (disposing)
                   {
                       // Dispose managed resources. Ex: Components.Dispose();
                   }
                   // Release ONLY unmanaged resources. Ex: CloseHandle(handle);
               }
               disposed = true;
           }

           protected volatile bool disposed = false;

           ~MyClass()
           {
               Dispose(false);
           }
       }
    32. Using

       using System;
       using System.Collections.Generic;
       using System.Diagnostics;
    33. Using

       using System;
       using System.Collections.Generic;
       using System.Diagnostics;

       using (MyClass c = new MyClass())
       {
           //Do Some Work
       }
    34. Demo 5 – optimize

       using System;
       using System.Collections.Generic;
       using System.Diagnostics;
       using System.IO;

       public class MyClass
       {
           public static void RunSnippet()
           {
               while(true)
               {
                   Stopwatch watch = new Stopwatch();
                   watch.Start();
                   using(StreamReader sr = new StreamReader(@"C:\…\Garbage Collection\catcher.txt"))
                   {
                       string line = "";
                       int wordCount = 0;
                       while((line = sr.ReadLine()) != null)
                       {
                           wordCount += line.Split().Length;
                       }
                       Console.WriteLine("{0} Words", wordCount);
                   }
                   watch.Stop();
                   Console.WriteLine(watch.ElapsedMilliseconds + " Milliseconds");
               }
           }
       }
    35. (screenshot)
    36.
                     Total       Relocated   Final       Gen 0       Gen 1     Large Object Heap
        Before:      6,578,038   96,608      5,057,272   1,400,580   12        3,535,464
        After:       5,473,972   101,501     1,441,201   2,097,172   103,992   9,328
        Difference:  -1,104,066  +4,893      -3,616,071  +696,592    +103,980  -3,526,136
    37. Design patterns and Memory
    38. Design patterns and Memory: Circular References. (diagram: a Form with a +Controls collection of Controls, each with a +Parent reference back to the Form)
    39. Design patterns and Memory

       public class Preferences
       {
           static Preferences instance;

           public static Preferences GetPrefs()
           {
               if (instance == null)
                   instance = new Preferences();
               return instance;
           }

           public event EventHandler PrefsChanged;
       }
    40. Design patterns and Memory: Rooted objects (Singletons). (diagram: the Preferences singleton, via $GetPrefs and +PrefsChanged, holding a reference into the Form and its Controls)
    41. Design patterns and Memory: Lists, Hashtables, Dictionaries, etc. (diagram: a List<T> referencing Controls, each with a +Parent reference)
    42. Design patterns and Memory

       public class Foo
       {
           public static void DoSomething()
           {
               List<Bar> bars;
               ...
               //Do Something
               bars.Clear();
           }
       }
    43. Design patterns and Memory

       public class Foo
       {
           static Dictionary<string, Bar> _bars;

           static Foo()
           {
               //Initialize the lookup table
               _bars = new Dictionary<string, Bar>();
               _bars.Add("EndUp", new Bar());
               ...
           }
       }
    44. Take Aways
        • Don’t keep objects around unless you know you will be using them again.
        • Save these techniques for objects that are expensive to create and are frequently used.
        • Carefully consider use of type initializers and statics.
        • Consider caching patterns so memory can be reclaimed if needed.
        • If you are using observer patterns, be sure you unsubscribe properly.
    45. Contact & Reference Material
        Ronn Black
        • Garbage Collector Basics and Performance Hints
        • CLR Profiler for .Net 2.0
        • Heap Overview
        • Advanced Malloc Exploits
        • Large Object Heap Uncovered
        • Weak Event Patterns