NETCF GC
  • Take away: a basic understanding of how the NETCF GC works
  • Allocation from the end of the heap (no searching for holes)
  • Works on WinMo, CE devices, Xbox, and Zune, but is primarily targeted at low-memory devices
  • Uses growable mark buffers with failover
  • Compacts when fragmentation reaches 750K
  • Compaction is based on Jonker's (chaining) algorithm
  • GC triggers are non-adaptive, with registry tweaks
  • The largest contributor to the working set is images

    1. Abhinaba Basu (http://blogs.msdn.com/abhinaba)
    2. App state
       • 64K object pools (Pool 0, Pool 1, Pool 2)
       • Large objects (>16K) and fast-freed huge objects (>64K)
       • Per-app-domain finalizer thread
    3. • Targeted towards diverse devices
       • Non-generational mark-sweep-compaction
       • Highest cost is compaction
         ◦ Fired when fragmentation > 750K
       • GC can trigger code pitching
       • GC triggers
         ◦ Allocation fails
         ◦ A quanta of allocation is reached
         ◦ User code forces a GC
         ◦ The app goes to the background
    4. • Actively engaging with
         ◦ Desktop SL team
         ◦ Desktop CLR GC folks
         ◦ XNA team
       • Measure, measure, and then measure some more
       • Development of tools for memory simulations
    5. • 60% to 85% of memory is in the native heap, with an 80 MB working set
         ◦ Native heap allocations don't impact GC perf
         ◦ Images, in the native heap, dominate the working set
       • Desktop GC is inadequate in this scenario
       • Unsuccessfully attempted Add/Remove memory pressure
       • SL drives the GC
       • Suggested we tweak our GC to be total-memory aware and expose a hosting API
    6. [Chart: Samsung Mirage, GC latency (ms) vs. managed heap working set, at 100K of garbage per second]
       OMAP, 10 MB managed data:
       1. Without compaction: 60 ms
       2. With compaction: 186 ms
       3. 26 MB compaction: 324 ms
    7. • 60-80% of the working set is native
       • Linear degradation with increase in managed working set
       • Expected max is 50 MB
       • Managed WS < 10 MB generally works fine
       • Managed WS > 100 MB experiences "hangs"
       • UI over web services with small data works fine
       • UI over large in-memory data has a large startup time
       • Multimedia and games can have small freezes (200 ms) on a full GC
       • SL apps with a lot of native memory may run into OOM even when memory could be reclaimed
    8. • Refactoring to allow built-in GC profiles
       • Auto-tuning to reduce the number of GCs
       • Configurability
       • Hosting changes
    9. • Auto-tune internal thresholds
         ◦ Collection threshold/budget
         ◦ Fragmentation threshold for compaction
       • Native-allocation aware
       • System-memory aware
       • Low-memory state
    10. • Support GC policies
          ◦ Currently limited to hand tuning for the entire CLR
          ◦ Per-process GC config (*.exe.config); downloaded applications cannot override it; system applications can change the quanta and disable pitching
        • Add GC hosting APIs
          ◦ ICLRHostManager::Collect
          ◦ ICLRManager::SetGCStartupLimits
          ◦ IHostMemoryManager::GetMemoryLoad
          ◦ IHostGCManager::GCNotification
    11. • CE 6.0 memory model changes
        • Hand optimization
        • Generational GC
          ◦ Current data on desktop SL suggests against it
    12. • Xbox and other large systems bring in special challenges
        • Per-CPU heap/allocation context
        • Parallel GC on multi-proc systems
        • Pre-fetch
        • Current focus is mobile scenarios
    13. • http://blogs.msdn.com/abhinaba/archive/tags/Garbage+Collection/default.aspx
        • http://sharepointasia/sites/mobiledev/netcf/NETCF%20Internal%20Documents/CF%20Design/Proposed%20changes%20to%20Garbage%20Collector.mht
        • CLRProfiler
        • Remote Performance Monitor