Everything I Ever Learned About JVM Performance Tuning @Twitter
Summarizes about a year's worth of experiences and case studies in performance tuning the JVM for various services at Twitter.

Summarizes about a year worth of experiences and case studies in performance tuning the JVM for various services at Twitter.

Statistics

Views

Total Views
122,234
Views on SlideShare
118,590
Embed Views
3,644

Actions

Likes
393
Downloads
3,126
Comments
18

93 Embeds 3,644

http://paper.li 485
http://localhost 364
http://www.redditmedia.com 242
https://twitter.com 238
http://a0.twimg.com 235
http://guiral.info 189
http://codeslinger.posterous.com 172
http://www.techgig.com 170
http://javatropbien.free.fr 164
http://roshanpaiva.wordpress.com 160
http://asyncionews.com 151
http://inundev.cafe24.com 130
http://willolbrys.com 109
http://www.sukruuzel.com 84
http://www.scoop.it 74
http://us-w1.rockmelt.com 49
http://geekgabyte.com 44
http://it-system-consulting.blogspot.com 41
http://formacionmestreacasa.gva.es 40
http://www.foogletrends.com 37
http://it-system-consulting.blogspot.de 37
http://cbcg.net 34
http://tingletech.tumblr.com 32
http://tweetedtimes.com 26
http://malenegodtgregersen.keablogs.dk 21
http://feeds2.feedburner.com 19
https://si0.twimg.com 19
http://codetrace.wordpress.com 18
http://buddypress.freenice.org 18
https://pramati.qontext.com 17
http://clusterexample.tf.etondigital.com 14
http://cackle.ru 13
http://www.mefeedia.com 13
https://localhost 12
https://www.linkedin.com 10
http://www.linkedin.com 10
http://informatica4c032012.blogspot.com 9
http://10.13.37.94:3002 9
http://rikkecharlotte.keablogs.dk 9
http://social.local 8
http://www.lifeyun.com 8
http://kowidi.com 8
http://nuevospowerpoints.blogspot.com 6
http://www.schoox.com 6
http://127.0.0.1:3002 6
http://jesper.keablogs.dk 5
https://twimg0-a.akamaihd.net 5
http://community 4
http://twitter.com 4
http://www.twylah.com 3
More...

Accessibility

Upload Details

Uploaded via as Apple Keynote

Usage Rights

© All Rights Reserved

Report content

Flagged as inappropriate Flag as inappropriate
Flag as inappropriate

Select your reason for flagging this presentation as inappropriate.

Cancel

110 of 18 Post a comment

  • Full Name Full Name Comment goes here.
    Are you sure you want to
    Your message goes here
    Processing…

110 of 18

Post Comment
Edit your comment
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • \n
  • \n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n
  • Here it is all together\n

Presentation Transcript

  • Attila Szegedi, Software Engineer (@asz)
  • Everything I ever learned about JVM performance tuning @twitter
  • Everything More than I ever wanted to learn about JVM performance tuning @twitter
  • Memory tuning, CPU usage tuning, lock contention tuning, I/O tuning
  • Twitter’s biggest enemy: Latency (CC licensed image from http://www.flickr.com/photos/dunechaser/213255210/)
  • Latency contributors: by far the biggest contributor is the garbage collector. Others are, in no particular order: in-process locking and thread scheduling, I/O, and application algorithmic inefficiencies.
  • Areas of performance tuning: memory tuning, lock contention tuning, CPU usage tuning, I/O tuning
  • Areas of memory performance tuning: memory footprint tuning, allocation rate tuning, garbage collection tuning
  • Memory footprint tuning: so you got an OutOfMemoryError… Maybe you just have too much data! Maybe your data representation is fat! You can also have a genuine memory leak…
  • Too much data: run with -verbosegc and observe the numbers in the “Full GC” messages: [Full GC $before->$after($total), $time secs]. Can you give the JVM more memory? Do you need all that data in memory? Consider using an LRU cache, or soft references.
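The LRU-cache option mentioned above can be sketched with nothing but the JDK: a LinkedHashMap in access order, with removeEldestEntry overridden to cap the entry count. This is a minimal sketch, not from the deck; the class name and capacity are illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU cache: once the cap is reached, the least recently
// accessed entry is evicted, keeping the footprint predictable.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: iteration order is LRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict when over capacity
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);      // touch 1, so 2 becomes the eldest entry
        cache.put(3, "c"); // evicts 2, not 1
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```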
  • Fat data: can be a problem when you want to do wacky things, like loading the full Twitter social graph in a single JVM, or loading all user metadata in a single JVM. Slimming the internal data representation works at these economies of scale.
  • Fat data: object header. The JVM object header is normally two machine words; that’s 16 bytes, or 128 bits, on a 64-bit JVM! new java.lang.Object() takes 16 bytes; new byte[0] takes 24 bytes.
  • Fat data: padding. Given class A { byte x; } and class B extends A { byte y; }: new A() takes 24 bytes, and new B() takes 32 bytes.
  • Fat data: no inline structs. Given class C { Object obj = new Object(); }: new C() takes 40 bytes. Similarly, there are no inline array elements.
  • Slimming taken to the extreme: a research project had to load the full follower graph in memory. Each vertex’s edges ended up being represented as int arrays. If it grows further, we can consider variable-length differential encoding in a byte array.
  • Compressed object pointers: pointers become 4 bytes long. Usable below 32 GB of max heap size; automatically used below 30 GB of max heap.
  • Compressed object pointers:
                     Uncompressed   Compressed   32-bit
      Pointer                   8            4        4
      Object header            16          12*        8
      Array header             24           16       12
      Superclass pad            8            4        4
      * An object can have 4 bytes of fields and still only take up 16 bytes.
  • Avoid instances of primitive wrappers: hard-won experience with Scala 2.7.7: a Seq[Int] stores java.lang.Integer, while an Array[Int] stores int. The first needs (24 + 32 * length) bytes; the second needs (24 + 4 * length) bytes.
  • Avoid instances of primitive wrappers: this was fixed in Scala 2.8, but it shows that you often don’t know the performance characteristics of your libraries, and won’t ever know them until you run your application under a profiler.
  • Map footprints: Guava MapMaker.makeMap() takes 2272 bytes! MapMaker.concurrencyLevel(1).makeMap() takes 352 bytes! A ConcurrentMap with concurrency level 1 sometimes makes sense (e.g. when you don’t want a ConcurrentModificationException).
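To make that parenthetical concrete: a plain HashMap fails fast with a ConcurrentModificationException when it is structurally modified during iteration, while ConcurrentHashMap’s weakly consistent iterator tolerates it. A small demonstration; the class and method names are illustrative:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CmeDemo {
    // Returns true if removing an entry mid-iteration makes the map's
    // iterator throw ConcurrentModificationException.
    static boolean failsFast(Map<String, Integer> map) {
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                map.remove("b"); // structural modification during iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(failsFast(new HashMap<>()));           // true
        System.out.println(failsFast(new ConcurrentHashMap<>())); // false
    }
}
```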
  • Thrift can be heavy: Thrift-generated classes are used to encapsulate a wire transfer format. Using them as your domain objects is almost never a good idea.
  • Thrift can be heavy: every Thrift class with a primitive field has a java.util.BitSet __isset_bit_vector field. It adds between 52 and 72 bytes of overhead per object.
  • Thrift can be heavy
  • Thrift can be heavy: Thrift does not support 32-bit floats. Coupling the domain model with the transport creates resistance to changing the domain model. You also miss opportunities for interning and N-to-1 normalization.
  • class Location {
        public String city;
        public String region;
        public String countryCode;
        public int metro;
        public List<String> placeIds;
        public double lat;
        public double lon;
        public double confidence;
    }
  • class SharedLocation {
        public String city;
        public String region;
        public String countryCode;
        public int metro;
        public List<String> placeIds;
    }
    class UniqueLocation {
        private SharedLocation sharedLocation;
        public double lat;
        public double lon;
        public double confidence;
    }
  • Careful with thread locals: thread locals stick around. Particularly problematic in thread pools with an m⨯n resource association: with 200 pooled threads using 50 connections, you end up with 10 000 connection buffers. Consider using synchronized objects, or just create new objects all the time.
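The m⨯n blow-up is easy to reproduce: a ThreadLocal buffer is instantiated once per thread that touches it, so a pool of n threads silently holds n copies per such resource. A sketch; the pool size and buffer size below are illustrative:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadLocalCost {
    // One 64 KB buffer per thread that ever touches this ThreadLocal.
    static final ThreadLocal<byte[]> BUFFER =
            ThreadLocal.withInitial(() -> new byte[64 * 1024]);

    // Runs one task on each of nThreads pooled threads and counts how
    // many distinct buffers got allocated behind the scenes.
    static int distinctBuffers(int nThreads) throws InterruptedException {
        Set<byte[]> buffers = ConcurrentHashMap.newKeySet(); // arrays compare by identity
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        CountDownLatch allStarted = new CountDownLatch(nThreads);
        for (int i = 0; i < nThreads; i++) {
            pool.execute(() -> {
                buffers.add(BUFFER.get()); // allocates on first access per thread
                allStarted.countDown();
                try {
                    allStarted.await(); // hold each thread so every task runs on its own thread
                } catch (InterruptedException ignored) {
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return buffers.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 threads -> 4 buffers; scale that to 200 threads x 50 pooled
        // connections and you get the 10 000 buffers from the slide.
        System.out.println(distinctBuffers(4)); // 4
    }
}
```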
  • Part II: fighting latency
  • Performance tradeoff: Memory vs. Time. A convenient, but oversimplified view.
  • Performance triangle: Memory footprint, Throughput, Latency
  • Performance triangle: Compactness ⨯ Throughput ⨯ Responsiveness, or C ⨯ T ⨯ R = a. Tuning: vary C, T, R for a fixed a. Optimization: increase a.
  • Performance triangle: compactness is the inverse of memory footprint; responsiveness is the longest pause the application will experience; throughput is the amount of useful application CPU work over time. You can trade one for the other, within limits. If you have spare CPU, it can be a pure win.
  • Responsiveness vs. throughput
  • The biggest threat to responsiveness in the JVM is the garbage collector
  • Memory pools: Eden, Survivor, Old, Code cache, Permanent. This is entirely HotSpot-specific!
  • How does young gen work? Eden, S1, S2, Old. All new allocation happens in eden; it only costs a pointer bump. When eden fills up, a stop-the-world copy-collection runs into the survivor space; dead objects cost zero to collect. After several collections, survivors get tenured into the old generation.
  • Ideal young gen operation: big enough to hold more than one set of all concurrent request-response cycle objects. Each survivor space big enough to hold the active request objects plus the tenuring ones. Tenuring threshold such that long-lived objects tenure fast.
  • Old generation collectors: throughput collectors (-XX:+UseSerialGC, -XX:+UseParallelGC, -XX:+UseParallelOldGC) and low-pause collectors (-XX:+UseConcMarkSweepGC, and -XX:+UseG1GC, which can’t be discussed here).
  • Adaptive sizing policy: throughput collectors can automatically tune themselves: -XX:+UseAdaptiveSizePolicy, -XX:MaxGCPauseMillis=… (e.g. 100), -XX:GCTimeRatio=… (e.g. 19).
  • Adaptive sizing policy at work
  • Choose a collector: for a bulk service, use the throughput collector with no adaptive sizing policy. For everything else, try the throughput collector with the adaptive sizing policy; if that doesn’t work, use concurrent mark-and-sweep (CMS).
  • Always start with tuning the young generation: enable -XX:+PrintGCDetails, -XX:+PrintHeapAtGC, and -XX:+PrintTenuringDistribution. Watch the survivor sizes! You’ll need to determine the “desired survivor size”. There’s no such thing as a “desired eden size”, mind you: the bigger, the better, with some responsiveness caveats. Watch the tenuring threshold; you might need to tune it to tenure long-lived objects faster.
  • -XX:+PrintHeapAtGC output:
    Heap after GC invocations=7000 (full 87):
     par new generation   total 4608000K, used 398455K
      eden space 4096000K,  0% used
      from space  512000K, 77% used
      to   space  512000K,  0% used
     concurrent mark-sweep generation total 3072000K, used 1565157K
     concurrent-mark-sweep perm gen   total 53256K, used 31889K
  • -XX:+PrintTenuringDistribution output:
    Desired survivor size 262144000 bytes, new threshold 4 (max 4)
    - age 1: 137474336 bytes, 137474336 total
    - age 2:  37725496 bytes, 175199832 total
    - age 3:  23551752 bytes, 198751584 total
    - age 4:  14772272 bytes, 213523856 total
    Things of interest: the number of ages, and the size distribution across ages (you want it strongly declining).
  • Tuning the CMS: give your app as much memory as possible. CMS is speculative; with more memory, it makes fewer punitive miscalculations. Try using CMS without tuning, with -verbosegc and -XX:+PrintGCDetails. Didn’t get any “Full GC” messages? You’re done! Otherwise, tune the young generation first.
  • Tuning the old generation: the goals are to keep fragmentation low and to avoid full GC stops. Fortunately, the two goals are not conflicting.
  • Tuning the old generation: find the minimum and maximum working set size (observe the “Full GC” numbers under stable state and under load), then overprovision the numbers by 25-33%. This gives CMS a cushion to concurrently clean memory as it’s used.
  • Tuning the old generation: set -XX:CMSInitiatingOccupancyFraction to between 80 and 75, respectively; this corresponds to the overprovisioned heap ratio. You can lower the initiating occupancy fraction to 0 if you have CPU to spare.
  • Responsiveness still not good enough? Too many live objects during young gen GC: reduce NewSize, reduce the survivor spaces, reduce the tenuring threshold. Too many threads: find the minimal concurrency level, or split the service into several JVMs.
  • Responsiveness still not good enough? Does the CMS abortable preclean phase, well, abort? It is sensitive to the number of objects in the new generation, so go for a smaller new generation, and try to reduce the amount of short-lived garbage your app creates.
  • Part III: let’s take a break from GC
  • Thread coordination optimization: you don’t always have to go for synchronized. Synchronization is a read barrier on entry and a write barrier on exit. Sometimes you only need a half-barrier, e.g. in a producer-observer pattern; volatiles can be used as half-barriers.
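A minimal sketch of that producer-observer half-barrier: the producer’s plain write to payload happens-before its volatile write to ready, so an observer that sees ready == true is guaranteed to see payload == 42, with no synchronized block anywhere. The field and class names are illustrative:

```java
public class VolatilePublish {
    static int payload;            // plain field, written before the flag
    static volatile boolean ready; // the volatile write publishes payload

    static int publishAndObserve() throws InterruptedException {
        Thread producer = new Thread(() -> {
            payload = 42; // ordinary write...
            ready = true; // ...published by the volatile write (write half-barrier)
        });
        producer.start();
        while (!ready) {
            Thread.onSpinWait(); // volatile read acts as the read half-barrier
        }
        producer.join();
        return payload; // guaranteed 42 by happens-before
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publishAndObserve()); // 42
    }
}
```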
  • Thread coordination optimization: for atomic update of a single value, you only need Atomic{Integer|Long}.compareAndSet(). You can use AtomicReference.compareAndSet() for atomic update of composite values represented by immutable objects.
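A sketch of the composite-value case: a count and a sum that must change together live in one immutable object behind an AtomicReference, updated with the classic compareAndSet retry loop. The Stats record and method names are illustrative:

```java
import java.util.concurrent.atomic.AtomicReference;

public class AtomicStats {
    // Immutable composite value: both fields are always mutually consistent.
    record Stats(long count, long sum) {}

    // CAS retry loop: read, compute a new immutable value, try to swap;
    // if another thread won the race, read the fresh value and retry.
    static void record(AtomicReference<Stats> ref, long value) {
        for (;;) {
            Stats current = ref.get();
            Stats updated = new Stats(current.count() + 1, current.sum() + value);
            if (ref.compareAndSet(current, updated)) {
                return;
            }
        }
    }

    static Stats run(int nThreads, int perThread) throws InterruptedException {
        AtomicReference<Stats> ref = new AtomicReference<>(new Stats(0, 0));
        Thread[] threads = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 1; i <= perThread; i++) {
                    record(ref, i);
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return ref.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 threads each record 1..1000: count = 4000, sum = 4 * 500500
        System.out.println(run(4, 1000));
    }
}
```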
  • Fight CMS fragmentation with slab allocators: CMS doesn’t compact, so it’s prone to fragmentation, which will eventually lead to a stop-the-world pause. Apache Cassandra uses a slab allocator internally.
  • Cassandra slab allocator: 2 MB slab sizes; byte[]s are copied into them using compare-and-set. GC before: 30-60 seconds every hour. GC after: 5 seconds once in 3 days and 10 hours.
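The mechanism can be sketched in a few lines (a simplified illustration of the idea, not Cassandra’s actual code): threads claim regions of a shared byte[] by bumping an offset with compare-and-set, so thousands of small values share a single object from the GC’s point of view.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Slab {
    private final byte[] data;
    private final AtomicInteger nextFree = new AtomicInteger(0);

    Slab(int size) {
        data = new byte[size];
    }

    // Copies src into the slab and returns its offset, or -1 when the
    // slab is full (the caller then flushes and recycles the slab).
    int allocate(byte[] src) {
        for (;;) {
            int offset = nextFree.get();
            if (offset + src.length > data.length) {
                return -1;
            }
            // Claim the region with compare-and-set; on a lost race,
            // loop and read the new offset.
            if (nextFree.compareAndSet(offset, offset + src.length)) {
                System.arraycopy(src, 0, data, offset, src.length);
                return offset;
            }
        }
    }

    public static void main(String[] args) {
        Slab slab = new Slab(8);
        System.out.println(slab.allocate(new byte[]{1, 2, 3})); // 0
        System.out.println(slab.allocate(new byte[]{4, 5, 6})); // 3
        System.out.println(slab.allocate(new byte[]{7, 8, 9})); // -1: slab full
    }
}
```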
  • Slab allocator constraints: it works for limited usage, where buffers are written to linearly, then flushed to disk and recycled when they fill up, and where the objects need to be converted to a binary representation anyway. If you need random freeing and compaction, you’re heading in the wrong direction. If you find yourself writing a full memory manager on top of byte buffers, stop!
  • Soft references revisited: soft reference clearing is based on the amount of free memory available when the GC encounters the reference. By definition, throughput collectors always clear them. You can use them with CMS, but they increase memory pressure and make the behavior less predictable. You also need two GC cycles to get rid of the referenced objects.
  • More than I ever wanted to learn about JVM performance tuning @twitter. Questions?
  • Attila Szegedi, Software Engineer (@asz)