Performance Testing Java Applications

Video and slides synchronized, mp3 and slide download available at http://bit.ly/14w07bK.

Martin Thompson explores performance testing, how to avoid the common pitfalls, how to profile when the results cause your team to pull a funny face, and what you can do about that funny face. Issues specific to Java and managed runtimes in general will be explored, but if other languages are your poison, don't be put off, as much of the content can be applied to any development. Filmed at qconlondon.com.

Martin Thompson is a high-performance and low-latency specialist, with over two decades of experience with large-scale transactional and big-data systems in the automotive, gaming, financial, mobile, and content management domains. Martin was co-founder and CTO of LMAX until he left to specialize in helping other people achieve great performance with their software.

Performance Testing Java Applications

1. Performance Testing Java Applications
   Martin Thompson - @mjpt777
2. Watch the video with slide synchronization on InfoQ.com!
   http://www.infoq.com/presentations/performance-testing-java
   InfoQ.com: News & Community Site
   • 750,000 unique visitors/month
   • Published in 4 languages (English, Chinese, Japanese and Brazilian Portuguese)
   • Post content from our QCon conferences
   • News 15-20 / week
   • Articles 3-4 / week
   • Presentations (videos) 12-15 / week
   • Interviews 2-3 / week
   • Books 1 / month
3. Presented at QCon London
   www.qconlondon.com
   Purpose of QCon
   - to empower software development by facilitating the spread of knowledge and innovation
   Strategy - practitioner-driven conference designed for YOU: influencers of change and innovation in your teams
   - speakers and topics driving the evolution and innovation
   - connecting and catalyzing the influencers and innovators
   Highlights
   - attended by more than 12,000 delegates since 2007
   - held in 9 cities worldwide
4. What is Performance?
5. Throughput / Bandwidth
6. Latency / Response Time
7. Throughput vs. Latency
   • How does an application cope under burst conditions?
   • Are you able to measure queuing delay?
   • Back-off strategies and other effects
   • Amortise the expensive operations – Smart Batching (see the sketch below)
   [Chart: latency vs. load, comparing a typical curve with a possible curve]
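Below is a minimal sketch of the Smart Batching idea from the last bullet: the consumer drains everything that has queued up while it was busy and pays the expensive operation (a hypothetical flush to network or disk) once per batch instead of once per event. The queue choice, event type, and method names are illustrative assumptions, not taken from the talk.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Consumer that drains whatever has accumulated and amortises one expensive
// flush over the whole batch rather than paying it per event.
public final class SmartBatchingConsumer implements Runnable
{
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(64 * 1024);
    private final List<String> batch = new ArrayList<>();

    public void publish(final String event) throws InterruptedException
    {
        queue.put(event);
    }

    @Override
    public void run()
    {
        try
        {
            while (!Thread.currentThread().isInterrupted())
            {
                batch.clear();
                batch.add(queue.take());   // block until at least one event is available
                queue.drainTo(batch);      // grab everything else that arrived meanwhile

                for (final String event : batch)
                {
                    process(event);        // cheap per-event work
                }

                flush();                   // expensive call paid once per batch
            }
        }
        catch (final InterruptedException ignore)
        {
            Thread.currentThread().interrupt();
        }
    }

    private void process(final String event) { /* apply the event */ }

    private void flush() { /* hypothetical expensive operation, e.g. a network or disk write */ }
}

Under light load the batch size naturally falls to one, so latency is not sacrificed; under bursts the batch grows and throughput rises because the flush cost is amortised.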
8. Performance Requirements
9. Performance Requirements
   • What throughput and latency does your system require?
     - Do you need to care?
     - How will you be competitive?
     - Does performance drive business?
   • Investments start from a business plan
     - Work with the business folk
     - Set Transaction Budget limits
   • As the business scales, does the software scale in an economic fashion?
     - Don’t limit design options
10. Decompose the Transaction Budget
   • How much time is each layer in the architecture allowed?
   • Do all technology choices pay their way?
     - Think Aircraft or Spacecraft!
   • Profile to ensure budget is enforced
   • What happens when throughput increases?
     - Queuing delay introduced?
     - Scale out at constant latency?
   [Diagram: X µs total, with a breakdown per layer]
11. How can we test Performance?
12. Types of Performance Testing
   1. Throughput / Bandwidth Testing
   2. Latency / Response Testing
   3. Stress Testing
   4. Concurrent / Contention Testing
   5. Endurance / Soak Testing
   6. Capacity Testing
13. Understand Algorithm Behaviour
   • Need to model realistic scenarios (a load-mix sketch follows this slide)
     - Read to write ratios
     - Distribution across data sets
     - No single entity / item tests!
   • Model based on production
   • Are unbounded queries allowed?
     - Deal in manageable chunks
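As a concrete illustration of modelling read-to-write ratios and skewed data distributions, here is a minimal load-mix sketch. The 90/10 read/write split, the hot-set size, and the key space are illustrative assumptions; real numbers should come from production measurements, as the slide suggests.

import java.util.concurrent.ThreadLocalRandom;

public final class LoadMixModel
{
    private static final int KEY_SPACE = 1_000_000;      // total entities
    private static final int HOT_KEYS = 10_000;          // small frequently-accessed subset
    private static final double READ_RATIO = 0.9;        // 90% reads, 10% writes
    private static final double HOT_TRAFFIC = 0.8;       // 80% of operations hit the hot subset

    public static void main(final String[] args)
    {
        final ThreadLocalRandom random = ThreadLocalRandom.current();

        for (int i = 0; i < 1_000_000; i++)
        {
            // skewed distribution across the data set rather than a single entity
            final long key = random.nextDouble() < HOT_TRAFFIC
                ? random.nextLong(HOT_KEYS)
                : random.nextLong(KEY_SPACE);

            if (random.nextDouble() < READ_RATIO)
            {
                read(key);
            }
            else
            {
                write(key);
            }
        }
    }

    private static void read(final long key)  { /* read the entity from the system under test */ }
    private static void write(final long key) { /* update the entity in the system under test */ }
}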
14. The “Onion”
   • Separation of Concerns is key
     - Layer your architecture
   • Test individual components
   • Test assemblies of components with a focus on interactions
     - Beware micro-benchmarking!
   • Test core infrastructure
     - Useful for catching upgrade issues
   • Same patterns at different levels of scale
15. Know Your Platform/Infrastructure
   • Stress test until breaking point
     - Do things degrade gracefully?
     - Do things crash?
     - Order of the algorithms?
   • What are the infrastructure capabilities?
     - Profile to know relative costs of components
     - Operations Per Second
     - Bandwidth
     - Latency
     - Endurance
   • What happens when redundant components take over?
16. When should we test Performance?
17. Performance Test & Profile
   “Premature optimization is the root of all evil” – Donald Knuth / Tony Hoare
   • What does “optimization” mean?
     - Specialisation vs. Flexibility?
     - Very different from knowing your system capabilities
     - Test / profile early and often
   • Integrate performance testing into CI
   • Monitor production systems
   • Change your development practices...
18. Development Practices
   • Performance “Test First”
   • Red, Green, Debug, Profile, Refactor...
     - A deeper understanding makes you faster
   • Use “like live” pairing stations
   • Don’t add features you don’t need
   • Poor performance should fail the build! (see the sketch after this slide)
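One way to make poor performance fail the build, as the last bullet suggests, is a throughput assertion wired into the normal test run. This is a minimal sketch assuming JUnit 4 is available; the operation under test (MyComponent.myOperation) and the throughput floor are hypothetical and should be replaced with your own component and transaction-budget figures.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class ThroughputRegressionTest
{
    private static final int ITERATIONS = 10_000_000;
    private static final double MIN_OPS_PER_SECOND = 5_000_000.0;   // floor taken from the transaction budget

    @Test
    public void shouldMeetThroughputBudget()
    {
        runOperations(ITERATIONS);                     // warm up so the JIT has compiled the hot path

        final long start = System.nanoTime();
        runOperations(ITERATIONS);
        final long elapsedNanos = System.nanoTime() - start;

        final double opsPerSecond = (ITERATIONS * 1_000_000_000.0) / elapsedNanos;
        assertTrue("Throughput regression: " + opsPerSecond + " ops/sec", opsPerSecond >= MIN_OPS_PER_SECOND);
    }

    private void runOperations(final int iterations)
    {
        for (int i = 0; i < iterations; i++)
        {
            MyComponent.myOperation();                 // hypothetical component under test
        }
    }
}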
19. Performance Testing in Action
20. The Java Pitfalls
   • Runtime Compiler
     - JIT & On Stack Replacement (OSR)
     - Polymorphism and In-lining
     - Dead code elimination
     - Race conditions in optimisation
   • Garbage Collection
     - Which collector - Dev vs. Production
     - Skewed results from pauses
     - Beware “Card Marking”
   • Class Loading
21. Micro Benchmarking
   • Framework should handle warm up
   • Representative Data Sets
     - Vary set size
   • Key Measures
     - Ops Per Second (per thread)
     - Allocation rates
   • Concurrent Testing
     - Scaling effects with threads
     - Queuing effects

   Code from the slide:

   public class MyBenchmark extends Benchmark
   {
       public void timeMyOp(int reps)
       {
           int i = reps + 1;
           while (--i != 0)
           {
               MyClass.myOperation();
           }
       }
   }
22. Anatomy Of A Micro Benchmark

   public class MapBenchmark extends Benchmark
   {
       private int size;
       private Map<Long, String> map = new MySpecialMap<Long, String>();
       private long[] keys;
       private String[] values;

       // setup method to init keys and values

       public void timePutOperation(int reps)
       {
           for (int i = 0; i < reps; i++)
           {
               map.put(keys[i], values[i]);
           }
       }
   }
23. Performance Testing Concurrent Components
   • Straight Performance Tests
     - Ramp number of threads for plotting scaling characteristics (a thread-ramping sketch follows this slide)
     - Measure Ops / Sec throughput – Averages vs. Intervals
     - Measure latency for queuing effects
   • Validating Performance Tests
     - Check invariants and sequences
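The thread-ramping idea from the first bullet can be sketched as follows: run the same fixed amount of work with 1..N threads against a shared component and report ops/sec at each level so the scaling curve can be plotted. The AtomicLong counter stands in for the concurrent component under test; the iteration counts are illustrative.

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicLong;

public final class ThreadRampTest
{
    private static final long OPS_PER_THREAD = 10_000_000L;

    public static void main(final String[] args) throws Exception
    {
        final int maxThreads = Runtime.getRuntime().availableProcessors();
        for (int threads = 1; threads <= maxThreads; threads++)
        {
            System.out.printf("%d threads: %.0f ops/sec%n", threads, runTrial(threads));
        }
    }

    private static double runTrial(final int threadCount) throws Exception
    {
        final AtomicLong counter = new AtomicLong();                    // stand-in for the component under test
        final CyclicBarrier startBarrier = new CyclicBarrier(threadCount + 1);
        final Thread[] workers = new Thread[threadCount];

        for (int i = 0; i < threadCount; i++)
        {
            workers[i] = new Thread(() ->
            {
                await(startBarrier);                                    // release all workers together
                for (long j = 0; j < OPS_PER_THREAD; j++)
                {
                    counter.incrementAndGet();
                }
            });
            workers[i].start();
        }

        await(startBarrier);
        final long start = System.nanoTime();

        for (final Thread worker : workers)
        {
            worker.join();
        }

        final long elapsedNanos = System.nanoTime() - start;
        return (threadCount * OPS_PER_THREAD * 1_000_000_000.0) / elapsedNanos;
    }

    private static void await(final CyclicBarrier barrier)
    {
        try
        {
            barrier.await();
        }
        catch (final Exception e)
        {
            throw new RuntimeException(e);
        }
    }
}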
24. System Performance Tests
   [Diagram: distributed load generation agents (vary numbers and firepower!!!) driving the System Under Test over << XML / JSON / Binary >>, with an Observer and Acceptance Test Runners]
25. System Performance Testing Analysis
   • Build a latency histogram for given throughput (a brief HdrHistogram sketch follows this slide)
     - Disruptor Histogram, HdrHistogram
     - Investigate the outliers!
   • Gather metrics from the system under test
     - Design system to be instrumented
     - Don’t forget the Operating System
     - Plot metrics against latency and throughput
     - Capacity planning from this is possible
   • Generate micro-bursts
     - They show up queuing effects at contention points
     - Uncover throughput bottlenecks
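A brief sketch of building a latency histogram with HdrHistogram (linked on the final slide): record the elapsed nanoseconds of each operation and read off percentiles rather than averages. The operation being timed and the loop count are illustrative.

import org.HdrHistogram.Histogram;

public final class LatencyHistogramExample
{
    public static void main(final String[] args)
    {
        // track values from 1 ns up to 1 minute with 3 significant decimal digits
        final Histogram histogram = new Histogram(60_000_000_000L, 3);

        for (int i = 0; i < 1_000_000; i++)
        {
            final long start = System.nanoTime();
            serviceOperation();                                    // the call being measured
            histogram.recordValue(System.nanoTime() - start);
        }

        System.out.printf("median: %d ns%n", histogram.getValueAtPercentile(50.0));
        System.out.printf("99th percentile: %d ns%n", histogram.getValueAtPercentile(99.0));
        System.out.printf("99.99th percentile: %d ns%n", histogram.getValueAtPercentile(99.99));

        // full percentile distribution, values scaled from nanoseconds to microseconds
        histogram.outputPercentileDistribution(System.out, 1000.0);
    }

    private static void serviceOperation() { /* hypothetical operation under test */ }
}

The percentile output makes outliers visible in a way averages cannot, which is exactly what the "investigate the outliers" bullet is asking for.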
26. Got a Performance Issue?
27. Performance Profiling
   • Java Applications
     - JVisualVM, YourKit, Solaris Studio, etc.
     - What is the GC doing?
     - Learn bytecode profiling
   • Operating System
     - htop, iostat, vmstat, pidstat, netstat, etc.
   • Hardware
     - Perf Counters – perf, likwid, VTune
   • Follow Theory of Constraints for what to tackle!
28. Performance Testing Lessons
29. Mechanical Sympathy
   • Java Virtual Machines
     - Garbage Collection
     - Optimization
     - Locks
   • Operating Systems
     - Schedulers
     - Virtual Memory
     - File Systems & IO
   • Hardware
     - Hardware capabilities and interactions
     - Profile the counters for greater understanding
30. The Issues With “Time”
   • NTP is prone to time correction
     - Careful of System.currentTimeMillis()
   • Monotonic time not synchronised across sockets
     - System.nanoTime() is monotonic
     - RDTSC is not an ordered instruction
   • Not all servers and OSes are equal
     - Pre-Nehalem TSC was not invariant
     - Older OSes and VT can be expensive
     - Resolution is highly variable by OS/JVM
   (A small timing sketch follows this slide.)
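A small sketch of the point above: measure intervals with the monotonic System.nanoTime() rather than System.currentTimeMillis(), which can jump when NTP corrects the wall clock. The workload being timed is a stand-in.

public final class IntervalTimingExample
{
    public static void main(final String[] args) throws InterruptedException
    {
        // Fragile: the wall clock can be stepped forwards or backwards by NTP,
        // so this difference may not reflect the real elapsed time.
        final long wallStart = System.currentTimeMillis();
        doWork();
        final long wallElapsedMs = System.currentTimeMillis() - wallStart;

        // Preferred for intervals: nanoTime() is monotonic within a JVM,
        // though its resolution and cost vary by OS and hardware.
        final long monotonicStart = System.nanoTime();
        doWork();
        final long monotonicElapsedNs = System.nanoTime() - monotonicStart;

        System.out.printf("wall clock: %d ms, monotonic: %d ns%n", wallElapsedMs, monotonicElapsedNs);
    }

    private static void doWork() throws InterruptedException
    {
        Thread.sleep(10);   // stand-in for the operation being timed
    }
}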
31. Beware Being Too Predictable
   • CPU Branch Prediction
     - Fake Orders
     - Taking same path in code
   • Cache hits
     - Disk loaded into memory
     - Memory loaded into CPU cache
     - Application level caching
32. Beware YAGNI
33. The “Performance Team” Anti-Pattern
   • They struggle to keep up with the rate of change
   • Performance testing is everyone’s responsibility
   • Better to think of a “Performance Team” as a rotating R&D exercise
   • Performance specialists should pair on key components and spread knowledge
34. Lame Excuses - “It’s only …”
   • It is only start-up code...
     - MTTF + MTTR
   • It is only test code...
     - Feedback cycles!
   • It is only UI code...
     - Read “High Performance Web Sites” by Steve Souders
35. Questions?
   Blog: http://mechanical-sympathy.blogspot.com/
   Twitter: @mjpt777
   Links:
   https://github.com/giltene/HdrHistogram
   https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/collections/Histogram.java
   https://code.google.com/p/caliper/
   http://grinder.sourceforge.net/
   http://www.javaworld.com/javaworld/jw-08-2012/120821-jvm-performance-optimization-overview.html
