Holistic JavaScript Performance
Talk given at Velocity 2011. Published in: Technology.

  1. HOLISTIC PERFORMANCE (John Resig)
  2. Performance: Performance analysis is amazingly complex. There is no single silver bullet. We don't want to compromise quality in favor of performance, and we also want to communicate the changes in a realistic way.
  3. Analyzing Performance: wall-clock time, time in different browsers, CPU consumption, memory consumption, memory leaks, bandwidth consumption, parse time, and battery consumption (mobile!).
  4. Dictionary Lookups in JavaScript: an interesting example for looking at performance. The most frequent concern is file size, and many solutions optimize only for file size, disregarding parse time and other performance aspects.
  5. Naïve Solution: pull in a raw list of words and push it into an object for fast property lookups. This uses a lot of file size, but lookups are very fast (a minimal sketch follows below).
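A minimal sketch of that naive approach; the tiny inline word list and variable names are stand-ins, not from the talk:

    // Tiny stand-in for a real newline-delimited word list.
    var wordListText = "aardvark\nabacus\nzebra";

    // Push every word into an object so lookups become property checks.
    var dict = {};
    var words = wordListText.split("\n");
    for (var i = 0; i < words.length; i++) {
      dict[words[i]] = true;
    }

    // Very fast lookups, at the cost of shipping the full word list.
    console.log("zebra" in dict);   // true
    console.log("zzzzz" in dict);   // false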
  6. Trie: a compact structure for storing dictionaries. Optimizes heavily for file size, but can be rather expensive to parse and can also use a lot of memory (see the sketch below).
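As a rough illustration of the structure (not the optimized or succinct variants benchmarked on the following slides), a trie can be sketched as nested objects:

    // Bare-bones trie as nested objects; real dictionary tries use far
    // more compact encodings than this.
    function addWord(trie, word) {
      var node = trie;
      for (var i = 0; i < word.length; i++) {
        var ch = word.charAt(i);
        node = node[ch] || (node[ch] = {});
      }
      node.$ = true;  // mark the end of a complete word
    }

    function hasWord(trie, word) {
      var node = trie;
      for (var i = 0; i < word.length; i++) {
        node = node[word.charAt(i)];
        if (!node) return false;
      }
      return node.$ === true;
    }

    var trie = {};
    addWord(trie, "cat");
    addWord(trie, "cats");
    console.log(hasWord(trie, "cats"));  // true
    console.log(hasWord(trie, "ca"));    // false (prefix only)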
  7. File Size of Dictionaries: [bar chart, 0-1100 KB, comparing Plain String, Binary String, Simple Trie, Optimized Trie, Suffix Trie, and Succinct Trie, normal vs. gzipped].
  8. Load Speed of Dictionaries: time to load the dictionary once in Node.js on a 2.8 GHz Core i7. [bar chart, 0-150 ms, comparing Plain String, Binary String, Hash Trie, and Succinct Trie.] (A rough timing sketch follows.)
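The numbers themselves aren't reproduced here, but a load-time measurement of this kind can be sketched roughly as follows in Node.js; the file name and the naive loader are assumptions, not the talk's actual harness:

    // Rough sketch of timing a one-time dictionary load in Node.js.
    var fs = require("fs");

    function loadDictionary(path) {
      // naive loader: one object property per word
      var dict = {};
      var words = fs.readFileSync(path, "utf8").split("\n");
      for (var i = 0; i < words.length; i++) {
        dict[words[i]] = true;
      }
      return dict;
    }

    var start = Date.now();
    loadDictionary("dictionary.txt");  // assumed file name
    console.log("load time:", Date.now() - start, "ms");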
  9. Search Speed of Dictionaries: time to look up one word. [bar chart, 0-6 ms, comparing Plain String, Binary String, Hash Trie, and Succinct Trie, for words that are found vs. missing].
  10. Private Memory Usage of Dictionaries: after loading the dictionary once. [bar chart, 0-11 MB, comparing Plain String, Binary String, Hash Trie, and Succinct Trie].
  11. dynaTrace
  12. dynaTrace: one of the best tools available for analyzing the full browser stack. Dig into CPU usage, bandwidth usage, and even the performance of browser-internal methods. Works in both IE and Firefox.
  13. Practical Performance: think about the larger context. Pre-optimization is dangerous. Consider code quality, importance, and cross-browser compatibility.
  14. Performance in the jQuery Project
  15. Rule 1: Prove it.
  16. Prove it. Any proposed performance optimization must be indisputably proven. Show us the proposed changes and how they will affect performance across all platforms. How? JSPerf: http://jsperf.com/
  17. JSPerf: JSPerf is a great tool that makes it very easy to build a reproducible test: http://jsperf.com/valhooks-vs-val/2
  18. JSPerf: JSPerf builds on some of the earlier analysis I did in 2008 (http://ejohn.org/blog/javascript-benchmark-quality/). It runs each test the maximum number of times it can in 5 seconds, optimizes to reduce loop overhead, and can even use a Java applet for better timer accuracy (the timed-window idea is sketched below).
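This is not JSPerf's actual implementation, but the "as many runs as fit in a fixed time window" approach it describes can be sketched like this:

    // Sketch of the timed-window approach (not JSPerf's actual code):
    // run the test as many times as possible within the window and
    // report how many iterations completed; higher is better.
    function benchmark(testFn, windowMs) {
      var iterations = 0;
      var start = new Date().getTime();
      while (new Date().getTime() - start < windowMs) {
        testFn();
        iterations++;
      }
      return iterations;
    }

    // Example: a trivial test function, run for one second.
    console.log(benchmark(function () { return parseInt("42", 10); }, 1000));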
  19. Rule 2: See the Big Picture.
  20. See the Big Picture. Micro-optimizations are death. It doesn't matter how much you unroll a loop if that loop is doing DOM manipulation. Most crippling web app performance problems come from DOM performance issues; pure JS performance is rarely an issue (illustrated below).
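A hypothetical illustration of that point (not an example from the talk): the cost below is the repeated DOM work, so unrolling the loop would change nothing, while batching the DOM updates does.

    // Assumed element id for illustration only.
    var list = document.getElementById("results");

    // Slow: touches the live DOM on every iteration.
    for (var i = 0; i < 1000; i++) {
      list.appendChild(document.createElement("li"));
    }

    // Faster: build in a fragment, touch the live DOM once.
    var frag = document.createDocumentFragment();
    for (var j = 0; j < 1000; j++) {
      frag.appendChild(document.createElement("li"));
    }
    list.appendChild(frag);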
  21. Prove the use case. If you're proposing an optimization, you must prove what it will help. Show real-world applications that will benefit from the change. This is especially important because it helps stop you from wasting time on performance issues that don't matter.
  22. Rule 3: Clean Code.
  23. Clean Code. We won't compromise our code quality in exchange for performance. Almost all code quality compromises come from needless micro-optimizations: ~~(1 * string) vs. parseInt( string ), and +new Date vs. (new Date).getTime() (spelled out below). Don't even get me started on loop unrolling.
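The two micro-optimizations named on this slide, next to their readable equivalents; both pairs produce the same result.

    var s = "42";

    var a = ~~(1 * s);        // cryptic: coerce to a number, truncate via double bitwise NOT
    var b = parseInt(s, 10);  // readable

    var t1 = +new Date;            // cryptic: unary plus coerces the Date to a timestamp
    var t2 = (new Date).getTime(); // readable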
  24. Rule 4: Don't Slow IE.
  25. Don't Slow IE. Just because performance gets better in one browser doesn't mean it'll get faster in all browsers. You shouldn't compromise performance in other browsers for the sake of one. (Unless that browser is IE; always improve IE performance.)
  26. Communicating the Results: creating realistic tests, and communicating in an effective manner.
  27. Creating Realistic Tests
  28. Realism: it's incredibly hard to create realistic test cases, so it's important to look at actual applications. We frequently use Google Code Search to find out how people are using our APIs. (This also gives us the knowledge we need when we want to deprecate an API.)
  29. Communicating the Results
  30. Browserscope: a collection of performance results, organized by browser. JSPerf plugs right in.
  31. Creating Results: pull the results directly from Browserscope. Best: compare old versions to new versions, within the context of all browsers.
  32. .val() (get): number of test iterations, higher is better. [bar chart, 0-700,000 iterations, comparing jQuery 1.5.2 and 1.6 across Chrome 11, Safari 5, Firefox 4, Opera 11, IE 7, IE 8, and IE 9].
  33. Competition: you might be inclined to compare performance against other frameworks, libraries, applications, etc. This tends to create more problems than it's worth, and the comparison isn't always one-to-one. If competing, agree on some tests first, and work with your competition to create realistic tests.
  34. Compete Against Yourself: in the jQuery project we work to constantly improve against ourselves. Every release we try to have some performance improvements, always comparing against our past releases. Rewriting API internals is a frequent way of getting good performance results.
  35. More Information: Thank you!
      http://ajax.dynatrace.com/ajax/en/
      http://jsperf.com
      http://www.browserscope.org
      http://ejohn.org/blog/javascript-benchmark-quality/
      http://ejohn.org/