Benchmarking the Efficiency of Your Tools
by Marian Marinov (mm@1h.com), Co-founder & CEO of 1H Ltd.
Uploaded via OpenOffice. Usage rights: CC Attribution License.
  • This talk was inspired by a talk from Tim Bunce, the author of Devel::NYTProf. I want to share with you the things I haven't seen in other presentations on the subject.
  • For me, the most important tools are time, Devel::NYTProf, and Devel::Dependencies.
  • The interpreter has different compile options which make it load slower or faster, linking against more or fewer libraries. So this talk is mainly about how to identify what is important.
  • Shared resources: shared memory, locks, files, databases.
  • I/O usage is one of the most misleading factors, because on live environments the I/O profile is completely different than on the development and testing systems. Deployment environment: how often the app is executed, what data it handles, and what the average load of the machines is without our app running.
  • What most people forget is that the number of executions is not the same, and most of the time they don't take this into account.
  • Most people fail to understand the meaning of real time: it is the sum of user time, system time, and the time the process had to wait for other processes or for resources. Note the difference between the results on beast and on the VM running on beast.
  • Note how there is almost no difference between beast and its VM here. Different filesystems affect performance in different ways.
  • 2005 - Jean-Louis Leroy - A Timely Start
  • NYTProf shows you the number of calls and the time. Inclusive time includes the time spent in the subroutines a statement calls; exclusive time is only the time spent executing the statement itself.
  • OOP is good for code maintainability, but it actually kills performance. Functional style is better for performance if not overused, though it easily lures you into performance traps. Better load time may mean worse CPU or memory usage.
  • Premature optimization infects all of us. Wrongly implemented algorithms, or using for/foreach where while would do, are clear performance hogs.

Benchmarking the Efficiency of Your Tools - Presentation Transcript

  • 1H.com Benchmarking the Efficiency of Your Tools Marian Marinov - mm@1h.com Co-founder & CEO of 1H Ltd.
  • AGENDA
    • No benchmarking
        • you already know Benchmark.pm
    • What we should consider
        • before and after benchmarks
    • Some common misconceptions
    • Identify places where we need to optimize
  • $ $$ $$$ $$ $ If you rate my survey, I'll hook you up with $20 cPCache $$$. Go to this address to take the survey: http://go.cpanel.net/b11 and come up to the podium once you've completed it.
  • Tools of trade
    • time
    • Benchmark.pm
    • Devel::SmallProf
    • Devel::FastProf
    • Devel::NYTProf
    • DBI::Profiler
    • Devel::Dependencies
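Since the deck assumes Benchmark.pm is already familiar, here is only a minimal refresher; the loop bodies and iteration counts below are illustrative, not taken from the slides. The core cmpthese() runs each candidate and prints a rate-comparison table:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @data = (1 .. 1_000);

# cmpthese() runs each candidate the given number of times and prints
# a table comparing their rates (iterations per second).
cmpthese(5_000, {
    foreach_loop => sub {
        my $sum = 0;
        $sum += $_ foreach @data;
    },
    while_loop => sub {
        my ($sum, $i) = (0, 0);
        while ($i < @data) { $sum += $data[$i]; $i++ }
    },
});
```

A negative first argument (e.g. cmpthese(-1, ...)) runs each candidate for at least that many CPU seconds instead of a fixed count, which is usually more reliable for very fast subs.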
  • Before benchmarking
    • Perl interpreter and its environment
    • Application load time
      • Interpreter
      • Modules
    • Perl does not release memory
    • Try to test only the suspected code
    • How to profile a big web application
      • HTTP::Server::Simple::*
    • How to identify what is important
  • Before benchmarking @INC: /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.7/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.6/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl/5.8.7 /usr/lib/perl5/site_perl/5.8.6 /usr/lib/perl5/site_perl/5.8.5 /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.7/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.6/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 ................
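A long @INC like the one above is not free: every use or require walks these directories in order, stat()ing a candidate file in each one until the module is found. A quick way to see what your own interpreter will search (paths vary per install):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Each "use Foo::Bar" looks for Foo/Bar.pm in every @INC directory in
# order, so search cost grows with @INC length and with how deep in
# the list the module actually lives.
printf "%d directories searched per module load\n", scalar @INC;
print "  $_\n" for @INC;
```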
  • Before benchmarking
    • The effect of profiling/benchmarking
      • the code will run slower
      • it will use more memory
    • Parallel or single execution
    • Competing for shared resources
    • Using external programs
  • After benchmarking
    • The five major factors:
      • CPU usage
      • Memory usage
      • I/O usage
      • Time spent waiting
      • Deployment environment
    • Never look only at a single factor!
    • !!! Always consider them together !!!
  • Understanding the data
    • before optimization
    332340 1.82s analyze_proc::CORE:match
    • after optimization
    1. 332337 1.52s analyze_proc::CORE:match
    2. 312320 1.52s analyze_proc::CORE:match
    • actual performance increase
    • 1. 0.30s gain
    • 2. 0.11s gain
  • file-bench-module.pl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Util;

    my $dir = '/home/hackman/bench-tests';
    my $f = File::Util->new();
    foreach my $obj ($f->list_dir($dir)) {
        print "$obj\n";
    }
  • file-bench-plain.pl
    #!/usr/bin/perl
    use warnings;
    use strict;

    my $dir = '/home/hackman/bench-tests';
    opendir B, $dir;
    while (my $obj = readdir(B)) {
        print "$obj\n";
    }
    closedir B;
  • file-bench-sub.pl
    #!/usr/bin/perl
    use warnings;
    use strict;

    my $dir = '/home/hackman/bench-tests';

    sub check_dir {
        my $dir = shift;
        opendir B, $dir;
        while (my $obj = readdir(B)) {
            print "$obj\n";
        }
        closedir B;
    }

    check_dir($dir);
  • Using a module or our own function
    • File::Util
    real 0.067  user 0.059  sys 0.008  - hotstare
    real 0.030  user 0.017  sys 0.013  - beast
    real 0.013  user 0.007  sys 0.002  - vm on beast
    real 0.031  user 0.020  sys 0.005  - remote to beast
    • Using a sub
    real 0.019  user 0.015  sys 0.004  - hotstare
    real 0.006  user 0.003  sys 0.003  - beast
    real 0.004  user 0.002  sys 0.001  - vm on beast
    real 0.012  user 0.004  sys 0.003  - remote to beast
    • Plain opendir
    real 0.019  user 0.015  sys 0.004  - hotstare
    real 0.005  user 0.003  sys 0.002  - beast
    real 0.004  user 0.002  sys 0.001  - vm on beast
    real 0.011  user 0.004  sys 0.003  - remote to beast
  • Reading the /proc
    • File::Util
    real 0.066  user 0.058  sys 0.008  - hotstare
    real 0.013  user 0.010  sys 0.003  - beast
    real 0.013  user 0.008  sys 0.002  - vm on beast
    • Using a sub
    real 0.019  user 0.015  sys 0.004  - hotstare
    real 0.006  user 0.003  sys 0.003  - beast
    real 0.006  user 0.002  sys 0.001  - vm on beast
    • Plain opendir
    real 0.018  user 0.014  sys 0.004  - hotstare
    real 0.005  user 0.003  sys 0.002  - beast
    real 0.004  user 0.002  sys 0.001  - vm on beast
  • Reading of data from files
    • Hot and cold caches
    • Common files and directories
    • the /proc file system
    • files on NFS or other shared storage
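The hot/cold-cache point can be seen without root access. This sketch (it assumes /etc/hosts exists and is readable, and the first read may already be hot if the file was touched recently) times two consecutive reads of the same file; the second is served from the page cache:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Hot vs cold cache, sketched: the second read of the same file is
# served from the page cache and is typically much faster.
my $file = '/etc/hosts';   # assumption: a small, world-readable file
for my $run (1, 2) {
    my $t0 = [gettimeofday];
    open my $fh, '<', $file or die "open $file: $!";
    my @lines = <$fh>;
    close $fh;
    printf "run %d: %.6f s\n", $run, tv_interval($t0);
}
```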
  • Module load times
    • File::Util
    • real 0.084 user 0.071 sys 0.013
    • Load times
    • real 0.060 user 0.054 sys 0.006
    • Time spent in the code
    • real 0.024 user 0.017 sys 0.007
    $ for i in {1..1000}; do time perl -MFile::Util -e 0; done > results 2>&1
    $ awk 'simple counters here' results
    Average  real: 0.060  user: 0.054  sys: 0.006
  • Module load times
    $ perl -d:Dependencies -MFile::Util -e 0
    Devel::Dependencies finds 10 dependencies:
    /usr/lib/perl5/site_perl/5.8.8/auto/File/Util/autosplit.ix
    AutoLoader.pm
    Class/OOorNO.pm
    Exporter/Heavy.pm
    Fcntl.pm
    File/Util.pm
    XSLoader.pm
    constant.pm
    vars.pm
    warnings/register.pm
    No DB::DB routine defined at -e line 1.
  • Profile your apps Devel::NYTProf
    • Common misconceptions
    • Object-oriented programming
    • Functional programming
    • Better load time is always good
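The OOP-versus-functional cost is easy to measure directly. This hypothetical micro-benchmark (the Counter class and bump subs are made up for illustration) compares method dispatch against a plain subroutine call using the core Benchmark module:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical class for illustration only.
package Counter;
sub new  { bless { n => 0 }, shift }
sub bump { $_[0]{n}++ }

package main;

my $obj   = Counter->new();
my $plain = 0;
sub bump_plain { $plain++ }

# A method call resolves ->bump through the object's class at run
# time; the plain sub call is resolved at compile time.
cmpthese(100_000, {
    method_call => sub { $obj->bump() },
    plain_sub   => sub { bump_plain() },
});
```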
  • Conclusions
    • Premature optimizations are a spreading disease
    • Understanding the numbers and the surroundings is a must
    • Using Perl the wrong way will cost you
    • You always trade flexibility and maintainability for performance
  • Questions?
  • Thank you! Please visit us at Booth 23. Marian Marinov - mm@1h.com, Co-founder & CIO at 1H Ltd. 1H.com