Benchmarking Perl Lightning Talk (NPW 2007)
Transcript

  • 1. Benchmarking Perl brian d foy <brian@stonehenge.com> Stonehenge Consulting Services Nordic Perl Workshop 2007 Lightning Talk Edition
  • 2. Benchmarks are reference points
  • 3. All Things Being Equal... There are lies, damned lies, and benchmarks. Everyone has an agenda. You don’t run testbeds as production. Skepticism wins the day.
  • 4. Language Shootout Benchmarking programming languages? How can we benchmark a programming language? We can't - we benchmark programming language implementations. How can we benchmark language implementations? We can't - we measure particular programs. http://shootout.alioth.debian.org/
  • 5. Availability, Disk Use, Concurrent Users, CPU Time, Completion Time, Memory Use, Uptime, Bandwidth Use, Network Lag, Responsiveness, Binary Size, Programmer Time [a timing sketch follows the transcript]
  • 6. Programmer Time
  • 7. Theory of Measurement: Observation changes the universe. Nothing is objective. Tools have inherent uncertainties.
  • 8. Modules: Devel::Size - size(), total_size(); Devel::Peek - mstat(); DBI::Profile; Benchmark; Devel::DProf, Devel::Smallprof [usage sketches follow the transcript]
  • 9. Common Benchmark misuse

        use Benchmark 'cmpthese';

        my @long = ('a' .. 'z', '');
        my $iter = shift || -1;

        cmpthese( $iter, {
            long_block_ne  => q{grep {$_ ne ''} @long},
            long_block_len => q{grep {length} @long},
            long_bare_ne   => q{grep $_ ne '', @long},
            long_bare_len  => q{grep length, @long},
        } );

    http://www.perlmonks.org/index.pl?node_id=536503
  • 10. What’s wrong with this picture?

                             Rate bare_ne block_len block_ne bare_len
        long_bare_ne    3635361/s      --       -6%      -6%      -8%
        long_block_len  3869054/s      6%        --      -0%      -2%
        long_block_ne   3872708/s      7%        0%       --      -2%
        long_bare_len   3963159/s      9%        2%       2%       --
  • 11. Do something useful

        use Benchmark 'cmpthese';

        our @long = ('a' .. 'z', '');
        my $iter = shift || -1;

        cmpthese( $iter, {
            long_block_ne  => q{my @array = grep {$_ ne ''} @long},
            long_block_len => q{my @array = grep {length} @long},
            long_bare_ne   => q{my @array = grep $_ ne '', @long},
            long_bare_len  => q{my @array = grep length, @long},
        } );

    [a coderef-based variant is sketched after the transcript]
  • 12. These numbers make sense 15” Powerbook G5, Mac OS X.4.5, perl5.8.4

                            Rate block_ne block_len bare_ne bare_len
        long_block_ne    31210/s       --       -3%     -3%      -5%
        long_block_len   32119/s       3%        --     -0%      -2%
        long_bare_ne     32237/s       3%        0%      --      -2%
        long_bare_len    32755/s       5%        2%      2%       --
  • 13. Conclusion: Decide what is important to you. Realize you have bias. Report the situation. Don’t turn off your brain. Make predictions that you can verify.
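
Slide 5 lumps together very different kinds of measurements, and the core Benchmark module only covers a couple of them. A minimal sketch, not from the talk, showing that a single timethis() report already distinguishes two of those numbers, completion (wallclock) time and CPU time; the workload and iteration count here are invented purely for illustration:

    use strict;
    use warnings;
    use Benchmark qw(timethis);

    # Run a throwaway workload 50,000 times. The report prints both
    # wallclock seconds and user+system CPU time, so decide which of
    # the two you actually care about before quoting the number.
    timethis( 50_000, sub {
        my @sorted = sort { $a <=> $b } map { rand } 1 .. 100;
    } );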
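
Slide 8 lists the modules by name only, so here is a minimal usage sketch of the first two; the %config structure is invented for illustration, and the byte counts it prints depend on your perl build:

    use strict;
    use warnings;
    use Devel::Size qw(size total_size);

    # A made-up data structure to measure.
    my %config = (
        name  => 'npw2007',
        langs => [ qw(perl c ruby) ],
        year  => 2007,
    );

    # size() counts only the top-level structure; total_size() follows
    # references and counts everything reachable from it.
    printf "size:       %d bytes\n", size( \%config );
    printf "total_size: %d bytes\n", total_size( \%config );

    # Devel::Peek's mstat() dumps malloc statistics, but it only says
    # much when perl was built with its own malloc.
    # use Devel::Peek qw(mstat);
    # mstat('after building %config');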
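
Two things change between slide 9 and slide 11: @long becomes a package variable (our) instead of a lexical (my), and each snippet assigns its grep results to an array. The scoping change matters because Benchmark compiles q{} string snippets away from the calling script's lexical scope, so a my @long there is not the array the snippets actually see. What follows is my rewrite, not anything from the talk: the same comparison written with coderefs, which close over the lexical and sidestep the string-eval scoping issue:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Coderefs are closures, so they see the lexical @long directly;
    # no string eval, no package variable required.
    my @long = ( 'a' .. 'z', '' );
    my $iter = shift(@ARGV) || -1;

    cmpthese( $iter, {
        long_block_ne  => sub { my @array = grep { $_ ne '' } @long },
        long_block_len => sub { my @array = grep { length } @long },
        long_bare_ne   => sub { my @array = grep $_ ne '', @long },
        long_bare_len  => sub { my @array = grep length, @long },
    } );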
