Benchy: Lightweight framework for Performance Benchmarks

Benchy: a lightweight framework for performance benchmarks of Python scripts.
Presented at the XXVI Pernambuco Python User Group Meeting in Recife, Pernambuco, Brazil, on 06.04.2013.

Transcript

  1. Benchy: a lightweight performance benchmark framework for Python scripts. Marcel Caraciolo (@marcelcaraciolo). Developer, scientist, contributor to the Crab recsys project, works with Python for 6 years, interested in mobile, education, machine learning and dataaaaa! Recife, Brazil - http://aimotion.blogspot.com
  2. About me: co-founder of Crab, a Python recsys library; Chief Scientist at Atepassar, an e-learning social network; co-founder and instructor at PyCursos, teaching Python online; co-founder of Pingmind, an online infrastructure for MOOCs. Interested in Python, mobile, e-learning and machine learning!
  3. Why do we test?
  4. Freedom from fear
  5. Testing for performance
  6. What made my code slower?
  7. me
  8. Solutions?
        In [1]: def f(x):
           ...:     return x*x
           ...:
        In [2]: %timeit for x in range(100): f(x)
        100000 loops, best of 3: 20.3 us per loop
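     The same measurement can be reproduced outside IPython with the standard timeit module. A minimal sketch; only the timed statement and setup come from the slide, the reporting line is added here:

        import timeit

        # Time "for x in range(100): f(x)" in 100000-iteration runs, repeated
        # 3 times, and report the best run per loop, like %timeit's "best of 3".
        setup = "def f(x): return x * x"
        stmt = "for x in range(100): f(x)"
        best = min(timeit.repeat(stmt, setup=setup, repeat=3, number=100000))
        print("best of 3: %.1f us per loop" % (best / 100000 * 1e6))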
  9. Stop. Help is near: https://github.com/python-recsys/benchy. Performance benchmarks to compare several Python code alternatives; generates graphs using matplotlib; memory consumption and performance timing available.
  10. Performance benchmarks
  11. Writing benchmarks
        $ easy_install -U benchy   # pip install -U benchy
  12. Writing benchmarks
        from benchy.api import Benchmark

        common_setup = ""
        statement = "lst = ['i' for x in range(100000)]"
        benchmark1 = Benchmark(statement, common_setup, name="range")
        statement = "lst = ['i' for x in xrange(100000)]"
        benchmark2 = Benchmark(statement, common_setup, name="xrange")
        statement = "lst = ['i'] * 100000"
        benchmark3 = Benchmark(statement, common_setup, name="multiply")
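     common_setup is empty above; it is where code shared by every statement goes (imports, fixtures), executed before the timed statement in the style of a timeit setup string. A hedged sketch, with the numpy import, the data fixture and the benchmark names being purely illustrative:

        from benchy.api import Benchmark

        # Illustrative only: share an import and a fixture across two benchmarks.
        common_setup = "import numpy as np; data = np.arange(100000)"
        bench_np = Benchmark("total = data.sum()", common_setup, name="numpy sum")
        bench_py = Benchmark("total = sum(data)", common_setup, name="builtin sum")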
  13. Use them in your workflow
        In [1]: print benchmark1.run()
        {'memory': {'repeat': 3,
                    'success': True,
                    'units': 'MB',
                    'usage': 2.97265625},
         'runtime': {'loops': 100,
                     'repeat': 3,
                     'success': True,
                     'timing': 7.5653696060180664,
                     'units': 'ms'}}
     Same code as %timeit and %memit
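     Because run() returns that plain dict, individual figures can be pulled out in scripts; a small sketch assuming the key names printed above:

        result = benchmark1.run()
        # Key names taken from the printed output above.
        print('timing: %s %s' % (result['runtime']['timing'],
                                 result['runtime']['units']))
        print('memory: %s %s' % (result['memory']['usage'],
                                 result['memory']['units']))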
  14. Beautiful reports
        rst_text = benchmark1.to_rst(results)
  15. Benchmark suite
        from benchy.api import BenchmarkSuite

        suite = BenchmarkSuite()
        suite.append(benchmark1)
        suite.append(benchmark2)
        suite.append(benchmark3)
  16. Run the benchmarks
        from benchy.api import BenchmarkRunner

        runner = BenchmarkRunner(benchmarks=suite, tmp_dir='.',
                                 name='List Allocation Benchmark')
        n_benchs, results = runner.run()
  17. Who is the fastest?
        {Benchmark(list with "*"):
            {'runtime': {'timing': 0.47582697868347168, 'repeat': 3, 'success': True,
                         'loops': 1000, 'timeBaselines': 1.0, 'units': 'ms'},
             'memory': {'usage': 0.3828125, 'units': 'MB', 'repeat': 3, 'success': True}},
         Benchmark(list with xrange):
            {'runtime': {'timing': 5.623779296875, 'repeat': 3, 'success': True,
                         'loops': 100, 'timeBaselines': 11.818958463504936, 'units': 'ms'},
             'memory': {'usage': 0.71484375, 'units': 'MB', 'repeat': 3, 'success': True}},
         Benchmark(list with range):
            {'runtime': {'timing': 6.5933513641357422, 'repeat': 3, 'success': True,
                         'loops': 100, 'timeBaselines': 13.856615239384636, 'units': 'ms'},
             'memory': {'usage': 2.2109375, 'units': 'MB', 'repeat': 3, 'success': True}}}
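     Since results maps each Benchmark to the dict shown above, a quick ranking falls out of a sort; a sketch assuming exactly that structure:

        # Sort by runtime, fastest first, and print a small ranking.
        ranking = sorted(results.items(),
                         key=lambda item: item[1]['runtime']['timing'])
        for bench, res in ranking:
            print('%s: %.3f %s (%.1fx baseline)' % (
                bench, res['runtime']['timing'], res['runtime']['units'],
                res['runtime']['timeBaselines']))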
  18. Plot relative
        import matplotlib.pyplot as plt

        fig = runner.plot_relative(results, horizontal=True)
        plt.savefig('%s_r.png' % runner.name, bbox_inches='tight')
  19. Plot absolute
        runner.plot_absolute(results, horizontal=False)
        plt.savefig('%s.png' % runner.name)  # bbox_inches='tight'
  20. Full report
        rst_text = runner.to_rst(results, runner.name + '.png',
                                 runner.name + '_r.png')
        with open('teste.rst', 'w') as f:
            f.write(rst_text)
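     If an HTML page is wanted, the generated reST can be rendered with docutils; this step is not part of benchy, and the file names below simply reuse teste.rst from the slide:

        # Optional: render the reST report to standalone HTML with docutils.
        from docutils.core import publish_file

        publish_file(source_path='teste.rst',
                     destination_path='teste.html',
                     writer_name='html')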
  21. Full report
  22. Full report
  23. Why? Benchmarking the pairwise functions of the Crab recsys library: http://aimotion.blogspot.com.br/2013/03/performing-runtime-benchmarks-with.html
  24. Get involved: create the benchmarks as TestCases (a hypothetical sketch follows below); check automatically for benchmark files and run them the way nose.test() does; more setup and teardown control; group benchmarks in the same graph
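     The TestCase idea could end up looking roughly like this; the class and the discovery convention below are purely hypothetical and are not part of benchy today:

        # Purely hypothetical sketch of a TestCase-style benchmark with
        # setup/teardown hooks; benchy does not provide this API yet.
        class ListAllocationBench(object):
            """Methods starting with bench_ would be discovered and timed."""

            def setup(self):
                self.size = 100000    # shared fixture, rebuilt before each run

            def bench_range(self):
                [x for x in range(self.size)]

            def bench_multiply(self):
                [0] * self.size

            def teardown(self):
                pass                  # release fixtures here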
  25. Stay tuned! Historical commits from version control now benchmarked
  26. https://github.com/python-recsys/benchy - Forks and pull requests are welcome!
  27. Benchy: a lightweight performance benchmark framework for Python scripts. Marcel Caraciolo (@marcelcaraciolo). Developer, scientist, contributor to the Crab recsys project, works with Python for 6 years, interested in mobile, education, machine learning and dataaaaa! Recife, Brazil - http://aimotion.blogspot.com
