PyPy: is it ready for production? The sequel


Slide deck for my talk at PyCon Singapore 2013.

Speaker notes
  • I have listed a number of resources that I found helpful but this talk is more about using pypy rather than how it works.
  • The first 8 criteria came from a question on stackexchange; the last 2 are my additional requirements. A slightly more detailed definition than the management version: it runs, it makes money. You may disagree with the list but it's the criteria I will be using. Also I will be biased towards the needs of the company I work for. So let's work through the list to see how pypy stacks up.
  • It runs great on x86 32-bit and 64-bit platforms under Linux, Windows and OS X. There are other backend implementations – ARM, PPC, Java & .NET VMs. Some have had more love than others. Pypy implements Python language version 2.7.3, supporting all of the core language and passing the Python test suite. It supports most of the standard library modules. It has support for the CPython C API but it is beta quality. I will go into more detail about standard library and other module compatibility later in the talk.
  • I am not a language interpreter designer so I cannot really comment on the design, but you would assume that with the number of years of development and refactoring by the pypy team it is a well thought out design. With regard to maintainability, because much of the pypy toolchain uses RPython and the architecture is complex, I feel it is hard for the normal python programmer to contribute to the maintenance of pypy. The learning curve is steep, but maintainability of the pure-python portions of the pypy components is certainly easier.
  • As I said before, pypy implements Python language version 2.7.3.
  • As at pypy 2.0.2, C API support is considered beta and while it worked for many of the modules we use, e.g. PIL, it failed with the C extensions for reportlab. This wasn't a show-stopper as these extensions also have python equivalents in the standard reportlab distribution. Of course, our python library use will be different from yours, so your experience will be different as well.
  • Since Wes used ipython notebook in his keynote this morning, I thought I should see if it would work under pypy. Apart from a unicode-to-char issue that was simply patched, it worked great. Pandas was a bigger challenge.
  • pypy has a work-in-progress implementation of numpy written in pypy. It is called numpypy.
  • The above plot represents PyPy trunk (with JIT) benchmark times normalized to CPython as at 12 June 2013. Smaller is better. The standard benchmarks are limited to one domain and in many cases do not cover complete processes or workloads. For example:
  • The django benchmark in the standard pypy benchmark suite was originally part of the unladen swallow benchmarks. It only tests the template rendering performance of django. There is nothing wrong with this and it's a standard benchmark technique. So if you see the results of this benchmark, it's likely the performance of django template rendering under pypy would be faster than cpython. Does this mean your django website performance would be better? Maybe, or maybe not.
  • My benchmarks are a little different from the standard pypy ones as they simulate workloads similar to what we use python for at work. So rather than benchmarking a small portion or function as the standard benchmarks do, mine cover either a complete process or the majority of one. So my benchmarks are affected by io as well as in-program execution. Since the majority of the non-web use of python in our workplace is extract/transform/load (ETL) tasks, this is what the benchmarks are doing.
  • To perform the benchmarks, I cloned the pypy benchmark tools and added my benchmarks to it. You can see these at. The benchmarks were run on a VMWare virtual instance with 2GB RAM, 1 core, 64-bit, running Scientific Linux 6.2. The base CPython used was 2.7.2 and comparison benchmarks were run against pypy-jit release 1.9 and the nightly pypy-jit build of August 14 2012, collecting average execution time and memory use over 50-iteration benchmark runs. For the bm_csv2xml benchmark, a 100Mb csv file of census data is loaded, parsed and output as xml to a file. It is faster than cpython, so things are looking good, but I had hoped it would be a little better.
  • I created a benchmark of just the csv load and parse, and was surprised to see that under pypy 1.9 it was slower than the cpython equivalent; so in my previous benchmark it was the xml output that gave the improved performance under pypy. Note that in 2.0.2 csv conversion is now on par, which indicates there has been an improvement in the JIT.
  • The bm_interp benchmark just provides a baseline of what memory the interpreter alone uses prior to any real work. Just in case these benchmark results were an artifact of my vm configuration, I also reran them on physical hardware and obtained similar results. If I had stopped here, you would have said that pypy didn't meet my production criteria, but since some of the components that affect the performance are written in python under pypy, I decided to see why performance wasn't the same as or better than cpython. I started with the low-hanging fruit – csv performance.
  • You can use the pypy jit viewer to see what is happening, and of course I can review the source of the csv module since it's written in pure python. Thanks to some input in the pypy issue tracker,
  • after a number of attempts I was able to modify the csv module so that the bm_csv benchmark performed at the same speed as cpython. This also gave a small performance improvement in the bm_csv2xml benchmark. Based on these improvements, it is very likely we can use pypy in place of cpython for the ETL jobs where we load csv files and convert to xml. I also intend to investigate where the performance bottlenecks are in the other ETL process benchmarks, to see if we can get gains similar to those pypy gives us for the bm_csv2xml benchmark.
  • If we revisit the definition of production ready, certainly if we just use items 1-7 as the criteria, pypy is production ready when compared with other python implementations that are being used in production. If you want to run existing python code under pypy, then pypy's compatibility with non-standard python libraries needs to be considered, and getting your hands dirty by running the code under pypy is really the best way to see if pypy will work. If nothing else you can report an issue to the pypy team and they can use it to improve compatibility. And will our company be deploying anything in production under pypy? It is likely sometime this year we will look at deploying it for certain ETL workloads due to measured benchmark performance; the additional memory overhead isn't an issue for us. So my recommendation is that if you are looking for performance improvements, give pypy a go. You may be surprised.
  • But performance shouldn't be the only reason to consider pypy; there are various pypy side projects that will have good benefits for the python community as a whole. Last year the pypy team released cffi, a foreign function interface for Python calling C code. The aim of this project is to provide a convenient and reliable way of calling C code from Python. It works with both pypy and cpython 2.6+. The pypy team are working on a pypy implementation of numpy and are close to a py3k language-compliant version. If you want to help with pypy, check out the howto-help page & the donation page.
  • pypy on ARM is 3 times faster than cpython on ARM, and they believe there will be more gains as the assembler output is optimized. I didn't have a chance to run a complete set of benchmarks, but initial results support the 3-times claim.
  • To illustrate how cffi can simplify the integration of cpython & pypy with C libraries, let's use a simple example. To call the C crypt function from Python with ctypes, you must identify the C types for the input arguments and the result type programmatically. Also, the result is accessed via the contents.value attribute.
  • With cffi we can copy & paste the man page definition of the crypt function, and cffi works out the input argument and result types. A C compiler is required during development but not for distributed modules. Cffi is shipped with pypy 2.0 and is available for Python 2.6+ and Python 3.2 as a pypi install. Cffi speed is comparable to ctypes on CPython (a bit faster, but with a higher warm-up time). It is already faster on PyPy (1.5x-2x).

    1. is it ready for production? the sequel
       Mark Rees, CTO, Century Software (M) Sdn Bhd
    2. pypy & me
       - not affiliated with the pypy team
       - have followed its development since 2004
       - use cpython and jython at work
       - used ironpython for small projects
       - gave a similar talk at PyCon AU 2012
       the question: would pypy improve performance of some of our workloads?
       i am a manager who still wants to be a programmer, so i did the analysis
    3. pypy
       history
       - first sprint 2003, EU project from 2004-2007
       - open source project from 2007
       - pypy 1.4, the first release suitable for "production", 12/2010
       what is pypy?
       - RPython translation toolchain, a framework for generating dynamic programming language implementations
       - an implementation of Python in Python using the framework
    4. pypy
       current release
       - pypy 2.0 released may 2013, latest iteration 2.0.2
       want to know more about pypy?
       - david beazley pycon 2012 keynote
       - how the pypy jit works
       - why pypy by example
    5. production ready – a definition
       - it runs
       - it satisfies the project requirements
       - its design was well thought out
       - it's stable
       - it's maintainable
       - it's scalable
       - it's documented
       - it works with the python modules we use
       - it is as fast or faster than cpython
    6. pypy – does it run?
       of course, it runs
       See differences between PyPy and CPython
    7. pypy – other production criteria
       does it satisfy the project requirements? - yes
       is its design well thought out? - I would assume so
       is it stable? - yes
       is it maintainable? - 7 out of 10
       is it scalable? - stackless & greenlets built in
       is it documented? - cpython docs for functionality, rpython toolchain; 8 out of 10
    8. pypy – does it work with the modules we use?
       standard library modules supported:
       __builtin__, __pypy__, _ast, _bisect, _codecs, _collections, _ffi, _hashlib, _io, _locale, _lsprof, _md5, _minimal_curses, _multiprocessing, _random, _rawffi, _sha, _socket, _sre, _ssl, _warnings, _weakref, _winreg, array, binascii, bz2, cStringIO, clr, cmath, cpyext, crypt, errno, exceptions, fcntl, gc, imp, itertools, marshal, math, mmap, operator, oracle, parser, posix, pyexpat, select, signal, struct, symbol, sys, termios, thread, time, token, unicodedata, zipimport, zlib
       these modules are supported but written in python:
       cPickle, _csv, ctypes, datetime, dbm, _functools, grp, pwd, readline, resource, sqlite3, syslog, tputil
       many python libs are known to work, like: ctypes, django, pyglet, sqlalchemy, PIL. See for a more exhaustive list.
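A quick way to apply this slide to your own code base (not from the talk, just a sketch) is an import check run under whichever interpreter you are evaluating: run it once under cpython and once under pypy and compare the output.

```python
import importlib

def check_modules(names):
    """Return the subset of names that fail to import under the
    current interpreter (run under pypy, then under cpython)."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# 'definitely_not_a_module' is a deliberately bogus name used to
# show what a failed import looks like.
print(check_modules(['csv', 'definitely_not_a_module']))
```

This only proves the module imports, not that it behaves identically, but it is a cheap first filter before running your test suite under pypy.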
    9. pypy – does it work with the modules we use?
       pypy c-api support is beta; it worked most of the time but failed with reportlab:

       Fatal error in cpyext, CPython compatibility layer, calling PySequence_GetItem
       Either report a bug or consider not using this particular extension
       <OpErrFmt object at 0x7f94582f3100>
       RPython traceback:
         File "pypy_module_cpyext_api_1.c", line 30287, in PySequence_GetItem
         File "pypy_module_cpyext_pyobject.c", line 1056, in BaseCpyTypedescr_realize
         File "pypy_objspace_std_objspace.c", line 3404, in allocate_instance__W_ObjectObject
         File "pypy_objspace_std_typeobject.c", line 33781, in W_TypeObject_check_user_subclass
       Segmentation fault

       But this was the only compatibility issue we had running all of our python code under pypy, and we could fall back to the pure python reportlab extensions anyway.
    10. pypy – does it work with the modules you use?
       ipython notebook requires tornado & zeromq
    11. pypy – does it work with the modules you use?
    12. pypy – does it run as fast as cpython?
    13. pypy django benchmark

       DJANGO_TMPL = Template("""<table>
       {% for row in table %}
       <tr>{% for col in row %}<td>{{ col|escape }}</td>{% endfor %}</tr>
       {% endfor %}
       </table>""")

       def test_django(count):
           table = [xrange(150) for _ in xrange(150)]
           context = Context({"table": table})
           # Warm up Django.
           DJANGO_TMPL.render(context)
           DJANGO_TMPL.render(context)
           times = []
           for _ in xrange(count):
               t0 = time.time()
               data = DJANGO_TMPL.render(context)
               t1 = time.time()
               times.append(t1 - t0)
           return times
    14. my csv to xml benchmark

       def bench(data, output):
           f = open(data, 'rb')
           fn = ['age', ...]
           reader = csv.DictReader(f, fn)
           writer = SAXWriter(output)
           writer.start_doc()
           writer.start_tag('data')
           try:
               for row in reader:
                   writer.start_tag('row')
                   for key in row.keys():
                       writer.tag(key.replace(' ', '_'), body=row[key])
                   writer.end_tag('row')
           finally:
               f.close()
               writer.end_tag('data')
               writer.end_doc()
    15. my pypy benchmarks (slides 15-18 reveal this table row by row)
       average execution time (in seconds)

       benchmark      cpython 2.7.3   pypy-jit 1.9             pypy-jit 2.0.2
       bm_csv2xml     88.26/94.04     28.89 (3.0549x faster)   23.86 (3.7728x faster)
       bm_csv         1.54/1.65       5.89 (3.8122x slower)    1.72 (0.9825x slower)
       bm_openpyxl    1.31/1.21       3.26 (2.4871x slower)    3.15 (2.6051x slower)
       bm_xhtml2pdf   1.91/1.95       3.27 (1.7155x slower)    4.22 (2.1637x slower)
    19. my pypy benchmarks
       max memory use

       benchmark      cpython 2.7.3   pypy-jit 1.9             pypy-jit 2.0.2
       bm_interp      5412/5248       12556 (2.32x larger)     21880 (4.1692x larger)
       bm_csv2xml     7048/7064       55180 (7.8292x larger)   55232 (7.8188x larger)
       bm_csv         5812/5180       52200 (8.9814x larger)   52176 (10.0726x larger)
       bm_openpyxl    12656/12656     77252 (6.1040x larger)   80428 (6.3549x larger)
       bm_xhtml2pdf   48880/34884     236792 (4.8444x larger)  101376 (2.906x larger)
    20. what is the pypy jit doing?
    21. modified csv pypy benchmarks
       average execution time (in seconds)

       benchmark        cpython 2.7.3   pypy-jit 1.9            pypy-jit 2.0.2
       bm_csv2xml_mod   88.25/90.02     23.65 (3.7315x faster)  21.76 (4.0556x faster)
       bm_csv_mod       1.62/1.69       1.89 (0.8571x slower)   1.68 (0.9643x slower)
    22. is pypy ready for production?
       1. it runs
       2. it satisfies the project requirements
       3. its design was well thought out
       4. it's stable
       5. it's maintainable
       6. it's scalable
       7. it's documented
       8. it works with the python modules we use
       9. it can be as fast or faster than cpython
    23. some other reasons to consider pypy
       - cffi – C foreign function interface for python
       - pypy version of numpy (numpypy)
       - py3k version of pypy work-in-progress
       - check out the STM/AME project
       - you can help
    24. now for something different
    25. cffi better than ctypes?
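This slide showed the ctypes side of the crypt example from the notes. The original code is not in the scrape, so the sketch below is my reconstruction under stated assumptions: the library name (libcrypt on glibc systems) and the salt value are mine, and I return the result as a plain char pointer rather than via the contents.value route the notes mention.

```python
import ctypes
import ctypes.util

# Locate the library that provides crypt(3); on glibc systems this is
# libcrypt. The fallback soname is an assumption for systems where
# find_library cannot resolve the short name.
_name = ctypes.util.find_library('crypt') or 'libcrypt.so.1'
libcrypt = ctypes.CDLL(_name)

# With ctypes we must spell out the C argument and result types by
# hand -- exactly the boilerplate cffi removes.
libcrypt.crypt.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
libcrypt.crypt.restype = ctypes.c_char_p

# Traditional crypt: 'ab' is a two-character salt. The call may return
# None on platforms where the DES scheme is disabled.
hashed = libcrypt.crypt(b'my secret', b'ab')
print(hashed)
```

The argtypes/restype declarations are the part the talk contrasts with cffi, where the types are derived from the pasted C declaration instead.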
    26. cffi better than ctypes?
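And this slide showed the cffi side: as the notes describe, you paste the man-page declaration of crypt into cdef and cffi works out the argument and result types. Again a reconstruction, not the talk's exact code; the dlopen name is an assumption and may differ across platforms.

```python
from cffi import FFI

ffi = FFI()
# The declaration is copied from the crypt(3) man page; cffi parses
# it and derives the C argument and result types for us.
ffi.cdef("char *crypt(const char *key, const char *salt);")

# 'crypt' is resolved to the system libcrypt; this short name is an
# assumption and may need adjusting on non-glibc platforms.
lib = ffi.dlopen('crypt')

result = lib.crypt(b'my secret', b'ab')
if result != ffi.NULL:
    # ffi.string converts the returned char* to a Python bytes object.
    print(ffi.string(result))
```

Note there are no hand-written argtypes or restype lines: that is the convenience the talk is pointing at.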
    27. contact details
       Mark Rees
       mark at censof dot com
       +Mark Rees
       @hexdump42
       hex-dump.blogspot.com