- Performance Benchmarks
- Code Quality
- Template Engines
- bobo 1.0.0
- bottle 0.11.6
- cherrypy 3.2.4
- circuits 2.1.0
- django 1.5.1
- flask 0.9
- pyramid 1.4
- tornado 3.0.1
- turbogears 2.2.0
- web.py 0.37
- web2py 2.1.1
- wheezy.web 0.1.365
```sh
apt-get install make python-dev python-virtualenv \
    mercurial unzip
# Up TCP connection limits
sysctl net.core.somaxconn=2048
sysctl net.ipv4.tcp_max_syn_backlog=2048
```

The source code is hosted on Bitbucket; let's clone it into some directory and set up the virtual environment (this will download all necessary package dependencies for each framework listed above).
```sh
hg clone https://bitbucket.org/akorn/helloworld
cd helloworld/01-welcome && make env
```

The makefile has a target for each framework and runs the corresponding example in uWSGI, e.g. to run the django application just issue `make django`. The PyPy installation has its own target, so issue the following:
```sh
make pypy
```

To run things on PyPy, issue a command like this one (this way you specify that you want the gunicorn server and the PyPy environment):
```sh
make wheezy.web SERVER=gunicorn ENV=pypy-1.9
```

The throughput (requests served per second) was captured using apache benchmark (concurrency level 1K, number of requests 1M) against http://yourserver:8080/welcome:
| Framework | CPython 2.7 + uwsgi | PyPy 1.9 + gunicorn | CPython 3.3 + uwsgi |
|---|---|---|---|
| bobo | 20736 | 20633 | - |
| bottle | 24366 | 22229 | 23882 |
| cherrypy | 6418 | 9179 | - |
| circuits | 5797 | 3837 | - |
| django | 16007 | 16848 | 15965 |
| flask | 12483 | 14399 | - |
| pyramid | 23360 | 20202 | 24367 |
| tornado | 15861 | 17265 | 13825 |
| turbogears | 2764 | 5808 | - |
| web.py | 5023 | - | - |
| web2py | 4065 | 2769 | - |
| wheezy.web | 24703 | 22323 | 24858 |
| wsgi | 24938 | 23272 | 24955 |

The benchmark results above 22K requests per second are not reliable due to hardware limitations.
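For reference, the apache benchmark invocation matching the parameters above looks roughly like this (a sketch only; `yourserver` is a placeholder and any extra ab options actually used are not shown in the post):

```sh
# 1M requests at a concurrency level of 1000 against the welcome page
ab -n 1000000 -c 1000 http://yourserver:8080/welcome
```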
Isolated Benchmark
To provide a more reliable benchmark I got rid of the application server and the network boundary: I simulated a valid WSGI request and isolated the calls to the framework alone (the source code is here). Here are the raw numbers.

CPython 2.7:

| Framework | msec | rps | tcalls | funcs |
|---|---|---|---|---|
| bobo* | 9414 | 10622 | 116 | 65 |
| bottle | 2832 | 35308 | 65 | 32 |
| cherrypy* | 54320 | 1841 | 600 | 165 |
| circuits* | 130650 | 765 | 504 | 112 |
| django | 13484 | 7416 | 144 | 75 |
| flask | 18861 | 5302 | 207 | 106 |
| pyramid | 5595 | 17875 | 65 | 48 |
| tornado | 15068 | 6636 | 201 | 66 |
| turbogears* | 301980 | 331 | 1706 | 331 |
| web.py* | 595932 | 168 | 2191 | 65 |
| web2py | 153727 | 651 | 417 | 143 |
| wheezy.web | 1793 | 55786 | 25 | 23 |
| wsgi | 281 | 355255 | 8 | 8 |

PyPy 1.9:

| Framework | msec | rps | tcalls | funcs |
|---|---|---|---|---|
| bobo* | 1884 | 53076 | 114 | 64 |
| bottle | 803 | 124559 | 63 | 32 |
| cherrypy* | 53630 | 1864 | 652 | 185 |
| circuits* | 89780 | 1114 | 509 | 112 |
| django | 3395 | 29456 | 138 | 73 |
| flask | 10273 | 9735 | 215 | 110 |
| pyramid | 1819 | 54990 | 91 | 53 |
| tornado* | 3465 | 28864 | 176 | 62 |
| turbogears* | 275830 | 363 | 1705 | 347 |
| web.py* | - | 29 | 12868 | 73 |
| web2py | 189691 | 527 | 562 | 177 |
| wheezy.web | 475 | 210341 | 26 | 24 |
| wsgi | 287 | 349001 | 8 | 8 |

CPython 3.3:

| Framework | msec | rps | tcalls | funcs |
|---|---|---|---|---|
| bobo | not installed | | | |
| bottle | 4377 | 22848 | 79 | 41 |
| cherrypy | not installed | | | |
| django | 13572 | 7368 | 141 | 74 |
| flask | not installed | | | |
| pyramid | 6611 | 15125 | 87 | 53 |
| tornado | 18331 | 5455 | 220 | 74 |
| turbogears | not installed | | | |
| web.py | not installed | | | |
| web2py | not installed | | | |
| wheezy.web | 1968 | 50818 | 27 | 25 |
| wsgi | 378 | 264275 | 10 | 10 |

msec - total time taken in milliseconds; rps - requests processed per second; tcalls - total number of calls made by the corresponding web framework; funcs - number of unique functions used.
ATTENTION: The web frameworks marked with * (asterisk) experience memory leaks in this test.
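To make the method concrete, here is a minimal sketch of such an isolated setup (my illustration, not the actual harness, which lives in the linked repository; `app` stands for whatever WSGI callable the framework under test exposes). The tcalls/funcs columns can be produced by running the same call under cProfile.

```python
# Build a fake WSGI environ, then call the framework's WSGI app directly,
# bypassing any application server and the network entirely.
from io import BytesIO
from timeit import timeit
from wsgiref.util import setup_testing_defaults


def make_environ():
    environ = {
        'REQUEST_METHOD': 'GET',
        'PATH_INFO': '/welcome',
        'QUERY_STRING': '',
        'wsgi.input': BytesIO(b''),
    }
    setup_testing_defaults(environ)  # fill in the remaining required WSGI keys
    return environ


def start_response(status, headers, exc_info=None):
    pass  # the response body is discarded; only call overhead matters here


def bench(app, number=100000):
    environ = make_environ()
    total = timeit(lambda: list(app(environ.copy(), start_response)),
                   number=number)
    print('%d requests in %.0f msec, %.0f rps'
          % (number, total * 1000, number / total))
```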
Environment Specification
- Client: Intel Core 2 Quad CPU Q6600 @ 2.40GHz × 4, Kernel 3.2.41-2 i686
- Server: Intel Xeon CPU X3430 @ 2.40GHz x 4, Kernel 3.2.41-2 amd64, uwsgi 1.9.6
- Debian Testing, Python 2.7.4, 1 Gbit LAN
Great benchmark! For someone who's not familiar with all the frameworks, it would be great if you could link the list items to the respective benchmarks.
Something is wrong here. Can you post the web2py code you are running? Did you remove the scaffolding models (they do lots of authentication logic)? Did you run it with -N (to disable background cron processes, a web2py-only feature)? Are you using a template (and if so, are you using a template for the other frameworks)? In web2py session handling is always on; did you enable sessions in the other frameworks? Can you post any code that will enable us to reproduce your results, at least for web2py and one of the other frameworks?
The link to the source is in the post, as well as how to run it.
The situation is actually even worse... during the test I noticed a huge memory leak.
I have rebuilt the environment and re-run the web2py test. The problem is gone, no memory leaks. There was no need to change anything.
Thank you, Andriy, for updating the benchmarks. For readers out there, the web2py benchmark still includes session handling (a session is created, although it is not saved) and internationalization handling (a per-request lookup of the translation file to match the requested accept-language).
Sorry, but is the chart updated for web2py?
The raw benchmark results and the chart correspond to the latest version of each web framework at the time the post was updated.
Hello Andriy. We are looking into this. You have at least one problem: in the web2py example you must add session.forget(), otherwise your app creates a session file on every request. This means you have a huge number of session files and access gets slower and slower. Disk access alone can account for the numbers you get. We are discussing the memory leak since many of us have never seen the problem. A user believes it to be a uwsgi configuration issue (https://groups.google.com/d/msg/web2py/Yrtrj3BSFl4/_06tIzqJNzQJ).
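For readers following along, the change being suggested here would look roughly like this in the app's default controller (a sketch only; the actual benchmark code is in the repository):

```python
# controllers/default.py -- `session` and `response` are provided by web2py;
# session.forget() skips session persistence so no session file is written
# on every request.
def index():
    session.forget(response)
    return 'Hello World!'
```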
You have to be kidding, right? I am using a `hello world` application per the examples provided by the corresponding frameworks... including web2py. Let's collaborate on this via email or supply a patch so it doesn't bounce back and forth. Thanks.
The uWSGI configuration is taken from here: http://projects.unbit.it/uwsgi/wiki/Example. That is most likely the source for all web2py deployments using uWSGI...
Thank you. :-)
+1 Massimo - a nice way to handle a rather rude/defensive response.
For readers out there: a "hello world" app in web2py is not a "hello world" app, since web2py does not follow "explicit is better than implicit". Out of the box it is optimized for rapid prototyping and not for speed. Web2py does lots of stuff out of the box whether you like it or not. Anyway, the main problem here is probably saving sessions, and there is a way to disable it.
...and you understand the risk web2py applications (that use sessions) face in production...
Your argument isn't valid, Massimo. You want web2py to be benchmarked with those features disabled because it scores low when they are enabled, but django has those features enabled too and scores high anyway.
For a "hello world" benchmark the framework should be tested with no fine tuning. That's the purpose of this kind of benchmark.
If you check the source you will notice that I tried to pick the minimal possible application stack for each web framework: turned off debug, disabled logging, removed `unnecessary` middleware, etc. If I missed anything... please just let me know.
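As an illustration of the kind of trimming described above, a stripped-down flask variant might look like this (a sketch only; the actual per-framework apps are in the repository and may differ):

```python
import logging

from flask import Flask

logging.disable(logging.WARNING)  # drop framework log output below ERROR

app = Flask(__name__)
app.debug = False  # keep the interactive debugger / reloader off


@app.route('/welcome')
def welcome():
    return 'Hello World!'
```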
Yes. DoS. In fact, old web2py was always creating a session file, but new web2py only saves the session if it changes and is not empty. Let's find out why your benchmarks do not agree with ours. Perhaps this is not the problem.
It does matter to an attacker...
I think you mean bobo 0.2?
No, I am using the latest available version from PyPI.
Andriy, thank you for doing the research and sharing this. It is very interesting to see pypy not living up to its benchmarking hype. A possible extension to this test, if you are inclined to do so, would be to include an Apache + mod_wsgi stack test, and frameworks from other languages, e.g. Haskell; I know the latter is very unlikely :).
I personally believe pypy is the future. If you compare gunicorn on cpython (make wsgi SERVER=gunicorn) vs pypy (make wsgi SERVER=gunicorn ENV=pypy-1.9) you will see cpython is not that good... In my view uwsgi is the right application server to use with cpython, while gunicorn is the one for pypy (I haven't had a chance to try uwsgi+pypy since it is not a primary direction of its development team). As for apache mod_wsgi: it is all good except performance, so I stick with uwsgi to be able to see the difference. It is actually a challenge to benchmark anything across other languages since many factors apply... I tried to answer it for python.
You forgot circuits.web [1] :/
cheers
James
[1] http://pypi.python.org/pypi/circuits/
accepted
Sorry mate, but these "benchmarks" are so damn pointless.
These benchmarks give you an idea:
(1) where web frameworks stand in terms of internal efficiency running a simple thing
(2) how various python implementations handle it
(3) all are good (and sufficient for their communities)
I think these results are very useful. It's certainly given me some insight as to some aspects of circuits and circuits.web I can improve upon. It's also a good measure of the quality (or lack thereof) of some of the frameworks and server implementations (even tornado doesn't suffer from memory leaks).
--JamesMills / prologic
I would be curious to see how the recently released TurboGears 2.2 behaves relative to the other frameworks. Some of the performance improvements that were done in the 2.3 branch have been backported to 2.2, so there has been a noticeable improvement over past versions.
Let me know if you need any help setting up a benchmark for TG; I would be glad to help.
accepted
Thank you for your test!
The results are interesting. I'm going to try to replicate them using your configuration. How did you check for the memory leaks?
I saw that the benchmarks were conducted with minimal setups for the other frameworks. Have you set full_stack=False in config/middleware.py and disabled most optional things in config/app_cfg.py, like:
base_config.use_toscawidgets2 = False
base_config.use_toscawidgets = False
base_config.i18n_enabled = False
base_config.disable_request_extensions = True
base_config.serve_static = False
As the speedup work is all happening on the 2.3 branch, we didn't have any comparative benchmark for 2.2, and I was curious to see how it behaved.
I have isolated turbogears and noticed constant resident memory growth.
The Makefile target creates the project using paster quickstart, then a few files are copied (take a look at the source code). The ini file was updated per the production configuration comments.
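For readers asking how such memory growth can be observed: one simple way is to watch the process's resident set size between batches of requests during the isolated run, along these lines (a sketch, not the actual harness; it assumes a prepared WSGI environ and start_response as in the isolation sketch above):

```python
# Track max resident set size while repeatedly invoking a framework's WSGI app.
import resource


def max_rss_kb():
    # on Linux ru_maxrss is reported in kilobytes
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss


def watch(app, environ, start_response, total=100000, step=10000):
    for i in range(1, total + 1):
        list(app(environ.copy(), start_response))
        if i % step == 0:
            # a steadily growing value here points at a leak
            print('%7d requests, max RSS %d KB' % (i, max_rss_kb()))
```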
Not sure why this older article showed up in my feed reader this morning but its appearance caused me to finally install and run the benchmarks. The takeaway for me was... choose the framework that best fits your mind or task, hopefully both at the same time. They all perform about the same when run with uwsgi. I mostly use DurusWorks which is an evolutionary brother to Quixote, one of the older web frameworks out there. Neither has a large community of users.
The post has been updated due to a community request for django on python3. The difference between frameworks is well seen in three other benchmarks: routing, reverse URLs and caching.
PS: I meant to thank you for putting together the bundle of apps and tests. Regardless of my conclusion, it was still interesting to go through them and I appreciate the effort.
In the benchmarks, what app server does wsgi refer to?
The wsgi test (as well as all the others) is hosted in uwsgi for python2.7/3.3 and in gunicorn for pypy.
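For readers wondering what the `wsgi` rows themselves measure: it is essentially a bare WSGI callable with no framework at all, roughly like the following (a sketch; the actual baseline app is in the repository, and `application` is just the conventional callable name that uwsgi and gunicorn look for):

```python
# the no-framework baseline: a plain WSGI application serving the welcome page
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello World!']
```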
Thank you so much for doing this benchmark. I found out the hard way, after writing an app, that the framework wouldn't scale above 500 RPS regardless of settings. Talk about disappointment! It's crazy that so many frameworks are so far away from pure WSGI performance.
Have you tested the new Falcon framework? It is said to be very fast. It would be interesting to see how it compares with the others.
ReplyDeletehttp://falconframework.org/
http://faruk.akgul.org/blog/python-web-frameworks-benchmark/