- Site: the site content changes rarely. The best case scenario is the nginx + wsgi caching facilities, see more here.
- Handler: the content caching policy varies across the site; there are only a few handlers where content caching is applicable.
- Managed (semi-real time): the site is dynamic; it is not permissible to cache a given output unless there is a way to invalidate the cached content once the underlying data changes, e.g. an item price is updated or a new message arrives (see the sketch after this list). Read more here or give it a try.
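To make the distinction concrete, here is a minimal, framework-agnostic sketch of handler-level caching combined with the "managed" invalidation scenario: the handler output is stored under a key, and that key is dropped explicitly whenever the data behind it changes. All names (the cache helpers, item_handler, update_item_price) are illustrative placeholders, not APIs of the frameworks benchmarked below.

```python
# Minimal sketch: handler caching plus explicit invalidation on data change.
# Everything here is hypothetical and for illustration only.
import time

cache = {}  # key -> (expires_at, cached_response)

def get_cached(key):
    entry = cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]
    return None

def set_cached(key, response, ttl=60):
    cache[key] = (time.time() + ttl, response)

def invalidate(key):
    cache.pop(key, None)

def render_item(item_id):
    # stands in for an expensive data lookup + template render
    return '<h1>item %d</h1>' % item_id

def item_handler(item_id):
    """Handler-level caching: render once, serve from cache afterwards."""
    key = 'item:%d' % item_id
    response = get_cached(key)
    if response is None:
        response = render_item(item_id)
        set_cached(key, response)
    return response

def update_item_price(item_id, price):
    """Managed invalidation: a data change drops the related cache entry."""
    # ... persist the new price somewhere ...
    invalidate('item:%d' % item_id)
```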
- django 1.4.2
- flask 0.9
- wheezy.web 0.1.307
$ nice memcached -s /tmp/memcached.sock

The throughput (requests served per second) was captured using Apache Benchmark (concurrency level 500, 1M requests). Raw numbers:
cpython2.7 + uwsgi1.3

Throughput (requests per second)

no gzip        welcome    memory     pylibmc    memcache
django         11255*     6053       2690       2115
flask          11600*     10303      9128 (1)   8266
wheezy.web     11664*     11402*     11616*     11458*

gzip           welcome    memory     pylibmc    memcache
django         7296       6320       2829       2222
flask          -          -          -          -
wheezy.web     14678      24131      22210      16392

Average Response Time (ms)

no gzip        welcome    memory     pylibmc    memcache
django         44         82         185        236
flask          43         48         54 (1)     60
wheezy.web     42         43         43         43

gzip           welcome    memory     pylibmc    memcache
django         68         79         176        225
flask          -          -          -          -
wheezy.web     34         20         22         30

* - network limit, ~103 MB/s
(1) - 14813 requests failed

The benchmark results above 22K are not reliable due to hardware limitations.
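The pylibmc and memcache columns talk to the memcached instance started on the unix socket above. Purely as an illustration, a python-memcached client pointed at that socket might look like the snippet below; the 'unix:' address form and the key/value are assumptions, not part of the benchmark code.

```python
# Illustrative only: connect python-memcached to the unix socket started
# above and push/pull one cached value. The address syntax is an assumption;
# check the client library documentation for the exact form.
import memcache

mc = memcache.Client(['unix:/tmp/memcached.sock'])
mc.set('welcome', '<h1>Hello</h1>', time=60)
print(mc.get('welcome'))
```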
Isolated Benchmark
I got rid of the application server and the network boundary, simulated a valid WSGI request, and isolated the calls to the framework alone. Raw numbers:

no gzip

welcome        msec       rps        tcalls     funcs
django         16607      6022       195        85
flask          37906      2638       210        107
wheezy.web     4109       24336      32         27

memory         msec       rps        tcalls     funcs
django         91041      1098       783        128
flask          44609      2242       244        122
wheezy.web     3094       32320      28         26

pylibmc        msec       rps        tcalls     funcs
django         106767     937        314        106
flask          50365      1986       241        122
wheezy.web     19248      5195       54         43

memcache       msec       rps        tcalls     funcs
django         137809     726        834        141
flask          55020      1818       307        146
wheezy.web     28073      3562       102        61

gzip

welcome        msec       rps        tcalls     funcs
django         79911      1251       279        107
flask          -          -          -          -
wheezy.web     36019      2776       45         33

memory         msec       rps        tcalls     funcs
django         91369      1094       784        129
flask          -          -          -          -
wheezy.web     3167       31576      28         26

pylibmc        msec       rps        tcalls     funcs
django         108117     925        315        107
flask          -          -          -          -
wheezy.web     18833      5310       54         43

memcache       msec       rps        tcalls     funcs
django         137303     728        829        141
flask          -          -          -          -
wheezy.web     26590      3761       130        60

msec - total time taken in milliseconds, rps - requests processed per second, tcalls - total number of calls made by the corresponding web framework, funcs - number of unique functions used.
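For a rough idea of what "simulated a valid WSGI request" means, the sketch below builds an environ dict by hand and times calls to the bare WSGI callable with timeit. The application name, path and request count are placeholders, and the actual harness (including how tcalls/funcs were profiled) may differ.

```python
# Sketch of an isolated benchmark: skip uwsgi and the network entirely and
# call the framework's WSGI callable with a hand-built environ.
import io
import sys
import timeit

def make_environ(path='/welcome'):
    # Minimal WSGI environ for a GET request; a real harness may add more keys.
    return {
        'REQUEST_METHOD': 'GET',
        'PATH_INFO': path,
        'QUERY_STRING': '',
        'SERVER_NAME': 'localhost',
        'SERVER_PORT': '8080',
        'SERVER_PROTOCOL': 'HTTP/1.1',
        'wsgi.version': (1, 0),
        'wsgi.url_scheme': 'http',
        'wsgi.input': io.BytesIO(b''),
        'wsgi.errors': sys.stderr,
        'wsgi.multithread': False,
        'wsgi.multiprocess': False,
        'wsgi.run_once': False,
    }

def start_response(status, headers, exc_info=None):
    pass  # producing the response body is all the benchmark needs

def bench(application, number=100000):
    # `application` is whichever framework's WSGI callable is under test.
    elapsed = timeit.timeit(
        lambda: b''.join(application(make_environ(), start_response)),
        number=number)
    print('msec: %d, rps: %d' % (elapsed * 1000, number / elapsed))
```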
Environment Specification
- Client: Intel Core 2 Quad CPU Q6600 @ 2.40GHz × 4, Kernel 3.2.0-3-686-pae
- Server: Intel Xeon CPU X3430 @ 2.40GHz x 4, Kernel 3.2.0-3-amd64
- Debian Testing, 1 Gbit LAN
I know you like to add other frameworks in; try out Mako integration with dogpile.cache: http://dogpilecache.readthedocs.org/en/latest/api.html#mako-integration . If you use <%page cached="True"> in a template, it'll go through the cache.
(Thought I posted this yesterday, might have forgotten to hit "publish".)
Mike, thank you for the comment. Unfortunately this benchmark is solely about web framework caching. In the case of template caching, the web handler is still executed, so template caching is considered less effective.
I think template `page level` caching makes little sense compared to handler caching. In both cases the result is the same content; however, with handler caching there is no data lookup/preparation.
I will be happy to try dogpile.cache with a web framework that offers such integration.