The Participants

The objective of this benchmark is not to test deployment options (e.g. uWSGI vs. Gunicorn) but the frameworks themselves. The participants are Aiohttp, Bottle, Django, Falcon, Flask, Muffin, Pyramid and Tornado.

The Methodology

The results below were obtained on an Amazon EC2 t2.medium instance. I used the wrk utility with the following parameters (30-second duration, 12 threads, 400 open connections):

$ wrk -d30s -t12 -c400 [URL]

Most of the frameworks (except Tornado) were run under the Gunicorn HTTP server with 2 workers, using the Meinheld worker class for WSGI applications. Example:

$ gunicorn [APP] --workers=2 --worker-class=meinheld.gmeinheld.MeinheldWorker
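
The Meinheld worker is shipped separately from Gunicorn; assuming a plain pip-based setup (this install step is my own addition, not part of the original benchmark description), something like the following is enough:

$ pip install gunicorn meinheld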

The source code of the test applications can be found here.

The benchmark includes three kinds of tests (minimal sketches of the first two are shown after the list):

  1. JSON test. Serialize an object to JSON and return an `application/json` response.
  2. Remote test. Load and return an HTTP response from a remote server.
  3. Complete test. Load some data from a database using an ORM, insert a new object, sort, and render a template.
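
For illustration, the JSON test handler could look roughly like this Bottle version. This is a minimal sketch under my own assumptions (route path, payload); the handlers actually benchmarked live in the linked sources and may differ:

```python
# Hypothetical sketch of the JSON test for Bottle.
import bottle

app = bottle.Bottle()

@app.route('/json')
def json_view():
    # Bottle serializes a returned dict to JSON and sets
    # Content-Type: application/json automatically.
    return {'message': 'Hello, World!'}

if __name__ == '__main__':
    bottle.run(app, host='127.0.0.1', port=8080)
```

Similarly, the remote test for an asyncio framework such as Aiohttp might look like the sketch below (again an assumption, not the benchmark's actual code; the URL is a placeholder). The point is that the event loop stays free while waiting on the remote server, which is why the async frameworks dominate that test:

```python
# Hypothetical sketch of the remote test for Aiohttp.
from aiohttp import ClientSession, web

async def remote(request):
    # Fetch a page from a remote server and relay its body; the event
    # loop can serve other requests while this one awaits the response.
    async with ClientSession() as session:
        async with session.get('http://example.com/') as resp:
            body = await resp.text()
    return web.Response(text=body)

app = web.Application()
app.router.add_get('/remote', remote)

if __name__ == '__main__':
    web.run_app(app, port=8080)
```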

The Results

JSON test

| Name    | 50% (ms) | 75% (ms) | Avg (ms) | Req/s    | Non-2xx | Timeouts |
|---------|----------|----------|----------|----------|---------|----------|
| Aiohttp | 91.67    | 103.1    | 108.01   | 4093.41  |         |          |
| Bottle  | 24.77    | 26.23    | 25.06    | 15761.45 |         |          |
| Django  | 103.2    | 112.19   | 103.36   | 3696.90  |         |          |
| Falcon  | 19.24    | 19.81    | 19.19    | 20677.13 |         |          |
| Flask   | 64.32    | 71.59    | 65.68    | 6023.40  |         |          |
| Muffin  | 108.07   | 115.09   | 171.56   | 3575.36  |         |          |
| Pyramid | 41.75    | 43.49    | 41.63    | 9402.69  |         |          |
| Tornado | 138.24   | 149.84   | 136.87   | 2829.72  |         |          |

Remote test

| Name    | 50% (ms) | 75% (ms) | Avg (ms) | Req/s   | Non-2xx | Timeouts |
|---------|----------|----------|----------|---------|---------|----------|
| Aiohttp | 358.08   | 369.08   | 338.94   | 1120.27 |         |          |
| Bottle  | 3363.74  | 9911.84  | 6403.92  | 19.09   |         | 335      |
| Django  | 3317.64  | 12954.23 | 6918.64  | 18.96   |         | 300      |
| Falcon  | 3196.23  | 12976.84 | 6696.17  | 19.28   |         | 328      |
| Flask   | 3306.88  | 11690.8  | 6824.88  | 19.16   |         | 363      |
| Muffin  | 372.95   | 428.75   | 376.98   | 1019.76 |         |          |
| Pyramid | 3295.1   | 10518.92 | 6673.78  | 19.35   |         | 338      |
| Tornado | 1994.39  | 2069.25  | 1928.31  | 194.37  |         |          |

Complete test

| Name    | 50% (ms) | 75% (ms) | Avg (ms) | Req/s   | Non-2xx | Timeouts |
|---------|----------|----------|----------|---------|---------|----------|
| Aiohttp | 151.78   | 156.9    | 254.75   | 1004.82 | 68%     | 236      |
| Bottle  | 613.5    | 630.17   | 1062.86  | 451.34  |         | 178      |
| Django  | 1610.46  | 1976.44  | 2632.36  | 88.57   |         | 42       |
| Falcon  | 766.75   | 805.35   | 1457.99  | 350.26  |         | 81       |
| Flask   | 1032.63  | 1649.89  | 1465.25  | 222.78  |         | 496      |
| Muffin  | 420.14   | 485.4    | 1552.7   | 819.62  |         |          |
| Pyramid | 562.44   | 601.49   | 812.43   | 248.42  |         | 235      |
| Tornado | 937.37   | 988.86   | 910.06   | 418.36  |         |          |

Conclusion

No conclusions here, just some measurements for you to interpret yourself.