Created to compare how Flask, CherryPy and FastAPI perform under heavy load

bordax/wf_benchmark

Python Web Frameworks Benchmark

This project hosts the same application implemented in CherryPy, FastAPI and Flask. We then load test each of them using autocannon and present the results.

Application

The application contains two endpoints:

  • GET /: receives no input data and returns {"Hello": "World"}
  • POST /users: receives {"name": str, "age": int, "location": str} and returns the input content concatenated 1024 times
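For reference, the two endpoints can be sketched in Flask roughly like this (a minimal sketch: the serialization of the concatenated POST response is an assumption, and the repo's actual implementations may differ):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/")
def index():
    # GET /: no input, returns a fixed JSON payload
    return jsonify({"Hello": "World"})

@app.route("/users", methods=["POST"])
def users():
    # POST /users: parse the JSON body and echo it back,
    # concatenated 1024 times (string form assumed here)
    body = request.get_json()
    return str(body) * 1024

if __name__ == "__main__":
    app.run(port=5000)
```

The heavy POST response (roughly 40 KB per request) is what makes the `/users` endpoint a serialization and throughput stress test rather than a pure routing benchmark.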

To run each app we use the application server recommended in each framework's docs. Each run.sh file also includes a commented-out line with the command to run the app under Gunicorn (a production WSGI server): 4 sync workers for CherryPy and Flask, and 4 UvicornWorkers for FastAPI.
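Those commented-out lines presumably look roughly like this (a sketch using Gunicorn's documented flags; the module path `app:app` and the bind addresses are assumptions, not taken from the repo — check each run.sh for the real commands):

```shell
# Flask / CherryPy: 4 synchronous WSGI workers
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app

# FastAPI: 4 Uvicorn workers, so Gunicorn can manage the ASGI app
gunicorn --workers 4 --worker-class uvicorn.workers.UvicornWorker \
         --bind 0.0.0.0:8000 app:app
```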

Running the apps

  • Clone the repository
  • Run docker compose up in the root directory

The applications will be listening on the ports:

Application  Port
flask        5000
fastapi      8000
cherrypy     8080
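Once the containers are up, a quick way to check that each app responds is to curl the ports above (assumes the stack is running locally; the example POST body is illustrative):

```shell
curl http://localhost:5000/   # Flask
curl http://localhost:8000/   # FastAPI
curl -X POST http://localhost:8080/users \
     -H "Content-Type: application/json" \
     -d '{"name": "bob", "age": 30, "location": "earth"}'   # CherryPy
```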

Running the tests

  • Install autocannon
  • Run ./test.sh --post or ./test.sh --get depending on which endpoint you want to load test. The tests run for 10 seconds with 30 simultaneous connections against each app.
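Under the hood, each run presumably boils down to an autocannon invocation like the following (a sketch using autocannon's documented flags; the exact commands are in test.sh):

```shell
# GET benchmark: 30 connections (-c) for 10 seconds (-d)
autocannon -c 30 -d 10 http://localhost:5000/

# POST benchmark: same load, with a JSON body
autocannon -c 30 -d 10 -m POST \
  -H "Content-Type: application/json" \
  -b '{"name": "bob", "age": 30, "location": "earth"}' \
  http://localhost:5000/users
```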

Test example

Linux Mint 20 on Ryzen 5 5400G, 32GB of RAM

GET endpoint

Flask

Flask GET result

FastAPI

FastAPI GET result

CherryPy

CherryPy GET result

POST endpoint

Flask

Flask POST result

FastAPI

FastAPI POST result

CherryPy

CherryPy POST result
