Benchmarking the most popular serverless tools
Serverless is quite a popular topic nowadays, and with that popularity came an overwhelming number of new solutions. This repository focuses on providing an easy way to compare Function as a Service (FaaS) solutions. Here, we keep all the scripts and benchmark results for those tools.
Our benchmark relies on a custom methodology built around the execution of a set of standardized tests.
Each tool requires a set of steps to reproduce our benchmark methodology. To automate those steps, we use Terraform for infrastructure provisioning and Ansible for IT automation.
Terraform handles the basic provisioning of our cloud platform, while Ansible uses the provisioned infrastructure to configure our Kubernetes cluster, set up each tool, and run the tests.
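As a rough sketch of that handoff in shell (the output name, inventory file, and playbook file here are illustrative assumptions, not the repository's actual structure):

terraform apply -auto-approve                      # create the cloud resources
terraform output -raw master_ip > hosts.ini        # assumes a hypothetical master_ip output feeding a minimal inventory
ansible-playbook -i hosts.ini setup_cluster.yml    # configure Kubernetes on the new hosts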
IMPORTANT: We're using AWS as our primary (and only) cloud platform. The test architecture allows the use of free tier eligible instances, so you can create a new account and execute the tests at no cost.
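Terraform's AWS provider reads the standard AWS credential environment variables, so one common way to point the benchmark at your account (the repository's interactive prompts may store credentials differently) is:

export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=us-east-1    # or your preferred region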
In the next sections of this document, you'll learn more about the prerequisites and how to execute our benchmarks.
In this section, we list all the benchmarks available in this repository so far. After each tool name, we've placed shortcuts to help you navigate our benchmarks.
- Kubeless: website . results . more
- Fission: website . results . more
- OpenFaaS: website . results . more
- Nuclio: website . results . more
- Knative: website . results . more
If you're looking for a tool we haven't benchmarked yet, please open a new feature request.
Before executing our benchmark, you'll need the following tools installed in your local environment:
- Terraform
- Ansible
- GNU Make
To check if your local environment matches the prerequisites, you can run the following command:
make check_prerequisites
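If you're curious what such a check amounts to, here is a minimal shell equivalent, assuming the target only verifies that the required binaries are on your PATH:

for bin in terraform ansible make; do    # the tools listed above
  command -v "$bin" >/dev/null 2>&1 || echo "missing prerequisite: $bin"
done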
To start a new benchmark routine from scratch, you can run:
make benchmark
This command does the following:
- Prompts for required cloud information to set up your Terraform integration
- Provisions the required resources in your cloud platform
- Waits for all provisioned resources to be available
- Runs the Ansible playbook to configure the Kubernetes cluster
- Runs all Ansible playbooks to set up every tool on our cluster
- Runs all Ansible playbooks to execute the benchmark tests
- Exports the test results
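In practice, make benchmark roughly chains the individual targets described under the additional commands below; as a sketch (the actual Makefile may name or order its recipes differently):

make provision_config provision provision_wait    # configure Terraform, apply, wait for resources
make setup                                        # configure the cluster and install every tool
make run_tests                                    # execute all benchmark playbooks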
You can also run only parts of the benchmark. To do so, take a look at the additional commands section below.
Additionally, we have commands that run only part of the benchmark pipeline. They're all available as make commands:
- provision: applies the desired Terraform infrastructure on your cloud platform
- provision_config: sets up the Terraform integration
- provision_wait: waits for the required resources to be available on your cloud platform
- setup: runs the Ansible playbook to set up our Kubernetes cluster and, afterward, executes all playbooks to set up our tools on the cluster
- setup_cluster: runs only the Ansible playbook to set up our Kubernetes cluster
- setup_tools: runs only the Ansible playbooks to set up our serverless tools
- run_tests: runs all Ansible playbooks for all tests on all benchmarked tools
- run_tests_response_time: runs all Ansible playbooks for all benchmarked tools to test only their response time
- run_tests_scalability: runs all Ansible playbooks for all benchmarked tools to test only their scalability
- run_<tool>: runs the Ansible playbook for all tests on the given tool
- run_<tool>_<test>: runs the Ansible playbook to test only the given tool with the provided test
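For example, assuming the targets follow the tool and test names listed above (these exact target names illustrate the pattern and are not verified):

make run_kubeless                     # every test for Kubeless
make run_fission_response_time        # only the response time test for Fission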
You can find our analysis, technical inspections, and relevant results in the results folder at the repository root.
That folder contains documents comparing general aspects of all tools, as well as a dedicated folder for each tool with its detailed analysis.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. You can learn how to contribute to this project in the CONTRIBUTING file.
Distributed under the MIT License. See LICENSE for more information.