A benchmark utility that exercises your benchmarks and shows how they scale across a large number of cores.
How does your code scale and perform when running on high-core servers?
First provider: DigitalOcean, currently up to 48 cores.
- --cpu flag supported: specify a delimited list of CPU counts to benchmark against
- --benchmem flag supported: capture allocations
- --count flag supported: multiple iterations of each benchmark
- --stat flag supported: executes benchstat analysis
- --regex flag supported: limits which benchmarks are run
- --leave-running flag supported: leaves a box running so the user can log on
- sizes command: lists DigitalOcean instance sizes
- term command: terminates instances created by corebench
- list command: lists active corebench provisioned instances
Second provider: AWS; specify your preferred instance type (us-east-1a only for now)
- --instancetype (e.g. t2.micro)
- --sizes: currently TODO; the pricing API needs a soft touch to add real value beyond --list, and a mapping of instance type/region --> AMI is still needed
- all other flags supported
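The benchmarking flags above can be combined in a single run. A hypothetical invocation (the repo, core counts, regex, and exact flag syntax are illustrative):

// Benchmark across several core counts, 5 iterations each, with allocation stats and benchstat analysis
./corebench do bench github.com/deckarep/corebench --cpu=1,2,4,8,16,32,48 --benchmem --count=5 --stat --regex='Benchmark.*' --DO_PAT=$DO_PAT --ssh-fp=$SF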
# Install corebench
go get github.com/deckarep/corebench
DigitalOcean:
- Sign up for a DigitalOcean account if not already a member
- Create a DigitalOcean Personal Access Token to be used for: --DO_PAT={token-here}
- Add your SSH public key to DigitalOcean for SSH access: --ssh-fp={ssh-md5-signature}
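The commands below reference the token and fingerprint through environment variables; a minimal setup (the variable names simply match the examples that follow) looks like:

// Export credentials once so the commands below can reference them
export DO_PAT={token-here}
export SF={ssh-md5-signature}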
Run corebench:
// Fetch instance sizes
./corebench do sizes --DO_PAT=$DO_PAT
// Run a benchmark
./corebench do bench github.com/{user}/{repo} [OPTIONS] --DO_PAT=$DO_PAT --ssh-fp=$SF
// List active instances
./corebench do list --DO_PAT=$DO_PAT
// Terminate instances created by corebench
./corebench do term --DO_PAT=$DO_PAT --all
AWS:
- Sign up for AWS!
- Make sure your instance type is available (request a limit increase with support if needed)
Run corebench:
// Run a benchmark
./corebench aws bench github.com/{user}/{repo} [OPTIONS]
// Run a benchmark on this repo with a chosen instance type and leave the resources running
./corebench aws bench github.com/deckarep/corebench --instancetype m3.medium --leave-running true
// List running instances
./corebench aws list
// Terminate/delete AWS resource stack 'corebench'
./corebench aws term
- A command like the above provisions an on-demand high-performance computing server
- It installs Go and clones your repository
- It runs Go's benchmark tooling against your repo and generates a comprehensive report demonstrating just how well your code scales across a large number of cores
- It then immediately decommissions the computing resource (unless the --leave-running flag is set) so you only pay for a fraction of the cost
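Under the hood this leans on Go's standard benchmark tooling, so the flags map onto `go test` flags you may already know. A rough local equivalent (illustrative only, not corebench's exact internals):

// --cpu, --benchmem and --count correspond to Go's own benchmark flags
go test -bench=. -benchmem -cpu=1,2,4,8,16,32,48 -count=5 ./...
// --stat then runs benchstat over the captured output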
# What you need to get started
- API/credential access to at least one provider; both DigitalOcean and AWS are currently supported
- The ability to pay for your own computing resources with whichever providers you choose
- A source repo with comprehensive benchmarks to run against this suite
# Why corebench?
- If you work in Go, chances are you care about concurrent and parallel performance. If you don't, why are you using Go at all?
- Developers often benchmark their code on workstations with a small number of cores
- Benchmarks on a small number of cores often don't reflect the true behavior of your application
- Sometimes an algorithm looks great on a few cores, but performance drops off dramatically when the core count gets higher
- A larger number of cores often exposes performance problems around (see the benchmark sketch after this list):
  - Contention/locking bottlenecks
  - Cache coherence issues
  - Parallelization overhead, or a lack of parallelization at all
  - Multi-threading hazards: starvation, race conditions, live-locks and priority inversion
  - The list goes on...
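To make the first point concrete, here is a minimal, hypothetical Go benchmark (not part of corebench) that serializes all goroutines on one mutex; ns/op typically climbs sharply as the core count grows, which is exactly the kind of cliff corebench is meant to expose:

```go
package contention

import (
	"sync"
	"testing"
)

var (
	mu      sync.Mutex
	counter int64
)

// BenchmarkMutexCounter hammers a single shared mutex from every
// goroutine. Compare results across increasing core counts (e.g. via
// corebench's --cpu flag) to watch contention erode throughput.
func BenchmarkMutexCounter(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			mu.Lock()
			counter++
			mu.Unlock()
		}
	})
}
```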
# FAQ

Q: Why is 48 the maximum number of cores this utility supports?

A: That is DigitalOcean's beefiest box. On AWS, the core count is determined by the instance type you specify.

Q: What happens to the server and the code after the benchmark completes?

A: By default the server is destroyed along with the code and benchmark data. The --leave-running flag lets you keep the server running if you'd like to log in and inspect the results.

Q: Why is DigitalOcean the first provider?

A: Easy: their droplets fire up FAST, allowing a quick feedback loop during development of this project.

Q: When will you add Google Cloud, AWS, {other-provider}?

A: AWS support has been added. Google Cloud is next because they offer per-minute billing, which is great for saving money. I'm hoping the community can help me build out other providers, along with refactoring as necessary to align the API.

Q: Why did you build this tool?

A: Because I wanted a quick way to execute remote benchmarks on beefy cloud servers (ones with a large number of cores).

Q: Will you eventually support other languages?

A: Maybe. :) Did I mention this code is open source?

Q: Why is your code sloppy?

A: Because I'm currently in rapid-prototype mode... don't worry, it will get a lot better. Also, through the power of open source... yada, yada, yada.

Q: Doesn't this cost money every time you need to fire up a benchmark?

A: Yes, yes it does... you have been warned.

A: If you just want a test drive, you can use a weak single-core instance, which costs about a penny an hour.
# Disclaimer
- This utility is in active development; the API is in flux and expected to change
- This project and its maintainers are NOT responsible for any monetary charges, overages, or fees resulting from the auto-provisioning process during proper usage of the script, from bugs in the script, or because you decided to leave a cloud server running for months