Add README.md to document interactive site.
# Rust Serialization Benchmark Interactive Site

## Inputs

* Bandwidth: available bandwidth, in terabytes per month. 1 TB/mo is 0.38 megabytes per second, or 3.04 megabits per second
* CPU: the fraction of the CPU the benchmarks were run on that is available for use (a value > 1 assumes zero overhead for parallelization)
* Dataset: the benchmark dataset (see ../README.md); changes the messages/s unit to e.g. logs/s
  * log: logs (benchmark size divided by 10000, equal to the individual logs in the benchmark)
  * mesh: meshes (benchmark size)
  * minecraft_savedata: saves (benchmark size divided by 500, equal to the individual player saves in the benchmark)
  * mk48: updates (benchmark size divided by 1000, equal to the individual updates in the benchmark)
* Mode:
  * serialize: Bandwidth usage is the size of the compressed data; CPU usage is serialization + compression
  * deserialize: Bandwidth usage is the size of the compressed data; CPU usage is decompression + deserialization
  * round trip: Bandwidth/CPU usage is the sum of the serialize and deserialize modes
* zlib: allow using zlib as the Compression algorithm
* zstd: allow using zstd as the Compression algorithm
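The Bandwidth input's TB/month figure converts to per-second units as sketched below. This is a hypothetical helper, not the site's code; it assumes an average month of about 30.44 days, which reproduces the 0.38 MB/s figure above.

```rust
/// Hypothetical helper: convert a bandwidth budget in terabytes per month
/// to bytes per second. Assumes an average month of 30.44 days, which
/// reproduces the 1 TB/mo = 0.38 MB/s (3.04 Mbit/s) figure above.
fn tb_per_month_to_bytes_per_sec(tb_per_month: f64) -> f64 {
    const SECONDS_PER_MONTH: f64 = 30.44 * 24.0 * 3600.0;
    tb_per_month * 1e12 / SECONDS_PER_MONTH
}

fn main() {
    let bps = tb_per_month_to_bytes_per_sec(1.0);
    println!("1 TB/mo = {:.2} MB/s = {:.2} Mbit/s", bps / 1e6, bps * 8.0 / 1e6);
}
```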

## Outputs

* Crate: the crate being used for serialization/deserialization
* Compression: the compression algorithm deemed best (highest messages/s) for that crate
* messages/s: how many messages could theoretically be sent per second, given the available Bandwidth consumed by the compressed data and the CPU consumed by serialization + compression
* Relative: normalized messages/s
* Bottleneck: whether Bandwidth or CPU runs out first, limiting messages/s
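One way to picture how messages/s and Bottleneck fall out of the inputs: the message rate is capped by whichever of Bandwidth or CPU is exhausted first. The sketch below is illustrative only; the struct and field names are not the site's actual data model.

```rust
/// Illustrative per-crate measurements (not the site's real data model):
/// compressed size per message and CPU time per message for the chosen Mode.
struct CrateResult {
    compressed_bytes_per_msg: f64,
    cpu_secs_per_msg: f64,
}

/// messages/s is the smaller of the bandwidth-limited and CPU-limited rates;
/// the Bottleneck is whichever resource imposed that cap.
fn messages_per_sec(
    r: &CrateResult,
    bandwidth_bytes_per_sec: f64,
    cpu_fraction: f64,
) -> (f64, &'static str) {
    let by_bandwidth = bandwidth_bytes_per_sec / r.compressed_bytes_per_msg;
    let by_cpu = cpu_fraction / r.cpu_secs_per_msg;
    if by_bandwidth < by_cpu {
        (by_bandwidth, "Bandwidth")
    } else {
        (by_cpu, "CPU")
    }
}

fn main() {
    // Roughly 1 TB/mo of bandwidth (~380,000 B/s), one full core, 1 kB
    // compressed messages that each take 1 microsecond of CPU.
    let r = CrateResult { compressed_bytes_per_msg: 1000.0, cpu_secs_per_msg: 1e-6 };
    let (rate, bottleneck) = messages_per_sec(&r, 380_000.0, 1.0);
    println!("{rate:.0} messages/s, limited by {bottleneck}");
}
```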

## Assumptions

* zlib/zstd run at a constant speed regardless of Dataset (hopefully we can fix this)
* 1 message of size 1000 takes the same Bandwidth/CPU as 1000 messages of size 1
* The number of messages that need to be sent per second is constant (if each day all of your messages arrived within a 1-hour interval, your real CPU requirement would be 24x)
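The last assumption can be made concrete with a small sketch (the function is hypothetical): if a day's traffic is squeezed into a shorter active window, the peak rate, and therefore the real CPU requirement, scales up accordingly.

```rust
/// Hypothetical illustration of the constant-rate assumption: if all of a
/// day's messages arrive within `active_hours` hours, the peak message rate
/// (and hence the real CPU requirement) is this multiple of the daily average.
fn peak_over_average(active_hours: f64) -> f64 {
    24.0 / active_hours
}

fn main() {
    // All messages in a 1-hour interval => 24x the average CPU requirement.
    println!("{}x", peak_over_average(1.0));
}
```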