Add interactive website via GitHub Pages #46
Conversation
Thanks for the PR, I think this is very exciting for the benchmarks. I'll break down my thought process a bit:

Direction

Technical

With that in mind, here's what I propose:

With future work to add more input sizes to the benchmark data sets. @caibear, I would appreciate your feedback and thoughts. I understand this is probably a significant expansion of the intended scope, so I would of course help get this work done.
```rust
pub fn serialize_seconds(self, bytes: u64) -> f32 {
    // TODO real benchmarks (since speed is different on different data and cpus).
    const SCALE: f32 = 0.5;
```
What is this constant for?
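For context, here is a minimal sketch of how a throughput-style constant could turn a byte count into an estimated time, assuming `SCALE` plays a role like GB/s; the function body is not shown in the excerpt above, and the names and numbers below are illustrative, not the PR's actual code.

```rust
// Illustrative only: estimate serialization time from payload size, assuming a
// fixed throughput in GB/s stands in for a measured per-crate benchmark.
fn estimated_seconds(bytes: u64, throughput_gb_per_s: f32) -> f32 {
    // seconds = bytes / (bytes per second)
    bytes as f32 / (throughput_gb_per_s * 1_000_000_000.0)
}

fn main() {
    // e.g. 1 MB at 0.5 GB/s -> 0.002 s (hypothetical numbers)
    println!("{}", estimated_seconds(1_000_000, 0.5));
}
```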
…ad of uncompressed size.
Also clean up workspace dependencies and add optimization flags for wasm
This pull request adds an interactive website. Given bandwidth and CPU limits, it calculates how many messages per second could be sent/received for different combinations of serialization crates and compression libraries.
See https://caibear.github.io/rust_serialization_benchmark/
For example, this is useful for calculating how many average concurrent players an mk48.io server can handle. Given inputs of 1 TB/month and 0.01 cores, it returns 437 updates/s for bitcode. Since mk48.io sends 10 updates/s per player, a server can handle 43.7 players. The second best is serde_bare + zstd, which returns 387 updates/s, i.e. 38.7 players.
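To make the arithmetic concrete, here is a rough sketch of the kind of calculation the site performs: updates per second are capped by whichever budget runs out first, bandwidth or CPU. The function and parameter names are hypothetical, not taken from the PR.

```rust
// Hypothetical sketch of the throughput calculation described above; names and
// numbers are illustrative, not the PR's actual code.
fn updates_per_second(
    bandwidth_bytes_per_s: f32,  // e.g. 1 TB/month ~= 386_000 bytes/s
    cpu_cores: f32,              // fraction of a core available, e.g. 0.01
    compressed_size_bytes: f32,  // size of one update after compression
    cpu_seconds_per_update: f32, // serialize + compress time per update
) -> f32 {
    let bandwidth_limit = bandwidth_bytes_per_s / compressed_size_bytes;
    let cpu_limit = cpu_cores / cpu_seconds_per_update;
    bandwidth_limit.min(cpu_limit)
}

fn main() {
    // With these made-up inputs the server is bandwidth-bound at ~482 updates/s.
    let updates = updates_per_second(386_000.0, 0.01, 800.0, 2.0e-5);
    println!("{updates:.0} updates/s");
}
```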
The data is taken from a copy of the README.md embedded in the binary. Compression speeds are currently based on constants; ideally they would be measured during the benchmarks.
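A minimal sketch of how the README could be embedded at compile time, assuming it sits at the crate root; the path and the parsing step are assumptions, not quoted from the PR.

```rust
// Embed the benchmark tables at compile time; the path (relative to this source
// file) is an assumption for illustration.
const README: &str = include_str!("../README.md");

fn main() {
    // The site would then parse the markdown tables out of this string at runtime.
    println!("embedded {} bytes of benchmark data", README.len());
}
```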
TODO

- Add Cargo.lock?