
Commit

Update readme (#228)
IlyasMoutawwakil committed Jul 15, 2024
1 parent e291e9b commit 6351e36
Showing 1 changed file with 2 additions and 1 deletion.
README.md: 3 changes (2 additions & 1 deletion)
@@ -14,7 +14,8 @@ Optimum-Benchmark is a unified [multi-backend & multi-device](#backends--devices
 *News* 📰

 - 🥳 PyPI package is now available for installation: `pip install optimum-benchmark` 🎉 [check it out](https://pypi.org/project/optimum-benchmark/) !
-- numactl support for Process and Torchrun launchers to control the NUMA nodes on which the benchmark runs 🧠
+- Model loading latency/memory/energy tracking for all backends in the inference scenario 🚀
+- numactl support for Process and Torchrun launchers to control the NUMA nodes on which the benchmark runs.
 - 4 minimal docker images (`cpu`, `cuda`, `rocm`, `cuda-ort`) in [packages](https://github.com/huggingface/optimum-benchmark/pkgs/container/optimum-benchmark) for testing, benchmarking and reproducibility 🐳
 - vLLM backend for benchmarking [vLLM](https://github.com/vllm-project/vllm)'s inference engine 🚀
 - Hosting the codebase of the [LLM-Perf Leaderboard](https://huggingface.co/spaces/optimum/llm-perf-leaderboard) 🥇
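For context on the news items touched by this diff, here is a minimal sketch of how the numactl-enabled Process launcher and the model loading latency/memory/energy tracking might be exercised through the library's Python API. The exact option names (`numactl`, `numactl_kwargs`) and their accepted keys are assumptions based on the feature descriptions, not part of this commit:

```python
# Minimal sketch (not taken from this diff): benchmark GPT-2 on CPU with the Process
# launcher bound to NUMA node 0, tracking latency, memory and energy in the inference
# scenario. Option names for the numactl feature are assumptions.
from optimum_benchmark import (
    Benchmark,
    BenchmarkConfig,
    InferenceConfig,
    ProcessConfig,
    PyTorchConfig,
)
from optimum_benchmark.logging_utils import setup_logging

setup_logging(level="INFO")

launcher_config = ProcessConfig(
    numactl=True,  # assumed flag that wraps the spawned benchmark process with numactl
    numactl_kwargs={"cpunodebind": 0, "membind": 0},  # assumed mapping to numactl CLI options
)
scenario_config = InferenceConfig(latency=True, memory=True, energy=True)
backend_config = PyTorchConfig(model="gpt2", device="cpu", no_weights=True)

benchmark_config = BenchmarkConfig(
    name="pytorch_gpt2_numactl",
    launcher=launcher_config,
    scenario=scenario_config,
    backend=backend_config,
)
benchmark_report = Benchmark.launch(benchmark_config)
benchmark_report.log()  # the report should now also cover model loading, per the news item
```

Presumably the launcher then prefixes the worker process with something like `numactl --cpunodebind=0 --membind=0`, which is what "control the NUMA nodes on which the benchmark runs" refers to.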
