update CI badge with IPEX
IlyasMoutawwakil committed Sep 23, 2024
1 parent 1c43fa2 commit 6bb660c
Showing 1 changed file with 5 additions and 1 deletion.
6 changes: 5 additions & 1 deletion README.md
@@ -48,6 +48,7 @@ Optimum-Benchmark is continuously and intensively tested on a variety of devices

### CLI 📈

[![CLI_CPU_IPEX](https://github.com/huggingface/optimum-benchmark/actions/workflows/test_cli_cpu_ipex.yaml/badge.svg)](https://github.com/huggingface/optimum-benchmark/actions/workflows/test_cli_cpu_ipex.yaml)
[![CLI_CPU_LLAMA_CPP](https://github.com/huggingface/optimum-benchmark/actions/workflows/test_cli_cpu_llama_cpp.yaml/badge.svg)](https://github.com/huggingface/optimum-benchmark/actions/workflows/test_cli_cpu_llama_cpp.yaml)
[![CLI_CPU_NEURAL_COMPRESSOR](https://github.com/huggingface/optimum-benchmark/actions/workflows/test_cli_cpu_neural_compressor.yaml/badge.svg)](https://github.com/huggingface/optimum-benchmark/actions/workflows/test_cli_cpu_neural_compressor.yaml)
[![CLI_CPU_ONNXRUNTIME](https://github.com/huggingface/optimum-benchmark/actions/workflows/test_cli_cpu_onnxruntime.yaml/badge.svg)](https://github.com/huggingface/optimum-benchmark/actions/workflows/test_cli_cpu_onnxruntime.yaml)
@@ -93,14 +94,14 @@ pip install -e .
Depending on the backends you want to use, you can install `optimum-benchmark` with the following extras:

- PyTorch (default): `pip install optimum-benchmark`
- IPEX: `pip install optimum-benchmark[ipex]`
- OpenVINO: `pip install optimum-benchmark[openvino]`
- Torch-ORT: `pip install optimum-benchmark[torch-ort]`
- OnnxRuntime: `pip install optimum-benchmark[onnxruntime]`
- TensorRT-LLM: `pip install optimum-benchmark[tensorrt-llm]`
- OnnxRuntime-GPU: `pip install optimum-benchmark[onnxruntime-gpu]`
- Neural Compressor: `pip install optimum-benchmark[neural-compressor]`
- Py-TXI: `pip install optimum-benchmark[py-txi]`
- vLLM: `pip install optimum-benchmark[vllm]`
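As an illustration of the extras listed above, a small hypothetical helper (not part of optimum-benchmark) can map a backend name to the matching pip command; `EXTRA_BY_BACKEND` and `install_command` are illustrative names only:

```python
# Hypothetical helper (not optimum-benchmark API): map a backend name
# to the pip extra listed above and build the install command string.
EXTRA_BY_BACKEND = {
    "pytorch": None,  # default install, no extra needed
    "ipex": "ipex",
    "openvino": "openvino",
    "torch-ort": "torch-ort",
    "onnxruntime": "onnxruntime",
    "tensorrt-llm": "tensorrt-llm",
    "onnxruntime-gpu": "onnxruntime-gpu",
    "neural-compressor": "neural-compressor",
    "py-txi": "py-txi",
    "vllm": "vllm",
}

def install_command(backend: str) -> str:
    """Return the pip command for a given backend."""
    extra = EXTRA_BY_BACKEND[backend]
    if extra is None:
        return "pip install optimum-benchmark"
    return f"pip install optimum-benchmark[{extra}]"
```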

We also support the following extra dependencies:
@@ -282,6 +283,8 @@ See [TrainingConfig](optimum_benchmark/scenarios/training/config.py) for more in
- [x] vLLM backend for CUDA (`backend=vllm`, `backend.device=cuda`)
- [x] vLLM backend for ROCM (`backend=vllm`, `backend.device=rocm`)
- [x] vLLM backend for CPU (`backend=vllm`, `backend.device=cpu`)
- [x] IPEX backend for CPU (`backend=ipex`, `backend.device=cpu`)
- [x] IPEX backend for XPU (`backend=ipex`, `backend.device=xpu`)
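The backend/device pairings in the checklist above can be sketched as a small lookup; `SUPPORTED_DEVICES` and `is_supported` are illustrative names, not optimum-benchmark API:

```python
# Illustrative only: backend -> supported device strings, taken from
# the checklist above (these names are not optimum-benchmark API).
SUPPORTED_DEVICES = {
    "vllm": {"cuda", "rocm", "cpu"},
    "ipex": {"cpu", "xpu"},
}

def is_supported(backend: str, device: str) -> bool:
    """Check whether a backend/device pairing appears in the checklist."""
    return device in SUPPORTED_DEVICES.get(backend, set())
```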

<details>
<summary>General backend features 🧰</summary>
@@ -299,6 +302,7 @@ See [TrainingConfig](optimum_benchmark/scenarios/training/config.py) for more in
For more information on the features of each backend, you can check their respective configuration files:

- [VLLMConfig](optimum_benchmark/backends/vllm/config.py)
- [IPEXConfig](optimum_benchmark/backends/ipex/config.py)
- [OVConfig](optimum_benchmark/backends/openvino/config.py)
- [PyTXIConfig](optimum_benchmark/backends/py_txi/config.py)
- [PyTorchConfig](optimum_benchmark/backends/pytorch/config.py)