Update vllm backend to support offline and online serving modes #319

Triggered via pull request on July 22, 2024, 11:38
Status: Cancelled
Total duration: 58s
cli_cuda_tensorrt_llm_tests (46s)

Annotations

2 errors
cli_cuda_tensorrt_llm_tests: Canceling since a higher priority waiting request for 'CLI CUDA TensorRT-LLM Tests-232' exists
cli_cuda_tensorrt_llm_tests: The operation was canceled.
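
The first annotation is GitHub Actions' standard message for a run superseded by a concurrency group: when a newer run enters the same group and `cancel-in-progress` is enabled, the in-flight run is stopped, which also accounts for the second annotation ("The operation was canceled."). Below is a minimal sketch of a workflow that produces this behavior; the `group` expression, trigger, runner labels, and test command are assumptions for illustration, not the repository's actual workflow file.

```yaml
# Sketch only: the real "CLI CUDA TensorRT-LLM Tests" workflow may differ.
name: CLI CUDA TensorRT-LLM Tests

on:
  pull_request:

concurrency:
  # Runs sharing this key compete; a newer queued run cancels the older
  # in-flight one, emitting "Canceling since a higher priority waiting
  # request for '<group>' exists".
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}  # assumed group key
  cancel-in-progress: true

jobs:
  cli_cuda_tensorrt_llm_tests:
    runs-on: [self-hosted, single-gpu]  # assumed runner labels
    steps:
      - uses: actions/checkout@v4
      - name: Run TensorRT-LLM CLI tests
        run: pytest tests/ -k "cli and cuda and tensorrt_llm"  # assumed test command
```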