Fix PyTorchBackend TP vs DP inputs distribution across replicas and shards #277

Triggered via pull request on July 1, 2024 at 11:07
Status: Cancelled
Total duration: 9m 35s
Job: cli_cuda_tensorrt_llm_tests (6m 41s)
Annotations

2 errors

cli_cuda_tensorrt_llm_tests: Canceling since a higher priority waiting request for 'CLI CUDA TensorRT-LLM Tests-218' exists
cli_cuda_tensorrt_llm_tests: The operation was canceled.
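A cancellation message of this form is typically produced by a `concurrency` setting in the workflow file, which cancels an in-progress run when a newer run joins the same concurrency group (for example, after a new push to the pull request). A minimal sketch of such a configuration; the exact group expression used by this repository is an assumption, not taken from the run page:

```yaml
# Hypothetical sketch of a workflow-level concurrency group.
# When a newer run starts in the same group, the older in-progress
# run is cancelled with "Canceling since a higher priority waiting
# request for '<group>' exists".
name: CLI CUDA TensorRT-LLM Tests

concurrency:
  # Assumed grouping: workflow name plus a per-PR/branch identifier,
  # so pushes to the same PR supersede each other but different PRs
  # run independently.
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true
```

With `cancel-in-progress: true`, the second annotation ("The operation was canceled.") is the expected log line inside the job that was interrupted.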