Inference: fix batch_size issue. #1870
gpu-ci.yml
on: pull_request

GPU CI Concierge (12s)
Check Python Interface (0s)
Single Machine, Multiple GPUs Tests (0s)
Annotations
2 errors

Check Python Interface
Canceling since a higher priority waiting request for 'gpu-ci-fix_batch_size' exists

Inference Tests
Canceling since a higher priority waiting request for 'gpu-ci-fix_batch_size' exists
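The two errors above are not test failures: they are GitHub Actions concurrency cancellations, meaning a newer run for the same branch superseded this one before its jobs started. This message is produced when the workflow defines a `concurrency` group keyed on the branch. The following is a hypothetical sketch of the relevant part of gpu-ci.yml, inferred from the group name `gpu-ci-fix_batch_size`; the actual file contents are not shown in this run and the exact keys here are assumptions:

```yaml
# Hypothetical sketch, not the actual gpu-ci.yml.
name: GPU CI
on: pull_request

# All runs whose group name matches share one concurrency slot.
# A new push to the fix_batch_size branch creates a higher-priority
# waiting run in the group 'gpu-ci-fix_batch_size', and GitHub cancels
# this older run with the message seen in the annotations above.
concurrency:
  group: gpu-ci-${{ github.head_ref }}
```

Setting `cancel-in-progress: true` under `concurrency` would additionally cancel runs that have already started, not just queued ones; whether this workflow does so cannot be determined from the log.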