When I run the DLRM model with --inference-only, I found that the time spent in the forward pass varies a lot with different --mini-batch-size values (e.g., 2048 vs. 8192).
Forward pass time:
200 s (mini-batch-size = 8192), 100 s (mini-batch-size = 2048)
I wonder how mini-batch-size impacts an inference test.
Also, what is the difference between --mini-batch-size and --test-mini-batch-size in an inference-only run?
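To make the comparison concrete, here is the arithmetic implied by the timings above. This is only a sketch under an assumption: that --num-batches=512 (from the command below) caps the number of batches in both runs, so each run processes mini_batch_size * 512 samples in total. Whether that assumption holds for dlrm_s_pytorch.py is part of what I am asking.

import sys

# Assumption: --num-batches=512 caps the number of batches, so each run
# processes mini_batch_size * 512 samples in total.
runs = {2048: 100.0, 8192: 200.0}  # mini-batch-size -> reported forward-pass time (s)
num_batches = 512

for batch_size, seconds in runs.items():
    samples = batch_size * num_batches
    print(f"mini-batch-size={batch_size}: {samples} samples "
          f"in {seconds:.0f}s -> {samples / seconds:.0f} samples/s")

Under that assumption, the 8192 run processes four times as many samples in only twice the time, i.e. roughly double the per-sample throughput, which would make the raw wall-clock numbers hard to compare directly.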
The command I use:
dlrm_s_pytorch.py --arch-sparse-feature-size=64 --arch-mlp-bot="512-512-64" --arch-mlp-top="1024-1024-1024-1" --data-generation=dataset --data-set=terabyte --raw-data-file=input/day/day --processed-data-file=input/day/terabyte_processed.npz --loss-function=bce --round-targets=True --learning-rate=0.1 --mini-batch-size=2048 --num-batches=512 --print-freq=1024 --print-time --num-workers=32 --dataset-multiprocessing --inference-only
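For reference, here is a minimal, self-contained timing sketch in plain PyTorch (not the actual dlrm_s_pytorch.py code) that shows how I would isolate the batch-size effect: keep the number of batches fixed, time the forward passes, and compare samples/s rather than total seconds. The stand-in MLP and the run_inference helper below are made up for illustration and only loosely shaped like the MLPs in the command above.

import time
import torch

def run_inference(model, batch_size, num_batches, in_features, device="cpu"):
    """Time num_batches forward passes at the given batch size (illustrative only)."""
    model.eval()
    x = torch.randn(batch_size, in_features, device=device)
    with torch.no_grad():
        model(x)  # warm-up pass so one-time setup cost is not measured
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(num_batches):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        elapsed = time.time() - start
    samples = batch_size * num_batches
    print(f"batch={batch_size:5d}  total={elapsed:6.2f}s  "
          f"throughput={samples / elapsed:10.1f} samples/s")

# Stand-in MLP, roughly shaped like --arch-mlp-bot="512-512-64" (not the real DLRM model).
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

# Same number of batches, different batch sizes: total time is expected to grow
# with batch size simply because more samples are processed per run.
for bs in (2048, 8192):
    run_inference(model, batch_size=bs, num_batches=512, in_features=512)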