priyakasimbeg released this 31 Mar 03:57 · 177 commits to main since this release
Summary
- Finalized variant workload targets.
- Fixed a bug in the random_utils helper function.
- For the Conformer PyTorch workload, set Dropout layers to `inplace=True`.
- Clear the CUDA cache at the beginning of each trial for PyTorch (see the sketch after this list).
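The two PyTorch-specific items above are small, local changes; a minimal sketch of the pattern is below. The module and function names (`FeedForwardBlock`, `run_trial`) are illustrative placeholders, not the benchmark's actual code.

```python
import torch
import torch.nn as nn

# Illustrative only: in-place dropout as described in the summary above.
# The real Conformer implementation lives in the benchmark repo; this block
# only shows the shape of the change.
class FeedForwardBlock(nn.Module):
    def __init__(self, dim: int, dropout_rate: float = 0.1):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        # inplace=True reuses the activation's memory instead of allocating
        # a new tensor, reducing peak memory for the Conformer workload.
        self.dropout = nn.Dropout(p=dropout_rate, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dropout(self.linear(x))


def run_trial(trial_fn):
    """Hypothetical trial wrapper: clear the CUDA cache before each trial."""
    if torch.cuda.is_available():
        # Release cached allocator blocks so each trial starts from a clean
        # memory state rather than inheriting fragmentation from the last one.
        torch.cuda.empty_cache()
    return trial_fn()
```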
What's Changed
- update speech variants target setting points by @priyakasimbeg in #727
- set num_workers for librispeech back to 4 by @priyakasimbeg in #736
- [fix] random_utils.py to `_signed_to_unsigned` by @tfaod in #739
- Fix path in helper config for running experiments in bulk. by @priyakasimbeg in #740
- Finalize variants targets by @priyakasimbeg in #738
- Aiming to Fix Conformer OOM by @pomonam in #710
- Lint fixes by @priyakasimbeg in #742
- Add warning for PyTorch data loader num_workers flag. by @priyakasimbeg in #726 (see the sketch after this list)
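PR #726 adds a warning about the PyTorch data loader's num_workers flag, and #736 restores num_workers=4 for LibriSpeech. The sketch below is a hedged illustration of that kind of warning; the function name, threshold, and message are assumptions, not the benchmark's actual interface.

```python
import warnings
from torch.utils.data import DataLoader

def build_loader(dataset, batch_size: int, num_workers: int = 4) -> DataLoader:
    # Illustrative: warn when the worker count is likely to hurt throughput.
    # The real flag handling and message live in the benchmark's runner,
    # not in this sketch.
    if num_workers == 0:
        warnings.warn(
            "num_workers=0 loads batches on the main process; "
            "consider a larger value (e.g. 4, as used for LibriSpeech).")
    return DataLoader(dataset, batch_size=batch_size, num_workers=num_workers)
```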
Full Changelog: algoperf-benchmark-0.1.4...algoperf-benchmark-0.1.5