- Major refactoring: per-sample gradient computation is separated into its own module, GradSampleModule (#175)
- Improved RDP to (eps, delta)-DP conversion (#162)
- Multi-GPU support (#166)
- Handle empty batches in Poisson sampling (#164)
- Fixed memory leak from no_grad execution (#180)
- PackedSequence support for DPLSTM (#150) (thanks @touqir14!); see the sketch after this list
- Pytest moved to dev installation (#144)
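A minimal sketch of passing a `PackedSequence` through `DPLSTM`. The import path and the `nn.LSTM`-style constructor are assumed from the 0.x-era API and may differ in your installed version:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence
from opacus.layers import DPLSTM  # import path assumed; adjust to your Opacus version

# Variable-length batch: 3 sequences of lengths 5, 3, 2, each with 4 features,
# already sorted by decreasing length as pack_padded_sequence expects by default.
lengths = [5, 3, 2]
padded = torch.zeros(3, 5, 4)  # (batch, max_seq_len, input_size)
for i, length in enumerate(lengths):
    padded[i, :length] = torch.randn(length, 4)

lstm = DPLSTM(input_size=4, hidden_size=8, num_layers=1, batch_first=True)
packed = pack_padded_sequence(padded, lengths, batch_first=True)
out, (h_n, c_n) = lstm(packed)  # DPLSTM now accepts PackedSequence inputs (#150)
```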
This version introduces a mildly breaking change: the privacy engine now supports sampling with variable batch size, as in the Abadi et al. paper. To accommodate this feature, `batch_size` is now a keyword argument (no longer positional), and all keyword arguments must be passed by keyword. If your code passed them positionally, you will get an error that is simple to fix; see the sketch below.
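A minimal sketch of the new calling convention. The parameter names besides `batch_size` follow the 0.x-era `PrivacyEngine` signature as a working assumption and may differ in your installed version:

```python
import torch
from torch import nn, optim
from opacus import PrivacyEngine  # 0.x-era API assumed; check your installed version

model = nn.Linear(16, 2)
optimizer = optim.SGD(model.parameters(), lr=0.05)

# Before #136, keyword arguments could be passed positionally, e.g.
#   PrivacyEngine(model, 64, 50_000, [1.1, 2.0, 10.0], 1.0, 1.0)
# Now batch_size and everything after it must be passed by keyword:
privacy_engine = PrivacyEngine(
    model,
    batch_size=64,                                 # now a kwarg, no longer positional
    sample_size=50_000,                            # size of the training set
    alphas=[1 + x / 10.0 for x in range(1, 100)],  # RDP orders
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)
privacy_engine.attach(optimizer)
```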
- Enforce kwargs to Privacy Engine (#136).
- Fix batch construction and privacy engine (#128). (thanks @ConstanceBeguier!)
- Compute required sigma to reach (epsilon, delta) budget (#126); see the sketch after this list
- Friendly user message for unused parameters (#118).
- Print helpful message when models are not in train mode (#113)
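The sketch below illustrates the idea behind computing the required sigma: the accounted epsilon decreases monotonically as the noise multiplier grows, so a bisection over sigma against any accountant finds the smallest value that stays within the (epsilon, delta) budget. The helper name and the accountant callable are illustrative, not the API added in #126:

```python
def sigma_for_budget(eps_target, eps_of_sigma, lo=0.1, hi=64.0, tol=1e-3):
    """Smallest noise multiplier whose accounted epsilon is <= eps_target.

    eps_of_sigma: caller-supplied callable mapping a noise multiplier to the
    accountant's epsilon for a fixed delta, sample rate, and number of steps.
    Epsilon is monotonically decreasing in sigma, so bisection applies.
    """
    if eps_of_sigma(hi) > eps_target:
        raise ValueError("hi is too small to reach the target epsilon")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if eps_of_sigma(mid) > eps_target:
            lo = mid   # too little noise: epsilon still above budget
        else:
            hi = mid   # enough noise: try shrinking sigma
    return hi
```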
- The Opacus package now has a `__version__` attribute
- Fixed immer security issue and website errors
- Updated setup.py version requirements to support 3.6.8 for Windows (#108) (thanks @madhavajay!)
- Rewrote the grad_sample tests to use Hypothesis (#125). (thanks @touqir14!)
- Extend DPLSTM to support multilayer, dropout (#101)
- Modifications to Char LSTM name classification example
- Introduce issue templates for GitHub (#102)
- Added support for Conv3D layers
- Linter fixes for Conv3D (#105)
- Make TorchCSPRNG an optional dependency (#106)
- Removed unnecessary calls to zero_grad from examples and tutorials (#96)
- Fix PyPI deployment (#91).
- Refactor grad sample tests (#90).
- Avoid storing activations in certain scenarios (#87)
- Reimplemented the Embedding layer, making it 9x faster with lower memory footprint (#73).
- Reimplemented the DPLSTM layer, making it 2x faster with lower memory footprint.
- Extended our Conv support to grouped convolutions (#78).
- Small fixes to clipping logic (#45).
- Changed docstring style from numpy -> Google.
- Throw an error if sample rate > 1 in privacy engine.
- Migrated our IMDB example from TorchText -> HuggingFace (#85).
- Added PRNG shuffling to our examples.
- Compatibility with Python 3.6 (Minimum required version changed from 3.7 to 3.6.9).
- Allow DPLSTM to have null init.
- Initial commit.