Privacy Testing for Deep Learning
A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with.
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".
Reveals the vulnerabilities of SplitNN.
Code for "Variational Model Inversion Attacks", Wang et al., NeurIPS 2021.
Research into model inversion on SplitNN
📄 [Talk] OFFZONE 2022 / ODS Data Halloween 2022: Black-box attacks on ML models using open-source tools.
My attempt to recreate the attack described in "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures" by Fredrikson et al. (2015), using TensorFlow 2.9.1.
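For context, the core of that attack is a gradient-descent loop over the *input*: starting from a blank image, repeatedly nudge the pixels to raise the target class's confidence until a representative input emerges. Below is a minimal sketch in TensorFlow 2, assuming a Keras classifier with softmax outputs; the function name `invert_model`, its arguments, and the hyperparameters are illustrative, not the repository's actual API.

```python
import tensorflow as tf

def invert_model(model, target_class, input_shape, steps=500, lr=0.1):
    """Gradient-based inversion in the spirit of Fredrikson et al. (2015):
    ascend the target class's confidence starting from a blank input.
    Assumes `model` maps a batch of inputs to softmax probabilities."""
    x = tf.Variable(tf.zeros((1, *input_shape)))  # reconstructed input
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            probs = model(x, training=False)
            # Minimising (1 - confidence) maximises the target class score.
            loss = 1.0 - probs[0, target_class]
        grads = tape.gradient(loss, [x])
        opt.apply_gradients(zip(grads, [x]))
        # Keep pixel values in a valid range after each step.
        x.assign(tf.clip_by_value(x, 0.0, 1.0))
    return x.numpy()
```

The attack only needs query access to confidences plus gradients of the model, which is why returning full confidence vectors (rather than just a label) is the information leak the paper's countermeasures target.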
A gradient-based optimisation routine for highly parameterised non-linear dynamical models.
Implementation of "An Approximate Memory based Defense against Model Inversion Attacks to Neural Networks" and "MIDAS: Model Inversion Defenses Using an Approximate Memory System"