This is the demo code for our USENIX Security 2022 paper "ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models".
Please find the updated version in our lab's repo.
We prefer that users provide the data loaders themselves, but we include demo data loaders in the code. Due to the size of the datasets, we do not upload them to GitHub.
For UTKFace, the UTKFace folder contains two folders downloaded from the official website. The first is the "processed" folder, which contains three landmark_list files (these can also be downloaded from the official website). They are used to quickly obtain the image names, because all the attributes of an image can be parsed from its file name. The second is the "raw" folder, which contains all the aligned and cropped images.
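For reference, a minimal loader sketch for this layout could look like the following; the folder paths, the 64x64 resize, and the exact way attributes are parsed from the file names are assumptions and may differ from the demo loader in the code.

```python
import os

import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class UTKFaceDemo(Dataset):
    """Illustrative UTKFace dataset assuming the "processed"/"raw" layout above."""

    def __init__(self, root):
        self.raw_dir = os.path.join(root, "UTKFace", "raw")
        processed_dir = os.path.join(root, "UTKFace", "processed")
        self.samples = []
        for landmark_file in os.listdir(processed_dir):
            with open(os.path.join(processed_dir, landmark_file)) as f:
                for line in f:
                    if not line.strip():
                        continue
                    name = line.split()[0]
                    # UTKFace file names encode [age]_[gender]_[race]_[date&time].jpg
                    age, gender, race = name.split("_")[:3]
                    self.samples.append((name, int(gender), int(race)))
        self.transform = transforms.Compose([
            transforms.Resize((64, 64)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        name, gender, race = self.samples[idx]
        img = Image.open(os.path.join(self.raw_dir, name)).convert("RGB")
        return self.transform(img), torch.tensor([gender, race])
```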
For the CelebA dataset, we have one folder and three files inside the "celeba" folder. The "img_celeba" folder contains all the images downloaded from the official website, which we align and crop ourselves. The three files, "identity_CelebA.txt," "list_attr_celeba.txt," and "list_eval_partition.txt," are used to obtain the attributes and file names. The crop center is [89, 121], but it is fine if users prefer not to crop, because the transforms include a resize step, so the input shapes are not affected.
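Similarly, a minimal CelebA loader sketch assuming the layout above; the chosen attribute ("Male"), the 128x128 crop around center [89, 121], and the resize size are illustrative assumptions, and the crop can be skipped thanks to the resize.

```python
import os

import torch
import torchvision.transforms.functional as F
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class CelebADemo(Dataset):
    """Illustrative CelebA dataset reading "list_attr_celeba.txt" for one attribute."""

    def __init__(self, root, attr="Male", crop_size=128):
        self.img_dir = os.path.join(root, "celeba", "img_celeba")
        attr_path = os.path.join(root, "celeba", "list_attr_celeba.txt")
        with open(attr_path) as f:
            f.readline()                      # first line: number of images
            names = f.readline().split()      # second line: attribute names
            col = names.index(attr)
            self.items = []
            for line in f:
                parts = line.split()
                # attributes are stored as +1 / -1; map them to 1 / 0
                self.items.append((parts[0], int(parts[col + 1] == "1")))
        self.crop_size = crop_size
        self.to_tensor = transforms.Compose([
            transforms.Resize((64, 64)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        name, label = self.items[idx]
        img = Image.open(os.path.join(self.img_dir, name)).convert("RGB")
        # crop a square around center (x=89, y=121); this step is optional
        top = 121 - self.crop_size // 2
        left = 89 - self.crop_size // 2
        img = F.crop(img, top, left, self.crop_size, self.crop_size)
        return self.to_tensor(img), torch.tensor(label)
```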
For FMNIST and STL10, PyTorch provides built-in datasets that can be used directly.
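For example (the root path and transform below are placeholders):

```python
from torchvision import datasets, transforms

transform = transforms.ToTensor()
fmnist = datasets.FashionMNIST(root="./data", train=True, download=True, transform=transform)
stl10 = datasets.STL10(root="./data", split="train", download=True, transform=transform)
```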
Users should install Python 3 and PyTorch first. To train differentially private shadow models, you should also install Opacus. Following the official documentation, we recommend installing it with conda.
Or directly run pip install -r requirements.txt
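For the differentially private shadow models, a minimal sketch of attaching Opacus (the 1.x `make_private` API) to a training setup is shown below; the toy model, data, and privacy parameters are placeholders, not the settings used in the code.

```python
import torch
from opacus import PrivacyEngine
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# placeholder model, optimizer, and data for illustration only
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = optim.SGD(model.parameters(), lr=0.05)
data = TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,)))
loader = DataLoader(data, batch_size=64)

# wrap the training components so that gradients are clipped and noised
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)
```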
python demo.py --attack_type X --dataset_name Y
| Attack Type | 0      | 1      | 2       | 3        |
|:-----------:|:------:|:------:|:-------:|:--------:|
| Name        | MemInf | ModInv | AttrInf | ModSteal |
For the dataset name, there are four datasets in the code: CelebA, FMNIST (Fashion-MNIST), STL10, and UTKFace.
For AttrInf, users should provide two attributes on the command line in the format "X_Y". Only CelebA and UTKFace contain two attributes, e.g.,
python demo.py --attack_type 2 --dataset_name UTKFace --attributes race_gender
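A minimal sketch of how these flags might be parsed; the actual argument handling in demo.py may differ, and the defaults and choices below are assumptions.

```python
import argparse

parser = argparse.ArgumentParser(description="ML-Doctor demo")
parser.add_argument("--attack_type", type=int, choices=[0, 1, 2, 3],
                    help="0: MemInf, 1: ModInv, 2: AttrInf, 3: ModSteal")
parser.add_argument("--dataset_name", type=str,
                    choices=["CelebA", "FMNIST", "STL10", "UTKFace"])
parser.add_argument("--attributes", type=str, default=None,
                    help='two attributes joined by "_", e.g. "race_gender" (AttrInf only)')
args = parser.parse_args()

# AttrInf expects the two attributes joined by an underscore
if args.attack_type == 2 and args.attributes:
    attr_a, attr_b = args.attributes.split("_")
```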
For MemInf, we have four modes:
| Mode | 0               | 1                | 2                | 3               |
|:----:|:---------------:|:----------------:|:----------------:|:---------------:|
| Name | BlackBox Shadow | BlackBox Partial | WhiteBox Partial | WhiteBox Shadow |
When using mode 0 and mode 3, i.e., the modes with shadow models, users should choose the `get_attack_dataset_with_shadow` function. For the others (mode 1 and mode 2), it should be the `get_attack_dataset_without_shadow` function.
When using mode 0, `attack_model` should be `ShadowAttackModel`, while `PartialAttackModel` is the `attack_model` for mode 1 in black-box. For white-box (mode 2 and mode 3), users need to change `attack_model` to `WhiteBoxAttackModel`.
Users can also define attack models themselves, so we do not fix the model architectures here.
Note: `ShadowAttackModel` and `PartialAttackModel` are the same in the code.
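The table and the notes above can be restated compactly in Python; this mapping is purely illustrative and only names the dataset helpers and attack models listed in this README (their actual signatures live in the code).

```python
# MemInf mode -> (dataset helper, attack model), as described above
MEMINF_MODES = {
    0: ("get_attack_dataset_with_shadow", "ShadowAttackModel"),      # black-box shadow
    1: ("get_attack_dataset_without_shadow", "PartialAttackModel"),  # black-box partial
    2: ("get_attack_dataset_without_shadow", "WhiteBoxAttackModel"), # white-box partial
    3: ("get_attack_dataset_with_shadow", "WhiteBoxAttackModel"),    # white-box shadow
}


def meminf_choices(mode: int):
    """Return the (dataset helper, attack model) names for a MemInf mode."""
    return MEMINF_MODES[mode]
```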
For the Secret Revealer method (ModInv), users should pre-train an evaluation model with a ResNet18 architecture and name it as the target model name + "_eval.pth", e.g., "UTKFace_eval.pth", placed in the same path as the target model.
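A minimal sketch of preparing such an evaluation model, assuming a torchvision ResNet18; the class count, the save path, and whether the code expects a state_dict or the full model are assumptions here.

```python
import torch
from torchvision.models import resnet18

# the number of classes must match the target task, e.g. 5 UTKFace race labels (assumption)
eval_model = resnet18(num_classes=5)

# ... pre-train eval_model on the same classification task as the target model ...

# save it next to the target model, named "<dataset>_eval.pth"
torch.save(eval_model.state_dict(), "UTKFace_eval.pth")
```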
There are two general modes, i.e., partial and shadow. Users can change the training set in the `main` function.
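To illustrate the difference between the two modes, here is a minimal split sketch under our reading of them; the split sizes and the way the `main` function actually builds these sets are assumptions.

```python
import torch
from torch.utils.data import random_split


def split_for_mode(dataset, mode="shadow", seed=0):
    """Return (target training set, attacker training set) for one mode."""
    g = torch.Generator().manual_seed(seed)
    half = len(dataset) // 2
    target_set, rest = random_split(dataset, [half, len(dataset) - half], generator=g)
    if mode == "shadow":
        # shadow mode: the attacker trains on data disjoint from the target's
        attacker_set = rest
    else:
        # partial mode: the attacker knows part of the target's training data
        attacker_set, _ = random_split(target_set, [half // 2, half - half // 2], generator=g)
    return target_set, attacker_set
```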
Please cite this paper in your publications if it helps your research:
@inproceedings{LWHSZBCFZ22,
author = {Yugeng Liu and Rui Wen and Xinlei He and Ahmed Salem and Zhikun Zhang and Michael Backes and Emiliano De Cristofaro and Mario Fritz and Yang Zhang},
title = {{ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models}},
booktitle = {{USENIX Security Symposium (USENIX Security)}},
pages = {4525-4542},
publisher = {USENIX},
year = {2022}
}
ML-Doctor is freely available for non-commercial use and may be redistributed under these conditions. For commercial queries, please send an e-mail to admin@mldoctor.io, and we will send you the detailed agreement.