
minor update
elliothe committed May 8, 2020
1 parent 0fbcbd2 commit c54df39
Showing 2 changed files with 38 additions and 1 deletion.
BFA_imagenet.sh: 2 changes (1 addition, 1 deletion)
@@ -21,7 +21,7 @@ fi

############### Configurations ########################
enable_tb_display=false # enable tensorboard display
-model=mobilenet_v2_quan
+model=resnet18_quan
dataset=imagenet
test_batch_size=256

README.md: 37 changes (37 additions, 0 deletions)
@@ -43,6 +43,9 @@ If you find this project useful to you, please cite [our work](http://openaccess
- [Example of ResNet-18 on ImageNet](#example-of-resnet-18-on-imagenet )
- [What if I want to attack another Network architecture?](#what-if-i-want-to-attack-another-network-architecture )
- [How to perform random bit-flips on a given model?](#how-to-perform-random-bit-flips-on-a-given-model )
- [2.2 Training-based BFA defense](#22-training-based-bfa-defense )
- [Binarization-aware training](#binarization-aware-training )
- [Piecewise Weight Clustering](#piecewise-weight-clustering )
- [Misc](#misc )
- [Model quantization](#model-quantization )
- [Bit Flipping](#bit-flipping )
@@ -222,6 +225,40 @@ The random attack is performed on all possible weight bits (regardless of their position, from MSB to LSB)
...
```
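
For intuition, here is a self-contained sketch of one random bit-flip on an n-bit quantized weight tensor (assuming the weights hold signed integer values in two's complement; `flip_random_bit` is an illustrative helper, not the repository's attack code):

```python
import torch

def flip_random_bit(q_weight: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    """Flip one uniformly chosen bit of one uniformly chosen weight."""
    flat = q_weight.to(torch.int32).flatten().clone()
    idx = torch.randint(flat.numel(), (1,)).item()  # which weight
    bit = torch.randint(n_bits, (1,)).item()        # which bit, LSB (0) to MSB (n_bits - 1)
    flat[idx] ^= (1 << bit)                         # XOR toggles the chosen bit
    # Re-wrap into the signed n-bit range (two's-complement overflow).
    flat = (flat + 2 ** (n_bits - 1)) % (2 ** n_bits) - 2 ** (n_bits - 1)
    return flat.view_as(q_weight).to(q_weight.dtype)
```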

#### 2.2 Training-based BFA defense


##### Binarization-aware training


Take ResNet-20 on CIFAR-10 as an example:

1. Define a binarized ResNet-20 in `models/quan_resnet_cifar.py`.
2. To use the weight binarization function, comment out the [multi-bit quantization](https://github.com/elliothe/BFA/blob/8a540ac0900f2599778394cfd1df56c0965c7cdf/models/quantization.py#L8-L142 ) and uncomment the [binarization modules](https://github.com/elliothe/BFA/blob/8a540ac0900f2599778394cfd1df56c0965c7cdf/models/quantization.py#L147-L290 ). A minimal sketch of such a module appears after this list.

3. Perform the model training, where the binarized model is initialized in `models/__init__.py` as `resnet20_quan`. Then run `bash train_CIFAR.sh` in the terminal (don't forget the path configuration!).

4. With the binarized model trained and stored at `<path-to-model>/model_best.pth.tar`, make sure the following change is made in `BFA_CIFAR.sh`:
```bash
pretrained_model='<path-to-model>/model_best.pth.tar'
```
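
For reference, here is a minimal sketch of what a weight-binarization layer can look like (assuming PyTorch; `BinarizedConv2d` and `_SignSTE` are illustrative names, not the actual modules in `models/quantization.py`):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class _SignSTE(torch.autograd.Function):
    """Binarize with sign(); pass gradients through (straight-through estimator)."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return w.sign()

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Hard-tanh STE: zero the gradient where |w| > 1.
        return grad_out * (w.abs() <= 1).float()

class BinarizedConv2d(nn.Conv2d):
    """Conv2d whose weights are binarized to {-alpha, +alpha} in the forward pass."""
    def forward(self, x):
        alpha = self.weight.abs().mean()  # per-layer scaling factor
        w_bin = _SignSTE.apply(self.weight) * alpha
        return F.conv2d(x, w_bin, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```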

##### Piecewise Weight Clustering


> Piecewise weight clustering should not be applied to a binarized NN.
1. Make sure `models/quantization.py` uses the multi-bit quantization, in contrast to the binarized counterpart. To change the bit-width, edit `models/quantization.py`: under the definitions of `quan_Conv2d` and `quan_Linear`, set `self.N_bits = 8` for 8-bit quantization.

2. In `train_CIFAR.sh`, enable (i.e., uncomment) the following options:
```bash
--clustering --lambda_coeff 1e-3
```
Then train the model with `bash train_CIFAR.sh`.

3. For the BFA evaluation, please refer to the binarization case above. A sketch of the clustering penalty follows this list.
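
For intuition, the piecewise clustering penalty pushes each layer's positive and negative weights toward their respective means, shrinking the weight spread a bit-flip can exploit. A hedged sketch of such a penalty (illustrative only; the exact form behind `--clustering --lambda_coeff` lives in the repository's training code):

```python
import torch
import torch.nn as nn

def piecewise_clustering_penalty(model: nn.Module, lambda_coeff: float = 1e-3):
    """Sum over conv/linear layers of the squared deviation of positive
    and negative weights from their respective (piecewise) means."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            w = m.weight.view(-1)
            w_pos, w_neg = w[w >= 0], w[w < 0]
            if w_pos.numel() > 0:
                penalty = penalty + (w_pos - w_pos.mean()).pow(2).sum()
            if w_neg.numel() > 0:
                penalty = penalty + (w_neg - w_neg.mean()).pow(2).sum()
    return lambda_coeff * penalty

# Illustrative use inside the training loop:
#   loss = criterion(output, target) + piecewise_clustering_penalty(model, 1e-3)
```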



## Misc

