Adversarial-attack-embedding-analysis

Analyze the embedding extracted from specific layers of models given adversarial examples via FGSM, PGD, MI-FGSM, and DeepFool algorithms.

Introduction

This project investigates which layers in popular deep learning architectures are most vulnerable to adversarial examples and therefore drive misclassification.

(Figures: ResNet-18 under untargeted FGSM — layer4 embeddings for adversarial vs. naive examples, and first-layer embeddings for adversarial vs. naive examples.)
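As a rough illustration of the kind of layer-wise embedding extraction the notebooks perform, the sketch below registers a PyTorch forward hook on layer4 of a torchvision ResNet-18 and pools its output into a flat embedding. The layer choice, pooling, and variable names are assumptions for illustration and do not mirror the repository's own helper functions.

import torch
from torchvision import models

# Load a ResNet-18; the repository fine-tunes its own checkpoints on ImageNette.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Capture the output of a chosen layer (here layer4) with a forward hook.
features = {}

def hook(module, inputs, output):
    # Global-average-pool the feature map into a flat embedding vector.
    features["layer4"] = output.mean(dim=(2, 3)).detach()

handle = model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a (possibly adversarial) image batch
with torch.no_grad():
    _ = model(x)

embedding = features["layer4"]  # shape: (1, 512)
handle.remove()

Comparing such embeddings for clean and adversarial inputs, layer by layer, is the basic idea behind locating the most vulnerable layer.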

Reproduce

  1. Download the ImageNette dataset from the link, or by:
wget https://s3.amazonaws.com/fast-ai-imageclas/imagenette2.tgz
  2. Unzip the data into the folder ./data/imagenette2/.
  3. Process the images to a smaller size:
python3 process.py
  4. Train (fine-tune) the four models (ResNet-18, ResNet-50, DenseNet-121, Wide ResNet-50 v2):
./train.sh
  5. Generate adversarial examples. Five scripts are provided; decide the generation order based on the computing resources you have, to avoid CUDA out-of-memory errors:
./generate_fgsm_un.sh
./generate_pgd_un.sh
./generate_pgd_ta.sh
./generate_mifgsm_un.sh
./generate_deepfool.sh
  6. To generate other types of adversarial examples, run the Python script directly; revise generate.py for more choices (a plain FGSM sketch follows this list):
python3 generate.py --model_name $model --attack_name $attackname
  7. To analyze the transferability of each kind of adversarial example, refer to Transferability.ipynb (a minimal transferability check is also sketched after this list).
  8. To investigate the embedding of each layer, refer to Analysis attack resnet18.ipynb, Analysis attack resnet50.ipynb, Analysis attack densenet121.ipynb, and Analysis attack wide resnet50.ipynb.
  9. Several useful functions are provided for further analysis.
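For reference, the sketch below shows a one-step untargeted FGSM perturbation in plain PyTorch. It illustrates the attack family the generation scripts cover but is not the repository's implementation; the epsilon value and the [0, 1] pixel range are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm_untargeted(model, images, labels, eps=8 / 255):
    # One-step untargeted FGSM: move each pixel along the sign of the loss gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + eps * grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in a valid range

PGD and MI-FGSM iterate a step like this with projection and momentum respectively, while DeepFool instead steps toward the nearest decision boundary.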

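For transferability, the usual measure is the fraction of adversarial examples crafted on one (source) model that also fool a different (target) model. The helper below is a hypothetical sketch of that check, not code taken from Transferability.ipynb.

import torch

@torch.no_grad()
def transfer_success_rate(target_model, adv_images, labels):
    # Fraction of adversarial examples misclassified by a model they were not crafted on.
    preds = target_model(adv_images).argmax(dim=1)
    return (preds != labels).float().mean().item()

# e.g. examples crafted on ResNet-18, evaluated on ResNet-50:
# rate = transfer_success_rate(resnet50_model, adv_batch, label_batch)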