This repository contains the code needed to evaluate models trained in *Data Augmentation Can Improve Robustness*, which was accepted at the ICLR 2021 Workshop on Security and Safety in Machine Learning Systems.
We have released our top-performing models in two formats compatible with JAX and PyTorch. This repository also contains our model definitions.
Download a model from the links listed in the table below. Clean and robust accuracies are measured on the full test set; robust accuracy is measured using AutoAttack.
dataset | norm | radius | architecture | extra data | clean | robust | link |
---|---|---|---|---|---|---|---|
CIFAR-10 | ℓ∞ | 8 / 255 | WRN-70-16 | ✓ | 92.23% | 66.58% | jax, pt |
CIFAR-10 | ℓ∞ | 8 / 255 | WRN-70-16 | ✗ | 87.25% | 60.07% | jax, pt |
CIFAR-10 | ℓ∞ | 8 / 255 | WRN-28-10 | ✗ | 86.09% | 57.61% | jax, pt |
CIFAR-100 | ℓ∞ | 8 / 255 | WRN-70-16 | ✗ | 65.76% | 32.43% | jax, pt |
CIFAR-100 | ℓ∞ | 8 / 255 | WRN-28-10 | ✗ | 62.97% | 29.80% | jax, pt |
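The robust numbers above come from AutoAttack. As a rough, self-contained sanity check, a standard PGD ℓ∞ attack can be sketched in PyTorch; note that this helper is illustrative only (it is not part of this repository, and PGD is considerably weaker than AutoAttack):

```python
import torch
import torch.nn.functional as F


def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Untargeted PGD attack under an l-inf budget of `eps`.

    Inputs `x` are assumed to lie in [0, 1]; returns adversarial
    examples within the eps-ball around `x`.
    """
    # Random start inside the eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Gradient-sign ascent step, then project back onto the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

Robust accuracy under this attack is then just clean accuracy measured on `pgd_linf(model, x, y)` instead of `x`.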
Once downloaded, a model can be evaluated for clean accuracy by running the `eval.py` script in either the `jax` or `pytorch` folder. For example:

```
cd jax
python3 eval.py \
  --ckpt=${PATH_TO_CHECKPOINT} --depth=70 --width=16 --dataset=cifar10
```
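The core of such a clean-accuracy evaluation loop looks roughly like the following PyTorch sketch; the function name and signature are illustrative, not this repository's API:

```python
import torch


@torch.no_grad()
def clean_accuracy(model, loader, device="cpu"):
    """Fraction of examples in `loader` that `model` classifies correctly."""
    model = model.to(device)
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```

Wrapping the loop in `torch.no_grad()` and calling `model.eval()` matters here: the former skips gradient bookkeeping, and the latter puts batch-norm layers (used throughout the WideResNet architectures) into inference mode.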
If you use this code or these models in your work, please cite the complete version of the paper, which combines data augmentation with generated samples:
```
@article{rebuffi2021fixing,
  title={Fixing Data Augmentation to Improve Adversarial Robustness},
  author={Rebuffi, Sylvestre-Alvise and Gowal, Sven and Calian, Dan A. and Stimberg, Florian and Wiles, Olivia and Mann, Timothy},
  journal={arXiv preprint arXiv:2103.01946},
  year={2021},
  url={https://arxiv.org/pdf/2103.01946}
}
```
This is not an official Google product.