This directory contains the code for attacking the LISA-CNN model, as well as the model itself. It is self-contained: there should be nothing for you to download in order to run the attack. It also contains the outputs of the runs that generated two of the attacks in the paper (in `optimization_output`).

## Driver Scripts

To run the attack, use the script `run_attack_many.sh` inside a [pipenv](https://docs.pipenv.org/) shell. As checked into the repo, it replicates the subliminal poster attack. For a description of all the parameters, run `python gennoise_many_images.py -h` or look at the definitions of the command-line flags that specify the optimization parameters.

The file `Pipfile` pins the exact versions of the packages we used; newer versions of TensorFlow and Keras do not always work with this code. We also include an older version of cleverhans (see below).

The script `run_noise_to_big_img.sh` takes the noise (the adversarial perturbation) stored in a checkpoint file, resizes it, and applies it to a high-resolution image.
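The resize-and-apply step can be sketched in plain NumPy. This is an illustration only, not the repo's actual implementation: the function names, the nearest-neighbor upscaling, and the [0, 255] clipping convention are all assumptions.

```python
import numpy as np

def upscale_nearest(noise, factor):
    """Nearest-neighbor upscale of an HxWxC float perturbation.

    Hypothetical helper for illustration; the repo's script drives its
    own Python code and may interpolate differently.
    """
    return np.repeat(np.repeat(noise, factor, axis=0), factor, axis=1)

def apply_noise(big_img_arr, noise, factor):
    """Add the upscaled perturbation to a high-res image, clip to [0, 255]."""
    big_noise = upscale_nearest(noise, factor)
    out = big_img_arr.astype(np.float32) + big_noise
    return np.clip(out, 0, 255).astype(np.uint8)
```

For example, a 32x32 perturbation applied to a 256x256 image would use `factor=8`.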

## Outputs of Optimization Runs
The "traces" of our optimization runs are stored in these folders:
* `optimization_output/noinversemask_second_trial_run` has the noise and optimization parameters for the subliminal poster attack. `run_attack_many.sh` is set up to replicate that training run and save its output under a folder called `octagon`.

* `optimization_output/l1basedmask_uniformrectangles` contains the outputs from optimizing for a camouflage sticker attack.

In both of these folders, the `model` subfolder contains the final TensorFlow checkpoint, and `noisy_images` holds images with the perturbation applied, saved at regular intervals during the attack optimization.

The `optimization_output_*.txt` files hold the printouts of the optimization parameters. Use these values in `run_noise_to_big_img.sh` if you want to replicate any one optimization run.

## Classify Using the Model
The model can be found under `models/all_r_ivan`. To classify images with it, run `python manyclassify.py --attack_srcdir <folder>`, where `<folder>` is the path to a folder containing **only 32 by 32 png images**. This code is *not* set up to auto-resize images or ignore non-png files in the directory, so it may error out if you don't follow that guideline.
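Since the classifier expects a folder of only 32x32 PNGs, a small pre-processing step can prepare arbitrary images. The sketch below is a convenience helper, not part of the repo; it assumes Pillow is installed, and the function name and skip-on-error behavior are illustrative choices.

```python
from pathlib import Path
from PIL import Image  # assumes Pillow is installed

def prepare_for_classifier(src_dir, dst_dir, size=(32, 32)):
    """Resize every readable image in src_dir to 32x32 and save as PNG.

    Hypothetical helper: manyclassify.py wants a folder of only
    32x32 PNGs, so this converts a folder of arbitrary images.
    """
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).iterdir():
        try:
            img = Image.open(path).convert("RGB")
        except (OSError, ValueError):
            continue  # skip files Pillow cannot read
        img.resize(size, Image.LANCZOS).save(dst / (path.stem + ".png"))
```

The output folder can then be passed directly as `--attack_srcdir`.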

## Attack Code and Cleverhans
The attack graph itself and the code to run it are in `gennoise_many_images.py` and `utils.py`. We also include an older version of the core of the [cleverhans](https://github.com/tensorflow/cleverhans) library, which carries its own MIT license.
