Code and data for the paper "Deep Painterly Harmonization".
This software is published for academic and non-commercial use only.
This code is based on Torch. It has been tested on Ubuntu 16.04 LTS.
Dependencies:
- CUDA backend
Download VGG-19:
sh models/download_models.sh
Compile cuda_utils.cu (adjust PREFIX and NVCC_PREFIX in makefile for your machine):
make clean && make
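For reference, the variables to adjust might look like the lines below. This is a sketch only: the paths are assumptions for a typical Torch-plus-CUDA install, not the shipped defaults, and must match your own machine.

```makefile
# Example settings only -- point these at your own install.
PREFIX      = $(HOME)/torch/install   # assumed Torch installation prefix
NVCC_PREFIX = /usr/local/cuda/bin     # assumed directory containing nvcc
```

With GNU make you can also override these on the command line, e.g. `make PREFIX=... NVCC_PREFIX=...`, without editing the makefile.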
To generate all results (in data/) using the provided scripts, simply run

python gen_all.py

in Python and then

run('filt_cnn_artifact.m')

in Matlab or Octave. The final output will be in results/.
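The two-stage pipeline above can be wrapped in a small driver script. Below is a minimal sketch; the Octave invocation is an assumption (substitute your own Matlab command if you use Matlab instead):

```python
import subprocess

# The two stages described above, in order. The Octave command is an
# assumption; replace it with your Matlab invocation if preferred.
PIPELINE = [
    ["python", "gen_all.py"],
    ["octave", "--eval", "run('filt_cnn_artifact.m')"],
]

def run_pipeline(dry_run=False):
    """Run each stage in order, stopping on the first failure."""
    for cmd in PIPELINE:
        if dry_run:
            print(" ".join(cmd))             # preview the command only
        else:
            subprocess.run(cmd, check=True)  # raises on a non-zero exit code
```

Run it from the repository root so the relative paths (data/, results/) resolve correctly.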
Note that in the paper we trained a CNN on a dataset of 80,000 paintings collected from wikiart.org; it estimates the stylization level of a given painting and adjusts the weights accordingly. We will release the pre-trained model in the next update. Until then, users running the code on new paintings need to set those weights manually.
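Until the pre-trained model is released, one simple way to set the weights by hand is to pick a stylization level yourself and interpolate between a weakly-stylized and a strongly-stylized preset. The sketch below is purely illustrative: the parameter names and preset values are hypothetical placeholders, not the paper's actual weights.

```python
# Hypothetical presets -- these names and values are illustrative only,
# NOT the weights used in the paper. Substitute the parameters your
# run of the algorithm actually exposes.
LOW_STYLIZATION = {"style_weight": 1.0, "hist_weight": 0.5}
HIGH_STYLIZATION = {"style_weight": 10.0, "hist_weight": 5.0}

def manual_weights(level):
    """Linearly interpolate the presets for a stylization level in [0, 1]."""
    if not 0.0 <= level <= 1.0:
        raise ValueError("stylization level must be in [0, 1]")
    return {
        k: (1.0 - level) * LOW_STYLIZATION[k] + level * HIGH_STYLIZATION[k]
        for k in LOW_STYLIZATION
    }
```

For example, `manual_weights(0.0)` returns the weak preset unchanged, while values near 1.0 push toward the heavily stylized settings.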
Here are some results from our algorithm (from left to right: the original painting, the naive composite, and our output):
- Our Torch implementation is based on Justin Johnson's code.
- Our histogram loss is inspired by Risser et al.
If you find this work useful for your research, please cite:
@article{luan2018deep,
  title={Deep Painterly Harmonization},
  author={Luan, Fujun and Paris, Sylvain and Shechtman, Eli and Bala, Kavita},
  journal={arXiv preprint arXiv:1804.03189},
  year={2018}
}
Feel free to contact me if you have any questions (Fujun Luan, [email protected]).