Deep Fusion Network for Image Completion

Introduction

Deep image completion methods usually fail to blend the restored region harmoniously into the existing content, especially near the boundary of the completion area, and they often fail to complete complex structures.

We introduce the Fusion Block, which generates a flexible alpha composition map to combine the known and unknown regions. It builds a bridge for structural and texture information, so that information from the known region can propagate naturally into the completion area. As a result, the completion results show a smooth transition near the boundary of the completion area.

Furthermore, the architecture of the fusion block allows us to apply multi-scale constraints, which considerably improve the structural consistency of DFNet.

Moreover, the fusion block and the multi-scale constraints are easy to apply to other existing deep image completion models. Fed with feature maps and the input image, a fusion block produces a completion result at the same resolution as the given feature maps.
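
The following is a minimal PyTorch sketch of the alpha-composition idea described above. The class name, layer choices, and channel sizes are illustrative assumptions, not the implementation in this repository:

# Minimal sketch of the alpha-composition idea; layer names and sizes are
# illustrative assumptions, not the code in this repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionBlockSketch(nn.Module):
    """Predicts a raw completion and an alpha map from decoder features,
    then blends the raw completion with the (resized) input image."""

    def __init__(self, feat_channels, img_channels=3):
        super().__init__()
        # One branch predicts the raw completion, another predicts the alpha map.
        self.to_raw = nn.Conv2d(feat_channels, img_channels, kernel_size=3, padding=1)
        self.to_alpha = nn.Conv2d(feat_channels + img_channels, 1, kernel_size=3, padding=1)

    def forward(self, feat, img):
        # Resize the input image to the feature-map resolution, so the block
        # can be attached at any decoder scale.
        img = F.interpolate(img, size=feat.shape[-2:], mode='bilinear', align_corners=False)
        raw = torch.tanh(self.to_raw(feat))  # raw completion at this scale
        alpha = torch.sigmoid(self.to_alpha(torch.cat([feat, img], dim=1)))  # per-pixel weights
        # Alpha-composite known content (img) with the raw completion.
        return alpha * img + (1 - alpha) * raw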

More details can be found in our paper.

The illustration of a fusion block:

Examples of corresponding images:

If you find this code useful for your research, please cite:

@article{DFNet2019,
  title={Deep Fusion Network for Image Completion},
  author={Xin Hong and Pengfei Xiong and Renhe Ji and Haoqiang Fan},
  journal={arXiv preprint},
  year={2019},
}

Prerequisites

  • Python 3
  • PyTorch 1.0
  • OpenCV
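
One possible way to install these dependencies with pip is shown below; choose the PyTorch build according to the official installation instructions for your platform and CUDA version:

pip install torch opencv-python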

Testing

Clone this repo:

git clone https://github.com/hughplay/DFNet.git
cd DFNet

Download the pre-trained models from Google Drive and put them into the model directory.

Testing with Places2 model

There are already some sample images in the samples/places2 folder.

python test.py --model model/model_places2.pth --img samples/places2/img --mask samples/places2/mask --output output/places2 --merge

Testing with CelebA model

There are already some sample images in the samples/celeba folder.

python test.py --model model/model_celeba.pth --img samples/celeba/img --mask samples/celeba/mask --output output/celeba --merge

Training

Currently we don't provide training code. If you want to train this model on your own dataset, the training settings in config.yaml may be useful, and the loss functions defined in loss.py are available.
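
Below is a minimal sketch of how the multi-scale constraint described in the introduction could be expressed as a reconstruction loss, together with loading the provided settings. It is not the code in loss.py, and treating config.yaml as standard YAML is an assumption:

# Sketch of a multi-scale reconstruction loss; NOT the loss defined in loss.py.
import yaml
import torch
import torch.nn.functional as F

def multi_scale_l1(outputs, target):
    """L1 loss over completion results produced at several decoder scales.

    `outputs` is a list of completions at different resolutions; `target` is
    the ground-truth image, downsampled to match each output."""
    loss = 0.0
    for out in outputs:
        gt = F.interpolate(target, size=out.shape[-2:], mode='bilinear', align_corners=False)
        loss = loss + F.l1_loss(out, gt)
    return loss

# Reading the provided training settings (assuming standard YAML):
with open('config.yaml') as f:
    config = yaml.safe_load(f)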

License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.