Commit

Release code.

hughplay committed Apr 21, 2019
1 parent a64c9ea commit f9783a4
Showing 24 changed files with 830 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .gitignore
@@ -0,0 +1,4 @@
.DS_Store
__pycache__
output
*.pth
71 changes: 71 additions & 0 deletions README.md
@@ -0,0 +1,71 @@
# Deep Fusion Network for Image Completion

## Introduction

Deep image completion usually fails to harmoniously blend the restored image into the existing content,
especially in the boundary area.
Our method handles this problem from the new perspective of
creating a smooth transition, and proposes a concise Deep Fusion Network (DFNet).
First, a fusion block is introduced to generate a flexible alpha composition map
for combining known and unknown regions.
The fusion block not only provides a smooth fusion between restored and existing content,
but also provides an attention map that makes the network focus more on the unknown pixels.
In this way, it builds a bridge for structural and texture information,
so that information can be naturally propagated from the known region into the completed region.
Furthermore, fusion blocks are embedded into several decoder layers of the network.
Together with adjustable loss constraints on each layer, this yields more accurate structural information.
The results show the superior performance of DFNet,
especially in terms of harmonious texture transition, texture detail, and semantic structural consistency.
More details can be found in our [paper](https://arxiv.org/abs/1904.08060).

![](imgs/github_teaser.jpg)
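
To make the idea concrete, here is a minimal PyTorch sketch of alpha-composition fusion. It is illustrative only, not the code in this repo: the layer shapes are placeholders, and it assumes the decoder features have already been upsampled to image resolution.

``` py
import torch
import torch.nn as nn

class FusionBlockSketch(nn.Module):
    """Toy alpha-composition fusion block (illustrative only)."""

    def __init__(self, c_feat, c_img=3, c_alpha=3):
        super().__init__()
        # Predict a raw completion and a per-pixel alpha map from features.
        self.to_img = nn.Conv2d(c_feat, c_img, kernel_size=3, padding=1)
        self.to_alpha = nn.Conv2d(c_feat + c_img, c_alpha, kernel_size=3, padding=1)

    def forward(self, feat, img, mask):
        # feat: decoder features; img: input image; mask: 1 = known, 0 = hole.
        raw = torch.tanh(self.to_img(feat))  # restored content
        alpha = torch.sigmoid(self.to_alpha(torch.cat([feat, raw], dim=1)))
        # Alpha blends restored and known content, so the transition at the
        # hole boundary is smooth rather than a hard cut.
        return alpha * raw + (1 - alpha) * (img * mask)

# Example: 64-channel features at 512x512 resolution.
block = FusionBlockSketch(c_feat=64)
out = block(torch.randn(1, 64, 512, 512),
            torch.randn(1, 3, 512, 512),
            torch.ones(1, 1, 512, 512))
```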

If you find this code useful for your research, please cite:

``` bibtex
@article{xin2019dfnet,
  title={Deep Fusion Network for Image Completion},
  author={Hong, Xin and Xiong, Pengfei and Ji, Renhe and Fan, Haoqiang},
  journal={arXiv preprint arXiv:1904.08060},
  year={2019}
}
```

## Prerequisites

- Python 3
- PyTorch 1.0
- OpenCV
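
On a clean environment, the prerequisites can be installed with pip, for example (package names assumed; versions are not pinned by this repo):

``` sh
pip install torch opencv-python
```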

## Testing

Clone this repo:

``` sh
git clone https://github.com/hughplay/DFNet.git
cd DFNet
```

Download the pre-trained models from [Google Drive](https://drive.google.com/drive/folders/1lKJg__prvJTOdgmg9ZDF9II8B1C3YSkN?usp=sharing)
and put them into the `model` directory.
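
Using the file names from the test commands below, the `model` directory should look like:

```
model/
├── model_places2.pth
└── model_celeba.pth
```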

### Testing with Places2 model

There are already some sample images in the `samples/places2` folder.

``` sh
python test.py --model model/model_places2.pth --img samples/places2/img --mask samples/places2/mask --output output/places2 --merge
```

### Testing with CelebA model

There are already some sample images in the `samples/celeba` folder.

``` sh
python test.py --model model/model_celeba.pth --img samples/celeba/img --mask samples/celeba/mask --output output/celeba --merge
```

## License

<a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/">Creative Commons Attribution-NonCommercial 4.0 International License</a>.

133 changes: 133 additions & 0 deletions config.yaml
@@ -0,0 +1,133 @@
---
# 0. Datasets
# You can define multiple datasets and use them later.
places2_inpaint:
  train_path: "/data/datasets/places2/places_train.info"
  val_path: "/data/datasets/places2/places_valid.info"
  mask_path: "/data/datasets/places2/mask_strokes_and_shapes.info"
---
# 1. Commonly used experiment information
tag: 'inpaint.dfn.63'
task: 'inpaint'
description: 'Use conv2d, compare to pconv.'
update:
  date: '2019-04-04'

# Supported attrs: ['batch_size', 'dataset', 'date', 'device', 'dpflow_device', 'input_size', 'split', 'tag', 'task', 'username']
# You can define extra attributes in 'extra'
exp_id_format: [task, model, input_size, date, tag]
dpflow_format: [username, dataset, split, input_size, batch_size, dpflow_device]

extra: # Will be used to generate `exp_id`
  model: 'Conv.StyleLoss'

computed: true # whether auto-computed variables have been computed.

exp_id: 'inpaint.Conv.StyleLoss.512x512.2019-04-04.inpaint.dfn.63' # auto computed, depending on train settings.

experiment_root: '/data/train_log/hongxin/inpaint'
rrun_root: '/data/train_log/hongxin/rrun'

model_dir: '/data/train_log/hongxin/inpaint/conv/dfn/places2/best/num/inpaint.Conv.StyleLoss.512x512.2019-04-04.inpaint.dfn.63/models' # auto computed, experiment_root/exp_id/models
tensorboard_dir: '/data/train_log/hongxin/inpaint/conv/dfn/places2/best/num/inpaint.Conv.StyleLoss.512x512.2019-04-04.inpaint.dfn.63/tb' # auto computed, experiment_root/exp_id/tensorboard
log_dir: '/data/train_log/hongxin/inpaint/conv/dfn/places2/best/num/inpaint.Conv.StyleLoss.512x512.2019-04-04.inpaint.dfn.63/log' # auto computed, experiment_root/exp_id/log
result_dir: '/data/train_log/hongxin/inpaint/conv/dfn/places2/best/num/inpaint.Conv.StyleLoss.512x512.2019-04-04.inpaint.dfn.63/results' # auto computed, experiment_root/exp_id/results

model_latest: 'latest.pth'
model_best: 'best.pth'

data_script: 'start_dpflow.py'
---
# 2. Training settings
continue: true
remove_old: false

# Data settings
dataset: 'places2_inpaint'
split: 'train'
input_size: [512, 512] # scalar or list [height, width]
batch_size: 6 # per GPU
seed: 2019
dpflow_base_name: 'hongxin.places2_inpaint.train.512x512.6.8x4' # auto computed, based on `dpflow_format`
dpflow_replicas: 8 # Generally, same as total number of gpus
worker_per_dpflow: 4

# Trainer settings
device:
  - {num: 1, gpu: 8, cpu: 16, memory: 51200} # Memory in MiB

model:
  c_img: 3
  c_mask: 1
  c_alpha: 3
  mode: 'nearest'
  norm: 'batch'
  act_en: 'relu'
  act_de: 'leaky_relu'
  en_ksize: [7, 5, 5, 3, 3, 3, 3, 3]
  de_ksize: [3, 3, 3, 3, 3, 3, 3, 3]
  blend_layers: [0, 1, 2, 3, 4, 5]

optimizer:
  name: 'Adam'
  args:
    lr: 0.0002

epoch: 20
iter_per_epoch: 37500
lr_decay_epoch: 5
lr_decay_ratio: 0.1

loss:
  c_img: 3
  w_l1: 6.
  w_percep: 0.1
  w_style: 240.
  w_tv: 0.1
  structure_layers: [0, 1, 2, 3, 4, 5]
  texture_layers: [0, 1, 2]

log_level: 'INFO'
action:
  save_model: true
  validate: false
  tensorboard: true
  model_graph: false # depends on tensorboard

log_interval: 10
model_save_interval: 1000 # (iters)
validate_interval: 1000
validate:
  dataset: 'places2_inpaint'
  split: 'val'
  num: 20
  input_size: [512, 512]
  batch_size: 1 # To keep the original size of images
---
# 3. Testing settings
# Auto compute: the following parameters will be computed automatically
device:
  - {num: 1, gpu: 1, cpu: 4, memory: 10240} # Memory in MiB

model: 'best'
data_tag: 'places2-1000'

img: /data/train_log/hongxin/inpaint/data/sample512-1000/img
mask: /data/train_log/hongxin/inpaint/data/sample512-1000/mask
input_size: [512, 512]
batch_size: 16

action:
  save: ['final']
  metrics:

dataset: 'places2'
split: 'val'
seed: 2019
dpflow_base_name: 'hongxin.places2.val.512x512.16.1x2' # auto computed
dpflow_replicas: 1 # Generally, same as number of gpus
worker_per_dpflow: 2
---
# 4. Training & Testing record
train: []
test: []
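
Note that `config.yaml` is a multi-document YAML file: the `---` lines separate the dataset, common, training, testing, and record sections. A minimal sketch of reading it with PyYAML (the `load_config` helper is illustrative, not part of this repo):

``` py
import yaml

def load_config(path="config.yaml"):
    """Split the multi-document YAML into its five sections."""
    with open(path) as f:
        docs = list(yaml.safe_load_all(f))
    datasets, common, train, test, record = docs
    return {"datasets": datasets, "common": common,
            "train": train, "test": test, "record": record}

cfg = load_config()
print(cfg["train"]["optimizer"])  # {'name': 'Adam', 'args': {'lr': 0.0002}}
```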
Binary file added imgs/github_teaser.jpg
