PyTorch implementation of our ICCV 2019 paper:
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
Please clone the latest version of the code.
Python 3.6+, PyTorch 1.2, torchvision 0.4, CUDA 10.0, at least 8 GB of GPU memory, and the other packages listed in `requirements.txt`.
```bash
pip install -r requirements.txt

cd thirdparty/neural_renderer
python setup.py install
```
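After installation, you can optionally check that PyTorch sees the GPU and that the compiled renderer imports. This is a minimal sketch; the module name `neural_renderer` is assumed from the thirdparty directory above.

```bash
# Optional sanity check: CUDA visibility and the renderer import.
# `neural_renderer` is the assumed module name of the thirdparty package.
python -c "import torch, neural_renderer; print(torch.__version__, torch.cuda.is_available())"
```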
- Download `pretrains.zip` from OneDrive or BaiduPan, move it to the `assets` directory, and unzip it there.
- Download `checkpoints.zip` from OneDrive or BaiduPan, unzip it, and move the extracted files to the `outputs` directory.
- Download `samples.zip` from OneDrive or BaiduPan, unzip it, and move the extracted files to the `assets` directory.
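After these three steps, the repository should roughly look like the sketch below (top level only; the exact file names inside each folder come from the archives and may differ):

```bash
# Rough expected layout after unpacking the archives (assumed, not exhaustive):
#   assets/
#     pretrains/    # from pretrains.zip
#     samples/      # from samples.zip
#   outputs/
#     checkpoints/  # from checkpoints.zip
ls assets outputs
```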
If you want to reproduce the demo results shown on the webpage, run the following scripts. The results are saved in `./outputs/results/demos`.
- Demo of Motion Imitation

  ```bash
  python demo_imitator.py --gpu_ids 1
  ```

- Demo of Appearance Transfer

  ```bash
  python demo_swap.py --gpu_ids 1
  ```

- Demo of Novel View Synthesis

  ```bash
  python demo_view.py --gpu_ids 1
  ```
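Each demo writes its results under `./outputs/results/demos`; a quick way to confirm a run produced output:

```bash
# List the demo outputs (sub-folder names depend on which demo was run).
ls ./outputs/results/demos
```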
If you want to test other inputs (a source image and reference images), here are some examples. Replace `--ip YOUR_IP` and `--port YOUR_PORT` with your own values for Visdom visualization.
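These scripts report intermediate visualizations to a Visdom server, so one should be running at the address you pass via `--ip`/`--port`. A minimal sketch of starting one (assuming `visdom` is installed; the port below is only an example and must match `YOUR_PORT`):

```bash
# Launch a Visdom server; replace the port with your own YOUR_PORT value.
python -m visdom.server -port 31102
```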
- Motion Imitation

  - source image from iPER dataset

    ```bash
    python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
        --src_path ./assets/src_imgs/imper_A_Pose/009_5_1_000.jpg \
        --tgt_path ./assets/samples/refs/iPER/024_8_2 \
        --bg_ks 13 --ft_ks 3 \
        --has_detector --post_tune \
        --save_res --ip YOUR_IP --port YOUR_PORT
    ```

  - source image from DeepFashion dataset

    ```bash
    python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
        --src_path ./assets/src_imgs/fashion_woman/Sweaters-id_0000088807_4_full.jpg \
        --tgt_path ./assets/samples/refs/iPER/024_8_2 \
        --bg_ks 25 --ft_ks 3 \
        --has_detector --post_tune \
        --save_res --ip YOUR_IP --port YOUR_PORT
    ```

  - source image from the Internet

    ```bash
    python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
        --src_path ./assets/src_imgs/internet/men1_256.jpg \
        --tgt_path ./assets/samples/refs/iPER/024_8_2 \
        --bg_ks 7 --ft_ks 3 \
        --has_detector --post_tune --front_warp \
        --save_res --ip YOUR_IP --port YOUR_PORT
    ```
- Appearance Transfer

  An example where the source image comes from iPER and the reference image from the DeepFashion dataset.

  ```bash
  python run_swap.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
      --src_path ./assets/src_imgs/imper_A_Pose/024_8_2_0000.jpg \
      --tgt_path ./assets/src_imgs/fashion_man/Sweatshirts_Hoodies-id_0000680701_4_full.jpg \
      --bg_ks 13 --ft_ks 3 \
      --has_detector --post_tune --front_warp --swap_part body \
      --save_res --ip http://10.10.10.100 --port 31102
  ```
- Novel View Synthesis

  ```bash
  python run_view.py --gpu_ids 0 --model viewer --output_dir ./outputs/results/ \
      --src_path ./assets/src_imgs/internet/men1_256.jpg \
      --bg_ks 13 --ft_ks 3 \
      --has_detector --post_tune --front_warp --bg_replace \
      --save_res --ip http://10.10.10.100 --port 31102
  ```
The details of each running script are described in runDetails.md.
Training details are described in train.md [TODO].
```bibtex
@InProceedings{lwb2019,
  title={Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis},
  author={Wen Liu and Zhixin Piao and Min Jie and Wenhan Luo and Lin Ma and Shenghua Gao},
  booktitle={The IEEE International Conference on Computer Vision (ICCV)},
  year={2019}
}
```