Berna Kabadayi · Wojciech Zielonka · Bharat Lal Bhatnagar · Gerard Pons-Moll · Justus Thies
This is the PyTorch implementation of GAN-Avatar. For more details, please check our project page.
Clone the repo
git clone https://github.com/bernakabadayi/ganavatar
cd ganavatar
Set up the submodules recursively
mkdir eg3d
git submodule update --init --recursive
Ganavatar uses eg3d for fine-tuning and its submodule Deep3DFaceRecon_pytorch for data processing. Please follow their instructions to set them up.
Following the issue reported in eg3d, apply the patch below to fix the triplane initialization.
git apply patch/eg3d.patch
Download the pretrained eg3d model trained on FFHQ and put it inside the models/ folder.
Fine-tune eg3d with the following options.
conda activate eg3d
python eg3d/train.py --data=/wojtek_1gan --gpus=8 --batch=32 --cfg=ffhq --gamma=5 --snap=10 --outdir=training_runs_rebut --gen_pose_cond=False --neural_rendering_resolution_initial=128 --neural_rendering_resolution_final=128 --resume=/models/eg3d-fixed-triplanes-ffhq.pkl --metrics=none
Generate frontal-looking images for the expression mapping network.
python gen_images_eg3d.py --args=/cfg/datagen/args_nf01_neck.yaml
Ganavatar uses expression parameters to train the mapping network. Expression parameters can be extracted from the frontal images as follows:
python scripts/preprocess_mapping.py --indir=/frontal/img
Train the mapping network
python lib/mapping_train.py --args ../cfg/mapnet/args_train_nf01_neck.yaml
Test the mapping network
python mapping_test.py --args ../../cfg/maptest/args_test_nf01_neck.yaml
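The mapping network trained and tested above translates tracked expression parameters into the fine-tuned generator's latent space. As a rough illustration of this idea, here is a minimal sketch of such an expression-to-latent MLP; the class name, layer widths, and the 64/512 dimensions are assumptions for illustration, not the architecture actually defined in lib/mapping_train.py:

```python
import torch
import torch.nn as nn

class ExpressionMapper(nn.Module):
    """Sketch: map tracked expression parameters to a generator latent code.

    expr_dim and latent_dim are assumed values (e.g. FLAME-style expression
    coefficients in, a StyleGAN/EG3D-style w vector out); check the repository
    config for the real dimensions.
    """
    def __init__(self, expr_dim: int = 64, latent_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(expr_dim, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, expr: torch.Tensor) -> torch.Tensor:
        # expr: (batch, expr_dim) -> (batch, latent_dim)
        return self.net(expr)

# Example: a batch of 4 expression vectors produces 4 latent codes.
mapper = ExpressionMapper()
latents = mapper(torch.randn(4, 64))
```

Training such a mapper typically minimizes a reconstruction or image-space loss between the generator's output for the predicted latent and the ground-truth frame; see the repository's training config for the actual objective.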
A sample dataset and the appearance model training JSON can be found here.
We provide scripts to process INSTA actors for Ganavatar training.
Your tracked mesh (i.e., FLAME) should be aligned with the eg3d marching-cube result located in models/. After obtaining the transformation matrix, run insta2ganavatar.py
python scripts/insta2ganavatar.py
If you need the pretrained models, please contact [email protected]
Cite us if you find this repository helpful to your project:
@misc{kabadayi2023ganavatar,
title={GAN-Avatar: Controllable Personalized GAN-based Human Head Avatar},
author={Berna Kabadayi and Wojciech Zielonka and Bharat Lal Bhatnagar and Gerard Pons-Moll and Justus Thies},
year={2023},
eprint={2311.13655},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Here are some great resources we benefited from:
- EG3D for finetuning our person-specific generative model
- INSTA actors for training
- MICA metrical tracker to obtain FLAME mesh and camera params
- Multiface actors for multiview experiments
- RobustVideoMatting for background segmentation
- Deep3DFaceRecon_pytorch to extract expression parameters
- PyTorch3D
- Nerface actors
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model, you agree to the terms in the LICENSE. Please also check the eg3d license.