This is a TensorFlow implementation of the following paper: Deep 3D Portrait from a Single Image.
- We propose a two-step geometry learning scheme that first learns 3DMM face reconstruction from single images and then learns to estimate hair and ear depth in a stereo setup.
- Typical single-image head reconstruction results. Our method can deal with a large variety of face shapes and hair styles, generating high-quality 3D head models.
- Typical pose manipulation results. The left column shows the input images to our method, and the other columns show our synthesized images with altered head poses.
- Software: Ubuntu 16.04, CUDA 9.0
- Python >= 3.5
- Clone the repository and install the dependencies
```
git clone https://github.com/sicxu/Deep3dPortrait.git
cd Deep3dPortrait
pip install -r requirements.txt
```
- Follow the instructions in Deep3DFaceReconstruction to prepare the BFM folder
- Download the pretrained face reconstruction and depth estimation models, then put the .pb files into the model folder.
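A small sanity check can catch a misplaced download before running the pipeline. The sketch below, assuming hypothetical file names for the two .pb models (check the release page for the actual ones), simply verifies that the expected frozen graphs are present in the model folder:

```python
from pathlib import Path

# Hypothetical file names -- substitute the actual names of the downloaded .pb files.
EXPECTED_MODELS = ["face_recon.pb", "depth_estimation.pb"]

def missing_models(model_dir="model", expected=EXPECTED_MODELS):
    """Return the list of expected .pb files not found in model_dir."""
    model_dir = Path(model_dir)
    return [name for name in expected if not (model_dir / name).is_file()]

if __name__ == "__main__":
    missing = missing_models()
    if missing:
        print("Missing model files:", ", ".join(missing))
    else:
        print("All pretrained models found.")
```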
- Run the following steps in order
```
python step1_recon_3d_face.py
python step2_face_segmentation.py
python step3_get_head_geometry.py
python step4_save_obj.py
```
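The four steps above can also be chained from a single Python driver. This is just a convenience sketch (not part of the repository) that runs each stage in order and aborts if any step fails, since later steps consume the earlier steps' outputs:

```python
import subprocess
import sys

# The four pipeline stages, in the order they must run; each step reads the
# previous step's results from ./output.
STEPS = [
    "step1_recon_3d_face.py",
    "step2_face_segmentation.py",
    "step3_get_head_geometry.py",
    "step4_save_obj.py",
]

def run_pipeline(steps=STEPS):
    for script in steps:
        # check=True raises CalledProcessError on a non-zero exit code,
        # so the pipeline stops at the first failing stage.
        subprocess.run([sys.executable, script], check=True)

if __name__ == "__main__":
    run_pipeline()
```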
- To check the results, see the subfolders of ./output, which contain the outputs of the corresponding steps.
- Image pre-alignment is necessary for face reconstruction. We recommend using Bulat et al.'s method to detect facial landmarks (the 3D definition). The depth estimation network also takes face, hair, and ear masks as input; we recommend Lin et al.'s method for semantic segmentation.
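To illustrate what landmark-based pre-alignment involves, here is a minimal NumPy sketch that estimates a similarity transform (isotropic scale plus translation; rotation omitted for brevity) mapping detected landmarks onto a canonical template. This is an illustrative simplification, not the exact alignment used by the repository's preprocessing:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares scale + translation mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding 2D landmark coordinates
    (e.g. detected landmarks and a canonical template).
    Returns (scale, translation) such that scale * src + translation ~= dst.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Optimal isotropic scale for the centered point sets.
    scale = (src_c * dst_c).sum() / (src_c ** 2).sum()
    translation = dst_mean - scale * src_mean
    return scale, translation
```

Applying `scale * image_coords + translation` (or its inverse to warp the image) brings the face into the canonical frame expected by the reconstruction network.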
- The face reconstruction code is heavily borrowed from Deep3DFaceReconstruction.
- The rendering code is modified from tf_mesh_render. Note that the renderer we compiled does not support other TensorFlow versions and can only be used on Linux.
- The manipulation code will not be released. If you want to compare against our method, please use the results in our paper, or contact me ([email protected]) for more comparisons.
If you find this code helpful for your research, please cite our paper:
```
@inproceedings{xu2020deep,
  author    = {Xu, Sicheng and Yang, Jiaolong and Chen, Dong and Wen, Fang and Deng, Yu and Jia, Yunde and Tong, Xin},
  title     = {Deep 3D Portrait from a Single Image},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2020}
}
```