This repository contains the source code for applying 3PSDF (Three-Pole Signed Distance Function) to single-view 3D reconstruction.
This repository depends on TensorFlow, NumPy, scikit-image, and Horovod. The code is tested with the following package versions under CUDA 11.2 and Ubuntu 18.04 (a quick environment check is sketched below):

```
tensorflow-gpu==2.6.0
numpy==1.19.5
scikit-image==0.18.3
horovod==0.23.0
```
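If you want to verify your environment before training, the optional snippet below (not part of the repository) checks that the tested packages import and that TensorFlow can see your GPUs:

```python
# Optional environment check; not part of the repository.
import horovod
import numpy as np
import skimage
import tensorflow as tf

print("TensorFlow:", tf.__version__)            # tested with 2.6.0
print("NumPy:", np.__version__)                 # tested with 1.19.5
print("scikit-image:", skimage.__version__)     # tested with 0.18.3
print("Horovod:", horovod.__version__)          # tested with 0.23.0
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
```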
- Example command to launch training, with the required parameters that specify the data folders:

  ```
  horovodrun -np 2 python -m src.train \
      --sdf_dir data/sdf-depth7-tfrecord \
      --cam_dir data/cam-tfrecord \
      --img_dir data/img-tfrecord \
      --split_file data/datasplit/train.lst
  ```

- Use `horovodrun -np GPU_NUM` to set the number of GPUs used for distributed training. If you only have one GPU, set `GPU_NUM` to 1, although using multiple GPUs is strongly recommended. A minimal sketch of the Horovod pattern behind this command is shown after this list.
- See `python -m src.train --help` for all the detailed training options.
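For context, the sketch below is not the repository's training code; it only illustrates the standard Horovod-with-Keras pattern that `horovodrun` launches once per GPU process (initialize Horovod, pin one GPU per process, scale the learning rate by the worker count, wrap the optimizer, and broadcast the initial weights). The model, data, and hyper-parameters are placeholders.

```python
# Illustrative Horovod/TensorFlow pattern only; the model, data, and
# hyper-parameters are placeholders, not the repository's actual setup.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()                                        # one process per GPU under horovodrun
gpus = tf.config.list_physical_devices("GPU")
if gpus:                                          # pin this process to its own GPU
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])         # placeholder model
opt = hvd.DistributedOptimizer(                                  # average gradients across workers
    tf.keras.optimizers.Adam(1e-4 * hvd.size()))                 # scale LR by worker count
model.compile(optimizer=opt, loss="mse")

x = tf.random.normal((256, 8))                    # placeholder data
y = tf.random.normal((256, 1))
model.fit(x, y, batch_size=32, epochs=1,
          callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
          verbose=1 if hvd.rank() == 0 else 0)    # only rank 0 prints progress
```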
- Example command to run testing with a pre-trained model, with the required parameters that specify the data folders and the model weights:

  ```
  python -m src.test \
      --sdf_dir data/sdf-depth7-tfrecord \
      --cam_dir data/cam-tfrecord \
      --img_dir data/img-tfrecord \
      --split_file data/datasplit/test.lst \
      --load_model_path weights/3psdf_svr_weights
  ```

- See `python -m src.test --help` for all the detailed testing options. A generic reference sketch of extracting a mesh from a sampled distance field with scikit-image follows this list.
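scikit-image appears in the dependency list above, most likely for iso-surface extraction. The snippet below is only a generic reference: the distance grid is synthetic, and the repository's actual 3PSDF post-processing (including its third "null" label) is not reproduced here.

```python
# Generic iso-surface extraction on a synthetic distance grid; this is not the
# repository's 3PSDF extraction code.
import numpy as np
from skimage import measure

res = 64
coords = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5          # signed distance to a radius-0.5 sphere

# Vertices and triangle faces of the zero level set.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)
```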
To run the code, you first need to obtain the raw data and convert it into the TFRecord format, as described below.

To obtain the raw data:
- Raw ShapeNet meshes with consistent normals: shapenet_consistent_normal.zip
- 3PSDF values sampled for ShapeNet shapes: shapnet_3psdf.zip
- Raw ShapeNet rendered images and camera parameters from 3D-R2N2: shapenet_renderings.zip
To convert the data:
- We provide an example script that converts the raw data into the TFRecord format: `src/utils/shapenet_tfrecord_generator.py`. A generic TFRecord round-trip sketch with hypothetical feature names follows.
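The exact feature layout written by the conversion script is defined in `src/utils/shapenet_tfrecord_generator.py`; the sketch below only shows a generic TFRecord write/read round-trip with hypothetical feature names (`points`, `labels`), not the repository's actual schema.

```python
# Generic TFRecord round-trip with hypothetical feature names; not the schema
# used by src/utils/shapenet_tfrecord_generator.py.
import numpy as np
import tensorflow as tf

points = np.random.rand(16, 3).astype(np.float32)            # placeholder sample points
labels = np.random.randint(0, 3, size=16).astype(np.int64)    # placeholder 3-class labels

# Write one serialized example.
with tf.io.TFRecordWriter("example.tfrecord") as writer:
    example = tf.train.Example(features=tf.train.Features(feature={
        "points": tf.train.Feature(bytes_list=tf.train.BytesList(value=[points.tobytes()])),
        "labels": tf.train.Feature(int64_list=tf.train.Int64List(value=labels.tolist())),
    }))
    writer.write(example.SerializeToString())

# Read it back with tf.data.
feature_spec = {
    "points": tf.io.FixedLenFeature([], tf.string),
    "labels": tf.io.VarLenFeature(tf.int64),
}

def parse(record):
    parsed = tf.io.parse_single_example(record, feature_spec)
    pts = tf.reshape(tf.io.decode_raw(parsed["points"], tf.float32), (-1, 3))
    return pts, tf.sparse.to_dense(parsed["labels"])

for pts, lbl in tf.data.TFRecordDataset("example.tfrecord").map(parse):
    print(pts.shape, lbl.shape)
```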
If you have any questions, please email Weikai Chen and Cheng Lin at [email protected].