This repository contains the source code for the paper Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images. The follow-up work Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images has been published in the International Journal of Computer Vision (IJCV).
If you find our work useful, please cite:
@inproceedings{xie2019pix2vox,
  title={Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images},
  author={Xie, Haozhe and Yao, Hongxun and Sun, Xiaoshuai and Zhou, Shangchen and Zhang, Shengping},
  booktitle={ICCV},
  year={2019}
}
We use the ShapeNet and Pix3D datasets in our experiments, which are available below:
- ShapeNet rendering images: http://cvgl.stanford.edu/data2/ShapeNetRendering.tgz
- ShapeNet voxelized models: http://cvgl.stanford.edu/data2/ShapeNetVox32.tgz
- Pix3D images & voxelized models: http://pix3d.csail.mit.edu/data/pix3d.zip
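If you prefer to script the download, here is a minimal Python sketch for one of the archives; the destination directory is a placeholder and should match the dataset paths you set in the configuration below. Note that the Pix3D archive is a zip file and would need `zipfile` instead of `tarfile`.
```python
# Minimal sketch: download and unpack the ShapeNet rendering images.
# The destination directory is a placeholder; point it at the same location
# used by the dataset paths in the configuration below.
import tarfile
import urllib.request

URL = 'http://cvgl.stanford.edu/data2/ShapeNetRendering.tgz'
DEST = '/path/to/Datasets/ShapeNet'

archive, _ = urllib.request.urlretrieve(URL, 'ShapeNetRendering.tgz')
with tarfile.open(archive, 'r:gz') as tar:
    tar.extractall(DEST)
```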
Pretrained models on ShapeNet are also available for download.
To get started, clone the code repository and install the required Python packages:
git clone https://github.com/hzxie/Pix2Vox.git
cd Pix2Vox
pip install -r requirements.txt
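After the requirements are installed, you can optionally confirm that PyTorch is importable and can see your GPU (the pretrained weights below are regular PyTorch `.pth` checkpoints):
```python
# Optional sanity check: PyTorch is installed and a GPU is visible.
import torch

print(torch.__version__)
print(torch.cuda.is_available())
```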
You need to update the file paths of the datasets in config.py:
__C.DATASETS.SHAPENET.RENDERING_PATH = '/path/to/Datasets/ShapeNet/ShapeNetRendering/%s/%s/rendering/%02d.png'
__C.DATASETS.SHAPENET.VOXEL_PATH = '/path/to/Datasets/ShapeNet/ShapeNetVox32/%s/%s/model.binvox'
__C.DATASETS.PASCAL3D.ANNOTATION_PATH = '/path/to/Datasets/PASCAL3D/Annotations/%s_imagenet/%s.mat'
__C.DATASETS.PASCAL3D.RENDERING_PATH = '/path/to/Datasets/PASCAL3D/Images/%s_imagenet/%s.JPEG'
__C.DATASETS.PASCAL3D.VOXEL_PATH = '/path/to/Datasets/PASCAL3D/CAD/%s/%02d.binvox'
__C.DATASETS.PIX3D.ANNOTATION_PATH = '/path/to/Datasets/Pix3D/pix3d.json'
__C.DATASETS.PIX3D.RENDERING_PATH = '/path/to/Datasets/Pix3D/img/%s/%s.%s'
__C.DATASETS.PIX3D.VOXEL_PATH = '/path/to/Datasets/Pix3D/model/%s/%s/%s.binvox'
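For reference, the `%s` and `%02d` placeholders in these templates are filled in at runtime with identifiers such as the taxonomy (category) ID, the sample name, and the rendering index. Below is a minimal sketch of how a ShapeNet template expands; the IDs are made-up examples:
```python
# Minimal sketch of how the ShapeNet path templates above are expanded.
# The taxonomy ID, sample name, and view index are made-up examples.
RENDERING_PATH = '/path/to/Datasets/ShapeNet/ShapeNetRendering/%s/%s/rendering/%02d.png'
VOXEL_PATH = '/path/to/Datasets/ShapeNet/ShapeNetVox32/%s/%s/model.binvox'

taxonomy_id = '02691156'                        # example category ID
sample_name = '1a04e3eab45ca15dd86060f189eb133'  # example model name
view_idx = 0                                     # example rendering index

print(RENDERING_PATH % (taxonomy_id, sample_name, view_idx))
print(VOXEL_PATH % (taxonomy_id, sample_name))
```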
To train Pix2Vox, you can simply use the following command:
python3 runner.py
To test Pix2Vox, you can use the following command:
python3 runner.py --test --weights=/path/to/pretrained/model.pth
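If you want to inspect a downloaded checkpoint before passing it to runner.py, here is a minimal sketch, assuming the file is an ordinary checkpoint saved with `torch.save`:
```python
# Minimal sketch: inspect a pretrained checkpoint before testing.
# Assumes the .pth file is an ordinary PyTorch checkpoint saved with torch.save.
import torch

checkpoint = torch.load('/path/to/pretrained/model.pth', map_location='cpu')
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))   # per-module state dicts and/or metadata
else:
    print(type(checkpoint))
```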
If you want to train or test Pix2Vox-F, you need to check out the Pix2Vox-F branch first:
git checkout -b Pix2Vox-F origin/Pix2Vox-F
This project is open-sourced under the MIT license.