The code has been tested with Python 3.6.9, TensorFlow 1.13.2, TFLearn 0.3.2, CUDA 10.0 and cuDNN 7.6.2 on Ubuntu 16.04.
Install TensorFlow. You can also use a TensorFlow Docker image; an image that matches the TensorFlow, CUDA and cuDNN versions we used is `tensorflow/tensorflow:1.13.2-gpu-py3`.
Install TFLearn.
Install matplotlib (for visualization of point clouds).
The `wget` package is required for downloading the dataset for training and evaluation. To install `wget`:

```
sudo apt-get update
sudo apt-get install wget
```
Compile the TensorFlow ops: nearest neighbor grouping and farthest point sampling, implemented by Qi et al., and structural losses, implemented by Fan et al. The ops are located under `reconstruction/external`, in the `grouping`, `sampling`, and structural losses folders, respectively. If needed, use a text editor to modify the corresponding `sh` file of each op to point to your `nvcc` path. Then, use:

```
cd reconstruction/
sh compile_ops.sh
```
An `.o` and an `.so` file should be created in the corresponding folder of each op.
For a quick start please use:

```
sh runner_samplenet.sh
```

or:

```
sh runner_samplenet_progressive.sh
```

These scripts train and evaluate an Autoencoder model on complete point clouds, use it to train a sampler (SampleNet or SampleNetProgressive), and then evaluate the sampler by running its sampled points through the Autoencoder model. In the following sections, we explain how to run each part of this pipeline separately.
Point clouds of ShapeNetCore models in `ply` files (provided by Achlioptas et al.) will be automatically downloaded (1.4GB) on the first training of an Autoencoder model. Each point cloud contains 2048 points, uniformly sampled from the shape surface. The data will be downloaded to the folder `reconstruction/data/shape_net_core_uniform_samples_2048`.
Alternatively, you can download the data before training by using:

```
sh download_data.sh
```
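To inspect the downloaded data, you can load a point cloud into numpy. The sketch below is a minimal reader for ASCII `ply` files with `x`, `y`, `z` vertex properties; it is illustrative only (the function name `load_ascii_ply` is ours, and the downloaded files may be stored in binary `ply`, in which case a full parser such as the `plyfile` package is needed):

```python
import numpy as np

def load_ascii_ply(path):
    """Minimal reader for ASCII ply files whose vertices start with x, y, z.

    Illustrative sketch only: assumes an ASCII-format header and no other
    elements before the vertex data; not a general ply parser.
    """
    with open(path) as f:
        line = f.readline().strip()
        assert line == "ply", "not a ply file"
        n_vertices = 0
        # Scan the header for the vertex count, then skip to the data section.
        while line != "end_header":
            line = f.readline().strip()
            if line.startswith("element vertex"):
                n_vertices = int(line.split()[-1])
        # Read one vertex per line; keep only the first three coordinates.
        pts = [list(map(float, f.readline().split()[:3]))
               for _ in range(n_vertices)]
    return np.asarray(pts, dtype=np.float32)
```

For the dataset above, a successfully loaded cloud should have shape `(2048, 3)`.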
To train an Autoencoder model, use:
```
python autoencoder/train_ae.py --train_folder log/autoencoder
```
To evaluate the Autoencoder model, use:
```
python autoencoder/evaluate_ae.py --train_folder log/autoencoder
```
This evaluation script saves the point clouds reconstructed from the complete input point clouds of the test set, along with the reconstruction error per point cloud (the Chamfer distance between the input and its reconstruction). The results are saved to the `train_folder`.
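The Chamfer distance used as the reconstruction error can be sketched in plain numpy as follows. This is an illustrative version only; the repository computes it with the CUDA structural-losses op by Fan et al., and the exact convention (squared vs. unsquared distances, sum vs. mean over points) may differ from the code:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3).

    Illustrative numpy sketch: for each point, find its nearest neighbor in
    the other cloud, and average the squared distances in both directions.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

This O(N*M) memory version is fine for 2048-point clouds, but the CUDA op is much faster for training.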
To evaluate reconstruction with FPS sampled points (with sample size 64 in this example), use:
```
python autoencoder/evaluate_ae.py --train_folder log/autoencoder --use_fps 1 --n_sample_points 64
```
This evaluation script saves the sampled point clouds, the sample indices, and the reconstructed point clouds of the test set, along with the reconstruction error per point cloud (the Chamfer distance between the input and its reconstruction). It also computes the normalized reconstruction error, as explained in the paper. The results are saved to the `train_folder`.
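Farthest point sampling (FPS) greedily picks the point farthest from the already-selected set, yielding a well-spread subset. The numpy sketch below shows the idea; it is not the repository's implementation (which uses the CUDA op by Qi et al.), and details such as seeding and tie-breaking may differ:

```python
import numpy as np

def farthest_point_sampling(points, n_samples, seed_idx=0):
    """Greedy FPS over an (N, D) array; returns indices of sampled points.

    Illustrative sketch: start from seed_idx, then repeatedly select the
    point whose distance to the current sample set is largest.
    """
    idx = np.empty(n_samples, dtype=np.int64)
    idx[0] = seed_idx
    # dist[j] = distance from point j to its nearest selected point so far.
    dist = np.linalg.norm(points - points[seed_idx], axis=1)
    for i in range(1, n_samples):
        idx[i] = dist.argmax()
        dist = np.minimum(dist,
                          np.linalg.norm(points - points[idx[i]], axis=1))
    return idx
```

FPS is the non-learned baseline that SampleNet is compared against in the evaluation above.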
To train SampleNet (with sample size 64 in this example), using an existing Autoencoder model as the task network (provided in the `ae_folder` argument), use:

```
python sampler/train_samplenet.py --ae_folder log/autoencoder --n_sample_points 64 --train_folder log/SampleNet64
```
To evaluate reconstruction with SampleNet's sampled points (with sample size 64 in this example), use:
```
python sampler/evaluate_samplenet.py --train_folder log/SampleNet64
```
This script operates similarly to the evaluation script for the Autoencoder with FPS sampled points.
To train SampleNetProgressive, using an existing Autoencoder model as the task network (provided in the `ae_folder` argument), use:

```
python sampler/train_samplenet_progressive.py --ae_folder log/autoencoder --n_sample_points 64 --train_folder log/SampleNetProgressive
```
To evaluate reconstruction with SampleNetProgressive's sampled points (with sample size 64 in this example), use:
```
python sampler/evaluate_samplenet_progressive.py --n_sample_points 64 --train_folder log/SampleNetProgressive
```
This script operates similarly to the evaluation script for the Autoencoder with FPS sampled points.
You can visualize point clouds (input, reconstructed, or sampled) by adding the flag `--visualize_results` to the evaluation script of the Autoencoder, SampleNet, or SampleNetProgressive.
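If you want to plot saved point clouds yourself, a basic matplotlib scatter plot suffices. The helper below (`plot_point_cloud` is our name, not part of the repository, whose `--visualize_results` flag has its own plotting code) renders an `(N, 3)` array to an image file:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, so it runs without a display
import matplotlib.pyplot as plt
import numpy as np

def plot_point_cloud(points, out_path="cloud.png"):
    """Save a 3D scatter plot of an (N, 3) point cloud to out_path.

    Illustrative sketch; adjust marker size and view angle to taste.
    """
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=2)
    fig.savefig(out_path)
    plt.close(fig)
```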
Our code builds upon the code provided by Achlioptas et al. and Dovrat et al. We thank the authors for sharing their code.