This repo contains the training code and demo for NDE (Neural Directional Encoding). Dependencies:
- python 3.8
- CUDA 11.7
- pytorch 2.0.1
- pytorch-lightning 2.0.8
- nerfacc
- tinycudann (fp32)
We compile tinycudann with fp32 precision for stable optimization. This is done by setting TCNN_HALF_PRECISION=0 in this line.
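As a quick sanity check that the fp32 build is in effect, the following snippet (our own illustration, not part of the repo; it assumes the standard tiny-cuda-nn PyTorch bindings) should report float32 outputs:

```python
# Hedged sanity check, not from the NDE codebase: with the tiny-cuda-nn PyTorch
# bindings compiled with TCNN_HALF_PRECISION=0, encoding outputs come back as float32.
import torch
import tinycudann as tcnn

encoding = tcnn.Encoding(n_input_dims=3, encoding_config={"otype": "HashGrid"})
x = torch.rand(8, 3, device="cuda")
print(encoding(x).dtype)  # torch.float32 for an fp32 build, torch.float16 otherwise
```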
The pre-trained weights for both synthetic and real scenes can be found here.
- Edit configs/synthetic.yaml or configs/real.yaml to set the dataset path and configure training.
- To train a model, run:
```bash
python train.py --experiment_name=EXPERIMENT_NAME --device=GPU_DEVICE \
    --config CONFIG_FILE --max_epochs=NUM_OF_EPOCHS  # 4000 by default
```
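For example (the experiment name and GPU index below are arbitrary placeholders, not values prescribed by the repo):

```bash
python train.py --experiment_name=nde_synthetic --device=0 \
    --config configs/synthetic.yaml --max_epochs=4000
```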
- For view synthesis results, see demo/demo.ipynb.
If you find this work useful, please consider citing:

```bibtex
@inproceedings{wu2024neural,
  author    = {Liwen Wu and Sai Bi and Zexiang Xu and Fujun Luan and Kai Zhang and Iliyan Georgiev and Kalyan Sunkavalli and Ravi Ramamoorthi},
  title     = {Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling},
  booktitle = {CVPR},
  year      = {2024}
}
```