NDE: Neural Directional Encoding

This repo contains the training code and demo for NDE.

Setup

  • python 3.8
  • CUDA 11.7
  • pytorch 2.0.1
  • pytorch-lightning 2.0.8
  • nerfacc
  • tinycudann (fp32)

We compile tinycudann with fp32 precision for stable optimization. This is done by setting TCNN_HALF_PRECISION=0 in this line.
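As a rough sketch of the environment setup (the PyTorch wheel index URL and the exact location of the TCNN_HALF_PRECISION define are assumptions; check them against your CUDA setup and tiny-cuda-nn version):

pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu117
pip install pytorch-lightning==2.0.8 nerfacc
# build tinycudann from source with half precision disabled
git clone --recursive https://github.com/NVlabs/tiny-cuda-nn
# change the TCNN_HALF_PRECISION define (assumed to live in include/tiny-cuda-nn/common.h) to 0
cd tiny-cuda-nn/bindings/torch
python setup.py install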

Dataset

Pre-trained models

The pre-trained weights for both synthetic and real scenes can be found here.

Usage

  1. Edit configs/synthetic.yaml or configs/real.yaml to set the dataset path and configure training.
  2. To train a model, run (see the example invocation after this list):
python train.py --experiment_name=EXPERIMENT_NAME --device=GPU_DEVICE \
                --config CONFIG_FILE --max_epochs=NUM_OF_EPOCHS # 4000 by default
  3. For view synthesis results, see demo/demo.ipynb.
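For example, a training run on the synthetic config might look like the following (the experiment name, GPU index, and epoch count are placeholder values, not settings prescribed by the repo):

python train.py --experiment_name=nde_synthetic --device=0 \
                --config configs/synthetic.yaml --max_epochs=4000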

Citation

@inproceedings{wu2024neural,
  author = {Liwen Wu and Sai Bi and Zexiang Xu and Fujun Luan and Kai Zhang and Iliyan Georgiev and Kalyan Sunkavalli and Ravi Ramamoorthi},
  title = {Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling},
  booktitle = {CVPR},
  year = {2024}
}
