This is a work-in-progress reimplementation of Instant Neural Graphics Primitives. Currently it can train an implicit representation of a gigapixel image using a multiresolution hash encoding.
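As a rough illustration of the multiresolution hash encoding, here is a minimal NumPy sketch: each level maps grid-cell corners into a hash table via an XOR of coordinates times large primes, then bilinearly interpolates the looked-up feature vectors. All parameters here (2D input, 4 levels, table size, growth factor, hash primes) are illustrative assumptions, not the values this repo uses.

```python
import numpy as np

# Illustrative hash primes for 2D coordinates (assumption, not the repo's values)
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_coords(coords, table_size):
    # coords: (N, 2) integer grid coordinates -> (N,) table indices
    h = np.zeros(len(coords), dtype=np.uint64)
    for d in range(coords.shape[1]):
        h ^= coords[:, d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

def encode(x, tables, base_res=16, growth=1.5):
    # x: (N, 2) points in [0, 1); tables: list of (T, F) feature arrays, one per level
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        scaled = x * res
        lo = np.floor(scaled).astype(np.int64)
        frac = scaled - lo
        out = np.zeros((len(x), table.shape[1]))
        # bilinear interpolation over the 4 corners of each grid cell
        for dx in (0, 1):
            for dy in (0, 1):
                corner = lo + np.array([dx, dy])
                idx = hash_coords(corner, len(table))
                w = ((frac[:, 0] if dx else 1 - frac[:, 0]) *
                     (frac[:, 1] if dy else 1 - frac[:, 1]))
                out += w[:, None] * table[idx]
        feats.append(out)
    # concatenated per-level features are the MLP input
    return np.concatenate(feats, axis=1)
```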
FYI: This is brand new -- most parameters in the training script are hard-coded right now.
Check out the results in `viz`.
Download the Tokyo image:

```
wget -O tokyo.jpg https://live.staticflickr.com/1859/29314390837_b39ae4876e_o_d.jpg
```
Convert to NumPy binary format for faster reading (~1 s with `.npy` vs ~14 s with `.jpg`):

```python
import numpy as np
from PIL import Image

Image.MAX_IMAGE_PIXELS = 10**10  # lift Pillow's decompression-bomb limit
img = np.asarray(Image.open("tokyo.jpg"))  # about 3.5 GB in memory
np.save("tokyo.npy", img)
```
Then run training:

```
python src/train_image.py
```
In all tasks, except for NeRF which we will describe later, we use an MLP with two hidden layers, each 64 neurons wide, with rectified linear unit (ReLU) activations.
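The MLP described above can be sketched in plain NumPy as a forward pass with two ReLU hidden layers of width 64; the input and output dimensions below are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(d_in, d_hidden=64, d_out=3):
    # Two hidden layers of width 64, He-style initialization (assumption)
    sizes = [d_in, d_hidden, d_hidden, d_out]
    return [(rng.standard_normal((m, n)) * np.sqrt(2 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0)  # ReLU on hidden layers only
    return x
```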
- Initialize hash table entries from a uniform distribution on [-1e-4, 1e-4]
- Optimizer
- Adam: β1 = 0.9, β2 = 0.99, ϵ = 1e−15
- Learning rate: 1e-2 (ref: tiny-cuda-nn)
- Regularization:
  - L2: 1e-6, applied to the MLP weights only, not the hash table entries
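The hyperparameters above can be sketched as a plain-NumPy Adam; the two optimizer instances at the bottom show L2 regularization applied only to the MLP weights (the class itself is illustrative, not the repo's implementation).

```python
import numpy as np

class Adam:
    """Minimal Adam sketch with the hyperparameters listed above."""

    def __init__(self, lr=1e-2, b1=0.9, b2=0.99, eps=1e-15, weight_decay=0.0):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.wd = weight_decay
        self.m = self.v = None
        self.t = 0

    def step(self, param, grad):
        if self.m is None:
            self.m, self.v = np.zeros_like(param), np.zeros_like(param)
        if self.wd:
            grad = grad + self.wd * param  # L2 regularization as weight decay
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)  # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return param - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

mlp_opt = Adam(weight_decay=1e-6)   # MLP weights: L2 applied
hash_opt = Adam(weight_decay=0.0)   # hash table entries: no L2
```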
We skip Adam steps for hash table entries whose gradient is exactly 0. This saves ~10% of training time when gradients are sparse.
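The sparse-gradient skip can be sketched by masking the Adam update so that entries with zero gradient keep both their value and their moment state untouched (a simplified per-element version, not the repo's actual code):

```python
import numpy as np

def sparse_adam_step(param, grad, m, v, t,
                     lr=1e-2, b1=0.9, b2=0.99, eps=1e-15):
    # Only update entries whose gradient is nonzero; the rest of the
    # parameters and their first/second moments are left untouched.
    active = grad != 0
    m[active] = b1 * m[active] + (1 - b1) * grad[active]
    v[active] = b2 * v[active] + (1 - b2) * grad[active] ** 2
    m_hat = m[active] / (1 - b1 ** t)
    v_hat = v[active] / (1 - b2 ** t)
    param[active] -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```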