Official PyTorch implementation of the paper "Deep Multi Depth Panoramas for View Synthesis", ECCV 2020.
Kai-En Lin¹, Zexiang Xu¹·³, Ben Mildenhall², Pratul P. Srinivasan², Yannick Hold-Geoffroy³, Stephen DiVerdi³, Qi Sun³, Kalyan Sunkavalli³, Ravi Ramamoorthi¹
¹University of California, San Diego · ²University of California, Berkeley · ³Adobe Research
Requirements:
- PyTorch & torchvision
- numpy
- imageio
- matplotlib
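For example, the dependencies can typically be installed with pip (exact versions are not pinned here):
pip install torch torchvision numpy imageio matplotlib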
We only provide the inference code. For the training code, please refer to the Deep 3D Mask Volume for View Synthesis of Dynamic Scenes repository, under the train_mpi directory.
- Generate the MPIs: run
  python gen_mpi.py --scene cafe/ --out example_cafe/ --model_path ckpts/paper_model.pth
- Generate the LDPs and rendered output images: run
  python gen_ldp.py --scene cafe/ --mpi_folder example_cafe/ --ldp_folder example_cafe_ldp/ --out_folder example_cafe_img
Note:
You might need to implement custom camera poses for rendering; some helper functions are provided in gen_ldp.py. The extrinsics follow the world-to-camera convention.
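If you write your own camera path, the following is a minimal sketch (not part of this repo; world_to_camera and circle_path are hypothetical names, and the circular path is purely illustrative) of how world-to-camera [R|t] extrinsics can be built from camera-to-world poses:

```python
import numpy as np

def world_to_camera(c2w):
    """Invert a 3x4 camera-to-world pose into a 3x4 world-to-camera [R|t]."""
    R, t = c2w[:, :3], c2w[:, 3]
    R_inv = R.T                      # inverse of a rotation is its transpose
    t_inv = -R_inv @ t               # move the camera center into the camera frame
    return np.concatenate([R_inv, t_inv[:, None]], axis=1)

def circle_path(num_views=30, radius=0.1):
    """Illustrative circular translation path around the origin (adjust as needed)."""
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, num_views, endpoint=False):
        c2w = np.eye(4)[:3]          # identity rotation, zero translation
        c2w[:, 3] = [radius * np.cos(theta), radius * np.sin(theta), 0.0]
        poses.append(world_to_camera(c2w))
    return np.stack(poses)           # (num_views, 3, 4), world-to-camera
```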
For custom data, you can pack the data in the same way as cafe/. The camera poses use the same format as Local Light Field Fusion (LLFF): an (N, 17) array, where N is the number of source views. Each 17-dim vector consists of a flattened 3x5 matrix (recover it with np.reshape(3, 5)) followed by a 2-dim vector with the near and far plane bounds. The 3x5 matrix is the 3x4 [R|t] camera extrinsics with an extra last column holding (height, width, focal length).
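As a reference, here is a minimal sketch of unpacking and packing such an (N, 17) array following the convention above; the function names and the poses_bounds.npy file path are only illustrative, not names defined by this repo:

```python
import numpy as np

def unpack_poses(poses_bounds):
    """Split an (N, 17) LLFF-style array into extrinsics, intrinsics, and depth bounds."""
    mats = poses_bounds[:, :15].reshape(-1, 3, 5)  # (N, 3, 5)
    extrinsics = mats[:, :, :4]                    # 3x4 [R|t], world-to-camera per the note above
    hwf = mats[:, :, 4]                            # (height, width, focal length)
    bounds = poses_bounds[:, 15:]                  # (N, 2) near/far plane bounds
    return extrinsics, hwf, bounds

def pack_poses(extrinsics, hwf, bounds):
    """Inverse of unpack_poses: build the (N, 17) array described above."""
    mats = np.concatenate([extrinsics, hwf[:, :, None]], axis=2)   # (N, 3, 5)
    return np.concatenate([mats.reshape(-1, 15), bounds], axis=1)  # (N, 17)

# Hypothetical usage, assuming the poses are stored LLFF-style:
# poses_bounds = np.load('cafe/poses_bounds.npy')
# extrinsics, hwf, bounds = unpack_poses(poses_bounds)
```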
@inproceedings{lin2020mdp,
title={Deep Multi Depth Panoramas for View Synthesis},
author={Lin, Kai-En and Xu, Zexiang and Mildenhall, Ben and Srinivasan, Pratul P and Hold-Geoffroy, Yannick and DiVerdi, Stephen and Sun, Qi and Sunkavalli, Kalyan and Ramamoorthi, Ravi},
year={2020},
booktitle={ECCV},
}
Parts of the code were adapted from StereoMag (https://github.com/google/stereo-magnification), LLFF (https://github.com/Fyusion/LLFF), and LSI (https://github.com/google/layered-scene-inference).