Commit
second commit
fqnchina committed Mar 29, 2018
1 parent 16d2186 commit 9afc95d
Showing 2 changed files with 33 additions and 26 deletions.
27 changes: 15 additions & 12 deletions evaluation/test_IIW.lua
@@ -41,26 +41,30 @@ for _,inputFile in ipairs(files) do
 input[1] = inputImg:cuda()
 input = input * 255
 
+-- local guide_albedo_file = string.gsub(inputFile,imgPath,'/raid/qingnan/data/iiw_L1_IntrinsicDecomposition_r/')
+-- local guide_a = image.load(guide_albedo_file)
+-- local guide_albedo = torch.CudaTensor(1, 3, height, width)
+-- guide_albedo[1] = guide_a:cuda()
+-- guide_albedo_mean = torch.sum(guide_albedo,2)/3
+
+-- local guide_shading_file = string.gsub(inputFile,imgPath,'/raid/qingnan/data/iiw_L1_IntrinsicDecomposition_s/')
+-- local guide_s = image.load(guide_shading_file)
+-- local guide_shading = torch.CudaTensor(1, 3, height, width)
+-- guide_shading[1] = guide_s:cuda()
+-- guide_shading = torch.sum(guide_shading,2)/3
+
-local guide_albedo_file = string.gsub(inputFile,imgPath,'/raid/qingnan/data/iiw_L1_IntrinsicDecomposition_r/')
-local guide_a = image.load(guide_albedo_file)
-local guide_albedo = torch.CudaTensor(1, 3, height, width)
-guide_albedo[1] = guide_a:cuda()
-guide_albedo_mean = torch.sum(guide_albedo,2)/3
+local guide_albedo = input/255
 
-local guide_shading_file = string.gsub(inputFile,imgPath,'/raid/qingnan/data/iiw_L1_IntrinsicDecomposition_s/')
-local guide_s = image.load(guide_shading_file)
-local guide_shading = torch.CudaTensor(1, 3, height, width)
-guide_shading[1] = guide_s:cuda()
+local guide_shading = input/255
 guide_shading = torch.sum(guide_shading,2)/3
+local guide_albedo_mean = torch.sum(guide_albedo,2)/3
 
 local predictions_final = model:forward(input)
 predictions_final1 = predictions_final[1]
 predictions_final2 = predictions_final[2]
 predictions_final3 = predictions_final[3]
 
 -- uncomment the following line while testing the jointly trained model
 -- predictions_final1 = torch.cmax(torch.sum(predictions_final1,2)/3,0.0000000001)
 
 local r_value = torch.cmax(predictions_final1,0.0000000001)
 local input_mean = torch.cmax(torch.sum(input,2)/3,0.0000000001)
 local r_div = torch.cdiv(r_value,input_mean)
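In this hunk the commit stops loading precomputed L1-smooth guidance images and instead derives both guidance maps from the normalized input itself (`guide_albedo = input/255`); the reflectance prediction is then clamped to a tiny epsilon and divided by the channel-mean of the input. A minimal Python sketch of that normalization step, using plain nested lists in place of CUDA tensors (the function and variable names are ours, not from the repo):

```python
EPS = 1e-10  # same clamp value as the Lua code (0.0000000001)

def channel_mean(img):
    """Mean over the channel axis: img is [C][H][W] -> [H][W]."""
    c, h, w = len(img), len(img[0]), len(img[0][0])
    return [[sum(img[k][i][j] for k in range(c)) / c for j in range(w)]
            for i in range(h)]

def relative_reflectance(pred, inp):
    """r_div = max(pred, EPS) / max(mean_c(inp), EPS), per pixel."""
    mean = channel_mean(inp)
    return [[max(pred[i][j], EPS) / max(mean[i][j], EPS)
             for j in range(len(pred[0]))] for i in range(len(pred))]

# A 1x1 "image" with channel values (30, 60, 90): the channel mean is 60.
inp = [[[30.0]], [[60.0]], [[90.0]]]
pred = [[120.0]]  # single-channel reflectance prediction
print(relative_reflectance(pred, inp))  # [[2.0]]
```

The epsilon clamp on both numerator and denominator mirrors the Lua `torch.cmax(..., 0.0000000001)` calls and guards against division by zero at dark pixels.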
@@ -90,7 +94,6 @@ for _,inputFile in ipairs(files) do
 local sav = string.gsub(savColor,'.png','-r_small.png')
 image.save(sav,predictions_final1[1])
 
-
 for m = 1,3 do
 	local numerator = torch.dot(predictions_final3[1][m], guide_albedo[1][m])
 	local denominator = torch.dot(predictions_final3[1][m], predictions_final3[1][m])
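The per-channel loop at the end of this hunk computes `dot(pred, guide) / dot(pred, pred)`. That ratio is the closed-form least-squares scale α minimizing ‖α·p − g‖², a standard step when comparing scale-invariant intrinsic decompositions against a reference. A small self-contained Python illustration (variable names are ours, not the repo's):

```python
def best_scale(p, g):
    """Return alpha = <p, g> / <p, p>, the minimizer of ||alpha*p - g||^2."""
    num = sum(x * y for x, y in zip(p, g))  # numerator: dot(pred, guide)
    den = sum(x * x for x in p)             # denominator: dot(pred, pred)
    return num / den

pred  = [1.0, 2.0, 3.0]
guide = [2.0, 4.0, 6.0]   # exactly 2x the prediction
print(best_scale(pred, guide))  # 2.0
```

Setting the derivative of ‖α·p − g‖² with respect to α to zero gives α·⟨p,p⟩ = ⟨p,g⟩, which is exactly the ratio the Lua code forms from `numerator` and `denominator`.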
32 changes: 18 additions & 14 deletions readme.MD
@@ -1,35 +1,40 @@
 # Revisiting Deep Intrinsic Image Decompositions
 This is the implementation of the CVPR 2018 Oral paper *"Revisiting Deep Intrinsic Image Decompositions"* by Qingnan Fan *et al.*
 
-Note
+Compilation
 ----
 
-**Compilation**
-
-Our codes are implemented in Torch framework. Some self-defined layers are included in the 'compile' folder. Before testing or training the models, you need to install the latest torch framework and compile the ***.lua** under the nn module, the ***.cu** and ***.h** under the cunn module, and **adam_state.lua** under optim module.
-
-To be specific, the lua files have to be put in **./torch/extra/nn/** module directory, and editing **init.lua** file to include the corresponding file. Finally run the command under nn directory, **luarocks make ./rocks/nn-scm-1.rockspec**. Then nn module will be independently compiled. Similarly for the cuda files, they need to be put into **./torch/extra/cunn/lib/THCUNN/** module directory, then editing the **./torch/extra/cunn/lib/THCUNN/generic/THCUNN.h** to include the corresponding files, and finally run the command **luarocks make ./rocks/cunn-scm-1.rockspec** under **./torch/extra/cunn** folder to compile them. Accordingly, the adam lua file has to be put in **./torch/pkg/optim**, edit the **init.lua** file and then run **luarocks make optim-1.0.5-0.rockspec**.
+Our codes are implemented in the Torch framework. Some self-defined layers are included in the '**compile**' folder. Before testing or training the models, you need to install the latest Torch framework and compile the ***.lua** files under the nn module, the ***.cu** and ***.h** files under the cunn module, and **adam_state.lua** under the optim module.
+
+To be specific, the lua files have to be put in the **./torch/extra/nn/** module directory and **init.lua** edited to include the corresponding files; finally, run **luarocks make ./rocks/nn-scm-1.rockspec** under the nn directory, after which the nn module is compiled independently.
+
+Similarly, the cuda files need to be put into the **./torch/extra/cunn/lib/THCUNN/** module directory and **./torch/extra/cunn/lib/THCUNN/generic/THCUNN.h** edited to include the corresponding files; finally, run **luarocks make ./rocks/cunn-scm-1.rockspec** under the **./torch/extra/cunn** folder to compile them.

+Accordingly, the adam lua file has to be put in **./torch/pkg/optim**; edit the **init.lua** file and then run **luarocks make optim-1.0.5-0.rockspec**.
+
-**Data Generation**
+Data Generation
+----
 
-The training data for IIW can be directly obtained from their official project page. The training data for the guidance network of IIW data is L1smooth results, which is shared from this link, https://drive.google.com/file/d/0B_bZKF86bTHLb0NrNm1NRTZJUUk/view?usp=sharing. Note the IIW label is transfered by us into txt format for training, we include these IIW txt label files into this link, https://www.dropbox.com/s/kjrvo1smbiz7ovr/IntrinsicImg_dataset.zip?dl=0. As for MIT and MPI-Sintel dataset, whose data sources may be inconvinient to obtain, we also include them in the last link.
+The training data for IIW can be obtained directly from the official [project page](https://labelmaterial.s3.amazonaws.com/release/iiw-dataset-release-0.zip). The training data for the guidance network on IIW is the L1smooth results, shared at [this link](https://drive.google.com/file/d/0B_bZKF86bTHLb0NrNm1NRTZJUUk/view?usp=sharing). Note that we transferred the IIW labels into txt format for training; these txt label files are included at [this link](https://www.dropbox.com/s/kjrvo1smbiz7ovr/IntrinsicImg_dataset.zip?dl=0). As the data sources of the MIT and MPI-Sintel datasets may be inconvenient to obtain, we also include them in the last link.
 
-You need to run **gen_trainData_MPI.m** to crop and flip the MPI images to generate our training data. The input images to our network have to cropped to image size of even number as we downsample the intermediate feature maps by half.
+Run **gen_trainData_MPI.m** to crop and flip the MPI images to generate our training data. The input images to our network have to be cropped to even dimensions, as we downsample the intermediate feature maps by half.
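Because the network halves the spatial resolution of its intermediate feature maps, both image dimensions must be even before they reach the model. The repo does this cropping inside **gen_trainData_MPI.m**; the helper below is our own Python illustration of the rule, not code from the repository:

```python
def crop_to_even(height, width):
    """Drop at most one row and one column so both dimensions are divisible by 2."""
    return height - height % 2, width - width % 2

print(crop_to_even(437, 1024))  # (436, 1024)
```

Any image whose height or width is odd would otherwise produce a size mismatch when the downsampled feature maps are upsampled and merged back.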

 The sparse labels of the IIW dataset are originally in json format; we extract the useful information and transfer them to txt files. You can either transfer the labels using **transferIIW.m** or directly use the transferred files in our released dataset.

 The training and testing file lists are included in the .txt files in the **dataset** folder.

-**Training**
+Training
+----
 
 The trained models are all in the **netfiles** folder.
 
-To train the models from scratch, you need to run the files in the **training** folder.
+To train the models from scratch, run the files in the **training** folder.
 
-Since we levearge two-fold cross validation for the MPI dataset, you need to train on both the train and test split of MPI data.
+Since we leverage two-fold cross validation for the MPI dataset, training on both the train and test splits of the MPI data is required.
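Two-fold cross validation here means each half of the dataset serves once as the training split and once as the test split, so two trained models are needed to cover every image. A minimal sketch of the fold bookkeeping (names are illustrative only, not from the repo):

```python
def two_fold_splits(items):
    """Split items into two halves and return (train, test) pairs,
    swapping roles so every item is tested exactly once."""
    half = len(items) // 2
    a, b = items[:half], items[half:]
    return [(a, b), (b, a)]

folds = two_fold_splits(["scene1", "scene2", "scene3", "scene4"])
# Fold 1 trains on the first half, fold 2 on the second half.
```

Reported errors are then aggregated over the test half of each fold, so no image is ever evaluated by a model that saw it during training.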

-**Evaluation**
-
-To test our trained model, you can run **test_*.lua** files, while to evaluate the numerical errors, you can run **compute_*_error.*** files.
+Evaluation
+----
+To test our trained models, run the **test_*.lua** files; to evaluate the numerical errors, run the **compute_*_error.*** files.

Cite
----
@@ -49,4 +54,3 @@ Contact
 If you find any bugs or have any ideas for optimizing these codes, please contact me via fqnchina [at] gmail [dot] com
 
 
-