Hello Changha Shin,
In the paper, you mention that augmentations such as randomly converting color to gray scale in [0, 1] are used. However, reviewing "epinet_fun/func_makeinput.py", I noticed that it already converts the images to gray scale before returning the data.
Is the training data all gray scale, or only part of it?
Is this function applied only at test time, or during training as well?
More details on the augmentation would be appreciated.
Thank you for your great work.
Yes, the training data is also all gray scale.
We use a random gray-scale conversion:
img_gray = random0to1_R * img_Red + random0to1_G * img_Green + random0to1_B * img_Blue
(constraints: random0to1_R + random0to1_G + random0to1_B = 1, min(img_gray) = 0, max(img_gray) = 1)
I found that gray-scale conversion improves overall performance quite significantly, perhaps because it reduces the overall complexity of the input while preserving the light-field structure.
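A minimal sketch of this augmentation, assuming a NumPy RGB image with values in [0, 1]. The function name `random_grayscale` and the uniform sampling of the channel weights are illustrative assumptions, not code taken from the repository; only the weighted sum, the sum-to-one constraint, and the rescaling to [0, 1] come from the formula above.

```python
import numpy as np

def random_grayscale(img_rgb, rng=None):
    """Convert an RGB image to gray scale using random channel weights.

    img_rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns a float array of shape (H, W) rescaled to [0, 1].
    Note: this is a sketch; the weight sampling is an assumption.
    """
    rng = rng or np.random.default_rng()

    # Sample three non-negative weights and normalize so that
    # random0to1_R + random0to1_G + random0to1_B = 1.
    w = rng.uniform(0.0, 1.0, size=3)
    w /= w.sum()

    # img_gray = w_R * img_Red + w_G * img_Green + w_B * img_Blue
    # (weighted sum over the channel axis).
    img_gray = img_rgb @ w

    # Rescale so that min(img_gray) = 0 and max(img_gray) = 1.
    img_gray -= img_gray.min()
    span = img_gray.max()
    if span > 0:
        img_gray /= span
    return img_gray
```

Because the weights are re-sampled per training sample, the network sees many different gray-scale projections of the same scene, while the per-view geometry (and hence the light-field structure across sub-aperture views) is unchanged.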