4D Light Field Benchmark Blender Plug-in disparity

Sarah Mo

Jan 27, 2021, 8:14:02 PM
to Light Field Vision
Hi everyone, I am a PhD student doing some work on light-field data. I'm trying to generate a new dataset using the following Blender plug-in. However, I'm facing an issue regarding generating the disparity maps. When I try to generate the new data only the parts of the objects closer to the camera are shown in the disparity map as seen in the images attached. Any idea what causing this issue? I have played around with the parameters but I still get the same problem. I kept the resolution the same as the dataset but in the dataset, a much accurate disparity is generated! What am I doing wrong?

Another question regarding the disparity: I read in the supplementary material that the sensors are shifted to ensure z=0. Does that mean the scene is in focus, and if so, does that mean the plug-in mimics a Raytrix camera (plenoptic 2.0) rather than a Lytro camera (plenoptic 1.0)?

Any help is appreciated. Thank you :)

Attachments: input_Cam000.png, Screenshot 2021-01-28 at 1.03.55 am.png, Screenshot 2021-01-28 at 1.03.45 am.png, Screenshot 2021-01-28 at 1.10.05 am.png

Dierk Ole Johannsen

Jan 28, 2021, 6:20:15 AM
to Light Field Vision
Hello!
Thanks for using our plug-in :)

Looking at the bottom images of the cube, I believe the problem lies with the visualization. Because the cameras are shifted (I'll comment on that in a second), the disparities also contain negative values. A disparity value of 0 corresponds to the focus distance of the plenoptic camera: objects in front of this plane have positive disparity, objects behind it negative. E.g. [-2, 2] px is a common disparity range, at least for the data that we produced for the benchmark. The bottom visualization looks like the colors are clamped to something like [0, 1], so everything further away is depicted as black and everything closer as white. The images of the car might show the actual depth (closer -> lower values -> darker) with a similar clamping problem. I am unsure how Blender handles regions without any object present; maybe it's saved as "infinity", which might lead to the black parts. I'm not sure about the right image of the car.
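
To illustrate, here is a minimal Python sketch of the difference (the random array is just a stand-in for a disparity map loaded from the plug-in's output, e.g. with your own PFM/EXR reader):

import numpy as np
import matplotlib.pyplot as plt

# Stand-in for the float disparity map written by the plug-in, in pixels.
disparity = np.random.uniform(-2.0, 2.0, size=(512, 512))

# The suspected bug: clamping to [0, 1] turns all negative disparities
# (everything behind the focus plane) black and saturates everything
# closer than 1 px of disparity to white.
clamped = np.clip(disparity, 0.0, 1.0)

# A correct visualization maps the full signed range to the colormap,
# symmetric around the zero-disparity (focus) plane.
vmax = np.abs(disparity).max()

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(clamped, cmap='gray', vmin=0.0, vmax=1.0)
axes[0].set_title('clamped to [0, 1]')
axes[1].imshow(disparity, cmap='gray', vmin=-vmax, vmax=vmax)
axes[1].set_title('full signed range')
plt.show()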

Technically speaking, our plug-in renders the subaperture views generated from a plenoptic 1.0 camera with a rectangular grid of microlenses (for hexagonal grids some interpolation is needed). The outer microlenses basically view the main lens from an angle, resulting in the shift that's modelled in our plug-in.
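
As a rough sanity check (my own notation, not something defined by the plug-in docs): for focal length $f$ in pixels, baseline $b$ between neighbouring sub-aperture views, and focus distance $Z_0$, the shifted-sensor geometry gives

$$ d(Z) = f \, b \left( \frac{1}{Z} - \frac{1}{Z_0} \right) $$

so $d = 0$ exactly at the focus plane, $d > 0$ in front of it and $d < 0$ behind it, which is the signed range described above.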

Kind regards,
Ole

Bdour Mo

Jan 28, 2021, 6:07:02 PM
to Light Field Vision
Hi Ole,

Thank you for your reply. I will try to modify the scene accordingly and see if it works. Are the Blender files of the data that you produced available online?

Would you say changing focDist to 0 would make the model more similar to plenoptic 2.0 (with a rectangular grid)?

Bdour :)
