Hi Paul,
Very sorry for the late reply, I completely missed the messages posted to the forum.
Thank you very much for your interest in the project! Were you able to get it to work in VR? Which platform / head-mounted display are you using?
Ah, that is indeed an undesirable feature. However, I am not able to replicate this issue on my setup. If you place a Unity Camera in the same spot as the virtual capture camera, does it see the objects? Do the objects use a special material?
As for your initial message, I completely agree: Welcome to Lightfields also blew me away, and it was a game changer in how I decided to orient my work.
The technical aspects of the more complex rendering pipelines are definitely daunting, and there is still much I have to wrap my head around as well when I read papers on the subject. But it's definitely worth it: the results achieved by these researchers are truly impressive, and I hope we can do the same using Unity!
As for your question, the main limitation I see in the current state of the project is the size of the captured data. Currently, we send all of the images to the GPU as a single object, so the graphics memory has to hold all of this image data at the same time. With a very dense set of high-resolution images, as was used by Google, it is thus very likely that no GPU could handle this amount of data. We therefore have to look toward more efficient ways of sending the data for rendering, i.e. dynamically selecting only the images that actually have to be used to render the current frame. This is something I'm currently working on, but it is not ready yet.
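To make the memory point concrete, here is a rough back-of-envelope estimate. The image count and resolution below are hypothetical, chosen only to illustrate the scale of the problem, not taken from any actual dataset:

```python
def stack_size_gib(num_images, width, height, bytes_per_pixel=4):
    """Memory footprint, in GiB, of num_images uncompressed RGBA images
    stored together on the GPU (no mipmaps, no compression)."""
    total_bytes = num_images * width * height * bytes_per_pixel
    return total_bytes / (1024 ** 3)

# A hypothetical dense capture: 1000 images at 4K resolution, RGBA8.
print(f"{stack_size_gib(1000, 3840, 2160):.1f} GiB")  # → 30.9 GiB
```

Even before counting mipmaps or any auxiliary buffers, a dense 4K capture like this already exceeds the memory of any consumer GPU, which is why per-frame image selection looks necessary.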
A second important limitation is that Welcome to Lightfields relies on estimated depth (cf. their corresponding research paper): one depth map per image, which they transform into one 3D mesh per image. Estimating this geometric information for a large number of images is a very time-consuming process. To give an idea, the authors of the paper estimate that "Running [their processing pipeline] in serial on a single workstation would take over a month". So to get similar results for similarly large image datasets, the geometry estimation step is likely to be a major obstacle, as such time frames are definitely prohibitive.
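For context, the depth-map-to-mesh step can be sketched roughly like this. This is a simplified illustration using a pinhole camera model, not the paper's actual pipeline; the camera intrinsics (fx, fy, cx, cy) are assumed to be known per image:

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into a triangle mesh using a
    pinhole camera model. Returns (vertices, triangles)."""
    h, w = depth.shape
    # Pixel coordinates for every depth sample.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Unproject each pixel into camera space.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    vertices = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # Connect neighboring pixels: two triangles per pixel quad.
    idx = np.arange(h * w).reshape(h, w)
    tl, tr = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    bl, br = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    triangles = np.concatenate([
        np.stack([tl, bl, tr], axis=-1),
        np.stack([tr, bl, br], axis=-1),
    ])
    return vertices, triangles
```

A real pipeline would also discard triangles that span large depth discontinuities and simplify the resulting mesh, but this conveys the basic idea of one mesh per captured image.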
I hope I was able to answer your questions; I'd be happy to discuss further. And again, sincerest apologies for replying so late!
Best,
Greg