Official topic - Image-based rendering, view-dependent rendering, light fields

COLIBRI VR

Mar 21, 2020, 2:41:42 PM3/21/20
to COLIBRI VR
Hi,

This is the official topic for discussing image-based rendering, view-dependent rendering, light fields, etc.

Feel free to post!

--
Grégoire Dupont de Dinechin

Paul Haynes

Apr 13, 2020, 10:17:26 PM4/13/20
to COLIBRI VR
Hello Grégoire,

Firstly, thank you for making this tool available.

My interest in experimenting with capturing and rendering stereoscopic panoramas with my DSLR started soon after first experiencing them in VR.

A few years ago I discovered Google's 'Welcome to Light Fields' and was blown away by the experience; it made me realise how the limitations of basic stereoscopy could be overcome. I have since spent numerous hours trying to understand how light fields are captured and rendered (many technical papers read and YouTube videos watched). Although I came to realise that the rendering could be done in Unity, with my basic knowledge I wouldn't know where to start!

As a non-gamer VR enthusiast, I have enjoyed learning to use tools such as Unity and Blender to develop VR experiences, and have experimented with Facebook's open-source Surround360 and 360_dep (6DoF) render pipelines. (I am fortunate to own one of Jaunt VR's 360 cameras.) As a non-academic who didn't study Mathematics beyond high school (some 25 years ago), it's the Maths that I struggle with the most (camera rig geometries, lens calibration, etc.), but I do have fun trying to make sense of it all!

I am looking forward to experimenting with Colibri VR and having watched your videos it seems very intuitive to use!

Finally, a question: with a bit more experience and a lot more knowledge, would it be possible with this tool to render a 360-degree light field at a quality comparable to what Google have achieved, if the input data was captured in a similar way (dense capture sphere, 0.5 m radius)? What limitations/problems do you think I would encounter?

Thanks,
Paul.

Paul Haynes

Apr 23, 2020, 2:23:55 AM4/23/20
to COLIBRI VR
Hi Greg,

Just a quick follow-up to my previous message. I had the opportunity to take a look at COLIBRI VR this week, and with the help of your original YouTube videos I was up and running in no time. Unfortunately I couldn't get it to work in VR, so it was a pleasant surprise to find that you have just uploaded more tutorials to YouTube. They should not only help resolve the issues I was having viewing in VR, but also answer (most of) the questions I had planned to ask about using my own input images and the third-party tools.

One thing I did notice that may be a bug: capturing images in Unity with objects in the scene only worked if the camera was set to 360; the regular camera only captured the skybox (the preview window showed the same behaviour).

Paul.


Greg de Dinechin

Jun 17, 2020, 5:23:35 AM6/17/20
to COLIBRI VR
Hi Paul,

Very sorry for the late reply; I completely missed the messages being posted to the forum.

Thank you very much for your interest in the project! Were you able to get it to work in VR? Which platform / head-mounted display are you using?
Ah, that is indeed an undesirable feature. However, I am not able to replicate the issue on my setup. If you place a Unity Camera in the same spot as the virtual capture camera, does it see the objects? Do the objects use a special material?

As for your initial message, I completely agree: Welcome to Light Fields blew me away as well, and was a game changer in how I decided to orient my work.
The technical aspects of the more complex rendering pipelines are definitely daunting, and there is still much I have to wrap my head around as well when reading papers on the subject. But it's definitely worth it: the results achieved by these researchers are truly impressive, and I hope we can do the same using Unity!

As for your question, the main limitation I see with the current state of the project is the size of the captured data. Currently, we send all of the images to the GPU as a single object, so the graphics memory has to hold all of this image data at once. With a very dense set of high-resolution images, as was used by Google, it is thus very likely that no GPU could handle this amount of data. We therefore have to look towards more efficient ways of sending the data for rendering, i.e. dynamically selecting only the images actually needed to render the current frame. This is something I'm currently working on, but it is not ready yet.
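To make the memory pressure concrete, here is a rough back-of-the-envelope calculation; the image count and resolution below are hypothetical placeholders for a dense capture, not the actual Welcome to Light Fields figures:

```python
# Rough estimate of GPU memory needed to hold an uncompressed image set
# all at once. The numbers are hypothetical, for illustration only.
image_count = 1000            # dense spherical capture
width, height = 4096, 2048    # per-image resolution
bytes_per_pixel = 4           # RGBA8, uncompressed on the GPU

total_bytes = image_count * width * height * bytes_per_pixel
total_gib = total_bytes / (1024 ** 3)
print(f"{total_gib:.2f} GiB")  # prints "31.25 GiB", far beyond typical VRAM
```

Even halving the resolution or image count leaves tens of gigabytes, which is why per-frame image selection (rather than uploading everything) is the direction I'm exploring.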
A second important limitation is that Welcome to Light Fields relies on estimated depth (cf. their research paper): one depth map per image, which they transform into one 3D mesh per image. Estimating this geometric information for a large number of images is a very time-consuming process. To give an idea, the authors of the paper estimate that "Running [their processing pipeline] in serial on a single workstation would take over a month". So to get similar results for similarly large image datasets, the geometry estimation step is likely to be an important obstacle, as these time frames are definitely prohibitive.
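For intuition, the "one depth map per image, one mesh per image" idea can be sketched as follows; this is a minimal illustration assuming a simple pinhole camera, with hypothetical intrinsics and a tiny flat depth map, not COLIBRI VR's or Google's actual code:

```python
import numpy as np

# Unproject a per-image depth map into a 3D vertex grid, then connect
# neighbouring pixels into triangles: one mesh per input image.
H, W = 4, 6                     # tiny depth map for illustration
fx = fy = 100.0                 # hypothetical focal lengths (pixels)
cx, cy = W / 2, H / 2           # hypothetical principal point

depth = np.full((H, W), 2.0)    # fake depth map: flat wall 2 m away

u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - cx) / fx * depth       # pinhole unprojection
y = (v - cy) / fy * depth
vertices = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Two triangles per depth-map cell give the mesh connectivity.
idx = np.arange(H * W).reshape(H, W)
tri_a = np.stack([idx[:-1, :-1], idx[1:, :-1], idx[:-1, 1:]], -1).reshape(-1, 3)
tri_b = np.stack([idx[:-1, 1:], idx[1:, :-1], idx[1:, 1:]], -1).reshape(-1, 3)
triangles = np.concatenate([tri_a, tri_b])

print(vertices.shape, triangles.shape)  # prints "(24, 3) (30, 3)"
```

The expensive part in practice is not this unprojection but estimating the depth maps themselves from the photographs, which is what drives the month-long processing times mentioned above.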

I hope that I was able to answer your questions, I'd be happy to discuss more. And again, sincerest apologies for replying so late!

Best,

Greg

Paul Haynes

Jun 20, 2020, 4:17:35 PM6/20/20
to COLIBRI VR
Hi Greg,

Thanks for the reply and don't worry about the delay!

To answer your question about my experience: I first programmed a little Yi action camera (they can easily be hacked to do various things) to take a photo every 5 seconds. (The camera may be able to do this by default; however, I wanted to expand upon the script later to do HDR capture, etc.) I then placed it on a 1 m long shelf and proceeded to move the camera horizontally in 5 cm increments between each photo.

Having captured 20 images, I then processed them with COLIBRI VR and viewed the result in VR. I was impressed with the results, but, as expected, even the smallest vertical head movement caused issues.

For my second attempt I removed the shelf from the wall and (rather poorly) attached it to a camera tripod. This time I took multiple rows of images (I forget how many, but covering an area of approximately 1 m x 0.5 m), assuming this would allow for some vertical head movement. Again I processed in COLIBRI VR and was amazed at the results! Now is probably a good time to mention that I was processing/viewing on a small Dell PC (i3-6100, GTX 1050 Ti); needless to say, I was getting 2-3 fps when viewing in VR, but it was still really impressive!

I have yet to start my third attempt; in preparation, however, I have purchased a new(er) PC and GPU, and have grand plans for a motorised rig to automate everything and enable spherical capture. I'll keep you updated!

BTW I've sent you an email.

Regards,
Paul.

Heath

Feb 7, 2021, 4:07:55 AM2/7/21
to COLIBRI VR
Hello Grégoire,
Thank you for creating such a great tool.

I'm working on a VR project for the Quest 2 (Android).
When building for Android, Unity doesn't want to package anything outside the Assets folder into the .apk (the Android package).
When I follow the Amethyst tutorial but put the data into the Assets folder, I get several errors, such as the images folder being deleted automatically.

If I work in a "Data" folder in the root project directory (as per your tutorial), I can complete the tutorial without issue.
Is there any way to work within the Assets folder with the files generated via the tutorial?

Greg de Dinechin

Feb 1, 2022, 8:00:56 AM2/1/22
to COLIBRI VR
Hello Heath,

Thank you for your message, and very sorry for answering so late, I haven't been checking on the forum this past year.

For the processing step, the data cannot be placed in the Assets folder. Indeed, the processing methods create and delete files, and I implemented a safeguard that disables creating/deleting files inside the Assets folder in any directory that is not explicitly temporary (see details here). This is intentional, as it prevents accidentally deleting any other assets in the project.

Rendering, on the other hand, can be performed from within the Assets folder. To do so, simply copy the required processed files:
  • the bundled_data folder
  • the sparse folder
  • the processed_data/processing_information.txt file (to be placed in a folder called processed_data)
  • the additional_information.txt file
from the folder in which they were created during processing (e.g. Project/Data/Amethyst) to your folder in the Assets folder (e.g. Project/Assets/Data/Amethyst).
Then, in the Rendering component, change the Source Data folder to this new folder in Assets, and you should be able to render the scene just as before.
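If it helps, the copy step can also be scripted. This is a sketch using the example Amethyst paths from the tutorial (adjust SRC/DST to your project; the "demo only" lines create a stand-in source tree so the sketch runs end to end and should be removed for real use):

```shell
#!/bin/sh
set -e
# Copy the processed files needed for rendering into the Assets folder.
SRC="Project/Data/Amethyst"
DST="Project/Assets/Data/Amethyst"

# (demo only) create a stand-in source tree so this sketch is runnable
mkdir -p "$SRC/bundled_data" "$SRC/sparse" "$SRC/processed_data"
touch "$SRC/processed_data/processing_information.txt" \
      "$SRC/additional_information.txt"

# the actual copy: the two folders, plus the two text files
mkdir -p "$DST/processed_data"
cp -r "$SRC/bundled_data" "$SRC/sparse" "$DST/"
cp "$SRC/processed_data/processing_information.txt" "$DST/processed_data/"
cp "$SRC/additional_information.txt" "$DST/"
```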

Concerning building for Android: the COLIBRI VR toolkit was only tested as an Editor tool, so I cannot guarantee that it works correctly as part of a build. Some additional development may be needed for this to work.

Thanks again!

Best,
Greg