Light field images and AliceVision

Dany Jar Jar

Aug 18, 2020, 10:04:51 AM
to AliceVision
Hi everyone,

I'm getting interested in light field images (I should receive a Lytro Illum soon to do initial tests with it), and it looks like this kind of image could be really nice for photogrammetry.

Each individual image can give a good-quality depth map (see https://github.com/Computational-Camera/Light_Field_Depth), and I was wondering how much effort is required to integrate that into the AliceVision pipeline.

The default pipeline would need to be changed a bit, but the first steps (before the depth map) would stay the same. However, I'm wondering whether the fact that light field images can be refocused will mess up the current algorithms. Feature matching could be done on an all-in-focus version of the images, for example. But for structure from motion, I'm wondering whether the multiple focus planes will be an issue when estimating the distance to the object.

Before starting anything, I wanted to see if anyone had any thoughts on this or had already done some experiments.

Daniel

Sim

Sep 9, 2020, 3:55:19 AM
to AliceVision
I am also interested in light field imaging. One benefit is that you can get in-focus images from all captures, which is really useful for photogrammetry, as out-of-focus areas can mess up the reconstruction or result in blurry areas.

It should not be too difficult to add support for this (in theory). I think the in-focus images should be used for feature extraction and to compute the SfM. The Light_Field_Depth tool could then be used to generate the depth maps, which would need to be converted to the Meshroom format.
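As a rough sketch of that conversion step, assuming the Light_Field_Depth tool outputs disparity in pixels (I haven't verified its exact output format): Meshroom's depth maps are metric, so the disparities need rescaling with the plenoptic camera's calibration. The names `focal_px` and `baseline_m` below are placeholders, and matching Meshroom's actual EXR layout and metadata would still require inspecting real DepthMap output.

```python
import numpy as np

def disparity_to_depth(disp, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map (in pixels) to metric depth (in metres).

    focal_px and baseline_m are placeholders for values that would come
    from the plenoptic camera's calibration. Disparities below eps are
    treated as invalid and mapped to 0 here; check what value AliceVision
    actually expects for invalid pixels.
    """
    disp = np.asarray(disp, dtype=np.float64)
    depth = np.zeros_like(disp)
    valid = disp > eps
    # standard stereo relation: depth = focal * baseline / disparity
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth
```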

Adding support for light field images could go hand in hand with support for "bokeh" images (supported by some devices from Google, Huawei and Apple), as both allow refocusing and depth map generation.

Bokeh effect (actually embedded depth maps):
- Google Camera: https://github.com/panrafal/depthy (also http://stereo.jpn.org/kitkat/indexe.html, http://stereo.jpn.org/kitkat/gcamera001.zip)
- Huawei: https://github.com/designer2k2/depth-map-extractor
- Apple has something similar: https://developer.apple.com/documentation/avfoundation/avportraiteffectsmatte/extracting_portrait_effects_matte_image_data_from_a_photo and https://www.raywenderlich.com/314-image-depth-maps-tutorial-for-ios-getting-started

Fabien Castan

Sep 9, 2020, 4:04:49 AM
to Sim, amp...@gmail.com, AliceVision
Hi Daniel,
I would be curious to see your results.
I did some tests a while ago and it worked fine, but that was just on a single dataset. So you should be able to do what you need by adjusting the pipeline.
We can set up a conf call if you run into trouble.
Best,





Dany Jar Jar

Sep 9, 2020, 5:24:08 AM
to AliceVision
I don't have a lot of time to put into this project right now, but I now have a Lytro Illum I can use for tests. I'm going to look into how to use the previously linked GitHub project with Lytro images (its test images are not available anymore).
After getting the depth map, I'm thinking of generating an "all-in-focus" image using the depth map and the different focus slices generated by PlenoptiCam (https://github.com/hahnec/plenopticam).
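For reference, the merge can be approximated even before the depth map is available by picking, per pixel, the focal-stack slice with the strongest local contrast (a standard focus measure). A minimal numpy sketch of that idea — all names here are illustrative, and a real implementation would smooth the index map to avoid seams between slices:

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian as a simple per-pixel focus measure."""
    lap = np.zeros_like(img, dtype=np.float64)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return lap

def all_in_focus(stack):
    """Merge a focal stack (shape N x H x W, greyscale) into one image
    by keeping, for each pixel, the slice with the highest |Laplacian|."""
    stack = np.asarray(stack, dtype=np.float64)
    sharpness = np.abs(np.stack([laplacian(s) for s in stack]))
    idx = np.argmax(sharpness, axis=0)            # sharpest slice per pixel
    rows = np.arange(stack.shape[1])[:, None]
    cols = np.arange(stack.shape[2])[None, :]
    return stack[idx, rows, cols], idx
```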

For the pipeline itself, I was thinking of maybe starting with a pre-processing node (generating the all-in-focus image and the depth map) and adapting the other steps in the pipeline to use them.
The same logic could be used for "bokeh" images, but with a specific node per image type, I guess.
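One lightweight way to prototype the "adapt the other steps" part, before writing a real Meshroom node, would be to rewrite the view paths in the .sfm JSON produced by CameraInit so that the rest of the pipeline picks up the all-in-focus images. The schema assumed below (a top-level "views" array whose entries carry a "path" field) is based on current AliceVision sfmData files and may differ between versions, so verify against your own output:

```python
import json
import os

def retarget_views(sfm_in, aif_dir, sfm_out, ext=".png"):
    """Point every view of an AliceVision .sfm file at the all-in-focus
    image with the same base name in aif_dir.

    Assumes sfmData JSON with a top-level "views" list whose entries
    have a "path" field -- an assumption to check against your
    AliceVision version.
    """
    with open(sfm_in) as f:
        sfm = json.load(f)
    for view in sfm.get("views", []):
        base = os.path.splitext(os.path.basename(view["path"]))[0]
        view["path"] = os.path.join(aif_dir, base + ext)
    with open(sfm_out, "w") as f:
        json.dump(sfm, f, indent=4)
```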

I'll try to post test results here when I have the time to start working on it.

Fabien Castan

Sep 9, 2020, 6:37:08 AM
to Dany Jar Jar, AliceVision
Hi Daniel,
Would it be possible for you to put some captures online?
Or do you have links to existing online datasets?



Dany Jar Jar

Sep 9, 2020, 6:49:40 AM
to AliceVision
Hi Fabien,

I will try to upload a dataset of Lytro Illum photos this weekend, with photos of the same objects from different angles. (Most datasets I've seen online have just one photo per object.)

Daniel

Sim

Sep 11, 2020, 11:01:22 AM
to AliceVision

Dany Jar Jar

Sep 14, 2020, 12:39:21 PM
to AliceVision
Hi,

I quickly found the time to do 3 sets of photos: one set with a box and a case, one with a miniature with lots of tiny details, and one with a camera bag.
All sets cover different angles. Some are darker than others because of the lighting in the room where I took the photos, so they may not all be usable. But it should still allow some testing with multiple light field photos of the same object.
I started to take a quick look at https://github.com/Computational-Camera/Light_Field_Depth, which can give a usable depth map, but I think it is possible to improve it. I'll post some updates when I manage to get a better depth map out of it.

Daniel