Aftermath of the discussion at the LF4CV workshop at CVPR17


Dierk Ole Johannsen

Aug 30, 2017, 10:01:24 AM
to Light Field Vision
First, we would again like to thank all of you who participated in the workshop and made the discussion at the end a very lively one!

We took notes during the discussion and tried to compress them into the following list of topics. We cannot guarantee that it contains all the points raised at the workshop, and the grouping is only a rough one.
Still we hope that it can provide ground for future discussion and research!

Please feel free to add additional points or open up new threads to discuss individual topics :)




DATA with quality ground truth is missing
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

what kind of data? synthetic vs real. camera array vs micro lens images vs plenoptic raw images. bias towards Lytro Illum because it's available, and towards camera array/subaperture views because they're easy.
different algorithms perform differently depending on the data and the presence of errors (vignetting etc)

what kind of data is important? what kind of scenes? what is the focus? depth? matting/segmentation? handling non-lambertian scenes? brdfs? intrinsic images?

what kind of depth should be the result? point clouds? meshes? depth maps? light fields can look behind occlusion boundaries => depth maps are bad ;)

how about resolution? light fields offer redundancy => should depth map be of higher resolution?

real world benchmark on raw data...
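The 4D structure behind these questions can be sketched in a few lines; the shapes and the synthetic scene below are illustrative assumptions, not a reference layout:

```python
import numpy as np

# A 4D light field L(u, v, s, t): (u, v) index the subaperture view,
# (s, t) the pixel within that view. Shapes are illustrative assumptions.
U, V, S, T = 9, 9, 64, 64          # 9x9 views of 64x64 pixels
lf = np.zeros((U, V, S, T), dtype=np.float32)

# Synthetic fronto-parallel scene: each view is the center view shifted
# by a disparity proportional to the view offset (Lambertian assumption).
rng = np.random.default_rng(0)
center = rng.random((S, T)).astype(np.float32)
disparity = 1  # pixels of shift per view step (synthetic ground truth)
for u in range(U):
    for v in range(V):
        lf[u, v] = np.roll(center,
                           shift=((u - U // 2) * disparity,
                                  (v - V // 2) * disparity),
                           axis=(0, 1))

# Subaperture view: fix the angular coordinates (u, v).
center_view = lf[U // 2, V // 2]   # shape (S, T)

# Epipolar plane image (EPI): fix v and s, vary u and t.
epi = lf[:, V // 2, S // 2, :]     # shape (U, T)
```

Fixing (u, v) gives a subaperture view; fixing one angular and one spatial axis gives an EPI, where scene depth shows up as line slope — which is why the choice of representation (subaperture stack vs raw micro lens data) matters for benchmarks.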

CAMERAS
%%%%%%%
how do light field cameras perform in terms of "pixel budget"? how to optimally distribute pixels (in cameras, behind lenses etc) to get optimal results
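As a back-of-envelope illustration of the pixel budget trade-off (the numbers below only roughly match a Lytro Illum and are assumptions for illustration):

```python
# A plenoptic sensor splits one fixed pixel budget between angular and
# spatial resolution: more pixels behind each microlens means more views
# but fewer microlenses, i.e. lower spatial resolution per view.
sensor_pixels = 40_000_000  # ~40 Mray sensor (assumption)
angular = 14                # pixels per microlens side (assumption)

spatial = sensor_pixels // (angular * angular)  # microlenses = pixels per view
print(spatial)  # ~0.2 MP per subaperture view from a 40 MP sensor
```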

where to get (cheap(er)?) light field cameras?

PIPELINE
%%%%%%%%
especially from user perspective (industry): calibration, demosaicing, depth estimation, matting, temporal consistency. treat as one? treat separately?


OPEN QUESTIONS
%%%%%%%%%%%%%%
beyond 3D reconstruction: saliency detection, segmentation, material analysis, BRDF estimation, intrinsic images, scene understanding => using light fields instead of 2D images for any CV problem? where is it really useful? what are the problems ONLY light fields can solve... or at least where do they provide sufficient advantages?
again: (ground truth) data is important!

DEFINITION of light fields
%%%%%%%%%%%%%%%%%%%%%%%%%%
surface light fields, multi camera, plenoptic, random collection of rays. do we need a broader definition of light fields?

COMPRESSION
%%%%%%%%%%%
currently JPEG wants to standardize the way plenoptic (or in general "rich") data can be stored: https://jpeg.org/jpegpleno/index.html
what about compression? lossy vs lossless, should we as a community be involved? what about the non lambertian part?
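A toy sketch of the lossy-vs-lossless trade-off on a single synthetic view (generic zlib, nothing JPEG-Pleno-specific; all numbers are illustrative assumptions):

```python
import zlib
import numpy as np

# Quantizing a view to fewer gray levels lowers its entropy, so even a
# generic lossless coder (zlib) produces much smaller output -- at the
# price of a bounded per-pixel error.
rng = np.random.default_rng(0)
view = (rng.random((64, 64)) * 255).astype(np.uint8)  # synthetic view

lossless = zlib.compress(view.tobytes())

step = 16                              # quantization step (assumption)
quantized = (view // step) * step      # lossy: 256 -> 16 gray levels
lossy = zlib.compress(quantized.tobytes())

max_err = int(np.abs(view.astype(int) - quantized.astype(int)).max())
print(len(lossless), len(lossy), max_err)  # lossy is smaller; error <= step - 1
```

For light fields the interesting part is the redundancy *across* views (and the non-Lambertian residual that breaks it), which per-view coding like this does not exploit.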

DISPLAYS
%%%%%%%%
there are light field displays ;)


Jose Gaspar

Sep 6, 2017, 10:16:42 AM
to Light Field Vision
I would like to give my point of view on a couple of the subjects raised by Dierk Johannsen.

-- "CAMERAS - where to get (cheap(er)?) light field cameras?"

The way I see cameras getting cheap is if a huge market for "3D TV without glasses" really takes off. Being able to display 3D at home will first push content producers to buy professional lightfield cameras. Then, as stereoscopic displays make their way into computers, it is likely that people will consider buying consumer lightfield cameras to create stereoscopic videos and put them online.

-- "OPEN QUESTIONS - using light field instead of 2D images for any CV problem?"

Compared to binocular or trinocular stereo, lightfield imaging brings greater complexity to the projection model (instead of 2 or 3 viewpoints, all of a sudden one can be dealing with more than 100). Software libraries will have to handle this extra complexity. Note, however, that the extra complexity can in some cases be trivialized, e.g. using epipolar plane images, and, when least expected, the complexity found in traditional feature-correspondence problems may give way to easier algorithms. Maybe some of the new science will appear in how to develop software for lightfield cameras.
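The EPI trivialization can be sketched as follows: for a Lambertian scene, a point traces a straight line across the epipolar plane image, so multi-view correspondence reduces to a 1D slope search. Everything below (shapes, the synthetic EPI, the brute-force search) is an illustrative assumption:

```python
import numpy as np

# Build a synthetic EPI: one scene row seen from U views, each view
# shifted by a disparity of true_disp pixels per view step.
rng = np.random.default_rng(1)
U, T = 9, 128
row = rng.random(T).astype(np.float32)
true_disp = 2  # synthetic ground-truth disparity
epi = np.stack([np.roll(row, (u - U // 2) * true_disp) for u in range(U)])

def estimate_disparity(epi, candidates):
    """Return the candidate disparity whose shear best aligns the EPI rows."""
    U = epi.shape[0]
    best, best_cost = None, np.inf
    for d in candidates:
        # Shear the EPI to undo the per-view shift implied by disparity d.
        sheared = np.stack([np.roll(epi[u], -(u - U // 2) * d)
                            for u in range(U)])
        cost = sheared.var(axis=0).mean()  # low variance = rows aligned
        if cost < best_cost:
            best, best_cost = d, cost
    return best

print(estimate_disparity(epi, range(-4, 5)))  # recovers 2 on this synthetic EPI
```

Instead of matching features between view pairs, one simple 1D search over slopes uses all views at once — an example of the "easier algorithms" mentioned above.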

Considering the CV point of view, one can find very interesting examples, e.g. seeing within "foggy" environments (S. Nayar), seeing behind metallic fences (R. Szeliski), etc. Work traditionally closer to Computer Graphics, such as image- or video-based rendering, will also evoke more and more ideas on plenoptic / lightfield imaging.

-- "OPEN QUESTIONS - BRDF estimation"

Estimating the BRDF is a traditional subject in CG. One finds BRDF datasets online for many materials; their acquisition has motivated many methodologies and publications. This is a subject where the current trend of fusing CV and CG may become even more visible. Radiometry studies for lightfield cameras will certainly be a topic of interest to both CV and CG researchers, especially once generalizations to non-Lambertian surfaces start to be considered (a number of research groups are already putting energy here). Good tools for local BRDF estimation or identification will play a role, e.g. in lightfield structure reconstruction.

-- "COMPRESSION - JPEG wants to standardize the way plenoptic (...) can be stored"

I must say I will welcome lightfield compression happening and, even better, getting e.g. OpenCV to provide free access libraries. Handling the memory space occupied by lightfield datasets is really cumbersome. Back in the 90s, when JPEG had yet to appear and become generally accepted, many researchers would say that lossy compression would trouble the algorithms. However, everyone found out pretty fast that no one had enough storage to handle uncompressed data (nor the bus bandwidth to load/save it), and much work started to be done on JPEG and, later, even on MPEG. Lightfield compression will surely be very welcome.

