Limitations on the size of a GraphicalModel

Sahand Yousefpour

Feb 16, 2017, 12:31:09 PM
to opengm
Hi,

I am trying to build a graphical model out of a point cloud. Since I do not know what problem sizes are typical for OpenGM, I would call it a rather small one (120,000 points). My question basically has two parts:
  • How much memory is normally needed during the construction of a GraphicalModel?
  • How many variables is a GraphicalModel typically designed to operate on?
I am asking the first question because, while constructing the GraphicalModel instance, the program used a considerable amount of memory; yet once construction finished, the resulting model (still the in-memory object, not one saved to disk) was quite small.

Thank you,

Jörg Kappes

Feb 18, 2017, 3:12:09 PM
to opengm
Hi Sahand,
in our benchmark paper we considered models with 2,356,620 variables, cf. http://hciweb2.iwr.uni-heidelberg.de/opengm/index.php?l0=benchmark&bench=ijcv2014 .
However, besides the number of variables, the number of labels, the structure of the model, and the type of functions also matter.

So without this additional information, I cannot answer your questions.

During the construction of a model, more memory can temporarily be needed, especially if you do not reserve space for factors, functions, and nodes in advance.
Furthermore, it matters whether you use native C++, Python, or MATLAB to build the model.
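
For example, a rough sketch (assuming the Model/Space typedefs from the usual OpenGM examples; the expected counts are placeholders, and the method names are as in current OpenGM releases, so please check the headers of your version):

    Model gm(Space(numberOfVariables, numberOfLabels));
    // pre-allocate so the internal containers do not repeatedly
    // reallocate and copy while the model grows
    gm.reserveFactors(expectedNumberOfFactors);
    gm.reserveFunctions<opengm::ExplicitFunction<double> >(expectedNumberOfFunctions);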

Best,
Joerg

Sahand Yousefpour

Feb 20, 2017, 9:31:33 AM
to opengm
Thank you very much for your answer, Joerg.

I managed to find the source of my problem: I was defining the shape of the functions incorrectly. Still, I would like to explain a bit more, in case it is interesting for someone on the forum.

I am trying to perform a multi-label segmentation on a point cloud using alpha-expansion. To do so, I first need to set up a graphical model from the given data and the smoothness terms of my energy function.
To get familiar with OpenGM, I started by constructing the first-order functions and factors from my data term. My first choice was to use the available ExplicitFunction type and compute the data cost of each label (my case: numberofLabels = 4) for every variable, roughly as in the sketch below.
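
For reference, the construction looks roughly like this (a minimal sketch; dataCost() is a placeholder for my data term, and the typedefs follow the usual OpenGM examples):

    #include <opengm/graphicalmodel/graphicalmodel.hxx>
    #include <opengm/graphicalmodel/space/simplediscretespace.hxx>
    #include <opengm/functions/explicit_function.hxx>
    #include <opengm/operations/adder.hxx>

    typedef opengm::SimpleDiscreteSpace<size_t, size_t> Space;
    typedef opengm::ExplicitFunction<double> Explicit;
    typedef opengm::GraphicalModel<double, opengm::Adder, Explicit, Space> Model;

    double dataCost(size_t variable, size_t label); // placeholder for my data term

    int main() {
       const size_t numberOfVariables = 120000;
       const size_t numberofLabels = 4;
       Model gm(Space(numberOfVariables, numberofLabels));

       for (size_t v = 0; v < numberOfVariables; ++v) {
          // the shape of a unary function is just (numberofLabels,);
          // getting this shape wrong was the source of my problem
          const size_t shape[] = {numberofLabels};
          Explicit f(shape, shape + 1);
          for (size_t l = 0; l < numberofLabels; ++l) {
             f(l) = dataCost(v, l);
          }
          const Model::FunctionIdentifier fid = gm.addFunction(f);
          const size_t vars[] = {v};
          gm.addFactor(fid, vars, vars + 1);
       }
       return 0;
    }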

In general, is ExplicitFunction the best choice for such a case? It stores values for all possible label combinations, even though some may never be used. Should I implement my own CustomFunction whenever I feel the need?
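
Concretely, the kind of custom function I have in mind would look roughly like this (only a sketch, derived from opengm::FunctionBase as described in the OpenGM documentation; I have not verified it against every OpenGM version):

    #include <opengm/functions/function_properties_base.hxx>

    // Potts-style pairwise smoothness term: stores only two values
    // instead of a full numberofLabels x numberofLabels table.
    class MyPairwiseFunction
    : public opengm::FunctionBase<MyPairwiseFunction, double, size_t, size_t> {
    public:
       MyPairwiseFunction(size_t numberOfLabels, double valueEqual, double valueNotEqual)
       :  numberOfLabels_(numberOfLabels),
          valueEqual_(valueEqual),
          valueNotEqual_(valueNotEqual) {}

       // evaluate at a label assignment (iterator over the two labels)
       template<class Iterator>
       double operator()(Iterator labels) const {
          return labels[0] == labels[1] ? valueEqual_ : valueNotEqual_;
       }
       size_t shape(const size_t) const { return numberOfLabels_; }
       size_t dimension() const { return 2; }
       size_t size() const { return numberOfLabels_ * numberOfLabels_; }

    private:
       size_t numberOfLabels_;
       double valueEqual_, valueNotEqual_;
    };

Such a type has to be registered in the model's function type list (e.g. via opengm::meta::TypeListGenerator). For this particular Potts-style case, OpenGM already ships opengm::PottsFunction in opengm/functions/potts.hxx, so a hand-written class would only be needed for costs it does not cover.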

  • The system I'm running my code on has 32 GB of RAM.
  • I am using the native C++ API.
  • The input data is a uniformly sampled point cloud, which normally carries information such as x-y-z coordinates, normal vectors, color, etc.
  • The number of variables is normally high (on the order of millions).
  • Neighborhood information is computed using a radius search on a tree structure (see the sketch after this list).
  • Currently, I assume fewer than 5 labels for the model.
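
For completeness, the neighborhood computation looks roughly like this (a sketch assuming PCL's kd-tree; the choice of library and the searchRadius value are illustrative only):

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/kdtree/kdtree_flann.h>
    #include <utility>
    #include <vector>

    // collects each neighboring pair (i, j) with i < j exactly once
    std::vector<std::pair<size_t, size_t> >
    buildEdges(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud, double searchRadius) {
       pcl::KdTreeFLANN<pcl::PointXYZ> tree;
       tree.setInputCloud(cloud);

       std::vector<std::pair<size_t, size_t> > edges;
       std::vector<int> indices;
       std::vector<float> sqrDistances;
       for (size_t i = 0; i < cloud->size(); ++i) {
          tree.radiusSearch(cloud->points[i], searchRadius, indices, sqrDistances);
          for (size_t k = 0; k < indices.size(); ++k) {
             const size_t j = static_cast<size_t>(indices[k]);
             if (i < j) edges.push_back(std::make_pair(i, j));
          }
       }
       return edges;
    }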