Hello,
I'm fairly new to probabilistic graphical models, and I've just started looking into loopy belief propagation on some small graphs with the aid of OpenGM. Thanks for this great piece of work!
This might be quite a stupid question, but I really need some help pointing me in the right direction. After running inference, I queried the marginal probabilities but had no idea how to interpret the output. Should the outputs be probabilities?
I built up a graph in Python and called OpenGM through the Python wrapper. The graph has numVar vertices, and each vertex can take one of two labels.
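For context, here is roughly how I set up the model (a minimal sketch; the unary and pairwise values below are just placeholders, not my real potentials):

import numpy as np
import opengm

numVar = 7
numLabels = 2
gm = opengm.graphicalModel([numLabels] * numVar)  # operator defaults to 'adder'

# placeholder unary factors, one per variable
for v in range(numVar):
    fid = gm.addFunction(np.array([0.5, 0.5]))
    gm.addFactor(fid, [v])

# a placeholder pairwise factor between variables 0 and 1
pid = gm.addFunction(np.ones((numLabels, numLabels)))
gm.addFactor(pid, [0, 1])

I then performed LBP using the following code: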
inferBipartite = opengm.inference.BeliefPropagation(gm, parameter=opengm.InfParam(damping=0.01))
inferBipartite.infer()
Then I queried the marginals using the following call (where numVar=7 is the total number of vertices in the graph):
inferBipartite.marginals(range(numVar))
I got the following numpy array:
Final Marginal:
[[ 0.          0.        ]
 [ 0.82664797  0.        ]
 [ 0.          1.35244174]
 [ 0.          0.        ]
 [ 0.          0.40009358]
 [ 0.          0.33051176]
 [ 0.73773787  0.        ]]
I'm not sure how to interpret these results. Since the third row contains a number larger than 1, they don't look like probabilities; moreover, why does at least one of the two entries in every row equal zero?
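For what it's worth, I tried naively renormalizing each row to sum to one (a quick sanity check, continuing from the code above), but the all-zero rows make that ill-defined:

marg = inferBipartite.marginals(range(numVar))
row_sums = marg.sum(axis=1, keepdims=True)  # some rows sum to 0
probs = marg / row_sums                     # division by zero / NaNs for those rows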
I think I understand the basics of message passing and belief propagation, but I'm clearly missing some fundamental piece that bridges my understanding of the theory with OpenGM's output. Any comments or pointers would be highly appreciated!
Again, many thanks for this great library!