Understanding belief propagation


Chengi Liu

Feb 20, 2014, 3:44:11 AM2/20/14
to graphl...@googlegroups.com, graph...@googlegroups.com, Danny Bickson
Hi,
  I am having a hard time understanding the belief propagation algorithm described here:
http://docs.graphlab.org/graphical_models.html
versus the Wikipedia description.
I would really appreciate your help in clarifying my understanding.
Let's say I have the following data:

0 0.5 0.5
1 0.8 0.2
2 0.1 0.9
3 0.3 0.7
4 0.2 0.8


0 1
0 2
0 3
0 4

and so on..
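For the sake of concreteness, here is how I picture this input in memory: a two-state prior vector per vertex, plus an edge list. The Python below is just my own sketch of the data, not GraphLab's actual input format:

import numpy as np

# My own sketch of the toy input above -- just the data it encodes,
# not GraphLab's actual input format.
priors = {
    0: np.array([0.5, 0.5]),   # vertex id -> prior over the two states
    1: np.array([0.8, 0.2]),
    2: np.array([0.1, 0.9]),
    3: np.array([0.3, 0.7]),
    4: np.array([0.2, 0.8]),
}
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]   # undirected edge list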

I just want to focus on the computation happening at vertex 0.
So, let's start working through the formula (the joint probability distribution) from the left.
The exp part means the following:

For all the edges connected to vertex 0, I grab the weights (defaulting to 1) and the smoothing parameter (defaulting to 2), multiply them, and sum over the edges. Is the indicator function just there to exclude self-edges? (Please confirm.)
So, in this example I should get:
exp(-(1*2 + 1*2 + 1*2 + 1*2)) = exp(-8)
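In code, the arithmetic I am doing amounts to this (assuming every edge keeps the default weight 1 and default smoothing 2):

import math

weight, smoothing = 1.0, 2.0            # the defaults mentioned above
neighbours_of_0 = [1, 2, 3, 4]
total = sum(weight * smoothing for _ in neighbours_of_0)
print(math.exp(-total))                 # exp(-8) ~= 0.000335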

Now this is multiplied by the prior vector? How?
Is it the belief computed from the vertices attached to vertex 0 (i.e., vertices 1, 2, 3, and 4)?
So, let's say vertices 1, 2, 3, and 4 each have the belief [0.1, 0.9] (for the sake of an example).

Then each of these is a probability vector? Since the formula says product, is it a dot product of these vectors?
That would mean I get a scalar k, so for vertex 0 I would get exp(-8)*k.
That doesn't make any sense; shouldn't the result be a vector?

Can someone please explain what I am missing (in plain, simple words if possible)?
Many thanks

Chengi Liu

Feb 21, 2014, 4:40:53 AM2/21/14
to graphl...@googlegroups.com, graph...@googlegroups.com, Danny Bickson
pinggg

Danny Bickson

Feb 21, 2014, 6:26:52 AM2/21/14
to graph...@googlegroups.com, GraphLab Users
Hi Chengi, 
There are two levels of complexity involved in understanding the belief propagation code in GraphLab. First, you need to make sure you have mastered the theory of the algorithm. Second, you need to understand the GraphLab code, which implements BP in a distributed way.

Your questions are mainly about the formulas, so I recommend first taking the Coursera graphical models course: https://www.coursera.org/course/pgm
(especially lecture 4).

The priors are vectors of probabilities, with one value for each state.
The product over incoming messages is element-wise (one multiplication per state), so the belief stays a vector rather than collapsing to a scalar; a dot product only shows up when a message is summed against the edge potential. The indicator function means there are no self-weights.
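As a rough sketch (plain Python, using the numbers from your example; this is not the actual GraphLab code), the belief update at vertex 0 looks like this:

import numpy as np

prior = np.array([0.5, 0.5])        # prior at vertex 0
# Incoming messages from neighbours 1..4; the question assumed each
# neighbour contributes [0.1, 0.9] for the sake of the example.
messages = [np.array([0.1, 0.9])] * 4

belief = prior.copy()
for m in messages:
    belief = belief * m             # element-wise: one entry per state
belief /= belief.sum()              # normalise back to a distribution

print(belief)                       # still a length-2 probability vector

Each incoming message reweights the two states independently, which is why the result stays a vector.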

Please ask any follow-up questions in our new forum: http://forum.graphlab.com
as we are deprecating this mailing list.

Best,

Danny Bickson
Co-Founder
GraphLab Inc.


