
Difference between Gaussian Mixture Model (GMM) and Naive Bayes (NB)


qon...@gmail.com

Mar 16, 2006, 7:16:59 AM
What exactly is the difference between a Gaussian Mixture Model and a
Naive Bayes network?

They both make the same independence assumption and assume that the
variables are Gaussian distributed (I use continuous nodes in the
Naive Bayes network). They have the same kind
of prior mechanism. But the performance differs greatly.

I'm a bit confused, I hope someone is able to clarify the differences.
Thanks for the effort!

Greg Heath

Mar 16, 2006, 11:06:37 AM

In general, the two concepts are not related.

The Gaussian Mixture Model approximates the probability
density with a sum of Gaussians. Each Gaussian is, in
general, characterized by a full covariance matrix.
The variables are independent if, and only if, the matrix is
diagonal.

The Naive Bayes Assumption assumes that the variables
are independent. Therefore, the probability density can be
characterized by a product of univariate densities.
If the univariate densities are Gaussian, the probability density is
characterized by a Gaussian Mixture Model with a diagonal
covariance matrix.
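
The factorization above can be checked numerically. The following is a minimal sketch (not from the thread; the means, standard deviations, and test point are arbitrary illustrative values): the product of univariate Gaussian densities equals the density of a single multivariate Gaussian with a diagonal covariance matrix.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])    # per-dimension means (illustrative)
sigma = np.array([0.5, 1.5, 2.0])  # per-dimension standard deviations
x = rng.normal(size=3)             # an arbitrary test point

# Naive Bayes style: product of univariate Gaussian densities
p_product = np.prod(norm.pdf(x, loc=mu, scale=sigma))

# Single multivariate Gaussian with diagonal covariance
p_diag = multivariate_normal.pdf(x, mean=mu, cov=np.diag(sigma ** 2))

print(p_product, p_diag)  # the two values agree
```

The two numbers coincide because the diagonal covariance makes the multivariate exponent and normalizer factor into per-dimension terms.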


Hope this helps.

Greg


Greg Heath

Mar 16, 2006, 11:19:17 AM

qon...@gmail.com wrote:
> What is exactly the difference between a Gaussian Mixture Model and a
> Naive Bayes Network?

Bayesian Belief Networks (aka Bayesian Nets) and Neural
Networks are two completely different animals. It is not
clear whether this is contributing to your confusion.

Hope this helps.

Greg

qon...@gmail.com

Mar 16, 2006, 6:54:42 PM
Hey Greg,

Thanks for your help, your answer is very clear. I did, however, forget
to mention that the implementation I use for the GMM uses a diagonal
covariance matrix; that's why I mentioned the independence assumption.

The last thing I would like to ask you is this:

Did I interpret your last sentence correctly in concluding that Naive
Bayes with univariate Gaussian nodes should give the same results as a
GMM with a diagonal covariance matrix?

Thanks again for your time and effort. I appreciate it a lot.

ic3m...@gmail.com

Dec 7, 2013, 8:53:48 AM
Hi Greg,
I stumbled upon your post and it is very clarifying. However, I still have a question: Gaussian Naive Bayes computes the probability as a product of Gaussians, while a mixture of Gaussians computes it as a sum, over clusters, of each mixing coefficient multiplied by the corresponding Gaussian. How, then, can a sum give results similar to a product?
Thank you.
Regards

dsps...@gmail.com

Nov 24, 2014, 4:57:24 AM
It's the total probability of a data point that is the sum of multivariate Gaussians, each corresponding to a cluster.
But the likelihood, i.e., P(x|c), will still be a product for both Naive Bayes and a diagonal-covariance-based GMM.
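
The distinction can be made concrete with a small sketch (not from the thread; the weights, means, and standard deviations below are arbitrary illustrative values): for a diagonal-covariance GMM, the density of x is a SUM over components, but each component's density is still a PRODUCT over dimensions, exactly like the per-class likelihood P(x|c) in Gaussian Naive Bayes.

```python
import numpy as np

def univariate_pdf(x, mu, sigma):
    """Density of a univariate Gaussian, vectorized over arrays."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Two components/classes in 2-D (illustrative parameters)
weights = np.array([0.3, 0.7])              # mixing weights / class priors
mus = np.array([[0.0, 1.0], [2.0, -1.0]])   # per-component means
sigmas = np.array([[1.0, 0.5], [1.5, 1.0]]) # per-component std deviations

x = np.array([0.5, 0.0])  # a test point

# P(x|c): product across dimensions -- the Naive Bayes likelihood,
# which is also each diagonal-covariance GMM component's density
per_class = np.prod(univariate_pdf(x, mus, sigmas), axis=1)

# P(x): weighted sum across components -- the GMM total density
total = np.sum(weights * per_class)

print(per_class, total)
```

So the product and the sum live at different levels: the product builds each component's (or class's) density, and the sum then mixes those components.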