They both share the same independence assumption and the assumption that
the variables are Gaussian distributed (I use continuous nodes in the
Naive Bayes network). They also have the same kind
of prior mechanism. Yet the performance differs greatly.
I'm a bit confused, I hope someone is able to clarify the differences.
Thanks for the effort!
In general, the two concepts are not related.
The Gaussian Mixture Model approximates the probability
density with a sum of Gaussians. Each Gaussian is, in
general, characterized by a full covariance matrix.
The variables are independent if, and only if, the matrix is
diagonal.
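As a quick numerical illustration of that "if and only if" (a minimal sketch using the standard bivariate normal density; the evaluation points and correlation values are arbitrary), the joint density factorizes into a product of the marginals exactly when the correlation, i.e. the off-diagonal covariance term, is zero:

```python
import math

def bivariate_gaussian_pdf(x, y, rho):
    # Standard bivariate normal (zero means, unit variances, correlation rho)
    norm = 1.0 / (2 * math.pi * math.sqrt(1 - rho ** 2))
    quad = (x ** 2 - 2 * rho * x * y + y ** 2) / (1 - rho ** 2)
    return norm * math.exp(-0.5 * quad)

def std_normal_pdf(x):
    # Univariate standard normal density
    return math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

x, y = 0.8, -0.5
product = std_normal_pdf(x) * std_normal_pdf(y)

# rho = 0: diagonal covariance, so the joint equals the product of marginals
assert abs(bivariate_gaussian_pdf(x, y, 0.0) - product) < 1e-12

# rho = 0.7: non-diagonal covariance, the joint no longer factorizes
assert abs(bivariate_gaussian_pdf(x, y, 0.7) - product) > 1e-3
```

With rho = 0 the joint equals the product of the marginals; with a nonzero rho it does not.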
The Naive Bayes assumption is that the variables
are independent. Therefore, the probability density can be
characterized by a product of univariate densities.
If the univariate densities are Gaussian, the probability is
characterized by a Gaussian Mixture Model with a diagonal
covariance matrix.
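For a single component, this can be checked numerically (a small sketch; the means, variances, and evaluation point below are arbitrary): the d-dimensional Gaussian density with diagonal covariance is exactly the product of the univariate densities.

```python
import math

def univariate_gaussian_pdf(x, mean, var):
    # N(x; mean, var) for a single variable
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def diagonal_gaussian_pdf(x, means, variances):
    # d-dimensional Gaussian with covariance diag(variances),
    # evaluated directly from the multivariate formula
    d = len(x)
    det = math.prod(variances)
    quad = sum((xi - mi) ** 2 / vi
               for xi, mi, vi in zip(x, means, variances))
    return math.exp(-0.5 * quad) / math.sqrt((2 * math.pi) ** d * det)

x = [0.3, -1.2, 2.0]
means = [0.0, -1.0, 1.5]
variances = [1.0, 0.5, 2.0]

# Naive Bayes style: product of independent univariate densities
product = math.prod(univariate_gaussian_pdf(xi, mi, vi)
                    for xi, mi, vi in zip(x, means, variances))
# Multivariate density with diagonal covariance
direct = diagonal_gaussian_pdf(x, means, variances)

assert abs(product - direct) < 1e-12
```

The same holds per component of a mixture, so a diagonal-covariance GMM is a mixture of such products.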
Hope this helps.
Greg
Bayesian Belief Networks (aka Bayesian Nets) and Neural
Networks are two completely different animals. It is not
clear whether this is contributing to your confusion.
Hope this helps.
Greg
Thanks for your help; your answer is very clear. I did, however, forget
to mention that the implementation I use for the GMM uses a diagonal
covariance matrix, which is why I mentioned that independence assumption.
The last thing I would like to ask you is this:
Did I interpret the last sentence correctly when I come to the
conclusion that Naive Bayes with univariate Gaussian nodes should give
the same results as a GMM with a diagonal covariance matrix?
Thanks again for your time and effort. I appreciate it a lot.