MultivariateGaussian positive definite restriction


brandon willard

Feb 4, 2013, 11:57:21
to cognitiv...@googlegroups.com
Any chance the restriction to positive definite covariance matrices in MultivariateGaussian could be removed to allow positive semi-definite covariances?

Justin Basilico

Feb 4, 2013, 23:57:51
to cognitiv...@googlegroups.com
Looks like this gets enforced in our wrapper for MTJ's Cholesky decomposition. Would it be hard to just add a small diagonal term to your covariance matrix to ensure it is positive definite?

Kevin, do you know whether the MTJ Cholesky actually fails to produce a decomposition when the matrix is not strictly positive definite, or is the restriction just an extra safety check? We may want to look at http://en.wikipedia.org/wiki/Cholesky_decomposition#Avoiding_taking_square_roots as an alternative.

Thanks,
Justin

On Mon, Feb 4, 2013 at 8:57 AM, brandon willard <brandon...@gmail.com> wrote:
Any chance the restriction to positive definite covariance matrices in MultivariateGaussian could be removed to allow positive semi-definite covariances?


Dixon, Kevin R

Feb 5, 2013, 00:15:15
to <cognitive-foundry@googlegroups.com>, cognitiv...@googlegroups.com
Yeah, we have gone back and forth on this over the years. I would recommend just adding a small diagonal term to the covariance.
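A minimal sketch of that diagonal "jitter" workaround on a plain double[][] covariance (the method name and the epsilon value are illustrative, not Foundry API):

// Returns a copy of the covariance with a small constant added to the
// diagonal, so a positive semi-definite matrix becomes strictly positive
// definite and a Cholesky-based MultivariateGaussian constructor succeeds.
static double[][] addDiagonalJitter(double[][] covariance, double epsilon) {
    int n = covariance.length;
    double[][] jittered = new double[n][n];
    for (int i = 0; i < n; i++) {
        jittered[i] = covariance[i].clone();
        jittered[i][i] += epsilon;
    }
    return jittered;
}

// Usage (epsilon chosen relative to the scale of the data, e.g. 1e-8):
// double[][] fullRankCov = addDiagonalJitter(psdCov, 1e-8);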

The Cholesky form without square roots doesn't extend to PSD matrices; it just eliminates the square roots (I think). Plus, we tend to use LAPACK for decompositions, and I'm reluctant to special-case this one.

What do you think?

--
Kevin R. Dixon
Sandia National Laboratories
Critical Systems Security (05621)
MS0672, TA-I: 729/134
tel: (505) 284-5615
fax: (505) 284-3258


brandon willard

Feb 5, 2013, 00:30:38
to cognitiv...@googlegroups.com
Well, SVD works for all cases; are you avoiding that, though? I don't know about the LAPACK available through Java, but I know it can perform a pivoted Cholesky decomposition in some versions.

Dixon, Kevin R

Feb 5, 2013, 08:36:22
to <cognitive-foundry@googlegroups.com>, cognitiv...@googlegroups.com
True. But the problem boils down to this: we need the inverse and determinant of the covariance to evaluate the PDF, and we need some square-root decomposition (Cholesky or otherwise) to generate correlated samples.
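(For context, a minimal sketch of the sampling side, assuming some square-root factor L with C = L * L^T is already available; this is illustrative, not the Foundry's sampler:)

import java.util.Random;

// Draws one sample from N(mean, L * L^T), where L is any square-root factor
// of the covariance, e.g. a lower-triangular Cholesky factor.
static double[] sampleGaussian(double[] mean, double[][] L, Random rng) {
    int n = mean.length;
    double[] z = new double[n];
    for (int i = 0; i < n; i++) {
        z[i] = rng.nextGaussian();   // independent standard normals
    }
    double[] x = mean.clone();
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            x[i] += L[i][j] * z[j];  // x = mean + L * z has covariance L * L^T
        }
    }
    return x;
}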

I think when Baz and I used to debate this, we couldn't come up with a solution that produced intuitive behavior in both cases.

If you know of a package that handles PSD covariance for evaluation and generation, I would be happy to shamelessly steal it. :-)

--
Kevin R. Dixon
Sandia National Laboratories
Critical Systems Security (05621)
MS0672, TA-I: 729/134
tel: (505) 284-5615
fax: (505) 284-3258


brandon willard

Feb 5, 2013, 13:39:56
to cognitiv...@googlegroups.com
The SVD offers all of those things (determinant, inverse, "square root") for both semi- and strictly positive definite matrices, and, if you keep track of it in the distribution object, you can use it for improved Kalman filtering.

It's usually not as efficient as a simple Cholesky decomposition when the covariance is strictly positive definite, but the big-picture use of the SVD can balance that out.
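A sketch of how the SVD supplies those pieces for a symmetric PSD covariance, here using Apache Commons Math for the decomposition rather than the Foundry's MTJ wrapper; the zero-singular-value cutoff is an assumed tolerance:

import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.MatrixUtils;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.SingularValueDecomposition;

// For a symmetric PSD covariance C = U * diag(s) * U^T, the SVD yields a
// pseudo-determinant, pseudo-inverse, and square-root factor in one pass.
static void svdPieces(double[][] covariance) {
    RealMatrix c = new Array2DRowRealMatrix(covariance);
    SingularValueDecomposition svd = new SingularValueDecomposition(c);
    RealMatrix u = svd.getU();
    double[] s = svd.getSingularValues();
    double tol = 1e-12;  // assumed cutoff for treating a singular value as zero

    int rank = 0;
    double pseudoDet = 1.0;                 // product of the nonzero singular values
    double[] invS = new double[s.length];
    double[] sqrtS = new double[s.length];
    for (int i = 0; i < s.length; i++) {
        if (s[i] > tol) {
            rank++;
            pseudoDet *= s[i];
            invS[i] = 1.0 / s[i];
            sqrtS[i] = Math.sqrt(s[i]);
        }
    }

    // Pseudo-inverse: U * diag(1/s_i for s_i > 0) * U^T (used in place of C^-1).
    RealMatrix pseudoInverse =
        u.multiply(MatrixUtils.createRealDiagonalMatrix(invS)).multiply(u.transpose());

    // Square-root factor for sampling: x = mean + (U * diag(sqrt(s_i))) * z.
    RealMatrix sqrtFactor = u.multiply(MatrixUtils.createRealDiagonalMatrix(sqrtS));

    System.out.println("rank = " + rank + ", pseudo-determinant = " + pseudoDet);
}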

brandon willard

Feb 5, 2013, 13:48:45
to cognitiv...@googlegroups.com

Dixon, Kevin R

Feb 6, 2013, 00:22:45
to cognitiv...@googlegroups.com
Hi Brandon,

Thanks for the pointers... I'll look at incorporating the pivoted Cholesky into the Foundry.

But I'm still unclear how I would evaluate the PDF of a Gaussian with a PSD covariance matrix.  What would I use for the inverse of the determinant in the normalizing constant?

In particular, I'm unsure how to answer these questions if C has a null space (det(C)=0):

1) What is the value of p(x|u,C) when x is in the range of PSD C?

2) What is the value of p(x|u,C) when x is in the null space of C? Maybe I'm rusty, but L'Hôpital's rule doesn't give me a sensible answer here :-)

Also, we tend to use the Foundry for some large-scale applications where efficiency is very important, so SVD would be a nosebleed for a very special case. We tend to just add small values to the diagonal of C to kludge it into being full rank and numerically stable....

What do you think?

Thanks again!
Dr. Kevin R. Dixon
Sandia National Laboratories
Department Manager, Critical Systems Security (05621)
MS0622, TA-I: 729/134


brandon willard

Feb 6, 2013, 15:09:45
to cognitiv...@googlegroups.com
Yeah, the density isn't defined everywhere when C is singular, but you generally work in the subspace where it is defined (with the SVD results, the span of the singular vectors with nonzero singular values). [The Wikipedia article on the multivariate normal covers this under "Degenerate case".]
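For reference, a sketch of that degenerate-case density as code (the Wikipedia formulation: pseudo-determinant and pseudo-inverse, defined on the support mean + range(C)); the rank, pseudo-determinant, and pseudo-inverse inputs are the SVD quantities discussed above, and the tolerance and the convention of returning 0 off the support are assumptions:

// Density of a degenerate multivariate normal: (2*pi)^(-rank/2) * pdet(C)^(-1/2)
// * exp(-0.5 * d^T * C^+ * d) for d = x - mean in range(C); strictly speaking
// the density is undefined off that subspace, so 0 is returned there by convention.
static double degenerateDensity(double[] x, double[] mean, int rank,
                                double pseudoDet, double[][] pseudoInv,
                                double[][] covariance) {
    int n = mean.length;
    double[] d = new double[n];
    for (int i = 0; i < n; i++) {
        d[i] = x[i] - mean[i];
    }

    // Support check: C * C^+ projects onto range(C), so C * (C^+ * d) must equal d.
    double[] pinvD = multiply(pseudoInv, d);
    double[] projected = multiply(covariance, pinvD);
    for (int i = 0; i < n; i++) {
        if (Math.abs(projected[i] - d[i]) > 1e-8) {
            return 0.0;  // x has a component in the null space of C
        }
    }

    // Mahalanobis term with the pseudo-inverse: d^T * C^+ * d.
    double maha = 0.0;
    for (int i = 0; i < n; i++) {
        maha += d[i] * pinvD[i];
    }

    // Normalizing constant uses the rank and pseudo-determinant, not n and det(C).
    return Math.pow(2.0 * Math.PI, -0.5 * rank) / Math.sqrt(pseudoDet)
            * Math.exp(-0.5 * maha);
}

// Plain matrix-vector product helper (illustrative only).
static double[] multiply(double[][] m, double[] v) {
    double[] out = new double[m.length];
    for (int i = 0; i < m.length; i++) {
        for (int j = 0; j < v.length; j++) {
            out[i] += m[i][j] * v[j];
        }
    }
    return out;
}

This would give concrete answers to the two questions above: a finite value when x lies in the range of C, and no density (zero here, by convention) in its null space.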
 
However, at this point the implementation probably does get more complicated than desired, so I can understand avoiding it.  

FYI: MultivariateGaussian only fails for semi-definite covariances when sampling, not when evaluating the density. It appears that denseLU is used to find the logDeterminant.