lnProbability returning values bigger than 0

Mircea Pricop

May 4, 2012, 5:59:08 AM
to jahmm-...@googlegroups.com
Hello, first of all thank you so much for putting out such a great and complete library. I am using it to great success for my B.Sc. thesis, and the performance and ease of use are amazing. I do have one small question, though. When using lnProbability to get the probability of a sequence being generated by a previously trained HMM, I get very large values (in the hundreds). I expected all values to be negative. Also, probability() sometimes returns very large values as well.

Could this happen because of scaling? If so, can the scaling factor be accessed somehow? I know most people can just treat the results as "scores" for comparison, but I am interested in the absolute probability value.

PS. I am training a left-right MultiGaussian HMM.
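For reference, this is roughly what the calls look like on my side (a minimal sketch; hmm stands for my previously trained Hmm<ObservationVector> and the observation values are made up):

import java.util.Arrays;
import java.util.List;
import be.ac.ulg.montefiore.run.jahmm.ObservationVector;

// A short test sequence (values invented for this sketch).
List<ObservationVector> seq = Arrays.asList(
    new ObservationVector(new double[] { 0.12, 0.40 }),
    new ObservationVector(new double[] { 0.15, 0.43 }),
    new ObservationVector(new double[] { 0.21, 0.47 }));

double lnP = hmm.lnProbability(seq); // sometimes in the hundreds, i.e. > 0
double p   = hmm.probability(seq);   // sometimes also very large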

Jean-Marc François

May 5, 2012, 4:32:41 PM
to jahmm-...@googlegroups.com
Hi Mircea,

Happy to hear you've found the library useful.

I agree with you concerning the value of Hmm#lnProbability: it should of course be negative; you are most probably running into numerical instabilities.
The scaling factors can theoretically be accessed (ForwardBackwardScaledCalculator#ctFactors, i.e. the c_t scaling coefficients applied to the alpha and beta variables described in Rabiner & Juang's classical papers), but I doubt you'll find them very useful.

If I were you, I'd check that Hmm#probability also returns an odd value when Hmm#lnProbability does (if it doesn't, there could be a bug). If both are off, the HMM you are using could be invalid, probably because the learning procedure did not converge; you could try printing it out if it's not too complex.
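You could do the cross-check along these lines (a sketch; hmm and seq stand for your trained model and one of your observation sequences):

// If the model is numerically sane, ln(probability) and
// lnProbability should agree up to floating-point noise.
double p   = hmm.probability(seq);
double lnP = hmm.lnProbability(seq);
System.out.println("probability     = " + p);
System.out.println("ln(probability) = " + Math.log(p));
System.out.println("lnProbability   = " + lnP);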

Hth,
JM


Mircea Pricop

May 6, 2012, 11:10:59 AM
to jahmm-...@googlegroups.com
Hi JM,

Thanks a lot for answering. I was stupidly initializing the HMM with completely invalid Pi's that didn't add up to 1. I would never have found it without your hint.
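For the record, a valid initialization for a left-right model looks something like this sketch (3 states and dimension 2 are placeholders, and the transition values are just an example):

import be.ac.ulg.montefiore.run.jahmm.Hmm;
import be.ac.ulg.montefiore.run.jahmm.ObservationVector;
import be.ac.ulg.montefiore.run.jahmm.OpdfMultiGaussianFactory;

Hmm<ObservationVector> hmm =
    new Hmm<ObservationVector>(3, new OpdfMultiGaussianFactory(2));

// A left-right chain always starts in state 0; in any case
// the Pi's must sum up to 1.
hmm.setPi(0, 1.0);
hmm.setPi(1, 0.0);
hmm.setPi(2, 0.0);

// Left-right transitions: a state either stays or moves forward.
hmm.setAij(0, 0, 0.5); hmm.setAij(0, 1, 0.5); hmm.setAij(0, 2, 0.0);
hmm.setAij(1, 0, 0.0); hmm.setAij(1, 1, 0.5); hmm.setAij(1, 2, 0.5);
hmm.setAij(2, 0, 0.0); hmm.setAij(2, 1, 0.0); hmm.setAij(2, 2, 1.0);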

Sadly, all my 56-hour evaluation runs are now invalidated :))

Thank you,
Mircea Pricop.

shuyi...@googlemail.com

Mar 18, 2014, 9:47:47 AM
to jahmm-...@googlegroups.com

Hello,

I did not want to start a new thread because I have the same 'problem'.
I hope someone will notice.

Before I start with my problem, I want to thank the author of jahmm, Jean-Marc.
Thank you for the great library. I am also using it in my B.Sc. thesis. It is easy to use and has good documentation. The examples were also very helpful. I really like it!

But I have one problem and hope someone can give me a hint or point me to an approach.
It is the same situation Mircea had. I am training a left-right MultiGaussian HMM. When I use lnProbability() to get the probability of an incoming observation sequence, the resulting value is very high. Using probability() also returns enormously high values (~100k). So both methods give me strange values.
The HMM was trained with around 1300 observation sequences and has 3 states. The observations are of type ObservationVector and have a dimensionality of 2.
Maybe I should mention that the sequences are of different lengths (3-10).
Could this be a mistake?

As Mircea did, I checked the Pi values. They seem fine, all summing up to 1 (for both the initial and the trained HMM).
I also checked the state transition values in A, and they sum up to 1 as well.
Maybe my HMM is still invalid, as JM suggested to Mircea? JM also mentioned that 'the learning procedure did not converge'.
What exactly does this mean? Do I need more training data?
Are there possibly other causes for this behaviour? I do not think the library caused this. I tried a simpler example from a book, with an HMM using OpdfDiscrete values, and the results are fine.
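The only thing I could think of for watching the convergence is iterating Baum-Welch by hand and printing the average lnProbability of the training set after each step (a sketch; initialHmm and sequences stand for my initial model and my List<List<ObservationVector>> of training data, and I am assuming BaumWelchScaledLearner#iterate is meant to be used this way):

import java.util.List;
import be.ac.ulg.montefiore.run.jahmm.Hmm;
import be.ac.ulg.montefiore.run.jahmm.ObservationVector;
import be.ac.ulg.montefiore.run.jahmm.learn.BaumWelchScaledLearner;

// The mean log-likelihood of the training sequences should
// increase with each iteration and eventually level off.
BaumWelchScaledLearner bw = new BaumWelchScaledLearner();
Hmm<ObservationVector> current = initialHmm;
for (int i = 0; i < 20; i++) {
    current = bw.iterate(current, sequences);
    double sumLnP = 0.;
    for (List<ObservationVector> seq : sequences)
        sumLnP += current.lnProbability(seq);
    System.out.println("iteration " + (i + 1)
        + ": mean lnP = " + sumLnP / sequences.size());
}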

I hope my English is OK and you understand what I mean.

Regards,
Shuyi Weng

shuyi...@googlemail.com

Jan 11, 2015, 9:47:28 AM
to jahmm-...@googlegroups.com
Hey,

I will answer my last question myself. The solution to my problem was quite simple :)
In short, I was writing an application for character recognition. The input values were normalized between 0 and 1. The problem lies exactly there.
When I tried a normalization between 0 and 10, all probabilities turned out just fine! So I guess it was a numerical problem. (My original normalized values were already very small.)
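If I understand it correctly, the deeper reason is that for continuous observations probability() returns a density, not a probability, and a Gaussian density can be much larger than 1 when the variance is small, as this little check shows (plain Java, nothing Jahmm-specific; the sigma value is just an illustration):

// A Gaussian *density* may exceed 1: at the mean it equals
// 1 / (sigma * sqrt(2 * pi)), which is > 1 once sigma < ~0.4.
double sigma = 0.05; // a typical spread after normalizing to [0, 1]
double atMean = 1.0 / (sigma * Math.sqrt(2 * Math.PI));
System.out.println(atMean); // prints ~7.98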

So I hope this answer may help some other users too :)

Regards,
Shuyi Weng

Vinh Nguyen

May 4, 2016, 5:13:21 PM
to Jahmm HMM library
Hi JM,

I ran into the same problems, with lnProbability > 0 and Hmm.probability >> 1 (very, very large). I have checked the code of ForwardBackwardCalculator for the method Hmm.probability, and it looks good.
Then I checked each Opdf of my learned HMM. One of my Opdfs gives Opdf.probability(...) > 1. This does not seem right. Do you think it may be the problem? Also, could you tell me how to check whether the HMM converged or not?
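This is roughly how I looked at the Opdf, in case it matters (a sketch; hmm stands for my learned model, and I am assuming mean() and covariance() are the right accessors on OpdfMultiGaussian):

import java.util.Arrays;
import be.ac.ulg.montefiore.run.jahmm.OpdfMultiGaussian;

// Print the learned parameters of state 0's output distribution.
OpdfMultiGaussian opdf = (OpdfMultiGaussian) hmm.getOpdf(0);
System.out.println("mean       = " + Arrays.toString(opdf.mean()));
System.out.println("covariance = " + Arrays.deepToString(opdf.covariance()));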

Thanks,

V