Hello,
I did not want to start a new thread because I have the same 'problem'.
I hope someone notices it here.
Before I get to my problem, I want to thank the author of jahmm, Jean-Marc.
Thank you for the great library. I am also using it in my B.Sc. thesis. It is easy to use and well documented, and the examples were very helpful. I really like it!
But I have one problem and hope someone can give me a hint or suggest an approach.
It is the same situation Mircea had. I am training a left-right HMM with multi-Gaussian (OpdfMultiGaussian) output distributions. When I use lnProbability() to get the log-probability of an incoming observation sequence, the resulting value is very high, and probability() also returns enormously large values (around 100,000). So both methods give me strange results.
The HMM has 3 states and was trained on around 1300 observation sequences. The observations are of type ObservationVector with a dimensionality of 2.
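Here is a stripped-down sketch of what I am doing. The class name, the dummy data and the concrete transition values are invented for this post, not copied from my actual code; the left-right topology is enforced by zeroing the backward transitions:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import be.ac.ulg.montefiore.run.jahmm.Hmm;
import be.ac.ulg.montefiore.run.jahmm.ObservationVector;
import be.ac.ulg.montefiore.run.jahmm.OpdfMultiGaussianFactory;
import be.ac.ulg.montefiore.run.jahmm.learn.BaumWelchScaledLearner;

public class LeftRightTest {
    public static void main(String[] args) {
        // 3 states, 2-dimensional multi-Gaussian output distributions
        Hmm<ObservationVector> hmm =
            new Hmm<ObservationVector>(3, new OpdfMultiGaussianFactory(2));

        // left-right topology: start in state 0, no backward transitions
        hmm.setPi(0, 1.0); hmm.setPi(1, 0.0); hmm.setPi(2, 0.0);
        hmm.setAij(0, 0, 0.5); hmm.setAij(0, 1, 0.5); hmm.setAij(0, 2, 0.0);
        hmm.setAij(1, 0, 0.0); hmm.setAij(1, 1, 0.5); hmm.setAij(1, 2, 0.5);
        hmm.setAij(2, 0, 0.0); hmm.setAij(2, 1, 0.0); hmm.setAij(2, 2, 1.0);

        // ~1300 sequences in reality; two dummy ones of different length
        // here, just so the snippet compiles
        List<List<ObservationVector>> sequences =
            new ArrayList<List<ObservationVector>>();
        sequences.add(Arrays.asList(
            new ObservationVector(new double[] {0.1, 0.2}),
            new ObservationVector(new double[] {0.3, 0.1}),
            new ObservationVector(new double[] {0.5, 0.4})));
        sequences.add(Arrays.asList(
            new ObservationVector(new double[] {0.2, 0.2}),
            new ObservationVector(new double[] {0.4, 0.3}),
            new ObservationVector(new double[] {0.6, 0.5}),
            new ObservationVector(new double[] {0.7, 0.6})));

        Hmm<ObservationVector> trained =
            new BaumWelchScaledLearner().learn(hmm, sequences);

        // both of these come back surprisingly large for me
        List<ObservationVector> test = sequences.get(0);
        System.out.println("probability:   " + trained.probability(test));
        System.out.println("lnProbability: " + trained.lnProbability(test));
    }
}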
Maybe I should also mention that the sequences have different lengths (3 to 10 observations), as in the sketch above.
Could that be a problem?
As Mircea did, I checked the Pi values. They seem fine and sum to 1 in both the initial and the trained HMM.
I also checked the state transition matrix A; each of its rows sums to 1 as well.
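Concretely, I checked it like this (a small snippet continuing from the sketch above):

// Pi should sum to 1
double piSum = 0.0;
for (int i = 0; i < trained.nbStates(); i++)
    piSum += trained.getPi(i);
System.out.println("Pi sums to " + piSum);

// each row of the transition matrix A should sum to 1
for (int i = 0; i < trained.nbStates(); i++) {
    double rowSum = 0.0;
    for (int j = 0; j < trained.nbStates(); j++)
        rowSum += trained.getAij(i, j);
    System.out.println("row " + i + " of A sums to " + rowSum);
}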
Maybe my HMM is still invalid, as JM already suggested to Mircea? JM also mentioned that 'the learning procedure did not converge'.
What exactly does that mean? Do I need more training data?
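Would something like this be a sensible way to check convergence myself? (Again just a sketch continuing from the code above; the iteration count and the threshold are arbitrary values I made up.)

// run Baum-Welch one step at a time and watch the training-set
// log-likelihood; if it stops increasing, I would call that converged
BaumWelchScaledLearner bw = new BaumWelchScaledLearner();
Hmm<ObservationVector> current = hmm;
double previousLl = Double.NEGATIVE_INFINITY;
for (int it = 0; it < 50; it++) {
    current = bw.iterate(current, sequences);   // one Baum-Welch step
    double ll = 0.0;
    for (List<ObservationVector> seq : sequences)
        ll += current.lnProbability(seq);       // total log-likelihood
    System.out.println("iteration " + it + ": " + ll);
    if (Math.abs(ll - previousLl) < 1e-6)
        break;                                  // hardly improving any more
    previousLl = ll;
}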
Are there possibly other causes for this behaviour? I do not think the library itself is the cause: I tried a simpler example from a book, using an HMM with OpdfDiscrete output distributions, and its values look fine.
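That test looked roughly like this (reconstructed from memory; the Coin enum is a stand-in for the book's output alphabet, not the book's actual example):

import java.util.Arrays;
import java.util.List;

import be.ac.ulg.montefiore.run.jahmm.Hmm;
import be.ac.ulg.montefiore.run.jahmm.ObservationDiscrete;
import be.ac.ulg.montefiore.run.jahmm.OpdfDiscreteFactory;

public class DiscreteTest {
    enum Coin { HEADS, TAILS }

    public static void main(String[] args) {
        // 2-state HMM with discrete output distributions
        // (uniform default parameters from the factory)
        Hmm<ObservationDiscrete<Coin>> hmm =
            new Hmm<ObservationDiscrete<Coin>>(
                2, new OpdfDiscreteFactory<Coin>(Coin.values()));

        List<ObservationDiscrete<Coin>> seq = Arrays.asList(
            new ObservationDiscrete<Coin>(Coin.HEADS),
            new ObservationDiscrete<Coin>(Coin.TAILS),
            new ObservationDiscrete<Coin>(Coin.HEADS));

        // here probability() stays in [0, 1], as I would expect
        System.out.println(hmm.probability(seq));
        System.out.println(hmm.lnProbability(seq));
    }
}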
I hope my English is OK and that you understand what I mean.
Regards,
Shuyi Weng