Negative perplexity?

Dave Robinson

Jan 31, 2013, 9:13:20 AM
to gen...@googlegroups.com
I'm just getting my feet wet with the variational methods for LDA, so I apologize if this is an obvious question.

While I appreciate the concept in a philosophical sense, what does a negative perplexity imply for an LDA model? I get a very large negative value from LdaModel.bound(corpus=ModelCorpus). Looking at the Hoffman, Blei & Bach paper (Eq. 16) leads me to believe that this should be 'difficult' to observe.

Or is it the exponent from Eq. 16 that is being presented?
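
For reference, held-out perplexity is conventionally defined by exponentiating a negated per-word average log likelihood,

    \mathrm{perplexity}(\mathcal{D}) = \exp\!\left(-\frac{\sum_d \log p(\mathbf{w}_d)}{\sum_d N_d}\right),

where N_d is the token count of document d; since \log p(\mathbf{w}_d) is intractable for LDA, a variational lower bound is substituted in practice, which would make the quantity inside the exp a large negative number.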

Cheers,
=Dave


Radim Řehůřek

Feb 2, 2013, 6:39:38 AM
to gensim
Hi Dave,

On Jan 31, 3:13 pm, Dave Robinson <roguerug...@gmail.com> wrote:
> While I appreciate the concept in a philosophical sense, what does a
> negative perplexity imply for an LDA model? I get a very large negative
> value from LdaModel.bound(corpus=ModelCorpus). Looking at the Hoffman,
> Blei & Bach paper (Eq. 16) leads me to believe that this should be
> 'difficult' to observe.
>
> Or is it the exponent from Eq. 16 that is being presented?

Exactly; the probabilities are infinitesimal, so all computations happen in log space.
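
A minimal sketch of what that looks like in code (the toy corpus below is made up, just to have something to train on): bound() returns a log-space lower bound, and exponentiating its negated per-word average recovers a positive perplexity.

    import numpy as np
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel

    # Made-up toy corpus.
    texts = [["human", "computer", "interface"],
             ["graph", "trees", "computer"],
             ["graph", "minors", "trees"]]
    dictionary = Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]

    lda = LdaModel(corpus, id2word=dictionary, num_topics=2)

    log_bound = lda.bound(corpus)   # large negative number: a log likelihood bound
    num_tokens = sum(cnt for doc in corpus for _, cnt in doc)

    # Leave log space: the per-word perplexity estimate is positive.
    print(np.exp(-log_bound / num_tokens))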

Best,
Radim



Han Xu

Nov 4, 2013, 12:21:32 AM
to gen...@googlegroups.com
Hi Radim, 

Thank you for your kind answer. I'm just wondering: does this mean that the smaller the score given by bound(), the better the language model is likely to be?
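
For example, plugging two made-up bound values over a hypothetical 1,000-token test set into the same conversion:

    import numpy as np

    num_tokens = 1000                 # hypothetical test-set size
    for bound in (-6000.0, -7000.0):  # made-up values of bound()
        # the bound closer to zero yields the lower perplexity
        print(bound, np.exp(-bound / num_tokens))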

Thanks and Regards,
Han