Quick checks:
It looks like you are still passing in the my_bigrams variable, which presumably contains bigrams rather than trigrams?
Are you sure you want to be feeding NgramModel a list of ngrams? I believe it would be better to give it the word list and allow it to build the ngrams itself.
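For example, a minimal sketch assuming the old NLTK 2.x nltk.model.NgramModel API (the one the linked source refers to); my_words here is a placeholder for your own flat token list:

```python
from nltk.model import NgramModel
from nltk.probability import LidstoneProbDist

# my_words stands in for your own flat list of tokens,
# e.g. the output of nltk.word_tokenize on your corpus.
my_words = ['this', 'is', 'a', 'small', 'example', 'corpus', '.']

# Let the model build its own trigrams from the token list,
# with a simple add-gamma (Lidstone) smoothing estimator.
estimator = lambda fdist, bins: LidstoneProbDist(fdist, 0.2)
trigram_model = NgramModel(3, my_words, estimator=estimator)
```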
Generally:
How to smooth the model will depend heavily on the input data and on upstream choices such as feature selection, tokenisation, stemming and so on, so it is very difficult to give a specific answer to this question.
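One practical way to decide is empirical: hold out part of your data and compare perplexity under different estimators. A sketch, again assuming the NLTK 2.x API; the estimator choices and the train/test token lists are just placeholders:

```python
from nltk.model import NgramModel
from nltk.probability import LidstoneProbDist, WittenBellProbDist

def held_out_perplexity(train_words, test_words, estimator, n=3):
    # Build the n-gram model on the training tokens, score the held-out tokens.
    model = NgramModel(n, train_words, estimator=estimator)
    return model.perplexity(test_words)

lidstone = lambda fdist, bins: LidstoneProbDist(fdist, 0.2)
witten_bell = lambda fdist, bins: WittenBellProbDist(fdist, fdist.B() + 1)

# train_words / test_words are assumed to be your own token lists:
# print(held_out_perplexity(train_words, test_words, lidstone))
# print(held_out_perplexity(train_words, test_words, witten_bell))
```

Whichever estimator gives the lower held-out perplexity is the better fit for that particular data and preprocessing pipeline.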
What are your options when a mailing list can't help? You can probably work out an answer yourself by getting a deeper understanding of what is actually happening.
It is fairly easy to look at the source code that produces the perplexity score[0]. You'll see that it is just an expansion of the text's entropy score[1]. The docstring for NgramModel.entropy says it calculates "the average log probability of each word in the text", so changing the probability distribution should change the entropy value, and with it the perplexity. However, if you are already providing ready-made ngrams, perhaps there is not much work left for the model to do?
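To make the relationship concrete, a standalone sketch (not the library code itself): entropy is the average negative log2 probability the model assigns to each word given its context, and perplexity is just 2 raised to that entropy.

```python
from math import log

def entropy(word_probs):
    # word_probs: the model's probability for each word in the text, in order.
    # Entropy is the average negative log2 probability per word.
    return -sum(log(p, 2) for p in word_probs) / len(word_probs)

def perplexity(word_probs):
    # Perplexity is the entropy pushed back through the exponent.
    return 2 ** entropy(word_probs)

# A model that assigns probability 0.25 to every word has entropy 2 bits and
# perplexity 4: it is as "surprised" as a uniform choice among 4 words.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # -> 4.0
```

So anything that changes the probabilities the model assigns (smoothing, how it was trained, what you feed it) shows up directly in the perplexity.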
[0] http://nltk.org/_modules/nltk/model/ngram.html#NgramModel.perplexity