Another thing you could try is to rescore using the boosted LM
(either with a boosted G, or by modifying the ARPA model and recomputing
the const arpa; or I have code that uses the SRILM libs to rescore with
the ARPA model directly).
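For the const-arpa route, a minimal sketch of what I mean (assuming the standard Kaldi binaries `lattice-lmrescore` and `lattice-lmrescore-const-arpa`; the directory paths and the `lang_boosted` dir with the boosted `G.carpa` are placeholders for your setup):

```shell
# Subtract the old G scores from the lattices, then add the boosted LM
# scores from a const-arpa built from the modified ARPA file.
oldlang=data/lang            # lang dir the lattices were decoded with
newlang=data/lang_boosted    # hypothetical lang dir holding boosted G.carpa
dir=exp/tri5/decode          # decode dir with the original lattices

lattice-lmrescore --lm-scale=-1.0 \
  "ark:gunzip -c $dir/lat.1.gz |" \
  "fstproject --project_output=true $oldlang/G.fst |" ark:- | \
lattice-lmrescore-const-arpa --lm-scale=1.0 \
  ark:- $newlang/G.carpa "ark:|gzip -c > ${dir}_boosted/lat.1.gz"
```

(steps/lmrescore_const_arpa.sh in the Kaldi scripts does essentially this over all jobs.)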
Yes, it's just an ugly hack, as it won't help you get more
(key)words into the lattice. But on the other hand, the lattice recall
(or STWV) is usually quite OK and the issue is just that the (key)word
scores are low. I was playing with this a couple of years back and saw ~3%
abs improvement in ATWV for most of the Babel languages. Due to lack
of bandwidth, I never got to use it in a real system.
YMMV -- the possible improvement probably also depends on how good
your calibration technique is.
y.