Decoding Beam vs Lattice Generation Beam


orum farhang

Aug 22, 2019, 4:08:10 AM8/22/19
to kaldi-help
Hi All,

Could someone please give a brief explanation about the difference between Decoding Beam(--beam) and Lattice Generation Beam(--lattice-beam)?
I know that a higher beam causes slower decoding and more accurate results. But I'm not sure about "--lattice-beam" and what role this parameter plays in WER.

Thanks in advance,

Rudolf A. Braun

Aug 22, 2019, 5:02:29 PM8/22/19
to kaldi-help
--beam is what you would expect: during decoding, states (which are treated as transitions, class ForwardLink) are pruned or kept based on this value. It's actually a bit more complicated, but never mind that.
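To make the idea concrete, here is a toy sketch of frame-level beam pruning. This is not Kaldi's actual implementation (which works on a token-passing lattice with ForwardLinks); it just shows the thresholding that --beam controls: anything whose cost falls more than `beam` behind the current best is dropped.

```python
def beam_prune(tokens, beam):
    """Toy beam pruning: tokens maps state -> accumulated cost
    (lower is better). Keep only states within `beam` of the best."""
    best_cost = min(tokens.values())
    return {s: c for s, c in tokens.items() if c <= best_cost + beam}

# With beam=10.0 the threshold is 10.0 + 10.0 = 20.0, so state 2 is pruned.
tokens = {0: 10.0, 1: 12.5, 2: 25.0, 3: 11.0}
print(beam_prune(tokens, beam=10.0))  # -> {0: 10.0, 1: 12.5, 3: 11.0}
```

A wider beam keeps more states alive per frame, which is why decoding gets slower (and slightly more accurate) as --beam grows.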

When you initialize the decoder there is a parameter prune_interval, which determines how often the decoding graph is pruned (by default every 25 frames). This keeps the decoding graph relatively small. During that pruning, the lattice-beam is used. An additional reason to keep the lattice-beam smaller than the normal beam: if you use a large lattice-beam (say, 13), you will notice a significant increase in RTF, because creating the CompactLattice from the decode graph (after decoding is done) takes a long time.
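Conceptually, the lattice-beam bounds how far alternative hypotheses in the output lattice may fall behind the best path. A minimal sketch of that idea (again a toy, not Kaldi's arc-level lattice pruning):

```python
def lattice_prune(paths, lattice_beam):
    """Toy lattice pruning: paths maps hypothesis -> total path cost.
    Keep only alternatives within `lattice_beam` of the best path;
    these are the hypotheses the output lattice would retain."""
    best = min(paths.values())
    return {h: c for h, c in paths.items() if c <= best + lattice_beam}

paths = {"the cat sat": 5.0, "the cat sad": 9.0, "a cat sat": 20.0}
print(lattice_prune(paths, lattice_beam=8.0))  # the worst path is dropped
```

Since the 1-best path is always kept, shrinking the lattice-beam mostly thins out the alternatives, which is why the WER impact is small while the lattice (and the time to build the CompactLattice) shrinks a lot.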

So having a --lattice-beam that is smaller than the beam improves both inference speed and memory usage, in my experience at a negligible cost in WER.
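For reference, this is roughly how both beams are passed to a lattice-generating decoder on the command line. The binary and flags are real Kaldi options, but the values, model names, and rspecifiers here are placeholders, not a recommendation:

```shell
# Illustrative only: decode with a wide decoding beam and a narrower
# lattice beam; --prune-interval controls how often pruning runs.
latgen-faster-mapped \
  --beam=15.0 \
  --lattice-beam=8.0 \
  --prune-interval=25 \
  final.mdl HCLG.fst "ark:loglikes.ark" "ark:|gzip -c > lat.1.gz"
```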