It's best to just train a small model directly. I'm not aware of any
methods like that (model compression, soft target training) that are
really effective, at least not compared with TDNN-F. Also, training
the model directly is much faster. If you use the resnet-style TDNN-F
models from this pull request
https://github.com/kaldi-asr/kaldi/pull/2430 (which I intend to merge
in the next few days), you can generally decrease the bottleneck-dim of the 'tdnnf-layer'
layers fairly aggressively without affecting the WER very much. If
you want a very small model you can reduce the number of layers and
the other dimensions too, e.g. reduce the 1536 to 1024-- but for the
bottlenecks near the end, in the linear-component and prefinal-layer,
don't go much below 256, because that becomes an information
bottleneck, and it will start to degrade the results more.
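For concreteness, here is a rough sketch of what that could look like in the
network xconfig -- the dims shown are illustrative, not tuned values; take the
actual layer names and option values from whichever recipe you are adapting:

    # Interior TDNN-F layers: the bottleneck-dim can usually be reduced
    # fairly aggressively; for a very small model also reduce dim
    # (e.g. 1536 -> 1024) and drop some layers.
    tdnnf-layer name=tdnnf2 dim=1024 bottleneck-dim=96 time-stride=1
    # ... more tdnnf-layer lines here ...
    # Near the output: keep these bottleneck dims at 256 or so, since
    # going much lower creates an information bottleneck.
    linear-component name=prefinal-l dim=256
    prefinal-layer name=prefinal-chain input=prefinal-l big-dim=1024 small-dim=256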
Dan