Hi, I am going to train an RNNLM on a spontaneous speech corpus. I have about 600 GB of text data for training the language model. Is there a recipe for data of this size?
--
Go to http://kaldi-asr.org/forums.html to find out how to join.
---
You received this message because you are subscribed to the Google Groups "kaldi-help" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kaldi-help+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kaldi-help/15690e30-b82a-486a-a4ae-4c51613651be%40googlegroups.com.