--
Go to http://kaldi-asr.org/forums.html find out how to join
---
You received this message because you are subscribed to the Google Groups "kaldi-help" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kaldi-help+unsubscribe@googlegroups.com.
To post to this group, send email to kaldi...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kaldi-help/bddec0e8-e4f0-46f7-a1d7-ae94949f7013%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Maybe your speakers had too little data per speaker, or you had no speaker information and were effectively adapting per utterance; fMLLR adaptation becomes a no-op for utterances under 5 seconds.
On Fri, Jan 12, 2018 at 1:27 AM, Kina T. <tessf...@gmail.com> wrote:
Hi Dan,
When I analysed the performance of my GMM and DNN systems, the GMM with LDA+MLLT features achieved 12.65% WER, but when I trained SAT on top of the LDA+MLLT alignments I got 13.94% WER, i.e. worse with fMLLR speaker adaptation than with LDA+MLLT alone. For the DNN, using the SAT alignments as input I got 11.43% WER, and using LDA+MLLT features I got 11%. My question: both the GMM and DNN systems built with fMLLR speaker adaptation perform worse than with plain LDA+MLLT features, yet I expected speaker adaptation to improve performance. Why does this happen, and how can I fix it?
With best regards,
On Thursday, January 11, 2018 at 12:07:43 PM UTC+8, Kina T. wrote:
Dear all,
I built context-independent GMM and DNN systems using Kaldi. For the context-independent GMM, I trained a monophone model and then "triphone" models (deltas, LDA+MLLT and SAT), passing --context-opts "--context-width=1 --central-position=0" as a training option for all the triphone stages so that the models are effectively context-independent. Finally, the context-independent SAT system above was given to the DNN stage as input to produce the DNN acoustic model. Did I use the correct procedure? If not, is there another way to build context-independent models in Kaldi?
With regards,
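For reference, the procedure described above can be sketched as a standard Kaldi GMM pipeline with the context options narrowed to width 1. This is only a sketch: the data/lang/exp directory names, job counts, and Gaussian/leaf counts are placeholder assumptions, not values from the thread; the --context-opts flag itself is the one the question refers to.

```shell
#!/usr/bin/env bash
# Sketch of a context-independent GMM pipeline (assumed dirs: data/train, data/lang).
# --context-width=1 --central-position=0 makes each "triphone" stage see only
# the current phone, i.e. the trees are effectively context-independent.
set -e
ci_opts="--context-width=1 --central-position=0"

steps/train_mono.sh --nj 8 data/train data/lang exp/mono
steps/align_si.sh --nj 8 data/train data/lang exp/mono exp/mono_ali

steps/train_deltas.sh --context-opts "$ci_opts" 2000 10000 \
  data/train data/lang exp/mono_ali exp/tri1
steps/align_si.sh --nj 8 data/train data/lang exp/tri1 exp/tri1_ali

steps/train_lda_mllt.sh --context-opts "$ci_opts" 2500 15000 \
  data/train data/lang exp/tri1_ali exp/tri2
steps/align_si.sh --nj 8 data/train data/lang exp/tri2 exp/tri2_ali

steps/train_sat.sh --context-opts "$ci_opts" 2500 15000 \
  data/train data/lang exp/tri2_ali exp/tri3
# exp/tri3 alignments would then feed the DNN training stage.
```

The numbers of leaves and Gaussians here are illustrative; a real run would tune them to the data size.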
Thanks. I have 120 speakers; 100 of them have 100 sentences each, and the remaining 20 have 50 sentences each.