Hybrid training isn't normally what we recommend. We normally
recommend lattice-free MMI (LF-MMI). It's structurally a little bit
similar to CTC (i.e. if CTC is a directed graphical model, LF-MMI is
like the undirected form of that model)... there are also other
differences.
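One rough way to make that analogy concrete (my own sketch, not anything from this thread or specific to Kaldi's implementation): CTC is locally normalized per frame, while LF-MMI is globally normalized against a denominator graph of competing sequences.

```latex
% CTC (locally normalized, "directed"): per-frame softmax posteriors,
% summed over all frame-level alignments \pi that the collapsing map
% \mathcal{B} (remove blanks and repeats) maps to the label sequence y.
P_{\text{CTC}}(y \mid x) \;=\; \sum_{\pi \in \mathcal{B}^{-1}(y)} \;\prod_{t=1}^{T} p(\pi_t \mid x)

% LF-MMI (globally normalized, "undirected"/CRF-like): for each
% utterance u, an unnormalized numerator score is divided by a
% partition-function-like sum over all sequences in a denominator
% graph G_den, rather than being normalized frame by frame.
\mathcal{F}_{\text{MMI}} \;=\; \sum_{u} \log
  \frac{p(x_u \mid G_{\text{num},u})}{p(x_u \mid G_{\text{den}})}
```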
On Sat, May 12, 2018 at 1:54 PM, saurabh vyas <saurabh...@gmail.com> wrote:
> I see, thanks for your quick reply. I'll try hybrid training then :)
>
> On Sat, May 12, 2018 at 11:16 PM, Daniel Povey <dpo...@gmail.com> wrote:
>>
>> > Hi, I am trying to learn Kaldi. I played with some scripts (an4) and
>> > ran the commands given in run.sh. I am interested in end-to-end
>> > systems that make use of CTC loss
>>
>> Kaldi doesn't support CTC, mostly because CTC doesn't work very well.
>> (After extensive experiments with it, I decided not to support it).
>>
>> > , or in an encoder-decoder architecture (with or without an
>> > attention mechanism). Is there any documentation for these? I have
>> > my own data: 16K mono .wav files with sentence-level utterances.
>>
>> Nothing like this is supported.
>>
>> Dan
>>
>>
>> > --
>> > Go to http://kaldi-asr.org/forums.html to find out how to join.
>> > ---
>> > You received this message because you are subscribed to the Google
>> > Groups "kaldi-help" group.