--
Go to http://kaldi-asr.org/forums.html to find out how to join
---
You received this message because you are subscribed to the Google Groups "kaldi-help" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kaldi-help+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/kaldi-help/13bcc4e4-d7e4-4ea3-a0e4-53471ad82aab%40googlegroups.com.
OK, thank you! Just to clarify before I make the changes and contribute the code back:
1. In nnet3-xvector-get-egs.cc, should SequentialBaseFloatMatrixReader be replaced with SequentialGeneralMatrixReader?
2. Which other parts of nnet3-xvector-get-egs.cc use BaseFloatMatrix and should be changed to GeneralMatrix? (Line 188 is the only place that uses SequentialBaseFloatMatrixReader.)
3. I don't understand how to use ExtractRowRangeWithPadding() in nnet3-xvector-get-egs.cc:WriteExamples(); could you explain that in more detail?
Thanks,
Bar

On Thursday, November 21, 2019 at 4:12 David Snyder <david.r...@gmail.com> wrote:
Hi Bar,

You will probably find this discussion on Kaldi help to be useful: https://kaldi-asr.org/forums.html?place=msg%2Fkaldi-help%2FWLCtGOaT6Uc%2FjIWHIhF4BAAJ . It's about getting this architecture to work in Kaldi.

Basically, you need to add the extra layers as specified in the paper and increase the amount of training data. You can do the latter by retaining all of the augmented training data (it will take up a lot of space on disk, and training will take a while). It also helps to reduce the archive size; I believe you can do that by reducing the frames-per-iter option in get_egs.sh. As I recall, having around 800 training archives with around 200,000 examples each produces good results. You'll have to play with the options to get_egs.sh to figure out how to obtain this. I believe I left comments in the scripts that should provide some guidance.

Best,
David
On Wednesday, November 20, 2019 at 5:25:41 AM UTC-5, Bar Madar wrote:

Hey,

I am reading the paper "SPEAKER RECOGNITION FOR MULTI-SPEAKER CONVERSATIONS USING X-VECTORS" by David Snyder et al. The network architecture used in the paper is different from the one trained in the Kaldi recipe "sitw/v2". Is there a recipe or code in the Kaldi project that trains the same network as shown in the paper? If not, how should I modify "run_xvector_1a.sh" for this architecture? Is it enough to add layers to the network.xconfig file, or do I need to change other files as well?

Thanks,
Bar
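David's sizing advice can be turned into back-of-the-envelope arithmetic. The sketch below is illustrative only; the frame counts and chunk length used are hypothetical placeholders, not values from any real run:

```python
# Rough sizing for get_egs.sh archives. All numbers below are hypothetical
# placeholders for illustration; measure your own data to get real values.

def archive_stats(total_frames, frames_per_iter, avg_chunk_frames):
    """Estimate the number of archives and the examples per archive."""
    num_archives = -(-total_frames // frames_per_iter)  # ceiling division
    egs_per_archive = frames_per_iter // avg_chunk_frames
    return num_archives, egs_per_archive

# E.g. with 128e9 total training frames, 160e6 frames per iter, and an
# average chunk length of 400 frames:
print(archive_stats(128_000_000_000, 160_000_000, 400))  # (800, 400000)
```

Tuning frames-per-iter moves both numbers at once: halving it doubles the archive count and roughly halves the examples per archive.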
Hey,
Can you please explain how to get this information (the number of examples per archive) from the ranges.* files? Or maybe link me to documentation about this?
Thanks!
Bar
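For what it's worth, my reading of the egs-allocation scripts is that each line of a ranges.N file describes one training example, so counting non-empty lines gives the example count for archive N; please verify this against your Kaldi version. A toy sketch with a synthetic file (the fields in the demo lines are made-up placeholders, only the line count matters):

```python
# Count examples per archive from a ranges.* file, under the assumption
# (verify it for your Kaldi version) that each non-empty line is one example.
import os
import tempfile

def examples_in_archive(ranges_path):
    """Return the number of non-empty lines in a ranges file."""
    with open(ranges_path) as f:
        return sum(1 for line in f if line.strip())

# Synthetic stand-in for something like exp/.../egs/temp/ranges.1; the
# fields here are placeholders, not the real ranges format.
demo = tempfile.NamedTemporaryFile("w", suffix=".ranges", delete=False)
demo.write("utt1 0 1 0 300 7\nutt2 0 1 50 300 12\nutt3 0 1 100 300 3\n")
demo.close()
print(examples_in_archive(demo.name))  # 3
os.unlink(demo.name)
```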
On Wednesday, November 27, 2019 at 18:06 David Snyder <david.r...@gmail.com> wrote:
Thanks! I have 257635 examples per archive... is that OK?

On Wednesday, November 27, 2019 at 19:49 David Snyder <david.r...@gmail.com> wrote:
I just checked the code of nnet3-xvector-get-egs.cc, and it looks like it is writing the egs without compression, which would make them a factor of 4 larger than they need to be.

In line 188:
SequentialBaseFloatMatrixReader feat_reader(feature_rspecifier);
it should be of type SequentialGeneralMatrixReader. Other parts of the code would need to use GeneralMatrix as well. Look in nnet3-get-egs.cc, where it uses ExtractRowRangeWithPadding(). It would be necessary to use that function in nnet3-xvector-get-egs.cc:WriteExamples(), which would of course require figuring out the exact interface and how to convert the code to use it with equivalent behavior.

It would be great if you could contribute the code back, if you do this.

Dan
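On the ExtractRowRangeWithPadding() question: as I understand it, the function copies a row range [row_offset, row_offset + num_rows) out of a feature matrix, duplicating the first or last row whenever the requested range extends past the matrix edges (that is how chunks near utterance boundaries get their context). The toy sketch below only mimics that padding behavior; it is not Kaldi code, and the real function's signature on GeneralMatrix should be checked in the Kaldi sources:

```python
# Toy illustration of row-range extraction with edge padding, mimicking (my
# understanding of) Kaldi's ExtractRowRangeWithPadding: rows requested
# outside the matrix are filled by repeating the first or last row.

def extract_row_range_with_padding(mat, row_offset, num_rows):
    """Copy rows [row_offset, row_offset + num_rows), clamping at the edges."""
    n = len(mat)
    out = []
    for r in range(row_offset, row_offset + num_rows):
        clamped = min(max(r, 0), n - 1)  # repeat edge rows outside [0, n)
        out.append(list(mat[clamped]))
    return out

feats = [[0.0], [1.0], [2.0]]  # a 3-frame, 1-dim "feature matrix"
# Rows -1..3: row 0 is repeated at the start, row 2 at the end.
print(extract_row_range_with_padding(feats, -1, 5))
# [[0.0], [0.0], [1.0], [2.0], [2.0]]
```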
On Sun, Nov 24, 2019 at 5:29 PM Bar Madar <mad...@post.bgu.ac.il> wrote:
Hey, thanks!

I added the extra layers per the paper, and changed frames-per-iter in get_egs.sh to 160000000 so that the number of training archives is around 800. I am using the 6M augmented VoxCeleb data to train the network. The problem is that get_egs.sh fails after a few hours because the disk runs out of space. I have a 4T SSD with 2.2T free before I run get_egs.sh, but before the script finishes it has used all 2.2T and fails. Do I need more space for this run, or should I change something in the script so it uses disk space more economically?

Thanks a lot!
Bar
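To see why uncompressed egs eat disk so quickly: a BaseFloat is 4 bytes, while Kaldi's compressed matrix format stores roughly 1 byte per value, hence the factor of ~4 mentioned elsewhere in the thread. A rough footprint estimate (all counts below are hypothetical placeholders, not measurements):

```python
# Rough egs disk-footprint estimate (illustrative numbers, not measurements).
# Uncompressed egs store each feature value as a 4-byte float; Kaldi's
# compressed format uses roughly 1 byte per value.

def egs_size_gb(num_examples, frames_per_eg, feat_dim, bytes_per_value):
    """Approximate total feature storage for a set of examples, in GB."""
    total = num_examples * frames_per_eg * feat_dim * bytes_per_value
    return total / 1e9

uncompressed = egs_size_gb(200_000_000, 300, 23, 4)  # 4-byte floats
compressed = egs_size_gb(200_000_000, 300, 23, 1)    # ~1 byte per value
print(round(uncompressed), round(compressed))  # 5520 1380
```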
# The frame-level layers
input dim=23 name=input
relu-batchnorm-layer name=tdnn1 input=Append(-2,-1,0,1,2) dim=512
relu-batchnorm-layer name=tdnn2 dim=512
relu-batchnorm-layer name=tdnn3 input=Append(-2,0,2) dim=512
relu-batchnorm-layer name=tdnn4 dim=512
relu-batchnorm-layer name=tdnn5 input=Append(-3,0,3) dim=512
relu-batchnorm-layer name=tdnn6 dim=512
relu-batchnorm-layer name=tdnn7 input=Append(-4,0,4) dim=512
relu-batchnorm-layer name=tdnn8 dim=512
relu-batchnorm-layer name=tdnn9 dim=512
relu-batchnorm-layer name=tdnn10 dim=1500
# The stats pooling layer. Layers after this are segment-level.
# In the config below, the first and last argument (0, and 10000)
# means that we pool over an input segment starting at frame 0
# and ending at frame 10000 or earlier. The other arguments (1:1)
# mean that no subsampling is performed.
stats-layer name=stats config=mean+stddev(0:1:1:10000)
# This is where we usually extract the embedding (aka xvector) from.
relu-batchnorm-layer name=tdnn11 dim=512 input=stats
# This is another layer the embedding could be extracted
# from, but usually the previous one works better.
relu-batchnorm-layer name=tdnn12 dim=512
output-layer name=output include-log-softmax=true dim=7361
LOG (nnet3-xvector-compute[5.5.0~1-c449]:ExplainWhyNotComputable():nnet-computation-graph.cc:171) *** cindex output(0, 0, 0) is not computable for the following reason: ***
output(0, 0, 0) is kNotComputable, dependencies: tdnn10.affine(0, 0, 0)[kNotComputable],
tdnn10.affine(0, 0, 0) is kNotComputable, dependencies: tdnn10.affine_input(0, 0, 0)[kNotComputable],
tdnn10.affine_input(0, 0, 0) is kNotComputable, dependencies: tdnn9.relu(0, 0, 0)[kNotComputable],
tdnn9.relu(0, 0, 0) is kNotComputable, dependencies: tdnn9.relu_input(0, 0, 0)[kNotComputable],
tdnn9.relu_input(0, 0, 0) is kNotComputable, dependencies: tdnn9.affine(0, 0, 0)[kNotComputable],
tdnn9.affine(0, 0, 0) is kNotComputable, dependencies: tdnn9.affine_input(0, 0, 0)[kNotComputable],
tdnn9.affine_input(0, 0, 0) is kNotComputable, dependencies: tdnn8.relu(0, 0, 0)[kNotComputable],
tdnn8.relu(0, 0, 0) is kNotComputable, dependencies: tdnn8.relu_input(0, 0, 0)[kNotComputable],
tdnn8.relu_input(0, 0, 0) is kNotComputable, dependencies: tdnn8.affine(0, 0, 0)[kNotComputable],
tdnn8.affine(0, 0, 0) is kNotComputable, dependencies: tdnn8.affine_input(0, 0, 0)[kNotComputable],
tdnn8.affine_input(0, 0, 0) is kNotComputable, dependencies: tdnn7.relu(0, 0, 0)[kNotComputable],
tdnn7.relu(0, 0, 0) is kNotComputable, dependencies: tdnn7.relu_input(0, 0, 0)[kNotComputable],
tdnn7.relu_input(0, 0, 0) is kNotComputable, dependencies: tdnn7.affine(0, 0, 0)[kNotComputable],
tdnn7.affine(0, 0, 0) is kNotComputable, dependencies: tdnn7.affine_input(0, 0, 0)[kNotComputable],
tdnn7.affine_input(0, 0, 0) is kNotComputable, dependencies: tdnn6.relu(0, -4, 0)[kNotComputable], tdnn6.relu(0, 0, 0)[kNotComputable], tdnn6.relu(0, 4, 0)[kNotComputable],
tdnn6.relu(0, -4, 0) is kNotComputable, dependencies: tdnn6.relu_input(0, -4, 0)[kNotComputable],
tdnn6.relu(0, 0, 0) is kNotComputable, dependencies: tdnn6.relu_input(0, 0, 0)[kNotComputable],
tdnn6.relu(0, 4, 0) is kNotComputable, dependencies: tdnn6.relu_input(0, 4, 0)[kNotComputable],
tdnn6.relu_input(0, -4, 0) is kNotComputable, dependencies: tdnn6.affine(0, -4, 0)[kNotComputable],
tdnn6.relu_input(0, 0, 0) is kNotComputable, dependencies: tdnn6.affine(0, 0, 0)[kNotComputable],
tdnn6.relu_input(0, 4, 0) is kNotComputable, dependencies: tdnn6.affine(0, 4, 0)[kNotComputable],
tdnn6.affine(0, -4, 0) is kNotComputable, dependencies: tdnn6.affine_input(0, -4, 0)[kNotComputable],
tdnn6.affine(0, 0, 0) is kNotComputable, dependencies: tdnn6.affine_input(0, 0, 0)[kNotComputable],
tdnn6.affine(0, 4, 0) is kNotComputable, dependencies: tdnn6.affine_input(0, 4, 0)[kNotComputable],
tdnn6.affine_input(0, -4, 0) is kNotComputable, dependencies: tdnn5.relu(0, -4, 0)[kNotComputable],
tdnn6.affine_input(0, 0, 0) is kNotComputable, dependencies: tdnn5.relu(0, 0, 0)[kNotComputable],
tdnn6.affine_input(0, 4, 0) is kNotComputable, dependencies: tdnn5.relu(0, 4, 0)[kNotComputable],
tdnn5.relu(0, -4, 0) is kNotComputable, dependencies: tdnn5.relu_input(0, -4, 0)[kNotComputable],
tdnn5.relu(0, 0, 0) is kNotComputable, dependencies: tdnn5.relu_input(0, 0, 0)[kNotComputable],
tdnn5.relu(0, 4, 0) is kNotComputable, dependencies: tdnn5.relu_input(0, 4, 0)[kNotComputable],
tdnn5.relu_input(0, -4, 0) is kNotComputable, dependencies: tdnn5.affine(0, -4, 0)[kNotComputable],
tdnn5.relu_input(0, 0, 0) is kNotComputable, dependencies: tdnn5.affine(0, 0, 0)[kNotComputable],
tdnn5.relu_input(0, 4, 0) is kNotComputable, dependencies: tdnn5.affine(0, 4, 0)[kNotComputable],
tdnn5.affine(0, -4, 0) is kNotComputable, dependencies: tdnn5.affine_input(0, -4, 0)[kNotComputable],
tdnn5.affine(0, 0, 0) is kNotComputable, dependencies: tdnn5.affine_input(0, 0, 0)[kNotComputable],
tdnn5.affine(0, 4, 0) is kNotComputable, dependencies: tdnn5.affine_input(0, 4, 0)[kNotComputable],
tdnn5.affine_input(0, -4, 0) is kNotComputable, dependencies: tdnn4.relu(0, -7, 0)[kNotComputable], tdnn4.relu(0, -4, 0)[kNotComputable], tdnn4.relu(0, -1, 0)[kNotComputable],
tdnn5.affine_input(0, 0, 0) is kNotComputable, dependencies: tdnn4.relu(0, -3, 0)[kNotComputable], tdnn4.relu(0, 0, 0)[kNotComputable], tdnn4.relu(0, 3, 0)[kNotComputable],
tdnn5.affine_input(0, 4, 0) is kNotComputable, dependencies: tdnn4.relu(0, 1, 0)[kNotComputable], tdnn4.relu(0, 4, 0)[kNotComputable], tdnn4.relu(0, 7, 0)[kNotComputable],
tdnn4.relu(0, -7, 0) is kNotComputable, dependencies: tdnn4.relu_input(0, -7, 0)[kNotComputable],
tdnn4.relu(0, -4, 0) is kNotComputable, dependencies: tdnn4.relu_input(0, -4, 0)[kNotComputable],
tdnn4.relu(0, -1, 0) is kNotComputable, dependencies: tdnn4.relu_input(0, -1, 0)[kNotComputable],
tdnn4.relu(0, -3, 0) is kNotComputable, dependencies: tdnn4.relu_input(0, -3, 0)[kNotComputable],
tdnn4.relu(0, 0, 0) is kNotComputable, dependencies: tdnn4.relu_input(0, 0, 0)[kNotComputable],
tdnn4.relu(0, 3, 0) is kNotComputable, dependencies: tdnn4.relu_input(0, 3, 0)[kNotComputable],
tdnn4.relu(0, 1, 0) is kNotComputable, dependencies: tdnn4.relu_input(0, 1, 0)[kNotComputable],
tdnn4.relu(0, 4, 0) is kComputable, dependencies: tdnn4.relu_input(0, 4, 0),
tdnn4.relu(0, 7, 0) is kComputable, dependencies: tdnn4.relu_input(0, 7, 0),
tdnn4.relu_input(0, -7, 0) is kNotComputable, dependencies: tdnn4.affine(0, -7, 0)[kNotComputable],
tdnn4.relu_input(0, -4, 0) is kNotComputable, dependencies: tdnn4.affine(0, -4, 0)[kNotComputable],
tdnn4.relu_input(0, -1, 0) is kNotComputable, dependencies: tdnn4.affine(0, -1, 0)[kNotComputable],
tdnn4.relu_input(0, -3, 0) is kNotComputable, dependencies: tdnn4.affine(0, -3, 0)[kNotComputable],
tdnn4.relu_input(0, 0, 0) is kNotComputable, dependencies: tdnn4.affine(0, 0, 0)[kNotComputable],
tdnn4.relu_input(0, 3, 0) is kNotComputable, dependencies: tdnn4.affine(0, 3, 0)[kNotComputable],
tdnn4.relu_input(0, 1, 0) is kNotComputable, dependencies: tdnn4.affine(0, 1, 0)[kNotComputable],
tdnn4.affine(0, -7, 0) is kNotComputable, dependencies: tdnn4.affine_input(0, -7, 0)[kNotComputable],
tdnn4.affine(0, -4, 0) is kNotComputable, dependencies: tdnn4.affine_input(0, -4, 0)[kNotComputable],
tdnn4.affine(0, -1, 0) is kNotComputable, dependencies: tdnn4.affine_input(0, -1, 0)[kNotComputable],
tdnn4.affine(0, -3, 0) is kNotComputable, dependencies: tdnn4.affine_input(0, -3, 0)[kNotComputable],
tdnn4.affine(0, 0, 0) is kNotComputable, dependencies: tdnn4.affine_input(0, 0, 0)[kNotComputable],
tdnn4.affine(0, 3, 0) is kNotComputable, dependencies: tdnn4.affine_input(0, 3, 0)[kNotComputable],
tdnn4.affine(0, 1, 0) is kNotComputable, dependencies: tdnn4.affine_input(0, 1, 0)[kNotComputable],
tdnn4.affine_input(0, -7, 0) is kNotComputable, dependencies: tdnn3.relu(0, -7, 0)[kNotComputable],
tdnn4.affine_input(0, -4, 0) is kNotComputable, dependencies: tdnn3.relu(0, -4, 0)[kNotComputable],
tdnn4.affine_input(0, -1, 0) is kNotComputable, dependencies: tdnn3.relu(0, -1, 0)[kNotComputable],
tdnn4.affine_input(0, -3, 0) is kNotComputable, dependencies: tdnn3.relu(0, -3, 0)[kNotComputable],
tdnn4.affine_input(0, 0, 0) is kNotComputable, dependencies: tdnn3.relu(0, 0, 0)[kNotComputable],
tdnn4.affine_input(0, 3, 0) is kNotComputable, dependencies: tdnn3.relu(0, 3, 0)[kNotComputable],
tdnn4.affine_input(0, 1, 0) is kNotComputable, dependencies: tdnn3.relu(0, 1, 0)[kNotComputable],
tdnn3.relu(0, -7, 0) is kNotComputable, dependencies: tdnn3.relu_input(0, -7, 0)[kNotComputable],
tdnn3.relu(0, -4, 0) is kNotComputable, dependencies: tdnn3.relu_input(0, -4, 0)[kNotComputable],
tdnn3.relu(0, -1, 0) is kNotComputable, dependencies: tdnn3.relu_input(0, -1, 0)[kNotComputable],
tdnn3.relu(0, -3, 0) is kNotComputable, dependencies: tdnn3.relu_input(0, -3, 0)[kNotComputable],
tdnn3.relu(0, 0, 0) is kNotComputable, dependencies: tdnn3.relu_input(0, 0, 0)[kNotComputable],
tdnn3.relu(0, 3, 0) is kNotComputable, dependencies: tdnn3.relu_input(0, 3, 0)[kNotComputable],
tdnn3.relu(0, 1, 0) is kNotComputable, dependencies: tdnn3.relu_input(0, 1, 0)[kNotComputable],
tdnn3.relu_input(0, -7, 0) is kNotComputable, dependencies: tdnn3.affine(0, -7, 0)[kNotComputable],
tdnn3.relu_input(0, -4, 0) is kNotComputable, dependencies: tdnn3.affine(0, -4, 0)[kNotComputable],
tdnn3.relu_input(0, -1, 0) is kNotComputable, dependencies: tdnn3.affine(0, -1, 0)[kNotComputable],
tdnn3.relu_input(0, -3, 0) is kNotComputable, dependencies: tdnn3.affine(0, -3, 0)[kNotComputable],
tdnn3.relu_input(0, 0, 0) is kNotComputable, dependencies: tdnn3.affine(0, 0, 0)[kNotComputable],
tdnn3.relu_input(0, 3, 0) is kNotComputable, dependencies: tdnn3.affine(0, 3, 0)[kNotComputable],
tdnn3.relu_input(0, 1, 0) is kNotComputable, dependencies: tdnn3.affine(0, 1, 0)[kNotComputable],
tdnn3.affine(0, -7, 0) is kNotComputable, dependencies: tdnn3.affine_input(0, -7, 0)[kNotComputable],
tdnn3.affine(0, -4, 0) is kNotComputable, dependencies: tdnn3.affine_input(0, -4, 0)[kNotComputable],
tdnn3.affine(0, -1, 0) is kNotComputable, dependencies: tdnn3.affine_input(0, -1, 0)[kNotComputable],
tdnn3.affine(0, -3, 0) is kNotComputable, dependencies: tdnn3.affine_input(0, -3, 0)[kNotComputable],
tdnn3.affine(0, 0, 0) is kNotComputable, dependencies: tdnn3.affine_input(0, 0, 0)[kNotComputable],
tdnn3.affine(0, 3, 0) is kNotComputable, dependencies: tdnn3.affine_input(0, 3, 0)[kNotComputable],
tdnn3.affine(0, 1, 0) is kNotComputable, dependencies: tdnn3.affine_input(0, 1, 0)[kNotComputable],
tdnn3.affine_input(0, -7, 0) is kNotComputable, dependencies: tdnn2.relu(0, -9, 0)[kNotComputable], tdnn2.relu(0, -7, 0)[kNotComputable], tdnn2.relu(0, -5, 0)[kNotComputable],
tdnn3.affine_input(0, -4, 0) is kNotComputable, dependencies: tdnn2.relu(0, -6, 0)[kNotComputable], tdnn2.relu(0, -4, 0)[kNotComputable], tdnn2.relu(0, -2, 0)[kNotComputable],
tdnn3.affine_input(0, -1, 0) is kNotComputable, dependencies: tdnn2.relu(0, -3, 0)[kNotComputable], tdnn2.relu(0, -1, 0)[kNotComputable], tdnn2.relu(0, 1, 0)[kNotComputable],
tdnn3.affine_input(0, -3, 0) is kNotComputable, dependencies: tdnn2.relu(0, -5, 0)[kNotComputable], tdnn2.relu(0, -3, 0)[kNotComputable], tdnn2.relu(0, -1, 0)[kNotComputable],
tdnn3.affine_input(0, 0, 0) is kNotComputable, dependencies: tdnn2.relu(0, -2, 0)[kNotComputable], tdnn2.relu(0, 0, 0)[kNotComputable], tdnn2.relu(0, 2, 0)[kNotComputable],
tdnn3.affine_input(0, 3, 0) is kNotComputable, dependencies: tdnn2.relu(0, 1, 0)[kNotComputable], tdnn2.relu(0, 3, 0)[kNotComputable], tdnn2.relu(0, 5, 0)[kNotComputable],
tdnn3.affine_input(0, 1, 0) is kNotComputable, dependencies: tdnn2.relu(0, -1, 0)[kNotComputable], tdnn2.relu(0, 1, 0)[kNotComputable], tdnn2.relu(0, 3, 0)[kNotComputable],
tdnn2.relu(0, -9, 0) is kNotComputable, dependencies: tdnn2.relu_input(0, -9, 0)[kNotComputable],
tdnn2.relu(0, -7, 0) is kNotComputable, dependencies: tdnn2.relu_input(0, -7, 0)[kNotComputable],
tdnn2.relu(0, -5, 0) is kNotComputable, dependencies: tdnn2.relu_input(0, -5, 0)[kNotComputable],
ERROR (nnet3-xvector-compute[5.5.0~1-c449]:CreateComputation():nnet-compile.cc:59) Not all outputs were computable, cannot create computation.
Hey,
On Wednesday, November 27, 2019 at 21:08 David Snyder <david.r...@gmail.com> wrote: