train.py: error: the following arguments are required: --feat-dir, --tree-dir, --lat-dir, --dir while running CNN-TDNN

Palash Jain

Mar 29, 2023, 5:51:25 AM
to kaldi-help
I am running run_cnn_tdnn_1a.sh (CNN-TDNN) on my data. I have taken the code from the librispeech and wsj recipes. In both cases my execution gets stuck at the same point. Please help me out and give suggestions. I am sharing the execution log:

./runn.sh
local/chain/run_cnn_tdnn_1a.sh
local/nnet3/run_ivector_common.sh: preparing directory for low-resolution speed-perturbed data (for alignment)
fix_data_dir.sh: kept all 199 utterances.
fix_data_dir.sh: old files are kept in data/train/.backup
utils/data/perturb_data_dir_speed_3way.sh: making sure the utt2dur and the reco2dur files are present
... in data/train, because obtaining it after speed-perturbing
... would be very slow, and you might need them.
utils/data/get_utt2dur.sh: data/train/utt2dur already exists with the expected length.  We won't recompute it.
utils/data/get_reco2dur.sh: data/train/reco2dur already exists with the expected length.  We won't recompute it.
utils/data/perturb_data_dir_speed.sh: generated speed-perturbed version of data in data/train, in data/train_sp_speed0.9
fix_data_dir.sh: kept all 199 utterances.
fix_data_dir.sh: old files are kept in data/train_sp_speed0.9/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/train_sp_speed0.9
utils/data/perturb_data_dir_speed.sh: generated speed-perturbed version of data in data/train, in data/train_sp_speed1.1
fix_data_dir.sh: kept all 199 utterances.
fix_data_dir.sh: old files are kept in data/train_sp_speed1.1/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/train_sp_speed1.1
utils/data/combine_data.sh data/train_sp data/train data/train_sp_speed0.9 data/train_sp_speed1.1
utils/data/combine_data.sh: combined utt2uniq
utils/data/combine_data.sh: combined segments
utils/data/combine_data.sh: combined utt2spk
utils/data/combine_data.sh [info]: not combining utt2lang as it does not exist
utils/data/combine_data.sh: combined utt2dur
utils/data/combine_data.sh [info]: **not combining utt2num_frames as it does not exist everywhere**
utils/data/combine_data.sh: combined reco2dur
utils/data/combine_data.sh [info]: **not combining feats.scp as it does not exist everywhere**
utils/data/combine_data.sh: combined text
utils/data/combine_data.sh [info]: **not combining cmvn.scp as it does not exist everywhere**
utils/data/combine_data.sh [info]: not combining vad.scp as it does not exist
utils/data/combine_data.sh [info]: not combining reco2file_and_channel as it does not exist
utils/data/combine_data.sh: combined wav.scp
utils/data/combine_data.sh [info]: not combining spk2gender as it does not exist
fix_data_dir.sh: kept all 597 utterances.
fix_data_dir.sh: old files are kept in data/train_sp/.backup
utils/data/perturb_data_dir_speed_3way.sh: generated 3-way speed-perturbed version of data in data/train, in data/train_sp
utils/validate_data_dir.sh: Successfully validated data-directory data/train_sp
local/nnet3/run_ivector_common.sh: making MFCC features for low-resolution speed-perturbed data
steps/make_mfcc.sh --cmd run.pl --nj 8 data/train_sp
utils/validate_data_dir.sh: Successfully validated data-directory data/train_sp
steps/make_mfcc.sh [info]: segments file exists: using that.
steps/make_mfcc.sh: Succeeded creating MFCC features for train_sp
steps/compute_cmvn_stats.sh data/train_sp
Succeeded creating CMVN stats for train_sp
fix_data_dir.sh: kept all 597 utterances.
fix_data_dir.sh: old files are kept in data/train_sp/.backup
local/nnet3/run_ivector_common.sh: aligning with the perturbed low-resolution data
steps/align_fmllr.sh --nj 8 --cmd run.pl data/train_sp data/lang exp/tri4b exp/tri4b_ali_train_sp
steps/align_fmllr.sh: feature type is lda
steps/align_fmllr.sh: compiling training graphs
steps/align_fmllr.sh: aligning data in data/train_sp using exp/tri4b/final.alimdl and speaker-independent features.
steps/align_fmllr.sh: computing fMLLR transforms
steps/align_fmllr.sh: doing final alignment.
steps/align_fmllr.sh: done aligning data.
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri4b_ali_train_sp
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri4b_ali_train_sp/log/analyze_alignments.log
184 warnings in exp/tri4b_ali_train_sp/log/align_pass1.*.log
179 warnings in exp/tri4b_ali_train_sp/log/align_pass2.*.log
61 warnings in exp/tri4b_ali_train_sp/log/fmllr.*.log
local/nnet3/run_ivector_common.sh: creating high-resolution MFCC features
utils/copy_data_dir.sh: copied data from data/train_sp to data/train_sp_hires
utils/validate_data_dir.sh: Successfully validated data-directory data/train_sp_hires
utils/copy_data_dir.sh: copied data from data/test to data/test_hires
utils/validate_data_dir.sh: Successfully validated data-directory data/test_hires
utils/data/perturb_data_dir_volume.sh: data/train_sp_hires/feats.scp exists; moving it to data/train_sp_hires/.backup/ as it wouldn't be valid any more.
utils/data/perturb_data_dir_volume.sh: added volume perturbation to the data in data/train_sp_hires
steps/make_mfcc.sh --nj 8 --mfcc-config conf/mfcc_hires.conf --cmd run.pl data/train_sp_hires
utils/validate_data_dir.sh: Successfully validated data-directory data/train_sp_hires
steps/make_mfcc.sh [info]: segments file exists: using that.
steps/make_mfcc.sh: Succeeded creating MFCC features for train_sp_hires
steps/compute_cmvn_stats.sh data/train_sp_hires
Succeeded creating CMVN stats for train_sp_hires
fix_data_dir.sh: kept all 597 utterances.
fix_data_dir.sh: old files are kept in data/train_sp_hires/.backup
steps/make_mfcc.sh --nj 8 --mfcc-config conf/mfcc_hires.conf --cmd run.pl data/test_hires
steps/make_mfcc.sh: moving data/test_hires/feats.scp to data/test_hires/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/test_hires
steps/make_mfcc.sh [info]: segments file exists: using that.
steps/make_mfcc.sh: Succeeded creating MFCC features for test_hires
steps/compute_cmvn_stats.sh data/test_hires
Succeeded creating CMVN stats for test_hires
fix_data_dir.sh: kept all 46 utterances.
fix_data_dir.sh: old files are kept in data/test_hires/.backup
local/nnet3/run_ivector_common.sh: computing a subset of data to train the diagonal UBM.
utils/data/subset_data_dir.sh: reducing #utt from 597 to 149
local/nnet3/run_ivector_common.sh: computing a PCA transform from the hires data.
steps/online/nnet2/get_pca_transform.sh --cmd run.pl --splice-opts --left-context=3 --right-context=3 --max-utts 10000 --subsample 2 exp/nnet3/diag_ubm/train_sp_hires_subset exp/nnet3/pca_transform
Done estimating PCA transform in exp/nnet3/pca_transform
local/nnet3/run_ivector_common.sh: training the diagonal UBM.
steps/online/nnet2/train_diag_ubm.sh --cmd run.pl --nj 30 --num-frames 700000 --num-threads 8 exp/nnet3/diag_ubm/train_sp_hires_subset 512 exp/nnet3/pca_transform exp/nnet3/diag_ubm
steps/online/nnet2/train_diag_ubm.sh: Directory exp/nnet3/diag_ubm already exists. Backing up diagonal UBM in exp/nnet3/diag_ubm/backup.dTn
steps/online/nnet2/train_diag_ubm.sh: initializing model from E-M in memory,
steps/online/nnet2/train_diag_ubm.sh: starting from 256 Gaussians, reaching 512;
steps/online/nnet2/train_diag_ubm.sh: for 20 iterations, using at most 700000 frames of data
Getting Gaussian-selection info
steps/online/nnet2/train_diag_ubm.sh: will train for 4 iterations, in parallel over
steps/online/nnet2/train_diag_ubm.sh: 30 machines, parallelized with 'run.pl'
steps/online/nnet2/train_diag_ubm.sh: Training pass 0
steps/online/nnet2/train_diag_ubm.sh: Training pass 1
steps/online/nnet2/train_diag_ubm.sh: Training pass 2
steps/online/nnet2/train_diag_ubm.sh: Training pass 3
local/nnet3/run_ivector_common.sh: training the iVector extractor
steps/online/nnet2/train_ivector_extractor.sh --cmd run.pl --nj 8 data/train_sp_hires exp/nnet3/diag_ubm exp/nnet3/extractor
steps/online/nnet2/train_ivector_extractor.sh: doing Gaussian selection and posterior computation
Accumulating stats (pass 0)
Summing accs (pass 0)
Updating model (pass 0)
Accumulating stats (pass 1)
Summing accs (pass 1)
Updating model (pass 1)
Accumulating stats (pass 2)
Summing accs (pass 2)
Updating model (pass 2)
Accumulating stats (pass 3)
Summing accs (pass 3)
Updating model (pass 3)
Accumulating stats (pass 4)
Summing accs (pass 4)
Updating model (pass 4)
Accumulating stats (pass 5)
Summing accs (pass 5)
Updating model (pass 5)
Accumulating stats (pass 6)
Summing accs (pass 6)
Updating model (pass 6)
Accumulating stats (pass 7)
Summing accs (pass 7)
Updating model (pass 7)
Accumulating stats (pass 8)
Summing accs (pass 8)
Updating model (pass 8)
Accumulating stats (pass 9)
Summing accs (pass 9)
Updating model (pass 9)
utils/data/modify_speaker_info.sh: copied data from data/train_sp_hires to exp/nnet3/ivectors_train_sp_hires/train_sp_hires_max2, number of speakers changed from 147 to 348
utils/validate_data_dir.sh: Successfully validated data-directory exp/nnet3/ivectors_train_sp_hires/train_sp_hires_max2
steps/online/nnet2/extract_ivectors_online.sh --cmd run.pl --nj 8 exp/nnet3/ivectors_train_sp_hires/train_sp_hires_max2 exp/nnet3/extractor exp/nnet3/ivectors_train_sp_hires
steps/online/nnet2/extract_ivectors_online.sh: extracting iVectors
steps/online/nnet2/extract_ivectors_online.sh: combining iVectors across jobs
steps/online/nnet2/extract_ivectors_online.sh: done extracting (online) iVectors to exp/nnet3/ivectors_train_sp_hires using the extractor in exp/nnet3/extractor.
steps/online/nnet2/extract_ivectors_online.sh --cmd run.pl --nj 6 data/test_hires exp/nnet3/extractor exp/nnet3/ivectors_test_hires
steps/online/nnet2/extract_ivectors_online.sh: extracting iVectors
steps/online/nnet2/extract_ivectors_online.sh: combining iVectors across jobs
steps/online/nnet2/extract_ivectors_online.sh: done extracting (online) iVectors to exp/nnet3/ivectors_test_hires using the extractor in exp/nnet3/extractor.
local/chain/run_cnn_tdnn_1a.sh: creating lang directory data/lang_chain with chain-type topology
local/chain/run_cnn_tdnn_1a.sh: data/lang_chain already exists, not overwriting it; continuing
steps/align_fmllr_lats.sh --nj 100 --cmd run.pl data/train_sp data/lang exp/tri4b exp/chain/tri4b_train_sp_lats
steps/align_fmllr_lats.sh: feature type is lda
steps/align_fmllr_lats.sh: compiling training graphs
steps/align_fmllr_lats.sh: aligning data in data/train_sp using exp/tri4b/final.alimdl and speaker-independent features.
steps/align_fmllr_lats.sh: computing fMLLR transforms
steps/align_fmllr_lats.sh: generating lattices containing alternate pronunciations.
steps/align_fmllr_lats.sh: done generating lattices from training transcripts.
118 warnings in exp/chain/tri4b_train_sp_lats/log/generate_lattices.*.log
281 warnings in exp/chain/tri4b_train_sp_lats/log/align_pass1.*.log
61 warnings in exp/chain/tri4b_train_sp_lats/log/fmllr.*.log
steps/nnet3/chain/build_tree.sh --frame-subsampling-factor 3 --context-opts --context-width=2 --central-position=1 --cmd run.pl 3500 data/train_sp data/lang_chain exp/tri4b_ali_train_sp exp/chain/tree_a_sp
steps/nnet3/chain/build_tree.sh: feature type is lda
steps/nnet3/chain/build_tree.sh: Using transforms from exp/tri4b_ali_train_sp
steps/nnet3/chain/build_tree.sh: Initializing monophone model (for alignment conversion, in case topology changed)
steps/nnet3/chain/build_tree.sh: Accumulating tree stats
steps/nnet3/chain/build_tree.sh: Getting questions for tree clustering.
steps/nnet3/chain/build_tree.sh: Building the tree
steps/nnet3/chain/build_tree.sh: Initializing the model
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 51 with no stats; corresponding phone list: 207 208 209 210
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 60 with no stats; corresponding phone list: 243 244 245 246
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 80 with no stats; corresponding phone list: 323 324 325 326
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 81 with no stats; corresponding phone list: 327 328 329 330
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 88 with no stats; corresponding phone list: 355 356 357 358
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 90 with no stats; corresponding phone list: 363 364 365 366
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 111 with no stats; corresponding phone list: 447 448 449 450
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 114 with no stats; corresponding phone list: 459 460 461 462
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 119 with no stats; corresponding phone list: 479 480 481 482
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 135 with no stats; corresponding phone list: 543 544 545 546
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 142 with no stats; corresponding phone list: 571 572 573 574
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 143 with no stats; corresponding phone list: 575 576 577 578
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 144 with no stats; corresponding phone list: 579 580 581 582
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 145 with no stats; corresponding phone list: 583 584 585 586
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 146 with no stats; corresponding phone list: 587 588 589 590
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 147 with no stats; corresponding phone list: 591 592 593 594
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 148 with no stats; corresponding phone list: 595 596 597 598
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 149 with no stats; corresponding phone list: 599 600 601 602
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 150 with no stats; corresponding phone list: 603 604 605 606
WARNING (gmm-init-model[5.5.1068~1-59299]:InitAmGmm():gmm-init-model.cc:55) Tree has pdf-id 151 with no stats; corresponding phone list: 607 608 609 610
This is a bad warning.
steps/nnet3/chain/build_tree.sh: Converting alignments from exp/tri4b_ali_train_sp to use current tree
steps/nnet3/chain/build_tree.sh: Done building tree
local/chain/run_cnn_tdnn_1a.sh: creating neural net configs using the xconfig parser
tree-info exp/chain/tree_a_sp/tree
steps/nnet3/xconfig_to_configs.py --xconfig-file exp/chain/cnn_tdnn1a_sp/configs/network.xconfig --config-dir exp/chain/cnn_tdnn1a_sp/configs/
nnet3-init exp/chain/cnn_tdnn1a_sp/configs//ref.config exp/chain/cnn_tdnn1a_sp/configs//ref.raw
LOG (nnet3-init[5.5.1068~1-59299]:main():nnet3-init.cc:80) Initialized raw neural net and wrote it to exp/chain/cnn_tdnn1a_sp/configs//ref.raw
nnet3-info exp/chain/cnn_tdnn1a_sp/configs//ref.raw
nnet3-init exp/chain/cnn_tdnn1a_sp/configs//ref.config exp/chain/cnn_tdnn1a_sp/configs//ref.raw
LOG (nnet3-init[5.5.1068~1-59299]:main():nnet3-init.cc:80) Initialized raw neural net and wrote it to exp/chain/cnn_tdnn1a_sp/configs//ref.raw
nnet3-info exp/chain/cnn_tdnn1a_sp/configs//ref.raw
2023-03-29 14:28:09,427 [steps/nnet3/chain/train.py:35 - <module> - INFO ] Starting chain model trainer (train.py)
steps/nnet3/chain/train.py --stage=-10 --cmd=run.pl --feat.online-ivector-dir=exp/nnet3/ivectors_train_sp_hires --feat.cmvn-opts=--norm-means=false --norm-vars=false --chain.xent-regularize 0.1 --chain.leaky-hmm-coefficient=0.1 --chain.l2-regularize=0.00005 --chain.apply-deriv-weights=false --chain.lm-opts=--num-extra-lm-states=2000 --trainer.srand=0 --trainer.max-param-change=2.0 --trainer.num-epochs=4 --trainer.frames-per-iter=1500000  
['steps/nnet3/chain/train.py', '--stage=-10', '--cmd=run.pl', '--feat.online-ivector-dir=exp/nnet3/ivectors_train_sp_hires', '--feat.cmvn-opts=--norm-means=false --norm-vars=false', '--chain.xent-regularize', '0.1', '--chain.leaky-hmm-coefficient=0.1', '--chain.l2-regularize=0.00005', '--chain.apply-deriv-weights=false', '--chain.lm-opts=--num-extra-lm-states=2000', '--trainer.srand=0', '--trainer.max-param-change=2.0', '--trainer.num-epochs=4', '--trainer.frames-per-iter=1500000', ' ']
usage: train.py [-h] [--feat.online-ivector-dir ONLINE_IVECTOR_DIR]
                [--feat.cmvn-opts CMVN_OPTS]
                [--egs.chunk-left-context CHUNK_LEFT_CONTEXT]
                [--egs.chunk-right-context CHUNK_RIGHT_CONTEXT]
                [--egs.chunk-left-context-initial CHUNK_LEFT_CONTEXT_INITIAL]
                [--egs.chunk-right-context-final CHUNK_RIGHT_CONTEXT_FINAL]
                [--egs.dir EGS_DIR] [--egs.stage EGS_STAGE]
                [--egs.opts EGS_OPTS] [--trainer.srand SRAND]
                [--trainer.shuffle-buffer-size SHUFFLE_BUFFER_SIZE]
                [--trainer.max-param-change MAX_PARAM_CHANGE]
                [--trainer.samples-per-iter SAMPLES_PER_ITER]
                [--trainer.lda.rand-prune RAND_PRUNE]
                [--trainer.lda.max-lda-jobs MAX_LDA_JOBS]
                [--trainer.presoftmax-prior-scale-power PRESOFTMAX_PRIOR_SCALE_POWER]
                [--trainer.optimization.proportional-shrink PROPORTIONAL_SHRINK]
                [--trainer.optimization.num-jobs-initial NUM_JOBS_INITIAL]
                [--trainer.optimization.num-jobs-final NUM_JOBS_FINAL]
                [--trainer.optimization.num-jobs-step N]
                [--trainer.optimization.max-models-combine MAX_MODELS_COMBINE]
                [--trainer.optimization.max-objective-evaluations MAX_OBJECTIVE_EVALUATIONS]
                [--trainer.optimization.do-final-combination {true,false}]
                [--trainer.optimization.combine-sum-to-one-penalty COMBINE_SUM_TO_ONE_PENALTY]
                [--trainer.optimization.momentum MOMENTUM]
                [--trainer.dropout-schedule DROPOUT_SCHEDULE]
                [--trainer.add-option TRAIN_OPTS]
                [--trainer.optimization.backstitch-training-scale BACKSTITCH_TRAINING_SCALE]
                [--trainer.optimization.backstitch-training-interval BACKSTITCH_TRAINING_INTERVAL]
                [--trainer.compute-per-dim-accuracy {true,false}]
                [--stage STAGE] [--exit-stage EXIT_STAGE] [--cmd COMMAND]
                [--egs.cmd EGS_COMMAND] [--use-gpu {true,false,yes,no,wait}]
                [--cleanup {true,false}] [--cleanup.remove-egs {true,false}]
                [--cleanup.preserve-model-interval PRESERVE_MODEL_INTERVAL]
                [--reporting.email EMAIL]
                [--reporting.interval REPORTING_INTERVAL]
                [--egs.chunk-width CHUNK_WIDTH] [--egs.nj EGS_NJ]
                [--chain.lm-opts LM_OPTS]
                [--chain.l2-regularize L2_REGULARIZE]
                [--chain.xent-regularize XENT_REGULARIZE]
                [--chain.right-tolerance RIGHT_TOLERANCE]
                [--chain.left-tolerance LEFT_TOLERANCE]
                [--chain.leaky-hmm-coefficient LEAKY_HMM_COEFFICIENT]
                [--chain.apply-deriv-weights {true,false}]
                [--chain.frame-subsampling-factor FRAME_SUBSAMPLING_FACTOR]
                [--chain.alignment-subsampling-factor ALIGNMENT_SUBSAMPLING_FACTOR]
                [--chain.left-deriv-truncate LEFT_DERIV_TRUNCATE]
                [--trainer.input-model INPUT_MODEL]
                [--trainer.num-epochs NUM_EPOCHS]
                [--trainer.frames-per-iter FRAMES_PER_ITER]
                [--trainer.num-chunk-per-minibatch NUM_CHUNK_PER_MINIBATCH]
                [--trainer.optimization.initial-effective-lrate INITIAL_EFFECTIVE_LRATE]
                [--trainer.optimization.final-effective-lrate FINAL_EFFECTIVE_LRATE]
                [--trainer.optimization.shrink-value SHRINK_VALUE]
                [--trainer.optimization.shrink-saturation-threshold SHRINK_SATURATION_THRESHOLD]
                [--trainer.deriv-truncate-margin DERIV_TRUNCATE_MARGIN]
                --feat-dir FEAT_DIR --tree-dir TREE_DIR --lat-dir LAT_DIR
                --dir DIR [--chain-opts CHAIN_OPTS]
train.py: error: the following arguments are required: --feat-dir, --tree-dir, --lat-dir, --dir

Desh Raj

Mar 31, 2023, 11:13:06 AM
to kaldi...@googlegroups.com
The error message is shown right there:

train.py: error: the following arguments are required: --feat-dir, --tree-dir, --lat-dir, --dir

You need to provide these arguments to the train.py script invocation.
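
For example, based on the directories the log above shows being built, the end of the train.py call would need to look something like this. This is only a sketch: the four directory names are inferred from the log (the hires features, the tree, the lattices, and the configs directory), so adjust them to your setup:

steps/nnet3/chain/train.py --stage=-10 \
  --cmd=run.pl \
  --feat.online-ivector-dir=exp/nnet3/ivectors_train_sp_hires \
  --feat.cmvn-opts="--norm-means=false --norm-vars=false" \
  --chain.xent-regularize 0.1 \
  --chain.leaky-hmm-coefficient=0.1 \
  --chain.l2-regularize=0.00005 \
  --chain.apply-deriv-weights=false \
  --chain.lm-opts="--num-extra-lm-states=2000" \
  --trainer.srand=0 \
  --trainer.max-param-change=2.0 \
  --trainer.num-epochs=4 \
  --trainer.frames-per-iter=1500000 \
  --feat-dir=data/train_sp_hires \
  --tree-dir=exp/chain/tree_a_sp \
  --lat-dir=exp/chain/tri4b_train_sp_lats \
  --dir=exp/chain/cnn_tdnn1a_sp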

Desh


Daniel Povey

Apr 1, 2023, 3:47:16 AM
to kaldi...@googlegroups.com
Maybe a file was accidentally edited to put a space after an end-of-line backslash ("\"), breaking up the command. The lone ' ' element at the end of the argument list that train.py echoed above is consistent with that: everything after the broken continuation, including the required --feat-dir, --tree-dir, --lat-dir and --dir options, would never reach the script.
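
One quick way to check is to search the script for a backslash followed by trailing whitespace (a sketch; adjust the path if your copy of the script lives elsewhere):

grep -nE '\\[[:space:]]+$' local/chain/run_cnn_tdnn_1a.sh

Any line number this prints is a continuation that no longer continues.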

Joseph Brightly

Apr 4, 2023, 8:38:36 AM
to kaldi...@googlegroups.com
Agreed. I find it important to run all Kaldi shell scripts with the -xv flags for this reason. As well as adding educational value, it surfaces issues like this.

So perhaps try running as: bash -xv run_cnn_tdnn_1a.sh
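
If it helps, the same tracing can also be turned on from inside the script by adding a line near the top (standard bash, nothing Kaldi-specific):

set -xv   # -v echoes each line as bash reads it; -x prints each command as it runs

With -v on, a continuation broken by a trailing space shows up immediately, because the command visibly ends earlier than the source suggests.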
