Starting run_tdnn_1j.sh with the following options:
stage=0
decode_nj=10
train_set=train
test_sets=test
gmm=tri3b
nnet3_affix=
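Note: these variables follow Kaldi's usual utils/parse_options.sh convention, so each can be overridden from the command line with the corresponding hyphenated flag; a minimal sketch, assuming the standard recipe layout:

  local/chain/tuning/run_tdnn_1j.sh --stage 0 --decode-nj 10 \
    --train-set train --test-sets test --gmm tri3b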
Checking if required files exist...
exp/tri3b/final.mdl exists.
data/train_sp_hires/feats.scp exists.
exp/nnet3/ivectors_train_sp_hires/ivector_online.scp exists.
data/train_sp/feats.scp exists.
exp/tri3b_ali_train_sp/ali.1.gz exists.
local/chain/tuning/run_tdnn_1j.sh: Creating lang directory data/lang_chain with chain-type topology
local/chain/tuning/run_tdnn_1j.sh: data/lang_chain already exists and appears to be older than data/lang. Not re-creating it.
local/chain/tuning/run_tdnn_1j.sh: Aligning data in data/train_sp using exp/tri3b and saving to exp/tri3b_ali_train_sp
local/chain/tuning/run_tdnn_1j.sh: Generating lattices for chain training using exp/tri3b_ali_train_sp
local/chain/tuning/run_tdnn_1j.sh: Building decision tree in exp/chain/tree_sp
local/chain/tuning/run_tdnn_1j.sh: Creating neural net configs in exp/chain/tdnn1j_sp/configs
local/chain/tuning/run_tdnn_1j.sh: Training chain model in exp/chain/tdnn1j_sp
2024-08-25 23:24:23,504 [/mnt/d/kaldi/egs/mini_librispeech/s5/steps/nnet3/chain/train.py:35 - <module> - INFO ] Starting chain model trainer (train.py)
steps/nnet3/chain/train.py --stage=-10 --cmd=run.pl --mem 4G --feat.online-ivector-dir=exp/nnet3/ivectors_train_sp_hires --feat.cmvn-opts=--norm-means=false --norm-vars=false --chain.xent-regularize 0.1 --chain.leaky-hmm-coefficient=0.1 --chain.l2-regularize=0.0 --chain.apply-deriv-weights=false --chain.lm-opts=--num-extra-lm-states=2000 --trainer.add-option=--optimization.memory-compression-level=2 --trainer.srand=0 --trainer.max-param-change=2.0 --trainer.num-epochs=20 --trainer.frames-per-iter=3000000 --trainer.optimization.num-jobs-initial=2 --trainer.optimization.num-jobs-final=5 --trainer.optimization.initial-effective-lrate=0.002 --trainer.optimization.final-effective-lrate=0.0002 --trainer.num-chunk-per-minibatch=128,64 --egs.chunk-width=140,100,160 --egs.dir= --egs.opts=--frames-overlap-per-eg 0 --cleanup.remove-egs=true --use-gpu=no --reporting.email= --feat-dir=data/train_sp_hires --tree-dir=exp/chain/tree_sp --lat-dir=exp/chain/tri3b_train_sp_lats --dir=exp/chain/tdnn1j_sp
['steps/nnet3/chain/train.py', '--stage=-10', '--cmd=run.pl --mem 4G', '--feat.online-ivector-dir=exp/nnet3/ivectors_train_sp_hires', '--feat.cmvn-opts=--norm-means=false --norm-vars=false', '--chain.xent-regularize', '0.1', '--chain.leaky-hmm-coefficient=0.1', '--chain.l2-regularize=0.0', '--chain.apply-deriv-weights=false', '--chain.lm-opts=--num-extra-lm-states=2000', '--trainer.add-option=--optimization.memory-compression-level=2', '--trainer.srand=0', '--trainer.max-param-change=2.0', '--trainer.num-epochs=20', '--trainer.frames-per-iter=3000000', '--trainer.optimization.num-jobs-initial=2', '--trainer.optimization.num-jobs-final=5', '--trainer.optimization.initial-effective-lrate=0.002', '--trainer.optimization.final-effective-lrate=0.0002', '--trainer.num-chunk-per-minibatch=128,64', '--egs.chunk-width=140,100,160', '--egs.dir=', '--egs.opts=--frames-overlap-per-eg 0', '--cleanup.remove-egs=true', '--use-gpu=no', '--reporting.email=', '--feat-dir=data/train_sp_hires', '--tree-dir=exp/chain/tree_sp', '--lat-dir=exp/chain/tri3b_train_sp_lats', '--dir=exp/chain/tdnn1j_sp']
2024-08-25 23:24:23,512 [/mnt/d/kaldi/egs/mini_librispeech/s5/steps/nnet3/chain/train.py:258 - process_args - WARNING ] Without using a GPU this will be very slow. nnet3 does not yet support multiple threads.
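This warning is expected given --use-gpu=no in the invocation above. train.py also accepts --use-gpu=wait (block until a GPU is free) and --use-gpu=yes; a sketch of the relevant change, assuming a CUDA-enabled Kaldi build and that the recipe forwards this flag:

  steps/nnet3/chain/train.py ... --use-gpu=wait ...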
2024-08-25 23:24:23,514 [/mnt/d/kaldi/egs/mini_librispeech/s5/steps/nnet3/chain/train.py:284 - train - INFO ] Arguments for the experiment
{'alignment_subsampling_factor': 3,
'apply_deriv_weights': False,
'backstitch_training_interval': 1,
'backstitch_training_scale': 0.0,
'chain_opts': '',
'chunk_left_context': 0,
'chunk_left_context_initial': -1,
'chunk_right_context': 0,
'chunk_right_context_final': -1,
'chunk_width': '140,100,160',
'cleanup': True,
'cmvn_opts': '--norm-means=false --norm-vars=false',
'combine_sum_to_one_penalty': 0.0,
'command': 'run.pl --mem 4G',
'compute_per_dim_accuracy': False,
'deriv_truncate_margin': None,
'dir': 'exp/chain/tdnn1j_sp',
'do_final_combination': True,
'dropout_schedule': None,
'egs_command': None,
'egs_dir': None,
'egs_nj': 0,
'egs_opts': '--frames-overlap-per-eg 0',
'egs_stage': 0,
'email': None,
'exit_stage': None,
'feat_dir': 'data/train_sp_hires',
'final_effective_lrate': 0.0002,
'frame_subsampling_factor': 3,
'frames_per_iter': 3000000,
'initial_effective_lrate': 0.002,
'input_model': None,
'l2_regularize': 0.0,
'lat_dir': 'exp/chain/tri3b_train_sp_lats',
'leaky_hmm_coefficient': 0.1,
'left_deriv_truncate': None,
'left_tolerance': 5,
'lm_opts': '--num-extra-lm-states=2000',
'max_lda_jobs': 10,
'max_models_combine': 20,
'max_objective_evaluations': 30,
'max_param_change': 2.0,
'momentum': 0.0,
'num_chunk_per_minibatch': '128,64',
'num_epochs': 20.0,
'num_jobs_final': 5,
'num_jobs_initial': 2,
'num_jobs_step': 1,
'online_ivector_dir': 'exp/nnet3/ivectors_train_sp_hires',
'preserve_model_interval': 100,
'presoftmax_prior_scale_power': -0.25,
'proportional_shrink': 0.0,
'rand_prune': 4.0,
'remove_egs': True,
'reporting_interval': 0.1,
'right_tolerance': 5,
'samples_per_iter': 400000,
'shrink_saturation_threshold': 0.4,
'shrink_value': 1.0,
'shuffle_buffer_size': 5000,
'srand': 0,
'stage': -10,
'train_opts': ['--optimization.memory-compression-level=2'],
'tree_dir': 'exp/chain/tree_sp',
'use_gpu': 'no',
'xent_regularize': 0.1}
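Note on the learning-rate settings above: nnet3 multiplies the "effective" rate by the current number of parallel jobs, so the actual rate here ramps from 0.002 * 2 = 0.004 (num_jobs_initial=2) down to 0.0002 * 5 = 0.001 (num_jobs_final=5) over the 20 epochs.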
Traceback (most recent call last):
File "/mnt/d/kaldi/egs/mini_librispeech/s5/steps/nnet3/chain/train.py", line 651, in main
train(args, run_opts)
File "/mnt/d/kaldi/egs/mini_librispeech/s5/steps/nnet3/chain/train.py", line 287, in train
chain_lib.check_for_required_files(args.feat_dir, args.tree_dir,
File "/mnt/d/kaldi/egs/mini_librispeech/s5/steps/libs/nnet3/train/chain_objf/acoustic_model.py", line 378, in check_for_required_files
raise Exception('Expected {0} to exist.'.format(file))
Exception: Expected exp/chain/tree_sp/ali.1.gz to exist.
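The exception comes from check_for_required_files in steps/libs/nnet3/train/chain_objf/acoustic_model.py, which requires ali.1.gz (along with the tree and final.mdl files) in the tree directory before training starts; its absence means the tree-building step logged above (exp/chain/tree_sp) did not complete. A sketch of how to confirm and recover, with the resume stage left as a placeholder since its number is recipe-specific:

  ls -l exp/chain/tree_sp                                          # should contain tree, final.mdl and ali.*.gz
  local/chain/tuning/run_tdnn_1j.sh --stage <tree-building-stage>  # re-run from the tree-building step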