While running run_tdnn.sh I am getting the following errors:
run.pl: job failed, log is in exp/chain/tree_sp/log/compile_questions.log
run_tdnn.sh: creating neural net configs using the xconfig parser
tree-info exp/chain/tree_sp/tree
ERROR (tree-info[5.4.264~1-f788]:Input():kaldi-io.cc:756) Error opening input stream exp/chain/tree_sp/tree
[ Stack-Trace: ]
kaldi::MessageLogger::HandleMessage(kaldi::LogMessageEnvelope const&, char const*)
kaldi::MessageLogger::~MessageLogger()
kaldi::Input::Input(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool*)
void kaldi::ReadKaldiObject<kaldi::ContextDependency>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, kaldi::ContextDependency*)
main
__libc_start_main
_start
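The tree file itself appears to be missing, presumably because the compile_questions job that failed above never got as far as building it. A quick sanity check on the paths from these messages (just a plain ls, nothing recipe-specific) would be:

# Check whether the tree was ever written, and where the failed job's log is.
ls -l exp/chain/tree_sp/tree exp/chain/tree_sp/log/compile_questions.log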
steps/nnet3/xconfig_to_configs.py --xconfig-file exp/chain/tdnn1b_sp/configs/network.xconfig --config-dir exp/chain/tdnn1b_sp/configs/
***Exception caught while parsing the following xconfig line:
***  relu-batchnorm-dropout-layer name=tdnn1 l2-regularize=0.03 dropout-proportion=0.0 dropout-per-dim-continuous=true dim=768
Traceback (most recent call last):
 File "steps/nnet3/xconfig_to_configs.py", line 250, in <module>
   main()
 File "steps/nnet3/xconfig_to_configs.py", line 243, in main
   all_layers = xparser.read_xconfig_file(args.xconfig_file)
 File "steps/libs/nnet3/xconfig/parser.py", line 69, in read_xconfig_file
   this_layer = xconfig_line_to_object(line, all_layers)
 File "steps/libs/nnet3/xconfig/parser.py", line 50, in xconfig_line_to_object
   raise e
RuntimeError: No such layer type 'relu-batchnorm-dropout-layer'
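If it is relevant, a minimal check to see whether this layer type is registered in the local copy of the scripts (using the parser path from the traceback above) would be something like:

# Search the xconfig python code for the layer type named in the error;
# no output would suggest these scripts predate that layer type.
grep -rn "relu-batchnorm-dropout-layer" steps/libs/nnet3/xconfig/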
2018-09-24 23:00:49,278 [steps/nnet3/chain/train.py:33 - <module> - INFO ] Starting chain model trainer (train.py)
steps/nnet3/chain/train.py --stage=-10 --cmd=run.pl --feat.online-ivector-dir=exp/nnet3/ivectors_train_sp_hires --feat.cmvn-opts=--norm-means=false --norm-vars=false --chain.xent-regularize 0.1 --chain.leaky-hmm-coefficient=0.1 --chain.l2-regularize=0.0 --chain.apply-deriv-weights=false --chain.lm-opts=--num-extra-lm-states=2000 --trainer.dropout-schedule 0,0...@0.20,0...@0.50,0 --trainer.add-option=--optimization.memory-compression-level=2 --trainer.srand=0 --trainer.max-param-change=2.0 --trainer.num-epochs=8 --trainer.frames-per-iter=3000000 --trainer.optimization.num-jobs-initial=2 --trainer.optimization.num-jobs-final=5 --trainer.optimization.initial-effective-lrate=0.001 --trainer.optimization.final-effective-lrate=0.0001 --trainer.num-chunk-per-minibatch=128,64 --egs.chunk-width=140,100,160 --egs.dir= --egs.opts=--frames-overlap-per-eg 0 --cleanup.remove-egs=true --use-gpu=true --reporting.email= --feat-dir=data/train_sp_hires --tree-dir=exp/chain/tree_sp --lat-dir=exp/chain/tri3b_train_sp_lats --dir=exp/chain/tdnn1b_sp
['steps/nnet3/chain/train.py', '--stage=-10', '--cmd=run.pl', '--feat.online-ivector-dir=exp/nnet3/ivectors_train_sp_hires', '--feat.cmvn-opts=--norm-means=false --norm-vars=false', '--chain.xent-regularize', '0.1', '--chain.leaky-hmm-coefficient=0.1', '--chain.l2-regularize=0.0', '--chain.apply-deriv-weights=false', '--chain.lm-opts=--num-extra-lm-states=2000', '--trainer.dropout-schedule', '0,0...@0.20,0...@0.50,0', '--trainer.add-option=--optimization.memory-compression-level=2', '--trainer.srand=0', '--trainer.max-param-change=2.0', '--trainer.num-epochs=8', '--trainer.frames-per-iter=3000000', '--trainer.optimization.num-jobs-initial=2', '--trainer.optimization.num-jobs-final=5', '--trainer.optimization.initial-effective-lrate=0.001', '--trainer.optimization.final-effective-lrate=0.0001', '--trainer.num-chunk-per-minibatch=128,64', '--egs.chunk-width=140,100,160', '--egs.dir=', '--egs.opts=--frames-overlap-per-eg 0', '--cleanup.remove-egs=true', '--use-gpu=true', '--reporting.email=', '--feat-dir=data/train_sp_hires', '--tree-dir=exp/chain/tree_sp', '--lat-dir=exp/chain/tri3b_train_sp_lats', '--dir=exp/chain/tdnn1b_sp']
usage: train.py [-h] [--feat.online-ivector-dir ONLINE_IVECTOR_DIR]
               [--feat.cmvn-opts CMVN_OPTS]
               [--egs.chunk-left-context CHUNK_LEFT_CONTEXT]
               [--egs.chunk-right-context CHUNK_RIGHT_CONTEXT]
               [--egs.chunk-left-context-initial CHUNK_LEFT_CONTEXT_INITIAL]
               [--egs.chunk-right-context-final CHUNK_RIGHT_CONTEXT_FINAL]
               [--egs.transform_dir TRANSFORM_DIR] [--egs.dir EGS_DIR]
               [--egs.stage EGS_STAGE] [--egs.opts EGS_OPTS]
               [--trainer.srand SRAND]
               [--trainer.shuffle-buffer-size SHUFFLE_BUFFER_SIZE]
               [--trainer.add-layers-period ADD_LAYERS_PERIOD]
               [--trainer.max-param-change MAX_PARAM_CHANGE]
               [--trainer.samples-per-iter SAMPLES_PER_ITER]
               [--trainer.lda.rand-prune RAND_PRUNE]
               [--trainer.lda.max-lda-jobs MAX_LDA_JOBS]
               [--trainer.presoftmax-prior-scale-power PRESOFTMAX_PRIOR_SCALE_POWER]
               [--trainer.optimization.num-jobs-initial NUM_JOBS_INITIAL]
               [--trainer.optimization.num-jobs-final NUM_JOBS_FINAL]
               [--trainer.optimization.max-models-combine MAX_MODELS_COMBINE]
               [--trainer.optimization.combine-sum-to-one-penalty COMBINE_SUM_TO_ONE_PENALTY]
               [--trainer.optimization.momentum MOMENTUM]
               [--trainer.dropout-schedule DROPOUT_SCHEDULE] [--stage STAGE]
               [--exit-stage EXIT_STAGE] [--cmd COMMAND]
               [--egs.cmd EGS_COMMAND] [--use-gpu {true,false}]
               [--cleanup {true,false}] [--cleanup.remove-egs {true,false}]
               [--cleanup.preserve-model-interval PRESERVE_MODEL_INTERVAL]
               [--reporting.email EMAIL]
               [--reporting.interval REPORTING_INTERVAL]
               [--background-polling-time BACKGROUND_POLLING_TIME]
               [--egs.chunk-width CHUNK_WIDTH] [--chain.lm-opts LM_OPTS]
               [--chain.l2-regularize L2_REGULARIZE]
               [--chain.xent-regularize XENT_REGULARIZE]
               [--chain.right-tolerance RIGHT_TOLERANCE]
               [--chain.left-tolerance LEFT_TOLERANCE]
               [--chain.leaky-hmm-coefficient LEAKY_HMM_COEFFICIENT]
               [--chain.apply-deriv-weights {true,false}]
               [--chain.frame-subsampling-factor FRAME_SUBSAMPLING_FACTOR]
               [--chain.alignment-subsampling-factor ALIGNMENT_SUBSAMPLING_FACTOR]
               [--chain.left-deriv-truncate LEFT_DERIV_TRUNCATE]
               [--trainer.num-epochs NUM_EPOCHS]
               [--trainer.frames-per-iter FRAMES_PER_ITER]
               [--trainer.num-chunk-per-minibatch NUM_CHUNK_PER_MINIBATCH]
               [--trainer.optimization.initial-effective-lrate INITIAL_EFFECTIVE_LRATE]
               [--trainer.optimization.final-effective-lrate FINAL_EFFECTIVE_LRATE]
               [--trainer.optimization.shrink-value SHRINK_VALUE]
               [--trainer.optimization.shrink-saturation-threshold SHRINK_SATURATION_THRESHOLD]
               [--trainer.deriv-truncate-margin DERIV_TRUNCATE_MARGIN]
               --feat-dir FEAT_DIR --tree-dir TREE_DIR --lat-dir LAT_DIR
               --dir DIR
train.py: error: unrecognized arguments: --trainer.add-option=--optimization.memory-compression-level=2

And this is the content of exp/chain/tree_sp/log/compile_questions.log:

# compile-questions --leftmost-questions-truncate=-1 --context-width=2 --central-position=1 data/lang_chain/topo exp/chain/tree_sp/questions.int exp/chain/tree_sp/questions.qst
# Started at Mon Sep 24 23:00:49 CDT 2018
#
Compile questions
Usage: compile-questions [options] <topo> <questions-text-file> <questions-out>
e.g.:
 compile-questions questions.txt questions.qst
Options:
 --binary                   : Write output in binary mode (bool, default = true)
 --central-position         : Central position in phone context window [must match acc-tree-stats] (int, default = 1)
 --context-width            : Context window size [must match acc-tree-stats]. (int, default = 3)
 --num-iters-refine         : Number of iters of refining questions at each node. >0 --> questions not refined (int, default = 0)
Standard options:
 --config                   : Configuration file to read (this option may be repeated) (string, default = "")
 --help                     : Print out usage message (bool, default = false)
 --print-args               : Print the command line arguments (to stderr) (bool, default = true)
 --verbose                  : Verbose level (higher->more logging) (int, default = 0)
Command line was: compile-questions --leftmost-questions-truncate=-1 --context-width=2 --central-position=1 data/lang_chain/topo exp/chain/tree_sp/questions.int exp/chain/tree_sp/questions.qst
ERROR (compile-questions[5.4.264~1-f788]:Read():parse-options.cc:372) Invalid option --leftmost-questions-truncate=-1
[ Stack-Trace: ]
kaldi::MessageLogger::HandleMessage(kaldi::LogMessageEnvelope const&, char const*)
kaldi::MessageLogger::~MessageLogger()
kaldi::ParseOptions::Read(int, char const* const*)
main
__libc_start_main
_start
# Accounting: time=0 threads=1
# Ended (code 255) at Mon Sep 24 23:00:49 CDT 2018, elapsed time 0 seconds
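Likewise, in case it matters, a simple way to compare what the installed compile-questions binary accepts against the --leftmost-questions-truncate option the script passes (assuming path.sh has been sourced so the Kaldi binaries are on PATH) is something like:

# Print the binary's usage/options and look for the rejected flag;
# no match would mean this build of compile-questions does not know the option.
compile-questions --help 2>&1 | grep -i leftmost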