Hello!
For fine-tuning, is there a way to use eval_listfile to monitor the model's performance on the evaluation fonts? I tried passing this parameter to lstmtraining, but the log does not show any message related to eval_listfile being used.
If eval_listfile has no effect during training, is there a more systematic way people have been following to prevent overfitting, other than experimenting with different target_error_rate or max_iterations values (or with the various checkpoint files saved during training) and comparing the resulting models on the evaluation fonts?
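To clarify what I mean by experimenting manually: the only workflow I can think of is running lstmeval on each saved checkpoint by hand, something like the loop below (paths are placeholders for my actual setup, and I'm assuming I have lstmeval's flags right):

```shell
# Evaluate every checkpoint saved by lstmtraining against the eval list,
# so I can compare error rates and pick the checkpoint before overfitting sets in.
for ckpt in /path/to/output*.checkpoint; do
    echo "== $ckpt =="
    training/lstmeval \
        --model "$ckpt" \
        --traineddata /path/to/original/traineddata \
        --eval_listfile /path/to/eval_list/of/filenames.txt
done
```

This works, but it is tedious, which is why I was hoping eval_listfile would make lstmtraining report evaluation error as it goes.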
I'm following the instructions under "Fine Tuning for Impact" and used the following command (except for the last line):
training/lstmtraining --model_output /path/to/output \
--continue_from /path/to/existing/model \
--traineddata /path/to/original/traineddata \
--target_error_rate 0.01 \
--train_listfile /path/to/list/of/filenames.txt \
--eval_listfile /path/to/eval_list/of/filenames.txt
Thanks,
Joan