How to define a good training run?


xqg mickle

Mar 8, 2017, 9:06:44 AM
to Caffe Users
Hi guys!
How do you judge whether a training run is good or not?
What gap between the final train and val accuracy is acceptable, 0.1 or 0.01?
Generally speaking, roughly what should the final train and val loss be?
Sorry for a dumb question, but I think it may be useful!

xqg mickle

Mar 9, 2017, 3:49:51 AM
to Caffe Users
up!

Patrick McNeil

Mar 10, 2017, 3:55:15 PM
to Caffe Users
I think what you are looking for is how to determine how well your model is performing.

The best way to do this is to define a set of testing data to compare your model's performance.  When I was working on different models, I segmented my data into three pieces: training, validation, and testing.  I used the training and validation as part of the training process.  I then used the testing data set to test and compare the results of each model.  Since the validation data is not used for training, you could also use the validation set as the testing set to get the actual performance.  What you don't want to use is the training data (since that is data that is being used to adjust the weights in the model).
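As a rough illustration, here is the kind of three-way split I mean, sketched in plain Python/NumPy (the array names and the 70/15/15 ratios are just placeholders, not anything specific to Caffe or my setup):

import numpy as np

# Toy stand-in data; replace with your own features and labels.
X = np.random.randn(1000, 20)
y = np.random.randint(0, 2, size=1000)

# Shuffle once, then carve out roughly 70% train, 15% val, 15% test.
rng = np.random.RandomState(0)
idx = rng.permutation(len(X))
n_train = int(0.70 * len(X))
n_val = int(0.15 * len(X))

train_idx = idx[:n_train]                 # fits the weights
val_idx = idx[n_train:n_train + n_val]    # picks hyperparameters
test_idx = idx[n_train + n_val:]          # touched only once, at the very end

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_test, y_test = X[test_idx], y[test_idx]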

I don't think the final loss values or the final predictions are very useful, since they are just the output from the latest iteration of testing (I don't think they are cumulative across the training process).
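If it helps, here is a rough framework-agnostic sketch of what I mean by averaging over the whole held-out set instead of trusting the number printed for the last test iteration (the predict() argument is just a stand-in for whatever forward pass your model does, not a real Caffe call):

import numpy as np

def evaluate(predict, X, y, batch_size=64):
    # Average accuracy over the entire held-out set, batch by batch.
    correct = 0
    for start in range(0, len(X), batch_size):
        batch_x = X[start:start + batch_size]
        batch_y = y[start:start + batch_size]
        preds = predict(batch_x)          # stand-in for your model's forward pass
        correct += np.sum(preds == batch_y)
    return float(correct) / len(X)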

Patrick

xqg mickle

Mar 11, 2017, 9:25:31 PM
to Caffe Users
Thanks! In cross-validation, the val dataset is used for tuning hyperparameters such as the learning rate. I want to know how to judge whether those hyperparameters fit well, and when the training is good enough to put into practice. How large should the gap between the final train and val error be?
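To make my question concrete, this is roughly how I picture watching that gap (the per-epoch numbers below are made up, just to illustrate):

# Hypothetical per-epoch error values taken from training logs.
train_error = [0.45, 0.30, 0.22, 0.17, 0.14, 0.12]
val_error   = [0.47, 0.33, 0.27, 0.25, 0.26, 0.28]

for epoch, (tr, va) in enumerate(zip(train_error, val_error)):
    print("epoch {}: train={:.2f}  val={:.2f}  gap={:.2f}".format(epoch, tr, va, va - tr))

# If val error stops improving while train error keeps dropping
# (so the gap keeps growing), the model is starting to overfit;
# is that usually the point to stop or to revisit the hyperparameters?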