Using multiple callbacks at once


Noob

Nov 26, 2016, 9:43:57 PM
to Keras-users
I am currently training my network using cross-validation, but I have no idea how many epochs I need to train it for.

I see that you can pass callbacks to the model training, and the ones called ReduceLROnPlateau and EarlyStopping seem interesting. But can these be used at the same time?

import numpy as np
from sklearn.model_selection import KFold
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense
from keras.layers.advanced_activations import ELU
from keras.callbacks import ReduceLROnPlateau, CSVLogger, EarlyStopping

seed = 7
np.random.seed(seed)
kfold = KFold(n_splits=10, shuffle=False, random_state=None)
print "Splits"
cvscores_acc = []
cvscores_loss = []
hist = []
i = 0
for train, test in kfold.split(train_set_data_vstacked_normalized):

    print "Model definition!"
    model = Sequential()

    #act = PReLU(init='normal', weights=None)
    model.add(Dense(output_dim=400, input_dim=400, init="normal", activation=K.tanh))

    #act1 = PReLU(init='normal', weights=None)
    model.add(Dense(output_dim=400, input_dim=400, init="normal", activation=K.tanh))

    #act2 = PReLU(init='normal', weights=None)
    #model.add(Dense(output_dim=400, input_dim=400, init="normal", activation=K.tanh))

    # advanced activations such as ELU are added as their own layer
    model.add(Dense(output_dim=13, input_dim=400, init="normal"))
    model.add(ELU(alpha=100))

    print "Compiling"
    # metrics=['accuracy'] is needed so evaluate() returns both loss and accuracy
    model.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['accuracy'])
    print "Compile done!"

    print "Train start"

    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=1,
                                  mode='auto', epsilon=0.0001, cooldown=0, min_lr=0)
    csv_logger = CSVLogger('training_' + str(i) + '.csv')
    #early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=1, mode='auto')

    # note: as posted, the fold's "test" indices are used for fitting
    hist_current = model.fit(train_set_data_vstacked_normalized[test],
                             train_set_output_vstacked[test],
                             shuffle=False, validation_split=0.1,
                             nb_epoch=1000, verbose=1,
                             callbacks=[reduce_lr, csv_logger])

    hist.append(hist_current)

    loss, accuracy = model.evaluate(x=train_set_data_vstacked_normalized[train],
                                    y=train_set_output_vstacked[train], verbose=1)
    print
    print "loss: ", loss
    print "accuracy: ", accuracy
    print
    print model.summary()
    print "New Model:"
    cvscores_acc.append(accuracy)
    cvscores_loss.append(loss)
    print "Model stored"
    model.save("Model" + str(i))
    i = i + 1

print "%.2f%% (+/- %.2f%%)" % (np.mean(cvscores_acc), np.std(cvscores_acc))
print "%.2f%% (+/- %.2f%%)" % (np.mean(cvscores_loss), np.std(cvscores_loss))


So I would use EarlyStopping to stop the training if val_loss doesn't change for a while, and ReduceLROnPlateau to reduce the learning rate when no decrease in val_loss has been seen.
I tried it, but EarlyStopping didn't seem to be very fond of the model fit inside the for loop.
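
For what it's worth, a minimal sketch of the combination I am after (same Keras 1.x API as above; model is assumed to be a compiled Keras model, X and y are hypothetical placeholders for the training arrays, and the patience values are just examples):

    from keras.callbacks import ReduceLROnPlateau, EarlyStopping

    # lower the learning rate after 10 epochs without val_loss improvement ...
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=1)
    # ... and only stop the run if val_loss is still flat 30 epochs after its best value
    early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=30, verbose=1)

    model.fit(X, y, validation_split=0.1, nb_epoch=1000, verbose=1,
              callbacks=[reduce_lr, early_stop])

Giving EarlyStopping a larger patience than ReduceLROnPlateau should let the learning-rate drop take effect before training is abandoned.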

acha...@gmail.com

Mar 31, 2017, 4:39:40 PM
to Keras-users
I had a similar problem. I fixed it by doing something like this:
    from keras.callbacks import EarlyStopping, ModelCheckpoint

    verbose = 1                   # verbosity level for the callbacks
    filepath = 'best_model.h5'    # placeholder path where ModelCheckpoint saves the best weights

    earlyStopImprovement = EarlyStopping(monitor='loss', min_delta=0, patience=10, verbose=verbose, mode='auto')
    checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=verbose, save_best_only=True, mode='auto')

    callbacks_list = [checkpoint, earlyStopImprovement]
    model.fit(X1, Y1, epochs=5000, batch_size=10, verbose=2, callbacks=callbacks_list)

Note that in the fit call the callbacks argument is a single variable, callbacks_list, which holds the list of callbacks.
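
If you prefer, the list can also be written inline; a minimal sketch of the same call (same names as above):

    # identical call with the callback list spelled out inline
    model.fit(X1, Y1, epochs=5000, batch_size=10, verbose=2,
              callbacks=[checkpoint, earlyStopImprovement])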

Loser

Apr 1, 2017, 12:38:20 AM
to Keras-users, acha...@gmail.com
As for a solution, I just added early_stop to the callback list and everything worked.
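
For reference, a minimal sketch of what that change looks like against the fit call from the first post (reduce_lr and csv_logger as defined above; the patience value here is just an example):

    early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=20, verbose=1, mode='auto')

    hist_current = model.fit(train_set_data_vstacked_normalized[test],
                             train_set_output_vstacked[test],
                             shuffle=False, validation_split=0.1,
                             nb_epoch=1000, verbose=1,
                             callbacks=[reduce_lr, csv_logger, early_stop])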