Patch for interrupting and resuming


Sanne de Roever

Jul 22, 2014, 7:59:55 AM
to py-ne...@googlegroups.com
Please find below a patch that allows training to be interrupted and resumed. The goal of the patch is as follows: if the parameters are initialized very small, then restricting the number of epochs provides a way to apply regularisation, since one can stop the growth of the parameters as they go from very small to their intended size. After an interrupt, the validation set error can be computed (via net.sim) to check whether it is increasing or decreasing.

Cheers,

Sanne 

Index: neurolab/core.py
===================================================================
--- neurolab/core.py (revision )
+++ neurolab/core.py (revision )
@@ -252,7 +252,7 @@
     
     """
     
-    def __init__(self, Train, epochs=500, goal=0.01, show=100, **kwargs):
+    def __init__(self, Train, epochs=500, interrupt=None, goal=0.01, show=100, **kwargs):
         """
         :Parameters:
             Train: Train instance
@@ -274,6 +274,7 @@
         self.defaults['goal'] = goal
         self.defaults['show'] = show
         self.defaults['epochs'] = epochs
+        self.defaults['interrupt'] = interrupt
         self.defaults['train'] = kwargs
         if Train.__init__.__defaults__:
             #cnt = Train.__init__.func_code.co_argcount
@@ -335,16 +336,22 @@
             self.error.append(err)
             epoch = len(self.error)
             show = self.params['show']
+            interrupt = self.params['interrupt']
             if show and (epoch % show) == 0:
                 print("Epoch: {0}; Error: {1};".format(epoch, err))
             if err < self.params['goal']:
                 raise TrainStop('The goal of learning is reached')
             if epoch >= self.params['epochs']:
                 raise TrainStop('The maximum number of train epochs is reached')
+            if interrupt and epoch % interrupt == 0:
+                raise TrainStop('Training is interrupted at {} epochs.'.format(epoch))
-        
+
         train = self._train_class(net, *args, **self.params['train'])
         Train.__init__(train, epochf, self.params['epochs'])
-        self.error = []
+
+        #superfluous?
+        #self.error = []
+
         try:
             train(net, *args)
         except TrainStop as msg:

Sanne de Roever

Jul 22, 2014, 11:26:58 AM
to py-ne...@googlegroups.com
The intention is that you can call train several times.

Evgeny Zuev

Jul 22, 2014, 1:50:48 PM
to py-ne...@googlegroups.com
I don't understand why you need it. net.train(ip, out, interrupt=5) will stop after 5 epochs? Why not use the epochs parameter to do that?

Sanne de Roever

Jul 22, 2014, 2:27:56 PM
to py-ne...@googlegroups.com
What I wanted to do is the following: set epochs=500 and, at every 50, 100, 150, 200, ... epochs, interrupt the training to calculate the error on a validation set. If I call net.train again, it picks up training where it stopped, so I can see which number of epochs is best for my dataset. This way I can apply a form of regularisation in neurolab: one only has to start with very small initial weights.

Sanne de Roever

Jul 22, 2014, 2:35:58 PM
to py-ne...@googlegroups.com
The usage is like this:

epochs = 500
interrupt = 50
for i in range(epochs // interrupt):
    error = net.train(imgs, rxy, epochs=epochs, interrupt=interrupt, show=10, goal=0.001, lr=0.001)
    # now I can calculate the validation error here after each interrupt
    # and still reach 500 epochs in one run
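For completeness, the resume-and-validate pattern above can be sketched without neurolab at all. Here train_chunk and val_error are made-up stand-ins (not neurolab APIs) that only mimic the shape of the loop: training resumes where it stopped, and the held-out error is checked at each interrupt to find the best epoch count:

```python
def train_chunk(state, n_epochs):
    """Pretend to train for n_epochs; training error keeps shrinking."""
    state["epoch"] += n_epochs
    state["train_err"] = 1.0 / (1 + state["epoch"])
    return state["train_err"]

def val_error(state):
    """Pretend validation error: improves, then overfits after epoch 300."""
    e = state["epoch"]
    return 1.0 / (1 + e) if e <= 300 else 1.0 / 301 + (e - 300) * 1e-4

epochs, interrupt = 500, 50
state = {"epoch": 0, "train_err": 1.0}
best = (float("inf"), 0)
for _ in range(epochs // interrupt):
    train_chunk(state, interrupt)   # resumes where the last chunk stopped
    v = val_error(state)            # validation check at each interrupt
    if v < best[0]:
        best = (v, state["epoch"])
print(best[1])  # -> 300, the epoch count with the lowest validation error
```

With real neurolab code, the stand-ins would be replaced by net.train (for the chunk) and net.sim plus an error function on the validation set.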

Sanne de Roever

Jul 22, 2014, 2:48:35 PM
to py-ne...@googlegroups.com

Evgeny Zuev

Jul 23, 2014, 8:04:21 AM
to py-ne...@googlegroups.com
Ok. But I think a more universal way is to use a callback function as a parameter, something like this:
 
 def __init__(self, Train, epochs=500, goal=0.01, show=100, callback=None, **kwargs): 
 ...
     if callback:
         callback(self, net, epoch)

and you can use different training techniques inside the callback
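A minimal sketch of that callback idea, using a made-up stand-in trainer rather than neurolab's real Trainer class (only the TrainStop name mirrors neurolab; everything else is an assumption for illustration):

```python
class TrainStop(Exception):
    """Mirrors neurolab's stop exception: raised to end training early."""
    pass

def train(net, epochs, callback=None):
    """Stand-in trainer that invokes a user callback after every epoch."""
    errors = []
    for epoch in range(1, epochs + 1):
        err = 1.0 / epoch          # pretend the error shrinks each epoch
        errors.append(err)
        if callback:
            try:
                callback(net, epoch, err)
            except TrainStop:
                break              # the callback decided to stop early
    return errors

def stop_at_goal(net, epoch, err):
    """Example callback: stop once the error drops below 0.01."""
    if err < 0.01:
        raise TrainStop("goal reached at epoch {}".format(epoch))

errors = train(net=None, epochs=500, callback=stop_at_goal)
print(len(errors))  # -> 101, since 1/101 is the first error below 0.01
```

The same hook could just as easily compute a validation error and raise TrainStop when it starts rising, which covers the interrupt/resume use case without a dedicated parameter.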

 

Sanne de Roever

Jul 23, 2014, 11:51:17 AM
to py-ne...@googlegroups.com
Sounds better indeed. I'll post an example application later.