Regularization


Max Lotstein

Jun 15, 2014, 1:22:13 PM
to py-ne...@googlegroups.com
I have been trying to figure out how to implement some form of regularization and I have been unsuccessful.

Two questions:
(1) Does the code already have, for example, weight decay, implemented?
(2) If not, where would such an addition go?

Evgeny Zuev

Jun 17, 2014, 7:27:05 AM
to py-ne...@googlegroups.com
Neurolab does not currently support regularization methods, although the scipy.optimize-based algorithms (fmin_bfgs, fmin_cg, fmin_ncg) may have something along those lines.
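
For example, with those scipy-based methods a penalty can be built into the objective itself. A rough sketch (my own illustration, not Neurolab code; the names f, grad_f and reg are made up) of adding an L2 term before calling fmin_bfgs:

import numpy as np
from scipy.optimize import fmin_bfgs

def with_l2_penalty(f, grad_f, reg):
    # Wrap an error function and its gradient with an L2 (weight decay) term
    def f_reg(w):
        return f(w) + 0.5 * reg * np.dot(w, w)
    def grad_reg(w):
        return grad_f(w) + reg * w
    return f_reg, grad_reg

# Toy error over a flat weight vector, just to show the call
f = lambda w: np.sum((w - 1.0) ** 2)
grad_f = lambda w: 2.0 * (w - 1.0)

f_reg, grad_reg = with_l2_penalty(f, grad_f, reg=0.1)
w_opt = fmin_bfgs(f_reg, np.zeros(3), fprime=grad_reg, disp=False)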

About your implementation:

(1) I think 'regularizer' is not a network property; it is a parameter of the training process.

(2) The ff_grad function only calculates the gradient. I think regularization should go in the train function.

Example: TrainGD with a regularizer:

from neurolab import tool
from neurolab.core import Train


class TrainGDR(Train):
    """
    Gradient descent backpropagation with regularization

    """

    def __init__(self, net, input, target, lr=0.01, adapt=False, regularizer=0):
        self.adapt = adapt
        self.lr = lr
        self.reg = regularizer

    def __call__(self, net, input, target):
        if not self.adapt:
            # Batch mode: one gradient step per epoch over the whole data set
            while True:
                g, output = self.calc(net, input, target)
                e = self.error(net, input, target, output)
                self.epochf(e, net, input, target)
                self.learn(net, g)
        else:
            # Adaptive (online) mode: one step per training sample
            while True:
                for i in range(input.shape[0]):
                    g = self.calc(net, [input[i]], [target[i]])[0]
                    self.learn(net, g)
                e = self.error(net, input, target)
                self.epochf(e, net, input, target)
        return None

    def calc(self, net, input, target):
        g1, g2, output = tool.ff_grad(net, input, target)
        return g1, output

    def learn(self, net, grad):
        for ln, layer in enumerate(net.layers):
            # Gradient step plus a (tentative) regularization term
            layer.np['w'] -= self.lr * grad[ln]['w'] + self.reg * sum(grad[ln]['w'])
            layer.np['b'] -= self.lr * grad[ln]['b'] + self.reg * sum(grad[ln]['b'])
        return None
 
I'm not sure the self.reg * sum(grad[ln]['w']) term can be trusted, though.
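
For comparison, classic weight decay (L2 regularization) penalizes the weight values themselves rather than a sum of the gradient, so the update in learn would usually look like this (just a sketch of the textbook formulation, not tested against Neurolab):

    def learn(self, net, grad):
        for ln, layer in enumerate(net.layers):
            # Weight decay: w -= lr * (dE/dw + reg * w), i.e. shrink each
            # weight towards zero in proportion to its own value
            layer.np['w'] -= self.lr * (grad[ln]['w'] + self.reg * layer.np['w'])
            # Biases are usually left out of the decay term
            layer.np['b'] -= self.lr * grad[ln]['b']
        return None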

Sanne de Roever

Jul 22, 2014, 8:17:09 AM
to py-ne...@googlegroups.com, mlot...@gmail.com
Hi Max, 

I've made a little patch that allows for a form of regularization; see my other post. Maybe I can dig up some of the theory behind it.

Cheers,

Sanne
