Initializing neurolab.train.train_gdx in neurolab.net.newff


Arjun V

Mar 24, 2014, 4:30:28 PM
to py-ne...@googlegroups.com
Hi,

I would like to initialize neurolab.train.train_gdx with adapt=True parameter while using in neurolab.net.newff.
Could you provide an example of how to do this?

Thanks
Arjun

Arjun V

Mar 24, 2014, 5:34:07 PM
to py-ne...@googlegroups.com
Just to clarify:

        >>> net = nl.net.newff([[-0.5, 0.5], [-0.5, 0.5]], [5, 1])
        >>> err = net.train(input, target, show=15, adapt=True)
This doesn't work. The error message is:
TypeError: fmin_bfgs() got an unexpected keyword argument 'adapt'

Evgeny Zuev

Mar 25, 2014, 12:08:36 AM
to py-ne...@googlegroups.com
Hi!

The default training function for a multilayer perceptron is now train_bfgs (it does not support the 'adapt' option); to change it, use:
>>> net.trainf = nl.train.train_gdx
See also:


On 25 March 2014 at 3:34, Arjun V <arjun...@gmail.com> wrote:


Arjun V

Mar 25, 2014, 7:23:40 AM
to py-ne...@googlegroups.com
Hi Evgeny,

Thanks for the reply.

As per https://pythonhosted.org/neurolab/lib.html#neurolab.train.train_gdx, the default value of 'adapt' is False, and I presume the learning rate isn't adapted unless 'adapt' is set to True.

net.trainf = nl.train.train_gdx doesn't do that, right?

Is there a way to pass these parameters explicitly?
Kindly excuse my limited familiarity with **kwargs and the way function calls are initialized.

Thanks

Evgeny Zuev

Mar 25, 2014, 7:31:12 AM
to py-ne...@googlegroups.com
>>> net.trainf = nl.train.train_gdx changes the default training function to nl.train.train_gdx. To train the network with adaptation, use this:

>>> net = nl.net.newff([[-0.5, 0.5], [-0.5, 0.5]], [5, 1])
>>> net.trainf = nl.train.train_gdx
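
(Putting the two steps together, a minimal sketch; the random input/target arrays below are placeholder data for illustration only:)

>>> import numpy as np
>>> import neurolab as nl
>>> inp = np.random.uniform(-0.5, 0.5, (100, 2))            # placeholder inputs
>>> tar = (inp[:, 0] + inp[:, 1]).reshape(100, 1)           # placeholder targets
>>> net = nl.net.newff([[-0.5, 0.5], [-0.5, 0.5]], [5, 1])
>>> net.trainf = nl.train.train_gdx                         # switch from the default train_bfgs
>>> err = net.train(inp, tar, show=15, adapt=True)          # train_gdx accepts 'adapt'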

Arjun V

Mar 25, 2014, 7:35:07 AM
to py-ne...@googlegroups.com
Brilliant. Thanks a lot.

That makes total sense.

Arjun V

Mar 28, 2014, 1:07:36 PM
to py-ne...@googlegroups.com
Hi Evgeny,

I have 3 questions here:

1. My use case involves a scalar integer output from 0 to 5. Andrew Ng (in his Coursera course) suggests using a vector output like the example below.

(Assume the possible scalar values are 0, 1, 2, 3, 4, 5.)

0 = [1,0,0,0,0,0]
1 = [0,1,0,0,0,0]
2 = [0,0,1,0,0,0]
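
(A minimal sketch of building such one-hot target vectors with NumPy; the array names are only illustrative:)

>>> import numpy as np
>>> labels = np.array([0, 1, 2, 5])      # scalar targets in 0..5
>>> target = np.eye(6)[labels]           # label k -> row k of the 6x6 identity (one-hot)
>>> # target[3] is now [0., 0., 0., 0., 0., 1.], the encoding of label 5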

Since the target values are in the range [0, 1], I believe LogSig is the right transfer function, so that the model's output is in the same range when I run it.
Am I correct with this, or should I do some sort of transformation? If so, what should my target values look like, and which transfer function should I use?

2. If LogSig is indeed correct for my model, can you help me with the following error so that I use the LogSig transfer function correctly?

 net = nl.net.newff(meta,[43,43,43,6],transf=[nl.trans.LogSig,nl.trans.LogSig,nl.trans.LogSig,nl.trans.LogSig]) # how I initialize
 err = net.train(input[train_index], target_t[train_index], show=1, goal = 100, epochs=100000) #error in this line
ErrorStack:
TypeError                                 Traceback (most recent call last)
<ipython-input-48-0868ef6b2721> in <module>()
----> 1 err = net.train(input[train_index], target_t[train_index], show=1, goal = 100, epochs=100000)

/usr/local/lib/python2.7/dist-packages/neurolab/core.pyc in train(self, *args, **kwargs)
    163 
    164         """
--> 165         return self.trainf(self, *args, **kwargs)
    166 
    167     def reset(self):

/usr/local/lib/python2.7/dist-packages/neurolab/core.pyc in __call__(self, net, input, target, **kwargs)
    347         self.error = []
    348         try:
--> 349             train(net, *args)
    350         except TrainStop as msg:
    351             if self.params['show']:

/usr/local/lib/python2.7/dist-packages/neurolab/train/spo.pyc in __call__(self, net, input, target)
     68 
     69         x = fmin_bfgs(self.fcn, self.x.copy(), fprime=self.grad, callback=self.step,
---> 70                       **self.kwargs)
     71         self.x[:] = x
     72 

/usr/lib/python2.7/dist-packages/scipy/optimize/optimize.pyc in fmin_bfgs(f, x0, fprime, args, gtol, norm, epsilon, maxiter, full_output, disp, retall, callback)
    706             'return_all': retall}
    707 
--> 708     res = _minimize_bfgs(f, x0, args, fprime, callback=callback, **opts)
    709 
    710     if full_output:

/usr/lib/python2.7/dist-packages/scipy/optimize/optimize.pyc in _minimize_bfgs(fun, x0, args, jac, callback, gtol, norm, eps, maxiter, disp, return_all, **unknown_options)
    760     else:
    761         grad_calls, myfprime = wrap_function(fprime, args)
--> 762     gfk = myfprime(x0)
    763     k = 0
    764     N = len(x0)

/usr/lib/python2.7/dist-packages/scipy/optimize/optimize.pyc in function_wrapper(x)
    259     def function_wrapper(x):
    260         ncalls[0] += 1
--> 261         return function(x, *args)
    262     return ncalls, function_wrapper
    263 

/usr/local/lib/python2.7/dist-packages/neurolab/train/spo.pyc in grad(self, x)
     24     def grad(self, x):
     25         self.x[:] = x
---> 26         gr = tool.ff_grad(self.net, self.input, self.target)[1]
     27         return gr
     28 

/usr/local/lib/python2.7/dist-packages/neurolab/tool.pyc in ff_grad(net, input, target)
    228     output = []
    229     for inp, tar in zip(input, target):
--> 230         out = net.step(inp)
    231         ff_grad_step(net, out, tar, grad)
    232         output.append(out)

/usr/local/lib/python2.7/dist-packages/neurolab/core.pyc in step(self, inp)
    123                 signal = self.layers[ns].out if ns != -1 else inp
    124             if nl != len(self.layers):
--> 125                 self.layers[nl].step(signal)
    126         self.out = signal
    127         return self.out

/usr/local/lib/python2.7/dist-packages/neurolab/core.pyc in step(self, inp)
    231         """ Layer simulation step """
    232         assert len(inp) == self.ci
--> 233         out = self._step(inp)
    234         self.inp = inp
    235         self.out = out

/usr/local/lib/python2.7/dist-packages/neurolab/layer.pyc in _step(self, inp)
     49         self.s = np.sum(self.np['w'] * inp, axis=1)
     50         self.s += self.np['b']
---> 51         return self.transf(self.s)
     52 
     53 

TypeError: this constructor takes no arguments


3. My dataset has ~3.8k samples, which I use for training. Any idea what the goal (error value) should be for a decent model? I could get close to 600 with the above model while using TanSig(), though I think I was doing it wrong, since the given targets were in the range [0, 1] while TanSig produces output in the range [-1, 1].

Thanks a lot for your precious time and patient reply.
-Arjun

Arjun V

Mar 28, 2014, 1:15:12 PM
to py-ne...@googlegroups.com
Ahh... never mind about my 2nd question.
I figured out the correct way:

net = nl.net.newff(meta,[43,43,43,6],transf=[nl.trans.LogSig(),nl.trans.LogSig(),nl.trans.LogSig(),nl.trans.LogSig()])

I didn't instantiate the transfer functions previously.

My 1st and 3rd questions still hold.

Thanks

Evgeny Zuev

Mar 28, 2014, 2:57:56 PM
to py-ne...@googlegroups.com
Hi Arjun

1. You may use LogSig, or scale your target data to [-1, 1]:
>>> target = target * 2 - 1
Some authors say that TanSig is better for training; you may test it.
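
(For example, the same scaling applied to one-hot targets, together with the inverse mapping for reading predictions back; a small illustrative sketch:)

>>> import numpy as np
>>> target01 = np.eye(6)[np.array([0, 2])]   # one-hot targets in [0, 1]
>>> target_pm1 = target01 * 2 - 1            # rescale to [-1, 1] for a TanSig output layer
>>> out_pm1 = target_pm1                     # pretend these are network outputs
>>> out01 = (out_pm1 + 1) / 2                # map back to [0, 1]
>>> out01.argmax(axis=1)                     # recover the class labels
array([0, 2])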

Arjun V

Mar 30, 2014, 1:21:14 PM
to py-ne...@googlegroups.com
Hi Evgeny,

Thanks for the suggestion.
Though TanSig speeds up the gradient descent, it seems to converge at a higher error value.
Maybe because the target encodings are different in my case (e.g. [1,0,0,0,0,0] for LogSig vs [1,-1,-1,-1,-1,-1] for TanSig).
I thought the result would be agnostic to the activation function, at least in an ideal scenario.

Really appreciate your help all along.

-Arjun

Evgeny Zuev

Mar 31, 2014, 3:28:08 AM
to py-ne...@googlegroups.com
OK! Glad to help you.