1. My use case involves a scalar integer output in the range 0 to 5. Andrew Ng (in his Coursera course) suggests using a vector output like the example below.
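For illustration, here is a minimal sketch of the kind of vector output I mean, assuming one-hot encoding with numpy (the arrays and names here are just made-up examples, not my actual data):

import numpy as np

labels = np.array([0, 2, 5, 1])                 # scalar integer outputs in 0..5
num_classes = 6                                 # six possible values: 0..5
onehot = np.zeros((len(labels), num_classes))   # one row per sample
onehot[np.arange(len(labels)), labels] = 1      # e.g. label 2 -> [0, 0, 1, 0, 0, 0]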
Since the target vector entries are in the range [0, 1], I believe LogSig is the right transfer function, so that the trained model also produces outputs in that range.
Am I correct about this, or should I apply some sort of transformation? If so, what should my target values look like, and which transfer function should I use?
2. If LogSig is indeed the right choice for my model, can you help me with the following error so that I can use the LogSig transfer function correctly?
net = nl.net.newff(meta, [43, 43, 43, 6], transf=[nl.trans.LogSig, nl.trans.LogSig, nl.trans.LogSig, nl.trans.LogSig])  # how I initialize the network
err = net.train(input[train_index], target_t[train_index], show=1, goal = 100, epochs=100000)  # the error occurs on this line
Error stack:
TypeError                                 Traceback (most recent call last)
<ipython-input-48-0868ef6b2721> in <module>()
----> 1 err = net.train(input[train_index], target_t[train_index], show=1, goal = 100, epochs=100000)
/usr/local/lib/python2.7/dist-packages/neurolab/core.pyc in train(self, *args, **kwargs)
163
164 """
--> 165 return self.trainf(self, *args, **kwargs)
166
167 def reset(self):
/usr/local/lib/python2.7/dist-packages/neurolab/core.pyc in __call__(self, net, input, target, **kwargs)
347 self.error = []
348 try:
--> 349 train(net, *args)
350 except TrainStop as msg:
351 if self.params['show']:
/usr/local/lib/python2.7/dist-packages/neurolab/train/spo.pyc in __call__(self, net, input, target)
68
69 x = fmin_bfgs(self.fcn, self.x.copy(), fprime=self.grad, callback=self.step,
---> 70 **self.kwargs)
71 self.x[:] = x
72
/usr/lib/python2.7/dist-packages/scipy/optimize/optimize.pyc in fmin_bfgs(f, x0, fprime, args, gtol, norm, epsilon, maxiter, full_output, disp, retall, callback)
706 'return_all': retall}
707
--> 708 res = _minimize_bfgs(f, x0, args, fprime, callback=callback, **opts)
709
710 if full_output:
/usr/lib/python2.7/dist-packages/scipy/optimize/optimize.pyc in _minimize_bfgs(fun, x0, args, jac, callback, gtol, norm, eps, maxiter, disp, return_all, **unknown_options)
760 else:
761 grad_calls, myfprime = wrap_function(fprime, args)
--> 762 gfk = myfprime(x0)
763 k = 0
764 N = len(x0)
/usr/lib/python2.7/dist-packages/scipy/optimize/optimize.pyc in function_wrapper(x)
259 def function_wrapper(x):
260 ncalls[0] += 1
--> 261 return function(x, *args)
262 return ncalls, function_wrapper
263
/usr/local/lib/python2.7/dist-packages/neurolab/train/spo.pyc in grad(self, x)
24 def grad(self, x):
25 self.x[:] = x
---> 26 gr = tool.ff_grad(self.net, self.input, self.target)[1]
27 return gr
28
/usr/local/lib/python2.7/dist-packages/neurolab/tool.pyc in ff_grad(net, input, target)
228 output = []
229 for inp, tar in zip(input, target):
--> 230 out = net.step(inp)
231 ff_grad_step(net, out, tar, grad)
232 output.append(out)
/usr/local/lib/python2.7/dist-packages/neurolab/core.pyc in step(self, inp)
123 signal = self.layers[ns].out if ns != -1 else inp
124 if nl != len(self.layers):
--> 125 self.layers[nl].step(signal)
126 self.out = signal
127 return self.out
/usr/local/lib/python2.7/dist-packages/neurolab/core.pyc in step(self, inp)
231 """ Layer simulation step """
--> 233 out = self._step(inp)
234 self.inp = inp
235 self.out = out
/usr/local/lib/python2.7/dist-packages/neurolab/layer.pyc in _step(self, inp)
49 self.s = np.sum(self.np['w'] * inp, axis=1)
---> 51 return self.transf(self.s)
52
53
TypeError: this constructor takes no arguments
3. My training set has ~3.8k samples. Any idea what the goal (error value) should be for a decent model? I could get close to 600 with the above architecture when using TanSig(), though I think I was doing it wrong, since my target values are in the range [0, 1] while TanSig produces outputs in the range [-1, 1].
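For point 3, this is the kind of rescaling I had in mind for the TanSig case; just a minimal numpy sketch assuming a simple linear mapping from [-1, 1] to [0, 1] (out_tansig is a made-up placeholder for the network output):

import numpy as np

out_tansig = np.array([-1.0, -0.2, 0.4, 1.0])  # hypothetical TanSig outputs in [-1, 1]
out_01 = (out_tansig + 1.0) / 2.0              # linear rescale into [0, 1]
# equivalently, [0, 1] targets could be mapped to [-1, 1] via 2*t - 1 before training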
Thanks a lot for your time and patience.
-Arjun