Getting back weight values from the newp() Delta rule


PM

Jan 24, 2015, 6:21:06 AM1/24/15
to py-ne...@googlegroups.com

Hello! I just started playing with the neurolab and it looks fantastic! Great job!

What I cannot figure out (and I do apologize for my limited knowledge here) is how to get the values of weights between inputs and outputs, i.e., what function gives me back those values? Here is my (trivial) example, with 10 inputs, 2 outputs, and 24 trials:

# -*- coding: utf-8 -*-
import pandas as pd
import numpy as np
import neurolab as nl

# datafile
data = np.genfromtxt('input.txt', dtype=None)

# splitting inputs and outputs
inp = np.hsplit(data, np.array([10, 20]))[0].tolist()
outp = np.hsplit(data, np.array([10, 20]))[1].tolist()

# range of input values
inpRange = [[-1, 1], [-1, 1], [-1, 1], [-1, 1], [-1, 1], [-1, 1], [-1, 1], [-1, 1],
            [-1, 1], [-1, 1]]

# newp
net = nl.net.newp(cueRange, 2)
error = net.train(cues, outs, lr=0.1)

Discrimination becomes perfect after three epochs:
>>> error
[12.0, 6.0, 0.0]

So, can anyone explain which other functions might be useful?
And, again, how do I get back the discrimination weights between inputs and outputs?

Many thanks, PM

Evgeny Zuev

Jan 25, 2015, 11:39:41 PM1/25/15
to py-ne...@googlegroups.com
What is var 'cueRange'?

PM

Jan 26, 2015, 4:12:58 AM1/26/15
to py-ne...@googlegroups.com

I only followed what is said in the documentation:

neurolab.net.newp(minmax, cn, transf=<neurolab.trans.HardLim instance at 0x98b8a0c>)
    Create one layer perceptron

    Parameters:
        minmax: list ci x 2
            Range of input value


So, if I understand it correctly, this is where we need to specify the min/max possible values.
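As an aside, that minmax list does not have to be typed out by hand; for ten identical ranges a comprehension builds it (plain Python, nothing neurolab-specific):

```python
# the ten identical [-1, 1] ranges from the example above, built programmatically
n_inputs = 10
inpRange = [[-1, 1] for _ in range(n_inputs)]
print(len(inpRange))  # 10
```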

What I am trying to do here is to get the Delta rule to behave like the Rescorla-Wagner learning rule. The latter is quite important in the psychology of learning (learning theory). It is defined as:

\Delta V = \alpha (\lambda - V)

where \alpha is the learning rate, \lambda is the maximum associative strength, and V is the strength between a given input (cue) and the output (outcome). Learning theory states that the association strength changes as long as the outcome is still surprising given the cue. This model explains many phenomena in the psychology of learning.

Since the Delta rule is:

\Delta w = \alpha (t - y) x

where t is the target, y is the actual output, and x is the input, the similarity is obvious, and, again, many consider the Delta rule a sort of "neural realization" of the Rescorla-Wagner rule.
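To make the parallel concrete, here is a minimal NumPy sketch of the Rescorla-Wagner update for a single cue-outcome pair (the names alpha, lam, V are mine, not neurolab's); the update is applied once per reinforced trial:

```python
import numpy as np

def rescorla_wagner(alpha, lam, n_trials):
    """Associative strength V over a run of reinforced trials."""
    V = 0.0
    history = []
    for _ in range(n_trials):
        V += alpha * (lam - V)   # Delta V = alpha * (lambda - V)
        history.append(V)
    return np.array(history)

V = rescorla_wagner(alpha=0.3, lam=1.0, n_trials=20)
# V rises quickly at first and levels off near lambda: the classic
# negatively accelerated learning curve
```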

What is important for the average psychologist is to get the learning weights between cues and outcomes.


I hope this helps. Thanks, PM

PM

Jan 26, 2015, 4:38:41 AM1/26/15
to py-ne...@googlegroups.com



What would also be very helpful is a pointer to, or explanation of, which parameters in the neurolab implementation correspond to the above equation.

Thanks again for the great work! PM

Evgeny Zuev

Jan 30, 2015, 9:30:05 AM1/30/15
to py-ne...@googlegroups.com
Hi Petar!

To get the weights back, use:
>>> net.layers[0].np['w'] # for weights
and
>>> net.layers[0].np['b'] # for biases

About your equation:
newp only configures the network. The training process (the Delta rule) lives in the file train/delta.py (used via net.train(...)) and has the following parameters:

:Parameters:
        input: array like (l x net.ci)
            train input patterns
        target: array like (l x net.co)
            train target patterns
        epochs: int (default 500)
            Number of train epochs
        show: int (default 100)
            Print period
        goal: float (default 0.01)
            The goal of train
        lr: float (default 0.01)
            learning rate

dA = a * (A - T), where:

A is the 'input' array
T is the 'target' array
a is 'lr'

For a neural network the Delta rule is:
dA = a * (error(network.sim(A) - T))

The default 'error' is SSE (neurolab.error.SSE, the sum squared error function).
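For intuition, the per-pattern version of this update can be sketched in plain NumPy (this mirrors the mechanics described above but is not neurolab's actual code; the toy data and all names are made up):

```python
import numpy as np

def hardlim(x):
    """Hard-limit transfer: 1 where the input is >= 0, else 0."""
    return (x >= 0).astype(float)

def train_delta(X, T, lr=0.1, epochs=50):
    """Per-pattern delta-rule training of a one-layer perceptron.
    X: (l x ci) cue patterns, T: (l x co) outcome patterns."""
    W = np.zeros((T.shape[1], X.shape[1]))   # weights, co x ci
    b = np.zeros(T.shape[1])                 # biases
    for _ in range(epochs):
        for x, t in zip(X, T):
            y = hardlim(W @ x + b)    # the network's output for this cue
            e = t - y                 # target minus actual output
            W += lr * np.outer(e, x)  # dW = lr * error * cue
            b += lr * e
    return W, b

# toy data: 24 trials, 10 binary cues, and outcome j simply copies cue j
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(24, 10)).astype(float)
T = X[:, :2].copy()
W, b = train_delta(X, T)
# after training, W holds the learned cue-outcome weights (2 x 10)
```

Reading the learned weights here is just reading W, which is exactly what net.layers[0].np['w'] gives you in neurolab.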


Regards!

Petar Milin

Jan 30, 2015, 11:35:08 AM1/30/15
to py-ne...@googlegroups.com, zue...@gmail.com
Hello Evgeny,
This looks very promising indeed!

One additional question though:
if net.layers[0].np returns the weights and bias(es), I wonder whether those are only the values after initialization/configuration. That is, how can I get them after training, once I have called net.train(…)? net.train(…) gives back only the error, right?

And what does the parameter goal stand for? What does it serve?

Evgeny Zuev

Jan 30, 2015, 11:54:34 AM1/30/15
to py-ne...@googlegroups.com, zue...@gmail.com
net.train(...) changes net.layers[0].np, so net before training != net after training.
goal is needed to stop training: the training process will stop when the training error < goal.
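In other words, goal acts as an early-stopping threshold on the epoch error. A schematic version of that loop (a sketch with a made-up error sequence, not neurolab's source):

```python
def train_until_goal(update, sse, goal=0.01, epochs=500):
    """Record the error each epoch; stop early once it drops below goal."""
    errors = []
    for _ in range(epochs):
        errors.append(sse())
        if errors[-1] < goal:
            break
        update()
    return errors

# stand-in for a real network: the error simply halves each epoch
state = {"e": 12.0}
errors = train_until_goal(update=lambda: state.update(e=state["e"] / 2),
                          sse=lambda: state["e"])
# training stops long before the 500-epoch cap
```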