# -*- coding: utf-8 -*-
import numpy as np
import neurolab as nl
# data file (numeric columns; the default float dtype avoids a structured
# array, which np.hsplit cannot split column-wise)
data = np.genfromtxt('input.txt')
# split the 20 columns into inputs (first 10) and outputs (last 10);
# a single hsplit at column 10 gives both halves at once
inp, outp = np.hsplit(data, [10])
inp, outp = inp.tolist(), outp.tolist()
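As a quick sanity check of the column split, here is a minimal sketch on dummy data (the `(2, 20)` shape and values are illustrative, not from `input.txt`):

```python
import numpy as np

# dummy data: 2 rows, 20 columns, values 0..39
data = np.arange(40).reshape(2, 20)

# splitting at column 10 yields two (2, 10) halves:
# columns 0-9 become inputs, columns 10-19 become outputs
inp, outp = np.hsplit(data, [10])
```

Note that passing `[10, 20]` to `np.hsplit` produces a third, empty piece (columns from 20 on), which is why splitting at `[10]` alone is enough here.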
# range of input values: one [min, max] pair per input
inpRange = [[-1, 1] for _ in range(10)]
# single-layer perceptron over the ten inputs
net = nl.net.newp(inpRange, 2)
error = net.train(inp, outp, lr=0.1)
print(error)
# [12.0, 6.0, 0.0]
What I am trying to do here is to get the Delta rule to behave like the Rescorla-Wagner learning rule. The latter is quite important in the psychology of learning (learning theory). It is defined as:

\Delta V = \alpha (\lambda - \sum V)

where \alpha is the learning rate, \lambda is the maximum associative strength, and V is the strength of the association between a given input (cue) and the output (outcome). The theory states that the association strength changes only as long as the outcome is still surprising given the cues present. This model explains many phenomena in the psychology of learning.

Since the Delta rule is:

\Delta w_i = \alpha (t - y) x_i

the similarity is obvious, and many consider the Delta rule to be a sort of "neural realization" of Rescorla-Wagner. What is important for the average psychologist is to get the learned weights between cues and outcomes.

I hope this helps. Thanks, PM
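For what it's worth, the Rescorla-Wagner update itself is easy to run directly in plain NumPy, which gives you the cue-to-outcome strength matrix V without going through a network library. This is a minimal sketch under my own assumptions (binary cue/outcome coding, the `alpha`, `lam`, and toy-data values are illustrative):

```python
import numpy as np

def rescorla_wagner(cues, outcomes, alpha=0.1, lam=1.0, n_trials=1):
    """Learn a (n_cues x n_outcomes) strength matrix V.

    Per trial: dV = alpha * (lam * outcome - V_total),
    applied only to cues present on that trial (x_i = 1).
    """
    n_cues, n_out = cues.shape[1], outcomes.shape[1]
    V = np.zeros((n_cues, n_out))
    for _ in range(n_trials):
        for x, o in zip(cues, outcomes):
            # total associative strength of the cues present on this trial
            v_total = x @ V
            # prediction error: outcome is "surprising" while lam*o != v_total
            delta = alpha * (lam * o - v_total)
            # update only the cues present (x_i = 1 selects them)
            V += np.outer(x, delta)
    return V

# toy example: cue 0 is always paired with the outcome, cue 1 never occurs
cues = np.array([[1.0, 0.0]])
outcomes = np.array([[1.0]])
V = rescorla_wagner(cues, outcomes, alpha=0.5, n_trials=10)
# V[0, 0] approaches lam = 1.0; V[1, 0] stays at 0
```

The returned V is exactly the cue-outcome weight table a learning-theory analysis needs, and the loop makes the structural parallel to the Delta rule explicit: `lam * o` plays the role of the target t, and `v_total` the role of the prediction y.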