http://groups.google.com/group/comp.soft-sys.matlab/msg/d334cc8e741c03be
http://groups.google.com/group/comp.soft-sys.matlab/msg/a7dbae46b73e7f26
I've discovered, to my amazement, that
1. NEWFF can be used with the Gaussian activation
function RADBAS.
2. Consequently, Elliptical Basis Function Neural
Networks (EBFNNs) can be designed using NEWFF.
3. For the first hidden layer the activations have
the form
exp(-(W1*p+b1*ones(1,N)).^2)
where p is the I-by-N input matrix.
4. For additional hidden layers and/or an output
layer, the activations have the form
exp(-(W2*h+b2*ones(1,N)).^2)
where h is the previous layer's output.
5. In general, each unit's constant-activation
contours form a different elliptical shape.
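For readers without the Neural Network Toolbox, the layer computation in points 3 and 4 can be sketched in NumPy (my own translation, not toolbox code; recall radbas(n) = exp(-n.^2)):

```python
import numpy as np

def radbas_layer(W, X, b):
    """One radbas layer: exp(-(W*X + b*ones(1,N)).^2) over the N columns of X."""
    n = W @ X + b          # b is (units, 1); broadcasting replicates it across N columns
    return np.exp(-n**2)

# XOR inputs as in the example below: I = 2 inputs, N = 4 samples
p = np.array([[-1., 1., 1., -1.],
              [-1., -1., 1., 1.]])
W1 = np.zeros((2, 2))      # illustrative zero weights (not trained values)
b1 = np.zeros((2, 1))
h = radbas_layer(W1, p, b1)
print(h)                   # all ones, since radbas(0) = 1
```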
Consider the XOR problem
close all, clear all, clc
p = [-1 1 1 -1; -1 -1 1 1]
t = [ 0 1 0 1]
[I N] = size(p)
[O N] = size(t)
plot(p,t,'o')
axis([-1.5 1.5 -0.5 1.5])
hold on
MSE00 = mse(t-mean(t))
H = 2
Ntrials = 10
rand('state',0)
for j=1:Ntrials
net = newff(minmax(p),[H O],{'radbas' 'radbas'});
net.trainParam.goal = MSE00/100;
[net tr Y E] = train(net,p,t);
Nepochs(j,1) = tr.epoch(end);
R2(j,1) = 1- tr.perf(end)/MSE00;
end
summary = [Nepochs R2]
% summary =
% Nepochs R^2
% 56 0.33333 % Minimum gradient reached,
% 6 0.99210
% 2 0.99283
% 2 0.99786
% 3 0.99649
% 7 -8.8818e-16 % Minimum gradient reached
% 7 0.99736
% 2 0.99501
% 7 0.99107
% 4 0.99759
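The R^2 column above is measured against the naive constant-mean model, MSE00 = mse(t-mean(t)). A quick NumPy check of that reference value for this target (my own sketch):

```python
import numpy as np

t = np.array([0., 1., 0., 1.])
MSE00 = np.mean((t - t.mean())**2)    # MSE of always predicting mean(t)
print(MSE00)                          # 0.25 for this target

# R^2 of a fit with error vector e is 1 - mse(e)/MSE00;
# a perfect fit (e = 0) gives R^2 = 1, the constant model gives 0.
e = np.zeros_like(t)
R2 = 1 - np.mean(e**2) / MSE00
print(R2)                             # 1.0
```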
% For the last case (trial 10, whose trained net is
% still in the workspace after the loop)
figure(1)
plot(p,Y,'r*')
W1 = net.IW{1,1}
b1 = net.b{1}
W2 = net.LW{2,1}
b2 = net.b{2}
h = radbas(W1*p+b1*ones(1,N))
y = radbas(W2*h+b2*ones(1,N))
e = t-y
R210 = 1-mse(e)/MSE00
% W1 = -1.8243 -1.651
% 2.8339 2.841
% b1 = 2.1723
% 1.8503e-3
%
% W2 = 2.42 -1.7106
% b2 = 1.8388
%
% h = 1.4045e-14 0.018382 0.18305    0.0040804
%     1.0538e-14 0.99997  1.0104e-14 0.99992
% y = 0.034006 0.9706 0.0054801 0.98107
% e = -0.034006 0.029402 -0.0054801 0.018926
%
% R210 = 0.99759
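As a sanity check, pushing the printed (truncated) weights through the two radbas layers in NumPy reproduces the tabulated h and y to the displayed precision (my own verification sketch):

```python
import numpy as np

radbas = lambda n: np.exp(-n**2)      # radbas(n) = exp(-n.^2)

p = np.array([[-1., 1., 1., -1.],
              [-1., -1., 1., 1.]])
t = np.array([0., 1., 0., 1.])

# Weights and biases as printed above (truncated)
W1 = np.array([[-1.8243, -1.651],
               [ 2.8339,  2.841]])
b1 = np.array([[2.1723], [1.8503e-3]])
W2 = np.array([[2.42, -1.7106]])
b2 = np.array([[1.8388]])

h = radbas(W1 @ p + b1)               # hidden-layer outputs, 2-by-4
y = radbas(W2 @ h + b2).ravel()       # network outputs, length 4
print(y)                              # approx [0.034 0.971 0.005 0.981]
assert np.array_equal(y > 0.5, t > 0.5)   # rounds to the XOR targets
```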
Hope this helps,
Greg