I have been working on Artificial Neural Net applications in Shen.
The "half official" scientific literature solution of my current work,
solving NP-complete problems with the Hopfield ANNs, seems to be
flawed. This entry discusses that point, and its ramifications.
------------------------------------------------------------
I wrote an entry about the apparently flawed literature solution to the
Hopfield TSP problem and posted it to the Internet neural nets group.
People have read that entry of mine, but no one has answered.
I don't quite know what to do. There is something wrong with this
business.
The Hopfield ANN has an energy function, E(H), and it can be shown that
every update operation of the Hopfield ANN lowers, or at least never
raises, the energy E(H). That much is clear and can be shown rigorously.
Therefore, the Hopfield ANN will converge to a minimum-energy state.
(Physical systems do the same.)
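For concreteness, the energy I have in mind is the standard Hopfield form
(my own notation here, not a quotation from any particular paper):

    E(H) = - 1/2 * sum(i) sum(j) W(i,j) * X(i) * X(j)  +  sum(i) Theta(i) * X(i)

where W(i,j) are the symmetric connection weights, Theta(i) the neuron
thresholds, and X(i) the neuron outputs. Each asynchronous update of a
single neuron can be shown never to increase this quantity.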
Now, it has been said that this property can be utilized to solve
NP-complete problems.
And yet, the solution ordinarily presented in the literature is rather
suspicious.
There is something fishy about this.
I'm thinking that I might now simply implement the ANN backpropagation
algorithm and sell it. The backpropagation algorithm (the best
presentation of it is in P. H. Winston's AI textbook, 3rd ed.) is a real
workhorse of the ANN industry; it is a general-purpose trainable ANN
capability.
But I'm uncertain about what I really should do. Anyway, the Lambda
will get the ANN software; it's just that there is this fishy point
about the Hopfield ANNs and NP-complete problems.
The entry below is what I posted to the Internet neural nets group:
------------------------------------------------------------
Dear Wizards in the ANN group:
I have been writing a Hopfield ANN package for solving minimization
problems with Hopfield ANNs.
It very much seems to me that the solution given in the professional
literature for the Hopfield NP-complete problems program is clearly
wrong. Very surprisingly!
I shall explain below:
The Traveling Salesman Problem Hopfield net is expected to converge to
real values close to 1.0, which are understood as 1, and to real values
close to 0.0, which are understood as 0.
So, if town number A is the Nth town on the optimal TSP route, then the
output of the neuron N(A,N) will converge to 1.0, and otherwise to 0.0.
The following is from my Hopfield program, written in the Shen
programming language (created by Dr Mark Tarver):
------------------------------------------------------------
\\
\\ Let there be n cities, and calculate the TSP for them.
\\
\\ There is the distance matrix D(i,j), where the distance
\\ from city i to city j is D(i,j). (A small representation
\\ sketch in Shen follows after this comment block.)
\\
\\ There are n^2 neurons in the resulting network.
\\ The neurons are as follows. The neuron matrix for 4 cities:
\\
\\ #1 #2 #3 #4
\\
\\ City 1 0 1 0 0 2nd in TSP route
\\
\\ City 2 1 0 0 0 1st in TSP route
\\
\\ City 3 0 0 0 1 4th in TSP route
\\
\\ City 4 0 0 1 0 3rd in TSP route
\\
\\ The Luger Hopfield ANN works in the {-1, 1} Hamming space -- the
\\ Reference /1/ ANN (see h.docx) works in the [0, 1] real number
\\ space. A real number close to 1.0 is considered to be 1;
\\ a real number close to 0.0 is considered to be 0.
\\
------------------------------------------------------------
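As an aside, and not a quotation from the actual package: below is a
minimal sketch of one possible Shen representation of the distance matrix
D(i,j), namely a list of rows, together with a hypothetical lookup
helper. The names dmat-nth and dmat-ref are illustrative only.

------------------------------------------------------------
\\ A sketch only -- dmat-nth and dmat-ref are illustrative names,
\\ not functions from the package.
\\
\\ dmat-nth returns the Nth element of a list, counting from 1.

(define dmat-nth
{ number --> (list A) --> A }
1 [X | _] -> X
N [_ | Rest] -> (dmat-nth (- N 1) Rest))

\\ dmat-ref returns D(i,j) from a distance matrix stored as a
\\ list of rows.

(define dmat-ref
{ (list (list number)) --> number --> number --> number }
D I J -> (dmat-nth J (dmat-nth I D)))

\\ Example with a symmetric 4-city matrix:
\\ (dmat-ref [[0 1 2 3] [1 0 4 5] [2 4 0 6] [3 5 6 0]] 2 3) => 4
------------------------------------------------------------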
Now then: the formula given in the professional literature for the
activation of the Hopfield TSP neurons has five negative terms.
Consequently, the activation of each and every neuron will be a small
negative real number ("small" with respect to its absolute value).
The threshold function used here is the usual sigmoidal function; from my
Hopfield Shen file:
------------------------------------------------------------
\\ The Hopfield ANN threshold function in the reference under
\\ discussion, i.e. the Reference /1/ paper:
\\
\\ The output of the neuron (i,j), i.e. X(i,j), is computed as
\\ in the conventional sigmoidal ANN design:
\\
\\ X(i,j) = (1 + tanh(lambdap * a(i,j))) / 2
\\ where
\\
\\ a(i,j) = the activation of the neuron in question;
\\ lambdap = 3.0 in the paper in question.
\\
(define htreshold
{ number --> number --> number }
Activation Lambda ->
(/ (+ 1 (tanh (* Activation Lambda))) 2.0)
) \\ end function
------------------------------------------------------------
With the above definitions: if every activation is a small negative
real number (small with respect to its absolute value), and the threshold
function is the above sigmoidal function, then all outputs of all
neurons are real numbers somewhat less than 0.5. In one run, my
Hopfield network converged to all neurons having the output value
0.43.
But then it is impossible for the network to converge to real values
close to 0.0 or close to 1.0.
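As a quick numeric sanity check (this snippet is not part of the package
proper; it merely assumes the htreshold definition above, and the tanh it
calls, have been loaded), mapping htreshold over a few small negative
activations with lambdap = 3.0 gives outputs that all lie below 0.5:

------------------------------------------------------------
\\ Sanity check -- assumes htreshold (and tanh) from above are loaded.
\\ Every output lies below 0.5, nowhere near 0.0 or 1.0.

(map (/. A (htreshold A 3.0)) [-0.01 -0.05 -0.1 -0.2])
\\ => approximately [0.485 0.426 0.354 0.231]
------------------------------------------------------------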
The above discussion would seem to indicate that this solution -- the one
ordinarily given in the professional literature -- is wrong.
Any shrewd help would be highly appreciated.
kind regards,
Dr Antti Ylikoski
HELSINKI
Finland
The EU
------------------------------------------------------------
Mark Tarver recommended that I post this to the Shen group.
A. J. Y.