Shen, the ANNs, the Hopfield ANNs


Antti Ylikoski

Mar 23, 2016, 1:19:13 PM
to Shen
I have been working on the Artificial Neural Nets applications of
Shen.

The "half official" scientific literature solution of my current work,
solving NP-complete problems with the Hopfield ANNs, seems to be
flawed.  This entry discusses that point, and its ramifications.

------------------------------------------------------------

I wrote an entry about the evidently flawed literature solution of
the Hopfield TSP problem, and posted it to the neural nets group.

People have read that entry of mine, but no one has answered.

I don't quite know what to do.  There is something wrong with this
business.

The Hopfield ANN has an expression for the energy of the network,
E(H), and it can be shown that every operation of the Hopfield ANN
lowers the energy E(H).  That much is clear, and can be shown very
clearly.  Therefore, the Hopfield ANN will converge to the minimum
energy state.  (Physical systems do.)
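
For concreteness, the energy in question is the standard quadratic
form E(H) = -1/2 * Σ{i,j} W(i,j) * V(i) * V(j), with threshold terms
omitted; it is non-increasing under asynchronous updates when W is
symmetric with a zero diagonal.  Below is a minimal sketch of it in
Shen, over lists, with W given as a list of rows -- my illustration
here, not code from the package:

------------------------------------------------------------

\\ Dot product of two equal-length vectors.
(define dot
  { (list number) --> (list number) --> number }
  [] [] -> 0
  [X | Xs] [Y | Ys] -> (+ (* X Y) (dot Xs Ys)))

\\ E(H) = -1/2 * V . (W V): the standard Hopfield energy,
\\ threshold terms omitted.
(define energy
  { (list (list number)) --> (list number) --> number }
  W V -> (* -0.5 (dot V (map (/. Row (dot Row V)) W))))

------------------------------------------------------------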

Now people have said that this property can be utilized to solve
NP-complete problems.

And yet, the solution ordinarily presented in the literature is
rather suspicious.

There is something fishy about this.

I'm thinking I might now simply implement the ANN backpropagation
algorithm, and sell that.  The backpropagation algorithm (the best
presentation of it is in P. H. Winston's AI textbook, 3rd ed.) is a
real workhorse of the ANN industry: it is a general-purpose trainable
ANN capability.

But I'm uncertain about what I really should do.  Anyway, the Lambda
will get the ANN software; it's just that there was this fishy point
about the Hopfield ANNs and NP-complete problems.

The entry below is what I posted to the Internet neural nets group:

------------------------------------------------------------

Dear Wizards in the ANN group:

I have been writing a Hopfield ANN package for solving minimization
problems with Hopfield ANNs.

It very much seems to me that the solution for the Hopfield
NP-complete problems program, as given in the professional
literature, is clearly wrong.  Very surprisingly!

I shall explain below:

The Traveling Salesman Problem Hopfield net is expected to converge
to real values close to 1.0, which are understood as 1, and to real
values close to 0.0, which are understood as 0.

So, if town #A is town number #N on the optimal TSP route, then the
output of the neuron N(A,N) will converge to 1.0, and otherwise to
0.0.

The following is from my Hopfield program, written in the Shen
programming language by Dr Mark Tarver:

------------------------------------------------------------

\\
\\ Let there be n cities, and calculate the TSP for them.
\\
\\ There is a distance matrix D(i,j), where the distance
\\ from city i to city j is D(i,j).
\\
\\ There are n^2 neurons in the resulting network.
\\ The neurons are as follows.  The neuron matrix for 4 cities:
\\
\\            #1      #2       #3        #4      
\\
\\ City 1      0       1        0         0   2nd in TSP route
\\
\\ City 2      1       0        0         0   1st in TSP route
\\
\\ City 3      0       0        0         1   4th in TSP route
\\
\\ City 4      0       0        1         0   3rd in TSP route
\\
\\ The Luger Hopfield ANN works in the {-1, 1} Hamming space -- the
\\ h.docx Reference /1/ ANN works in the [0, 1] real-number
\\ space.  A real number close to 1.0 is considered to be 1;
\\ a real number close to 0.0 is considered to be 0.
\\ 

------------------------------------------------------------
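
To make the encoding concrete: once the outputs have converged and
been rounded to 0 and 1, the route can be read off row by row.  Here
is a small hypothetical helper for doing so (mine, for illustration;
not from the package), taking the output matrix as a list of rows:

------------------------------------------------------------

\\ Position (1-based) of the first output above 0.5 in a row;
\\ 0 if the row has none.
(define position-of
  { number --> (list number) --> number }
  _ [] -> 0
  N [X | _] -> N where (> X 0.5)
  N [_ | Rest] -> (position-of (+ N 1) Rest))

\\ For each city (row), the stop number it occupies on the route.
\\ The 4-city example matrix above yields [2 1 4 3].
(define tour-positions
  { (list (list number)) --> (list number) }
  Rows -> (map (position-of 1) Rows))

------------------------------------------------------------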

Now then: the formula given in the professional literature for the
activation of the Hopfield TSP neurons has five negative terms.
Consequently, the activation of each and every neuron will be a small
negative real number.  ("Small" with respect to its absolute value.)

The threshold function used here is typically the sigmoidal function;
from my Hopfield Shen file:

------------------------------------------------------------

\\ The Hopfield ANN threshold function used in this discussion,
\\ that of the Reference /1/ paper:
\\
\\ The output of the neuron (i,j), i.e. X(i,j), is computed as
\\ in the conventional sigmoidal ANN design:
\\
\\ X(i,j) = (1 + tanh(lambdap * a(i,j))) / 2
\\ where
\\
\\ a(i,j) = the activation of the neuron in question;
\\ lambdap = 3.0 in the paper in question.
\\

(define htreshold
  { number number --> number }
  Activation Lambda ->
    (/ (+ 1 (tanh (* Activation Lambda))) 2.0)
) \\ end function

------------------------------------------------------------

With the above definitions: if every activation is a small negative
real number (small with respect to its absolute value), and the
threshold function is the above sigmoidal function -- then all the
outputs of all the neurons are real numbers somewhat less than 0.5.
In one run, my Hopfield network converged with all neurons having the
output value 0.43.
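
(As a sanity check: an output of 0.43 corresponds to
tanh(Lambda * a) = -0.14, i.e. with the paper's lambdap = 3.0 a
uniform activation of roughly a = -0.047, since
(1 + tanh(3.0 * -0.047)) / 2 is about 0.43.  The activation value
here is my own inference from the observed output, not a figure from
the paper.)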

But then it is impossible for the network to converge to real values
close to 0.0 or close to 1.0.

The above discussion would seem to indicate that this solution --
the one ordinarily given in the professional literature -- is wrong.

Any shrewd help would be highly appreciated.

kind regards,

Dr Antti Ylikoski
HELSINKI
Finland
The EU

------------------------------------------------------------

Mark Tarver recommended that I post this to the Shen group.

A. J. Y.


Mark Tarver

Mar 23, 2016, 1:57:27 PM
to Shen
Without pretending to solve your problem:

(define htreshold
  { number number --> number }
  Activation Lambda ->
    (/ (+ 1 (tanh (* Activation Lambda))) 2.0))

should be

(define threshold
  { number --> number --> number }
  Activation Lambda ->
    (/ (+ 1 (tanh (* Activation Lambda))) 2.0))
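
(In Shen, function types are curried, so a two-argument function is
typed A --> B --> C rather than A B --> C.)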

Mark

Mark Tarver

Mar 23, 2016, 2:05:46 PM
to Shen
If the group cannot help - it seems a very specialised problem - might I suggest mailing the author of this?


Mark

Antti Ylikoski

Mar 23, 2016, 2:18:50 PM
to Shen
Thanks, Mark: typos are one of the programmer's nuisances!!

AJY

Antti Ylikoski

Mar 23, 2016, 11:37:51 PM
to Shen
In the Hopfield TSP papers, the notation below is usually used:

u(x,i) ==def the activation of the neuron (x,i)

du/dt  ==def its time derivative

V(x,i) ==def the output of the neuron (x,i)

where V = (threshold-function u)

and the threshold function is ordinarily a sigmoidal function.
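
From memory, the dynamics in these papers are roughly of the form

  du(x,i)/dt = - u(x,i)/tau
               - A * Σ{j ≠ i} V(x,j)
               - B * Σ{y ≠ x} V(y,i)
               - C * (Σ{y,j} V(y,j) - (n + σ))
               - D * Σ{y} d(x,y) * (V(y,i+1) + V(y,i-1))

(a sketch of the standard Hopfield-Tank form, not a quotation; for
the exact constants and signs, see the paper's own formulas (7) and
(8)).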

See formulas (7) and (8) in the Polish paper, on page 5.  It seems
that the change to the u(x,i) invariably consists of negative terms,
and therefore is negative.  And the quantity u(x,i)(t+1) looks as if
it were negative.

But then the net obviously will not converge to values close to 0.0
and 1.0.

Are we dealing with scientific fraud?

I have, by the way, three times in my life seen a doctoral
dissertation rejected during the public defence of the thesis because
of scientific fraud.

yours, AJY
FINLAND
The EU

Mark Tarver

Mar 24, 2016, 6:16:10 AM
to Shen
'Fraud' is a hard word - maybe an error - I've come across errors before in published work.  Have you written to this guy?  Maybe he has an explanation.

Mark

agjf....@gmail.com

Mar 24, 2016, 12:02:29 PM
to Shen
I don't think it's true to say the change will be invariably negative.  The term -C * (Σ{x,j} V(x,j)(t) - (n + σ)) in equations (7) and (8) will become positive as soon as the average activation reaches a sensible level.

Thomas Bartscher

Mar 26, 2016, 2:30:21 PM
to Shen


On Wednesday, March 23, 2016 at 18:19:13 UTC+1, Antti Ylikoski wrote:
The Hopfield ANN has an expression for the energy of the network,
E(H), and it can be shown that every operation of the Hopfield ANN
lowers the energy E(H).  That much is clear, and can be shown very
clearly.  Therefore, the Hopfield ANN will converge to the minimum
energy state.  (Physical systems do.)

Now people have said that this property can be utilized to solve
NP-complete problems.
This claim seems suspicious.
First: this either means that there are no operations on a Hopfield ANN that is already in the minimum energy state, or that there is no minimum energy state.
Second: even if that is just a faulty formulation, local minima are quite common in NP-complete problems, I think.  Is it proven that Hopfield ANNs leave those if they are not the global minimum?

Antti Ylikoski

Mar 26, 2016, 3:41:43 PM
to Shen
My references say that getting trapped in local minima can happen with Hopfield nets.

This actually is a common problem with many minimization algorithms; many methods have been devised to overcome it, such as random restarts.

Dr A. J. Y.
HELSINKI
Finland, the EU

Robert Herman

Apr 7, 2016, 9:46:34 AM
to Shen
I'm not a Shen expert, but from my past experience with Hopfield ANNs, you train them to 'remember' the solution, and anything near that produces the expected result.  I know that with graduated descent algorithms the randomness sometimes causes a jump to a lower, global energy attractor.  Are you using -1 to 1, or 0 to 1, as your weighting in the Hopfield ANN, and what type of thresholding?  I'll look over your post more thoroughly later.  Would I be able to replicate it in the OS Shen?  I'll give it a go if so.
I am currently working with LFE (Lisp Flavoured Erlang), working my way through Gene Sher's book, the "Handbook of Neuroevolution Through Erlang", which uses TWEANNs (Topology and Weight Evolving ANNs).  These change their topology as well as their weights, which makes Lisps good for working with them.  I always use a Lisp for this stuff: Genetic Programming a la Koza in the 1990s, and now what seems to be another Lisp renaissance.
I love Shen (I bought the 2nd and 3rd Ed. of TBoS), but I only work with the OS version for now, until I find employment again, or a great use for Shen.  I think it is ideal for all of the new computational intelligence work.  Now to find an excuse to use it as soon as I can!

Rob

Robert Herman

Apr 7, 2016, 10:21:18 AM
to Shen
Sorry, I meant 'stochastic gradient descent' in my previous message.  Also, here's an oldie-but-goodie article on Hopfield networks, local minima and simulated annealing (SA) that might be of interest: http://ecee.colorado.edu/~ecen4831/hoplecs/hoplec2.html

Antti Ylikoski

May 29, 2016, 5:58:34 PM
to Shen

I'm proceeding with my work on the Shen ANN business.

I attach to this newsgroup entry two public-domain papers concerning
what one can do with Hopfield nets.

There are a number of points about this science.  But right now I'm
thinking: what if I write a general-purpose Hopfield net,

* functionally
* with lists
* and type-secure
* and with a working and tested demo

and then let the customer implement whatever he/she wants with it:
image compression, or optimization problems, or associative pattern
recall, or whatever.  A sketch of what I have in mind is below.
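
A minimal sketch of the kind of thing I mean -- a synchronous update
step over lists in the {-1, 1} space; the names and conventions are
illustrative only, not the final package:

------------------------------------------------------------

\\ Dot product of two equal-length vectors.
(define dot
  { (list number) --> (list number) --> number }
  [] [] -> 0
  [X | Xs] [Y | Ys] -> (+ (* X Y) (dot Xs Ys)))

\\ Sign threshold into {-1, 1}.
(define sgn
  { number --> number }
  X -> 1 where (>= X 0)
  _ -> -1)

\\ One synchronous update of the whole state vector V under the
\\ weight matrix W, given as a list of rows.
(define hopfield-step
  { (list (list number)) --> (list number) --> (list number) }
  W V -> (map (/. Row (sgn (dot Row V))) W))

\\ Iterate to a fixed point.  Note: synchronous updates can fall
\\ into a 2-cycle; the energy argument strictly applies to
\\ asynchronous (one-neuron-at-a-time) updates.
(define hopfield-run
  { (list (list number)) --> (list number) --> (list number) }
  W V -> (let Next (hopfield-step W V)
           (if (= Next V) V (hopfield-run W Next))))

------------------------------------------------------------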

yours, A. J. Y.
Finland, the EU
Hopfield and Tank.pdf
icip_2014_hop.pdf

agjf....@gmail.com

May 29, 2016, 7:55:29 PM
to Shen
I'm not sure how realistically you can expect to write such a thing that is both useful and general-purpose.  Anyone with the time and the expertise to make it work for any specific case might find it simpler to roll their own.