Subject: Re: What is Needed to Teach a Computer to Read?
From: c...@kcwc.com (Curt Welch)
Date: 10 Oct 2012 16:44:03 GMT
casey <jgkjca...@yahoo.com.au> wrote:
> On Oct 9, 4:38 pm, c...@kcwc.com (Curt Welch) wrote:
> > casey <jgkjca...@yahoo.com.au> wrote:
> > > On Oct 9, 2:25 pm, c...@kcwc.com (Curt Welch) wrote:
> > > > [...]
> > > > See the last page of the paper:
> > > > http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/unsupervised_icml2012.pdf
> > > There is nothing in that article that conflicts
> > > with what I have written.
> > > Knowing ALL the actual weights or connections isn't
> > > knowing the features it extracts.
> > > You seem to confuse "beyond all human knowing the
> > > details" with "beyond all human understanding the
> > > principles discovered by the ANN".
> > I'm not confusing it, John. You are just choosing to pretend the details
> > that are beyond your understanding aren't important and don't need to
> > be understood.
> The details aren't important. It is not that they are beyond
> understanding; they are just not relevant.
A learning program can create software that does things you can't.
If you could understand it, you would program it yourself instead of having
to use a learning process to create it for you.
The details are important. Without the details, the neural net would not
be able to do the task - such as hand-written digit classification. And if
you could understand the details, you could code the solution yourself.
But these things are too complex for us to understand, which is why we
can't code them.
How many times do I have to repeat myself before you catch on to what is
being said?
> Detail is thrown
> away; that is the whole trick to seeing similarity between things.
The neural networks DO NOT THROW AWAY THE DETAILS, John. They use the
details to make the system do its job correctly. They throw away the
details only when they produce the answer "6" (meaning the complex input
was classified as a 6). But the implementation of the machine uses all the
details to make the determination that it was a 6 and not something else.
Our brain fails to translate the details into spoken words and concepts
that represent them because it's doing the same thing these neural
networks are doing. It's classifying a lot of complexity down to a few
words like "cat detector".
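That compression from full detail down to a single symbol can be sketched
in a few lines. This is a toy illustration, not a trained network - the
weights and "images" below are made up for the example:

```python
def classify(pixels, weights):
    """Score each class with a weighted sum over ALL the input details,
    then discard those details by returning only the winning label."""
    scores = [sum(p * w for p, w in zip(pixels, ws)) for ws in weights]
    return max(range(len(scores)), key=lambda i: scores[i])

# Toy 4-pixel "images" and two made-up class detectors:
weights = [
    [1.0, -1.0, 0.5, 0.0],   # hypothetical "class 0" detector
    [-0.5, 1.0, 1.0, 0.2],   # hypothetical "class 1" detector
]
print(classify([0.9, 0.1, 0.2, 0.0], weights))  # -> 0
```

Every weight participates in producing the score, but only the single
winning label survives to the output.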
Other programs we can very much understand, and code by hand. These digit
classifiers that work as well as the neural networks do, we cannot.
They are just beyond our understanding.
> > > The actual weights and connections may vary widely
> > > between ANNs that have discovered the same features.
> > Yes. But if I give you a pen and paper, there is no way in hell you
> > could understand the concept well enough to fill in a set of numbers
> > that will actually make the network work.
> You have no evidence we couldn't understand the algorithms
> discovered by an ANN. The "numbers" just determine the
> logic of the algorithm and if you understand the logic you
> type or wire that in.
Yes, if you can understand the logic represented in 1000 real numbers, you
could code it by hand. Go ahead and demonstrate to me that you, or anyone,
can understand the logic.
> > It is beyond your understanding.
> Repeating something doesn't make it true.
7±2. Human brains have very real limits, John. How can you not grasp
this? How is it that you are assuming human brains have
unlimited processing and storage capacity? Do you not understand the brain
is just a machine, and like all machines it has very real limits - just
as all computers have only a limited amount of memory?
A computer with only 10 bytes of memory can't calculate the digits of pi
out to a million places. It doesn't have enough memory. All machines
have limited memory, and thus they are limited in what they can do.
For any size machine, there is always a problem beyond its ability to
process - to understand.
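The arithmetic behind that kind of limit is easy to check (a rough sketch;
it ignores any encoding overhead):

```python
import math

# Storing a million decimal digits takes about 10**6 * log2(10) bits,
# far more than a hypothetical 10-byte (80-bit) machine can hold.
digits = 1_000_000
bits_needed = math.ceil(digits * math.log2(10))
print(bits_needed // 8, "bytes needed")   # over 400,000 bytes
print(bits_needed // 8 > 10)              # True: 10 bytes is hopeless
```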
In the case of the human brain, understanding these large neural networks
to the level needed to duplicate their performance by hand-coding a program
that doesn't use learning is just beyond what a human brain can do. Our
brains are just not large enough to understand and process patterns of
such great complexity.
> > These neural networks are beyond our understanding, and we can not
> > hand-code them. We can understand how to write the learning algorithm,
> > and let the learning algorithm do the work of "coding" the solutions
> > for us. But we can't understand the coded solution it produced. We can
> > only understand the top 1% abstract "concept" of what it has done,
> > by saying it's created a "cat face" detector and other vague ideas
> > like that.
> Just looking at the weights may not tell you anything but
> understanding the functional result of those weights may
> tell you a lot. That we don't know how to translate those
> weighted connections into a higher level statement at this
> point in time doesn't mean one day we won't.
Yes, but the fact that you WANT it to be possible doesn't make it possible
either, John. I WANT to fly (without the help of a plane), yet there's
no evidence we can or ever will be able to. Yet I could use your same
absurd argument to take the position, "maybe we can fly, we just
haven't figured out how to flap our arms correctly yet". And I could say
to you the same silly nonsense you say to me: "repeating the fact that we
can't fly doesn't make it true".
John, many people who have worked with learning systems have reported the
fact that these systems are able to create solutions they don't
understand. It's been done with software. It's been done with electrical
hardware - they create circuits that work, where the engineers don't even
understand how they could possibly work, because the circuits have taken
advantage of side-effects in the chips that the engineers didn't even
know were there (like crosstalk between wires).
Trial and error evolution easily produces complexity beyond human
understanding. For something really small, like a network with 10
parameters, it's likely we could study it for a long time, get a basic
sense of what it was doing and why it was working, memorize the weights,
and then, any time we wanted, sit down and code the net from memory. But
when you get to 1000 parameters, a human would not be able to understand
the details, and would never be able to hand-code the same solution.
Give it a million parameters, and no human could even come close to
understanding it, because of how all the parameters interact with each
other.
The very fact that these solutions work by making all the parameters
interact with each other is why they quickly exceed our ability to
understand. When we do engineering, or write software systems with
millions of lines of code, we limit the design so that the interactions
between modules are very constrained and simple. No single part of the
system has more than a small handful of modules interacting with it at the
same time. We write a method in a large software system, and it only
interacts with a few objects that are passed to it, and a few methods it
calls. So the number of items we have to keep track of in our head as we
study that one function is limited to the range of 10 or 100 things.
But these neural networks are not divided into simple modules like that to
make them easy for us to understand. They have thousands of parameters
that all cross-interact with each other at the same time, making the
number of interactions we have to understand grow combinatorially. It
quickly exceeds our brain's ability to cope.
We can't hand-code one of these networks, because when we change one
parameter, it affects almost everything the machine does. The code is
"holistic" in that it's not divided into modules that each relate to a
small part of the system's behavior, which would make it easy for a human
with our limited brains to understand.
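The combinatorics behind that claim can be made concrete. Counting only
pairwise interactions (a lower bound - larger groups of parameters
interact too):

```python
from math import comb

# When every parameter can interact with every other, the number of
# pairwise interactions alone grows as n choose 2.
for n in (10, 1_000, 1_000_000):
    print(f"{n} parameters -> {comb(n, 2)} possible pairwise interactions")
```

A modular design, by contrast, keeps each part coupled to only a handful
of others, so the count a reader has to hold in their head stays roughly
constant no matter how big the system gets.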
It's just a type of system that is, again, beyond our understanding. We
can understand the principles that allow it to work, but we can not
understand the details that make it work. They are too complex for a human
to grasp.
> > > The important thing is that the ANN has discovered
> > > the features where humans may have failed and we
> > > need to extract those features or methods so we
> > > can understand them.
> > A big neural network will extract 100 MILLION features, John.
> Did you count them?
Each node in a neural network is a feature detector, John. If you build a
network with 100 million nodes, it will detect 100 million features.
The fact that you think we would need to "count" the features shows a
distinct lack of understanding of what neural nets do and how they work.
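What "one node = one feature detector" means can be shown with a single
node. The weights here are hypothetical, set by hand for the example -
which is exactly the job training normally does for us:

```python
import math

def node(inputs, weights, bias):
    """One network node: a weighted sum squashed by a sigmoid. The
    weights define the feature it detects; the output says how strongly
    that feature is present in the input."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Hand-set weights that respond to "left half bright, right half dark":
edge_detector = ([1.0, 1.0, -1.0, -1.0], -0.5)
print(node([1, 1, 0, 0], *edge_detector))  # high response (> 0.5)
print(node([0, 0, 1, 1], *edge_detector))  # low response (< 0.5)
```

A net with 100 million such nodes has, by the same logic, 100 million
weight vectors, each acting as its own detector.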
> > There's no
> > way a human can "understand" 100 million different features. We can
> > pick 100 of the features, and study them, and come to some weak vague
> > generalized understanding of what they are like, but we will never
> > understand what all the 100 million features extracted from a billion
> > images "mean". Nor will we gain a full understanding of even one of
> > the features. There's no reason we would even want to understand it.
> Most likely there are only a few features and all the recognition
> is found in combining those features (measurements).
> For example if the ANN happened to wire up to measure the
> area of each of the characters in this text how well do you think it
> might be able to discriminate between them all? And that is
> only ONE feature.
Neural networks don't wire themselves up randomly, John. They don't just
pick some feature at random to "learn". The learning algorithms use
statistical methods that force the network to learn features that
distribute the information from the data across all the features evenly.
It's a compression-like process that computes an optimal set of features
to best represent all the information in all the data.
Your "most likely it's only a few features" comment is total nonsense and
shows your total lack of understanding of this technology. If you build a
network with 100 million nodes, and train it on a billion complex images,
it will form itself into 100 million unique, equally weighted features.
Not just "only a few".
Such a technique optimizes the definition of the features to maximize the
system's power to discriminate between all the data in the data set.
> > For example, even in the cat face detector, there is likely some very
> > non-cat information hidden in there to help the network distinguish
> > between other pictures that look something like a cat face. Such as
> > maybe a cat-face logo that looks a lot like a cat, but is not an
> > actual cat. The network might be able to correctly tell the difference
> > between that logo and a real cat, and the information it uses to make
> > that discrimination is coded partially into the feature we called the
> > "cat face" feature, but this is the type of subtle point we are likely
> > to never understand unless we take the time to study how the network
> > detects that cat-logo as not being a cat.
> > For a complex system like this, we can isolate, and study, and learn a
> > lot about some limited, isolated behavior, but the whole thing is too
> > complex to understand fully.
> > Humans can understand simple things, but they just can't understand
> > things once they become too complex. Most things in the universe are
> > just too complex for us to ever understand. So we just extract out the
> > simple features we can understand, and work with those, and call the
> > complexity we can't understand "noise".
> The noise is filtered and thresholded out.
You show no understanding of how these statistical unsupervised learning
classifiers work, John. Nothing is noise to them. They use every last bit
of information in the training set to define the behavior of the
classifier.
> > A prime reason AI progress has been so slow is exactly because the
> > brain is too complex for a human to understand. People keep trying
> > to "understand" the different algorithms and processes at work, but
> > only ever manage to get the tip of the iceberg, and leave out all the
> > real complexity that makes us human.
> My impression is you haven't any idea of the work being done
> on understanding the higher level functioning modules of the
> brain because you want to believe they exist.
> > If we had the power to understand the full complexity of the brain, we
> > could just sit down with a million programmers and code a machine that
> > acted like a human. But we can't.
> > We can, however, do the same thing evolution did, and figure out how
> > to build a machine that writes its own code, by trial and error,
> > without even trying to "understand" any of it. That's the beauty of
> > these learning systems: they can build systems that no one
> > understands, and the process that creates them doesn't "understand" in
> > any sense either.
> > That is how evolution works. It doesn't "design" complex systems
> > by understanding what is needed and building it. It evolves complex
> > systems that are beyond our full understanding by trial and error.
> We understand a lot about how things that evolved work.
That's the error of not knowing what you don't know. All you have to
measure is the size of what you do know, and it seems to be "a lot of
stuff". But what you don't know is often 1000's of times greater than what
you do know.
> Given the same problem evolution has come up with similar solutions
> we might have come up with such as a lens to focus an image on an
> array of sensors or a pump using valves to circulate nutrients.
> > The brain configures itself using the same type of trial and error
> > learning process, and the "intelligent system" it turns itself into is
> > beyond our understanding.
> Saying "is beyond our understanding" again and again doesn't make it
> true.
There is not ONE example of a human that has shown an ability to understand
the operation of one of these networks. NOT ONE. For you to deny this,
and say, "they might understand it later", is silly.
What is your answer to explain why no one has ever been able to understand
any of these networks after they have been trained? Why is it that no
human has ever been able to hand-program one of these larger networks to
make it perform as well as the training process can program them? Why is
it that these networks can perform functions that no human has ever been
able to hand-code?
No one has hand-coded a backgammon program that plays better than
TD-Gammon, for example. Why do you think that is? It's because WE DON'T
UNDERSTAND HOW TO PLAY BACKGAMMON well enough to hand-code a solution like
this. The game of backgammon is too complex for us to understand and
hand-code a solution for.
When we play the game, we don't understand what we are doing. We pick
moves based on our gut instincts. Our gut instincts are a neural network,
trained by years of experience playing the game. But we, as humans, don't
understand our own instincts, and can't translate them into computer code
that plays as well as we play. Only by building machines that
learn can we duplicate the power of human performance.
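The core of that learning process, as used in TD-Gammon, is the
temporal-difference update. The sketch below shows the TD(0) form on a
made-up value table - the states and numbers are illustrative; Tesauro's
program applied the same idea to a neural network's weights rather than a
table:

```python
def td_update(value, state, next_state, reward, alpha=0.1, gamma=1.0):
    """Nudge the estimated value of `state` toward what the position it
    led to turned out to be worth - learning from experience, with no
    hand-coded backgammon knowledge."""
    target = reward + gamma * value[next_state]
    value[state] += alpha * (target - value[state])

# Hypothetical positions: we currently think "mid_game" is worthless,
# but play reveals it leads to a position we value at 0.8.
value = {"mid_game": 0.0, "good_position": 0.8}
td_update(value, "mid_game", "good_position", reward=0.0)
print(round(value["mid_game"], 3))  # 0.08: the estimate moved toward 0.8
```

Repeat that over millions of self-play games and the value estimates come
to encode a "gut instinct" that no one ever wrote down.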
> > We can certainly understand a little of it, but not much of
> > it, which is why human behavior has always looked so "magical". It's
> > too complex for any human to understand.
> > Even the small neural networks quickly escape our understanding. We
> > can train a small network to do digit recognition, study the weights,
> > and even get to the point of being able to say, "OK, I understand it
> > all now, I'll show I understand it by writing my own code to duplicate
> > what it's doing."
> > No human can do that, unless they simply memorize all the weights, and
> > hand-code the same weights into their own version of the network
> > (which is not what I would call "understanding").
> Stop thinking at the level of weights and you might start to
> understand.
Sure, ignore what you don't understand and just pretend it's not there.
That's the way to "understand", isn't it! Stick your head in the sand and
pretend it's not there. That's all you are doing here, John.
You have this powerful cognitive dissonance working against you that seems
to block your ability to see the obvious. Humans have a very limited
ability to "understand". We are just weak signal processors with
very limited abilities. Most of what happens in the universe is just way
beyond our understanding. We can not describe why it happened, we can not
predict what will happen next; it's just way too complex (by many orders
of magnitude) for us to understand.
We can't understand the weather, for example. It's way too complex. It's
chaotic. We can only understand the forces at work, and model them on a
"weather simulator" that runs faster than reality, so that the
machine can make predictions for us. We gave up long ago believing we
could understand the weather and predict it ourselves. It's a system too
chaotic for us to predict. Most of the world is too chaotic for us to
understand.
These neural networks are likewise too chaotic for a human to understand.
As is the human brain: it's a chaotic system that is too complex for a
human to understand. As with the weather, we can understand the
underlying forces that the chaotic process is built on, and we can
duplicate the process, but we can never understand the chaotic process
itself. It is too complex for humans to understand.
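The weather point can be demonstrated with the simplest chaotic system
there is, the logistic map: fully understanding the one-line rule does not
let you predict the trajectory, because a starting difference of one part
in a billion gets amplified until it fills the whole range:

```python
def logistic(x, r=4.0):
    # The entire "law of physics" for this toy system.
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9   # two starts differing by one part in a billion
max_gap = 0.0
for step in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
print(max_gap)  # the 1e-9 difference has grown by many orders of magnitude
```

We understand the rule completely, yet the only way to know what the
system does is to run it - the same position taken above on simulating,
rather than understanding, the weather or a trained network.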
Curt Welch http://CurtWelch.Com/