Newsgroups: sci.lang, comp.ai.philosophy, comp.ai.nat-lang
From: casey <jgkjca...@yahoo.com.au>
Date: Tue, 9 Oct 2012 20:45:56 -0700 (PDT)
Local: Tues, Oct 9 2012 11:45 pm
Subject: Re: What is Needed to Teach a Computer to Read?
On Oct 9, 4:38 pm, c...@kcwc.com (Curt Welch) wrote:
> casey <jgkjca...@yahoo.com.au> wrote:
> > On Oct 9, 2:25 pm, c...@kcwc.com (Curt Welch) wrote:
> > > [...]
> > > See the last page of the paper:
> > > http://static.googleusercontent.com/external_content/untrusted_dlcp/res
> > There is nothing in that article that conflicts [...]
> > Knowing ALL the actual weights or connections isn't [...]
> > You seem to confuse "beyond all human knowing the [...]
> I'm not confusing it John. You are just choosing to pretend the details [...]

The details aren't important. It is not that they are beyond
understanding, they are just not relevant. Detail is thrown
away; that is the whole trick to seeing similarity between
different examples.
> > The actual weights and connections may vary widely
> > between ANNs that have discovered the same features.
> Yes. But if I give you a pen and paper, there is no way in hell you [...]

You have no evidence we couldn't understand the algorithms
discovered by an ANN. The "numbers" just determine the
logic of the algorithm, and if you understand the logic you
can type or wire that in.
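A toy sketch of that point (the weights below are hand-picked for illustration, not taken from any real trained network): once you read the logic the numbers encode, you can type that logic in directly.

```python
# A single artificial unit whose numeric weights encode a readable
# piece of logic. Weights are hypothetical, chosen for illustration.

def ann_unit(x1, x2):
    """Threshold unit with weights 1.0, 1.0 and bias -1.5."""
    w1, w2, bias = 1.0, 1.0, -1.5
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

def hand_coded(x1, x2):
    """The same logic, read off the weights and typed in: logical AND."""
    return 1 if (x1 and x2) else 0

# The two agree on every input: the "numbers" were just one
# encoding of logic a human can understand and re-implement.
for a in (0, 1):
    for b in (0, 1):
        assert ann_unit(a, b) == hand_coded(a, b)
```

The point of the sketch is only that the weights and the hand-coded rule are two encodings of the same function.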
> It is beyond your understanding.

Repeating something doesn't make it true.
> These neural networks are beyond our understanding, and we cannot
> hand-code them. We can understand how to write the learning algorithm, and
> let the learning algorithm do the work of "coding" the solutions for us.
> But we can't understand the coded solution it produced. We can only
> understand the top 1% of abstract "concept" of what it has done, by saying
> it's created a "cat face" detector and other vague ideas like that.

Just looking at the weights may not tell you anything, but
understanding the functional result of those weights may
tell you a lot. That we don't know how to translate those
weighted connections into a higher level statement at this
point in time doesn't mean one day we won't.
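A minimal sketch of that distinction, with hypothetical hand-picked weights: staring at the raw numbers says little, but probing the unit's behaviour recovers a higher level statement of what it computes.

```python
# Probing a unit's function instead of staring at its weights.
# The weights are hypothetical, chosen for illustration only.

def unit(x, w, bias):
    """Simple threshold unit: fires if the weighted sum is positive."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + bias
    return 1 if s > 0 else 0

# Opaque-looking numbers...
w, bias = [0.7, -0.7], 0.0

# ...but probing with test inputs reveals the functional story.
probes = [([1, 0], "first input only"),
          ([0, 1], "second input only"),
          ([1, 1], "both inputs"),
          ([0, 0], "neither input")]
for x, label in probes:
    print(label, "->", unit(x, w, bias))

# Only "first input only" fires: behaviourally, this is an
# "x1 exceeds x2" detector -- a higher level statement recovered
# from the functional result rather than from the raw weights.
```

The same probing idea scales (in spirit, not in ease) to asking what stimuli a learned feature responds to.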
> > The important thing is that the ANN has discovered
> > the features where humans may have failed and we
> > need to extract those features or methods so we
> > can understand them.
> A big neural network will extract 100 MILLION features john. There's no
> way a human can "understand" 100 million different features. We can pick
> 100 of the features, and study them, and come to some weak vague
> generalized understanding of what they are like, but we will never
> understand what all the 100 million features extracted from a billion
> images "mean". Nor will we gain a full understanding of even one of the
> features. There's no reason we would even want to understand it.

Did you count them? Most likely there are only a few
features, and all the recognition is found in combining
those features (measurements).
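A small sketch of that idea, with made-up measurements (nothing here is from an actual vision system): no single feature does the recognising; the combination of a few crude measurements does.

```python
# Recognition from a few combined measurements of a binary image.
# The two features and the class rules are hypothetical illustrations.

def features(grid):
    """Two crude measurements: fill ratio and top-heaviness."""
    h, w = len(grid), len(grid[0])
    total = sum(sum(row) for row in grid)
    fill = total / (h * w)
    # Fraction of the "ink" that sits in the top half of the image.
    top = sum(sum(row) for row in grid[:h // 2]) / max(total, 1)
    return fill, top

def classify(grid):
    """Neither measurement alone decides; their combination does."""
    fill, top = features(grid)
    if fill > 0.8:
        return "solid block"
    if top > 0.6:
        return "top-heavy"
    return "other"

solid = [[1, 1],
         [1, 1]]
hat = [[1, 1],
       [1, 1],
       [0, 0],
       [0, 0]]
print(classify(solid))  # -> solid block
print(classify(hat))    # -> top-heavy
```

Each measurement on its own is ambiguous; it is the combination that separates the classes, which is the sense in which a few features can carry the recognition.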
> For example, even in the cat face detector, there is likely some very
> non-cat information hidden in there to help the network distinguish between
> other pictures that look something like a cat face. Such as maybe a
> cat-face logo that looks a lot like a cat, but is not an actual cat. The
> network might be able to correctly tell the difference between that logo
> and a real cat, and the information it uses to make that discrimination is
> coded partially into the feature we called the "cat face" feature, but this
> is the type of subtle point we are likely to never understand unless we
> take the time to study how the network detects that cat-logo as not being a
> cat.

If the ANN happened to wire up to measure the noise, the
noise is filtered and thresholded out.
> For a complex system like this, we can isolate, and study and learn a lot [...]
> Humans can understand simple things, but they just can't understand things [...]
> A prime reason AI progress has been so slow, is exactly because the brain
> is too complex for a human to understand. People keep trying to
> "understand" the different algorithms and processes at work, but only ever
> manage to get the tip of the iceberg, and leave out all the real complexity
> that makes us human.

My impression is you haven't any idea of the work being done
on understanding the higher level functioning modules of the
brain, because you don't want to believe they exist.
> If we had the power to understand the full complexity of the brain, we
> could just sit down with a million programmers and code a machine that
> acted like a human. But we can't.

We understand a lot about how things that evolved work.
> We can however, do the same thing evolution did, and figure out how to [...]
> That is how evolution works. It doesn't "design" complex systems by [...]

Given the same problem, evolution has come up with similar
solutions.
> The brain configures itself using the same type of trial and error learning
> process, and the "intelligent system" it turns itself into, is beyond our
> understanding. We can certainly understand a little of it, but not much of
> it, which is why human behavior has always looked so "magical". It's too
> complex for any human to understand.

Saying "is beyond our understanding" again and again doesn't
make it true. Stop thinking at the level of weights and you
might start to see how it could be understood.
> Even the small neural networks quickly escape our understanding. We can't [...]
> No human can do that, unless they simply memorize all the weights, and [...]