OT: Google builds neural network with 1 billion connections

Erwin

Jun 29, 2012, 7:28:47 PM
to opencog
I think Google is the world's number one AI company. They have the
brainpower and the compute capacity to make big leaps in the AI field.
They trained a neural network with one billion connections for a week
on 16,000 processors, using unlabeled stills from YouTube videos
(unsupervised learning), and it discovered high-level features such as
the concept of 'cat'. After some additional training with labeled
data, they broke the record for labeling pictures.
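
As a rough illustration of the technique -- not the paper's actual
9-layer sparse architecture with local receptive fields and pooling,
just the core idea of learning features by reconstructing unlabeled
inputs -- here is a minimal single-layer, tied-weight autoencoder in
Python/NumPy, run on stand-in data:

# Toy illustration of unsupervised feature learning by reconstruction.
# The Google network was vastly deeper and larger; this only shows the
# basic mechanism: no labels, just "rebuild the input".
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 64, 25          # e.g. 8x8 image patches -> 25 features
lr, n_steps, batch = 0.1, 2000, 256

W = rng.normal(0, 0.1, (n_hidden, n_in))   # tied weights: decoder is W.T
b_h, b_o = np.zeros(n_hidden), np.zeros(n_in)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

data = rng.random((10000, n_in))  # stand-in for unlabeled video patches

for _ in range(n_steps):
    x = data[rng.integers(0, len(data), batch)]
    h = sigmoid(x @ W.T + b_h)                 # encode
    x_hat = sigmoid(h @ W + b_o)               # decode
    d_out = (x_hat - x) * x_hat * (1 - x_hat)  # backprop of squared error
    d_hid = (d_out @ W.T) * h * (1 - h)
    W -= lr * (d_hid.T @ x + h.T @ d_out) / batch
    b_o -= lr * d_out.mean(axis=0)
    b_h -= lr * d_hid.mean(axis=0)

print("learned feature matrix:", W.shape)  # rows are feature detectors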

More info:
https://plus.google.com/u/0/117790530324740296539/posts/EMyhnBetd2F
http://googleblog.blogspot.be/2012/06/using-large-scale-brain-simulations-for.html

Paper:
http://research.google.com/archive/unsupervised_icml2012.pdf

Ben Goertzel

Jun 29, 2012, 11:02:22 PM
to ope...@googlegroups.com
That is some quite cool work, indeed ;)

However, I think that a hierarchical recurrent neural net is not an
AGI architecture. The neural net these guys used is similar to the
DeSTIN vision algorithm that Itamar Arel developed and that we're now
integrating with OpenCog -- but the one Google used is actually less
sophisticated; it was just implemented at greater scale....

IMO, this sort of hierarchical pattern recognition architecture (be it
DeSTIN or Andrew Ng's network that Google used) is suitable as a
perceptual cortex for certain kinds of sense modalities (good for
vision and audition, not so good for olfaction and haptics...).... I
think it just corresponds to one among many aspects of human-like
general intelligence.

Google has a lot of smart staff and a lot of money, but I think that
unless they explicitly aim to build an AGI architecture, they're not
going to get one. I see no evidence right now that they are making an
explicit attempt of this nature -- rather they're focusing on more
specialized problems. This is in line with Peter Norvig's idea, which
he communicated to me several times, that AGI can be achieved by
scaling up and piecing together best-of-breed narrow-AI algorithms.
I'm unsure this can work; and I think that if it does work, it will be
neither the fastest nor the best route...

-- Ben



--
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche

Tim Josling

Jun 30, 2012, 1:27:13 AM
to ope...@googlegroups.com

> Erwin <eni...@gmail.com> Jun 30 01:28AM +0200        
> I think Google is the world's number one AI company. They have the brainpower and the compute capacity to make big leaps in the AI field.

And think of the data they have!

Tim Josling

Matt Mahoney

Jul 5, 2012, 1:54:06 PM
to ope...@googlegroups.com
Has anyone looked at using cloud services (Amazon AWS?) for doing
massively parallel AI experiments?

Google's vision model is quite tiny compared to human vision. A
human-equivalent model would train on about 10 billion high-resolution
frames at 10 per second. Each eye has 137 million rods and cones,
although the retina reduces this to 1 million low-level patterns
(spot, edge, movement). This could be reduced further by simulating a
fovea and eye movements toward the parts of the image with high
information content. Still, a neural network the size of a human brain
would have at least 10^5 times as many connections, implying 10^9
cores, unless a more efficient algorithm can be found. So far I know
of no evidence that such algorithms even exist, though plenty of
people have speculated. Estimates of the computing power required for
AGI have mostly been wild guesses.
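
For concreteness, that arithmetic spelled out in Python (every
constant is one of the rough estimates above, not a measurement):

# Back-of-envelope check of the estimates above.
years, fps = 10, 10
frames = years * 365 * 24 * 3600 * fps
print(f"training frames: {frames:.1e}")  # ~3.2e9 per decade of video;
                                         # a few decades -> ~10^10

ratio = 1e5               # "at least 10^5 times as many connections"
cores = 16_000 * ratio    # naive linear scaling from Google's 16,000 cores
print(f"implied cores:   {cores:.1e}")   # ~1.6e9 -- order 10^9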


-- Matt Mahoney, mattma...@gmail.com

Charles Esterbrook

Jul 6, 2012, 11:15:13 AM
to ope...@googlegroups.com
On Thu, Jul 5, 2012 at 10:54 AM, Matt Mahoney <mattma...@gmail.com> wrote:
> On Sat, Jun 30, 2012 at 1:27 AM, Tim Josling <tim.j...@gmail.com> wrote:
>>
>>> Erwin <eni...@gmail.com> Jun 30 01:28AM +0200
>>> I think Google is the world's number one AI company. They have the
>>> brainpower and the compute capacity to make big leaps in the AI field.
>>
>> And think of the data they have!
>
> Has anyone looked at using cloud services (Amazon AWS?) for doing
> massively parallel AI experiments?

Have you calculated the cost?

--
Charles Esterbrook
http://charles-esterbrook.com

Matt Mahoney

Jul 6, 2012, 7:55:29 PM
to ope...@googlegroups.com
Google's experiment -- 16,000 cores trained for 3 days -- would cost
about $92,000 on AWS, assuming standard (small) instances at $0.08 per
hour.
http://aws.amazon.com/ec2/pricing/

They probably ran a lot of experiments, however. 100 experiments would
cost around $10M. I guess they can afford it with a market cap of $180
billion.

This is small compared to a model of human vision. A better model
would include a fovea and a model of eye movements. Their experiment
was like flashing a small photo on a screen for an instant and asking
people to identify it without any context or real-world knowledge.
Let's assume 10^9 images (a few frames per second for a decade)
instead of 10^7, with 10 times as many low-level features (similar to
the optic nerve). That is 1000 times as much data, so you need a
neural network 1000 times as large, run for 1000 times as long -- 10^6
times the computing power. This would cost $100 billion per
experiment.
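
Spelled out in Python (2012 AWS on-demand pricing, plus the naive
assumption that cost scales linearly with cores and hours):

# Cost arithmetic for the estimates above; all figures are rough.
cores, hours, price_per_core_hour = 16_000, 3 * 24, 0.08
one_run = cores * hours * price_per_core_hour
print(f"one training run: ${one_run:,.0f}")        # $92,160

print(f"100 experiments:  ${100 * one_run:,.0f}")  # ~$9.2M, call it $10M

# 1000x the data on a 1000x larger network run 1000x as long is ~10^6x
# the compute, assuming cost scales linearly.
print(f"human-scale run:  ${one_run * 1e6:,.0f}")  # ~$92B, call it $100B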


-- Matt Mahoney, mattma...@gmail.com

Linas Vepstas

Jul 8, 2012, 2:46:06 PM
to ope...@googlegroups.com
On 29 June 2012 22:02, Ben Goertzel <b...@goertzel.org> wrote:

> IMO, this sort of hierarchical pattern recognition architecture (be it
> DeSTIN or Andrew Ng's network that Google used) is suitable as a
> perceptual cortex for certain kinds of sense modalities (good for
> vision and audition, not so good for olfaction and haptics...)....

Why not olfaction and haptics? Most sports are about haptics. I've
been seriously involved in one sport, rowing, which haptically is a
lot like swimming. It has taken me about 5 years of rowing 3x or 4x a
week, maybe 1000 strokes per session -- so maybe 5 x 3 x 50 x 1000 =
750K repetitions of the basic movement -- before I have finally been
able to become aware of (and thus control) "what's happening" during
the strokes (and I'm not done yet). It took this long despite strong
conscious effort and focus, despite endless coaching and videotaping,
comments and critiques (and trials-by-fire, aka races). So I'm tempted
to conclude that there is some rather sophisticated neural net in the
haptic area, trying to integrate stimuli and pick out perceptual
patterns. It takes a huge amount of training. Despite the coaching, I
don't think it's at all fair to call this "supervised training"; one
simply has to wait until the neurons learn whatever it is they learn.

This has become apparent as I've also started trying to swim well...
I've been swimming since I was 5 or 6, but it's become clear that my
level of swimming is that of a "novice", and I am now painfully aware
that it would take me another 3-5 years of training 2x to 4x a week
before I finally learn how to swim fast. There's haptic feedback, but
it's just hard to make sense of.

 
> This is in line with Peter Norvig's idea, which he communicated to me
> several times, that AGI can be achieved by scaling up and piecing
> together best-of-breed narrow-AI algorithms. I'm unsure this can work;
> and I think that if it does work, it will be neither the fastest nor
> the best route...

Funny that you say this, as I often get the sense that this is kind of how OpenCog gets put together...

--linas 

Ben Goertzel

Jul 9, 2012, 12:30:33 AM
to ope...@googlegroups.com
>> IMO, this sort of hierarchical pattern recognition architecture (be it
>> DeSTIN or Andrew Ng's network that Google used) is suitable as a
>> perceptual cortex for certain kinds of sense modalities (good for
>> vision and audition, not so good for olfaction and haptics...)....
>
>
> Why not olfaction and haptics?

Olfaction, in the brain, seems to work mainly via combinatory
connections, not hierarchical ones. Put simply, there's not so much
hierarchy in smell recognition.... Gary Lynch wrote some great stuff
about this in the 80s.... Interestingly, many of the cognitive parts
of the human cortex emerged from the reptilian olfactory cortex,
inheriting a lot of combinatory connection patterning therefrom -- a
fact that is ignored by folks like Hawkins & Kurzweil, who focus on
the hierarchical structure of the visual and auditory cortex.

Neurally, hierarchy plays more of a role in touch than in smell, but
not as much as it does in hearing or vision.... Touch is more about
neural tissue that reflects the actual geometric structure of the body
parts whose sensation it reports...

>> This is in line with Peter Norvig's idea, which he communicated to me
>> several times, that AGI can be achieved by scaling up and piecing
>> together best-of-breed narrow-AI algorithms. I'm unsure this can work;
>> and I think that if it does work, it will be neither the fastest nor
>> the best route...
>
>
> Funny that you say this, as I often get the sense that this is kind of how
> opencog gets put together...

The algorithms in OpenCog were mainly designed specifically for use
within an integrated AI system. This is certainly true of PLN and
MOSES, and it's true of the new NLP module Ruiting is making, etc.
It's also true of Fishgram.... These algorithms may have standalone
narrow-AI uses, but it's still a different approach from piecing
together algorithms that were created without ultimate integration in
mind.... This may seem a slight difference, but in practice I believe
it's a very significant one.... For instance, MOSES, unlike GP, can
easily use initial guesses or biases provided by other AI components;
this "small difference" means that MOSES will be able to benefit from
integration much more than GP....
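
To make the contrast concrete, here is a toy Python sketch --
emphatically not the real MOSES interface, just an illustration of the
design point -- of an evolutionary search whose initial population can
be seeded with guesses supplied by another component:

# Toy evolutionary search; `seeds` lets another component inject prior
# guesses into generation 0 instead of starting from a random population.
import random

def evolve(fitness, random_candidate, mutate, seeds=(), pop_size=50, gens=100):
    pop = list(seeds) + [random_candidate() for _ in range(pop_size - len(seeds))]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # keep the fitter half
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

# Example: fit a bit string to a target; a perception module might hand
# the optimizer a partially correct guess as a seed.
target = [1, 0, 1, 1, 0, 0, 1, 0]
fitness = lambda c: sum(a == b for a, b in zip(c, target))
random_candidate = lambda: [random.randint(0, 1) for _ in target]

def mutate(c):
    c = c[:]                       # copy, then flip one random bit
    i = random.randrange(len(c))
    c[i] ^= 1
    return c

best = evolve(fitness, random_candidate, mutate,
              seeds=[[1, 0, 1, 1, 1, 1, 1, 1]])   # prior guess
print(best, fitness(best))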

-- Ben G