
Google, man, and ape


RichD

Jul 5, 2015, 11:12:29 PM
What, no comments on the recent Google Image Recognition
flap? You know, the one that identified a pair of black hominids as gorillas -

It's funny, but even funnier, and astonishing, is that Google
APOLOGIZED; yes, they apologized for an ALGORITHM!
bad algorithm, naughty algorithm! hmmmm... why doesn't
the algorithm issue its own apology, why go through corporate PR?

Didn't Google perform due diligence before hiring? Did they
discover any racist affiliations in its background?

And then they announced they'll FIX IT, tout de suite! Of
course that sounds reasonable to Joe Sixpack, but I
assume everyone here understands how preposterous
that is, on many levels -

--
Rich

Don Stockbauer

Jul 6, 2015, 1:26:45 PM
So, does it recognize gorillas as black hominids?

RichD

Jul 7, 2015, 3:33:13 PM
On July 6, Don Stockbauer wrote:
> So, does it recognize gorillas as black hominids?

That's the fix!

Any statistician will tell you, it's simply a
matter of balancing Type 1 vs. Type 2 errors -
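
A minimal sketch of that tradeoff, assuming a toy scored binary classifier
with a tunable threshold (the scores and labels below are invented purely
for illustration):

# Toy illustration: moving the decision threshold trades false
# positives (Type 1 errors) against false negatives (Type 2 errors).
scores = [0.1, 0.3, 0.45, 0.55, 0.7, 0.9]   # made-up classifier scores for "positive"
labels = [0,   0,   1,    0,    1,   1  ]   # made-up ground truth

def error_counts(threshold):
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

for t in (0.2, 0.5, 0.8):
    fp, fn = error_counts(t)
    print("threshold %.1f: %d false positives, %d false negatives" % (t, fp, fn))

Raising the threshold suppresses one kind of mistake only by inviting the other.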


--
Rich

Don Stockbauer

Jul 7, 2015, 3:45:15 PM
I just figured that Google would have to fix the converse problem as well.

keghn feem

Jul 8, 2015, 10:00:21 AM



The error rate for Google's deep neural network is around 5%, though I'm not sure about this.

If the human mind has trouble figuring out what it is seeing, it
will see danger, such as seeing a tiger in the bushes, in the jungle, on a
windy moonlit night. The mind takes advantage of errors. A survival mechanism.



What the dog-fish and camel-bird can tell us about how our brains work:

http://phys.org/news/2015-07-dog-fish-camel-bird-brains.html



keghn feem

Jul 8, 2015, 1:29:28 PM

Deep Visualization Toolbox:
https://www.youtube.com/watch?v=AgkfIQ4IGaM

menti...@gmail.com

Jul 16, 2015, 1:02:42 AM
Satire on Global Innovation Exchange

http://ai.neocities.org/GIX.html

keghn feem

Jul 18, 2015, 9:32:25 AM



DeepDream: Inside Google's 'Daydreaming' Computers:

https://www.youtube.com/watch?v=3hnWf_wdgzs







TruthSlave

Jul 24, 2015, 10:18:17 AM
It's sobering when you think the same A.i is 'out there' with the
same level of error, doing more than just associating images with
categories of images. At least in this case we can 'see' its
results and question the way it's programmed.

In other incarnations I imagine the same A.i must exist to profile
its users, playing 'six degrees of separation' between identities
and the types defined by man. Idea matching. A.i presenting its
users with choices just so that it can satisfy its objectives.

I wonder who would have access to that program's inner workings
to challenge its conclusions?

Imagine A.i let loose on the world's data, A.i trained to seek out
what we as H.i have struggled to define. A.i with back-door access
trawling through our data for the elusive. A.i trained with our
ill-defined examples, simply reproducing our biased conclusions,
and then issuing orders.

What follows doesn't bear thinking about... there's no accounting
for the way we typically respond to erroneous information.


"umaneyes the machine"

TruthSlave

Jul 24, 2015, 10:29:56 AM
On 08/07/2015 15:00, keghn feem wrote:
[correction]


It's sobering when you think the same A.i is 'out there' with the
same level of error, doing more than just associating images with
categories of images. At least in this case we can 'see' its
results and question the way it's programmed.

In other incarnations I imagine the same A.i must exist to profile
its users, playing 'six degrees of separation' between identities
and the types defined by man. Idea matching. A.i presenting its
users with choices just so that it can satisfy this objective.

I wonder who would have access to that program's inner workings
to challenge its conclusions?

Imagine A.i let loose on the world's data, A.i trained to seek out
what we as H.i have struggled to define. A.i with back-door access
trawling through our data for the elusive. A.i trained with our
ill-defined examples, simply reproducing our biased conclusions,
and then issuing orders.

What follows doesn't bear thinking about... there's no accounting
for the way we typically respond to erroneous information. There's
no accounting for the cumulative effect of our response.


"umaneyes the machine"

ck

Jul 24, 2015, 10:46:11 AM
Is it possible for a.i to learn 'unassisted'?
Can A.i learn without access to its mistakes?

Even with all those real world examples of African
Americans using this tool, A.i wasn't learning.
It had stopped learning and was simply categorising.


---A few of the current headlines for posterity----

http://www.sfchronicle.com/business/article/How-tech-s-lack-of-diversity-leads-to-racist-6398224.php

"
The problem is likely twofold, experts say. Not enough photos of
African Americans were fed into the program that it could recognize a
black person. And there probably weren’t enough black people involved
in testing the program to flag the issue before it was released.
"


http://gizmodo.com/youre-using-neural-networks-every-day-online-heres-h-1711616296

"
Taking inspiration from the human brain, neural networks are software
systems that can train themselves to make sense of the human world.
They use different layers of mathematical processing to make ever more
sense of the information they’re fed, from human speech to a digital
image. Essentially, they learn and change over time. That’s why they
provide computers with a more intelligent and nuanced understanding of
what confronts them. But it’s taken a long time to make that happen.
"

http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-photos-tags-black-people-as-gorillas-puts-pictures-in-special-folder-10357668.html


"
The automatic recognition software is intended to spot characteristics
of photos and sort them together — so that all pictures of cars in a
person’s library can be found in one place, for instance. But the tool
seems to be identifying black people as animals.
"


It's not enough to train A.i with examples of 'what is'; it also needs
to be trained with examples of 'what is not'.

Curt Welch

Jul 25, 2015, 5:36:31 PM
ck <ck_N...@ntlworld.co> wrote:
> On 06/07/2015 04:12, RichD wrote:
> > What, no comments on the recent Google Image Recognition
> > flap? You know, which identified a pair of black hominids as gorillas
> > -
> >
> > It's funny, but even funnier, and astonishing, is that Google
> > APOLOGIZED; yes, they apologized for an ALGORITHM!
> > bad algorithm, naughty algorithm! hmmmm... why doesn't
> > the algorithm issue its own apology, why go through corporate PR?
> >
> > Didn't Google perform due diligence before hiring, did they
> > discover any racist affiliations in its background?
> >
> > And then they announced they'll FIX IT, toute suite! Of
> > course that sounds reasonable to Joe Sixpack, but I
> > assume everyone here understands how preposterous
> > that is. on many levels -
> >
> > --
> > Rich
> >
>
> Is it possible for a.i to learn 'unassisted'?

Some, yes.

> Can A.i learn without access to its mistakes?

No. By definition. All learning algorithms learn using their mistakes.
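
A minimal illustration of that point, assuming a toy perceptron-style
learner whose weights only move when it gets an example wrong (the data
is invented):

# Toy perceptron: no mistake, no weight update -- the mistakes ARE the learning.
data = [((1.0, 1.0), 1), ((2.0, 1.5), 1), ((-1.0, -1.0), -1), ((-1.5, -0.5), -1)]
w = [0.0, 0.0]

for _ in range(10):                       # a few passes over the data
    for (x1, x2), y in data:
        pred = 1 if (w[0] * x1 + w[1] * x2) > 0 else -1
        if pred != y:                     # only mistakes change the weights
            w[0] += y * x1
            w[1] += y * x2

print(w)   # the final weights were shaped entirely by the mistakes made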


> Even with all those real world examples of African
> Americans using this tool, A.i wasn't learning.
> It had stopped learning and was simply categorising.

Learning to categorize is still learning. Suggesting that it's not
learning is simply invalid; it's the same thing.

The training set has a picture and the answer -- "cat". The learning
algorithm must learn NOT to classify that as DOG.

If you don't include enough pictures of cats in the training set, but do
include a lot of pictures of dogs, then the system will see a cat and call
it a dog as its best guess.

If you don't train it to recognize cats it won't recognize cats.
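
A minimal sketch of that "best guess" behavior, assuming a toy
1-nearest-neighbor classifier over invented 2-D feature vectors (nothing
here reflects Google's actual system):

# Toy training set: many "dog" examples, only one "cat".
# The 2-D points are invented stand-ins for image features.
train = [((1.0, 1.0), "dog"), ((1.2, 0.9), "dog"), ((0.8, 1.1), "dog"),
         ((1.1, 1.3), "dog"), ((4.0, 4.0), "cat")]

def classify(x):
    # 1-nearest-neighbor: return the label of the closest training point.
    dist = lambda a, b: ((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5
    return min(train, key=lambda t: dist(x, t[0]))[1]

# A new cat-like point that the lone cat example doesn't cover ends up
# closer to the dog cluster, so the best guess is "dog".
print(classify((2.0, 2.0)))   # -> dog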

The training data that was used by google clearly didn't include enough
examples of African Americans labeled as "people" but did include too many
examples of Gorillas.

These learning algorithms don't yet work like humans do for the most part.
We don't learn the difference between cats and dogs by looking at 2D photos.
We learn the difference by looking at real time data streams of cats and
dogs (movies in effect). The brain extracts a lot of important features
from the real world based on how it changes over time and this allows the
brain to correctly classify objects long before the objects are given names
like "human", or "cat". A one year old in effect has been trained with an
entire year's worth of "video" input before he has to learn that one type
of feature in the video data is called a cat and another is called a dog.

At 30 images a second for 12 hours a day, a one year old human has been
"trained" on the temporal relations of 500 million sequential images. This
allows it to learn to recognize how objects change as you look at them over
time -- like when a cube rotates in our hand, or a human spins around and
we see the front of their face turn into the back of their head. They learn
to associate all the different views of a human as one type of object and,
at the same time, learn all the natural contexts in which we tend to
see humans, or tend to see zoo animals. We see humans riding a bus, we see
zoo animals in cages or in the forest.
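
(As a rough check of the arithmetic: 30 frames/sec x 3,600 sec/hour x 12
hours/day x 365 days is about 473 million frames, so "500 million sequential
images" is in the right ballpark.)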

The algorithm google is using has no access to a massive real world video
feed. It's given only a few million static pictures to analyze with no
temporal relationship to allow it to know that the back of a human's head
is the same human as his face.

The google algorithm, which is only looking at static pictures, has no way at
all to learn that the human is separate from the background. So when it's
given a picture of humans with trees in the background, and told this is a
picture of "humans", it thinks the trees are "humans" as well as the rest
of the picture.

But after millions of examples of humans with different backgrounds it
learns to pick out the human features separate from the background. Or, to
use the background as a clue as to what is a "human" background. It never
learns the concept of the background as separate from the foreground.

In the picture that google got wrong, the background was just white sky.
There were no other objects in the picture to give the algorithm a clue
that these were "people". No cell phones, no hats, no clothing (just head
shots), no typical human background that the algorithm would use to make a
guess it was human.

Odds are, the training set of Gorillas had pictures with no
human-like background. No gorillas wearing hats, or suits and ties, or
T-shirts, or holding cell phones, or a Starbucks coffee cup. So when the
algorithm classified the "image" as "Gorillas" it was saying as much about
the background, as it was about the humans in the foreground. It doesn't
understand the difference between the background and the foreground to
start with as humans do.

The training set for "humans" probably had very few examples of humans with
nothing but a white background so when it saw dark faced "animals" without
human objects around it, its best guess due to the full context of the
image was gorillas.

This is just a very trivial case of the algorithm not working the same way
humans learn, (no temporal data to learn from), and not having enough
training examples to learn the subtle difference between dark faced animals
without cell phones, being Homo sapiens instead of gorillas.

In other words, it was not classifying the humans as gorillas, it was
classifying the entire picture as "a picture of gorillas" because it had no
good understanding that the people were separate objects from the
background and the background looked more like "gorilla background" than
"human" background.

--
Curt Welch http://CurtWelch.Com/
cu...@kcwc.com http://NewsReader.Com/

ck

Jul 29, 2015, 7:22:16 AM
My comments were based on the application, which was no longer learning.

It was simply applying its rules or learned categories to the images
it was presented with. It wasn't using those images to learn from, nor
was this application asking questions of the images presented to it.
It had stopped learning.

There was no uncertainty, which one might also call an intelligent
response when presented with the unknown. This so-called intelligence
was simply looking for a close enough match to its existing knowledge
base. Where in this application was the label "unsure"?

I would disagree. A training set of 'what is' is not the same
as the examples which make the point of 'what is not'. You might
make that assumption, the default assumption which states 'x',
and simply leave everything else as 'not x', but it's not the same
thing.

If "A.i" isn't told "what is not", then it is may at some point
extrapolate to take from "what is not" to perceive it as "what is".
Other factors in its application might force it to this conjecture.

Imagine A.i is told 'X' exists, when it does not. It then searches
to find what it is told must exist. Eventually it lowers its
threshold to see 'what is not', as 'what is'. The way that search
was phrased would determine what was found.

> The training set has a picture and the answer -- "cat". The
> learning algorithm must learn NOT to classify that as DOG.
>
> If you don't include enough pictures of cats in the training set, but do
> include a lot of pictures of dogs, then the system will see a cat and call
> it a dog as it's best guess.
>
> If you don't train it to recognize cats it won't recognize cats.

So you are saying that A.i will only apply categories to those labels
it is supplied and won't reserve a question mark for those categories
which exist in the data but are without labels.

Let us say in the data there were random objects. Are you saying A.i
would assign the same label to those objects as it would to those images
it was being trained to recognize? Let us say it was shown images of
Dogs and cats and tables and chairs, all with four legs, but only dogs
had the label Dog. Are you saying it would not see any
distinctions between Dogs and Cats, without specific training for
each of those other categories?

And if so, would not the category 'Not Dog' make the point I am
alluding to?

So what is the A.i learning?

Isn't it relationships of points within the picture mapped to a pseudo-
representative object? If I were creating A.i I wouldn't just have it
relating to every other image, but relating to an internal template
upon which those relationships of points were mapped. My A.i would start
with a core idea of geometry and how to orientate that geometry.

>
> In the picture that google got wrong, the background was just white sky.
> There were no other objects in the picture to give the algorithm a clue
> that these were "people". No cell phones, no hats, no clothing (just head
> shots), no typical human background that the algorithm would use to make a
> guess it was human.

No context. In other words the object image exists in isolation, without
our typical common-sense relationship to the world. I have to wonder
what happens to people when we follow this A.i view of the 'data'. Would
we respond in an equally mechanical fashion? To see the world without
context, seeing information in isolation from its truth, all according to
this intelligence codified by our acceptance of A.i.

"it is because a.i says it is".

> Odds are, the training set of Gorillas likely, had pictures with no
> human-like background. No gorillas wearing a hats, or suits and ties, or
> T-shirts, or holding cell phones, or a starbucks coffee cup. So when the
> algorithm classified the "image" as "Gorillas" it was saying as much about
> the background, as it was about the humans in the foreground. It doesn't
> understand the difference between the background and the foreground to
> start with as humans do.

Is this the same A.i making its way into the military? I shouldn't joke.

> The training set for "humans" probably had very few examples of humans with
> nothing but a white background so when it saw dark faced "animals" without
> human objects around it, it's best guess due to the full context of the
> image was gorillas.
>
> This is just a very trivial case of the algorithm not working the same way
> humans learn, (no temporal data to learn from), and not having enough
> training examples to learn the subtle difference between dark faced animals
> without cell phones, being homosapiens instead of gorillas.
>
> In other words, it was not classifying the humans as gorillas, it was
> classifying the entire picture as "a picture of gorillas" because it had no
> good understanding that the people were separate objects from the
> background and the background looked more like "gorilla background" than
> "human" background.
>

For all that, this story provides us with a rare view into the
algorithms we simply accept as A.i. We are only able to 'see'
its flaws because we are in a position to question its results.
One has to wonder where A.i is accepted as it's applied to our
expanding sea of data, and we have no way to grasp at its
function.

One also has to wonder about this relationship we have with A.i;
our typically human response to the information which trickles
down from unattributed sources is simply to accept and
confirm. It's rare that we challenge our sources, which means
it would be even rarer for A.i to learn of, or learn from, its
errors.



Curt Welch

Jul 31, 2015, 2:05:18 PM
That's true. I assume.

> There was no uncertainty which one might also call an intelligent
> response when presented with the unknown. This so called intelligence
> was simply looking for a close enough match to its existing knowledge
> base. Where in this application was the label "unsure"?

You have to be trained to say "unsure" in the same way you are trained to
say "cat". It was not trained to do that. Even if it had been it might
not have been unsure about its label.

What it in fact wasn't trained to do was to be sensitive to our social
race discrimination problems.

A training set that says a picture is a cat is most certainly also a
training set that says the picture is not a dog.

The point you are trying, but failing, to describe correctly is a
different issue.

>
> If "A.i" isn't told "what is not", then it is may at some point
> extrapolate to take from "what is not" to perceive it as "what is".
> Other factors in its application might force it to this conjecture.

You are just lost here, dude. The picture that was misclassified was not a
training example. Had it been part of the training examples, the AI
wouldn't have made the mistake.

If you put it into the training set, and label it "people" that also means
it was labeled as "not Gorilla". It would most certainly have trained the
system to know the picture was "not" a Gorilla.

This was not a failure due to the system having no ability to understand
"not a gorilla". It was a failure because the system was never told that
picture was not a gorilla. If it had been trained with that picture it
would understand it's "not" a gorilla.

> Imagine A.i is told 'X' exist, when it does not. It then searches
> to find what it is told must exist. Eventually it lowers its
> threshold to see 'what is not', as 'what is'. The way that search
> was phrased would determine what was found.

We are talking about an AI that is trained with pixels, not words. How an
AI might interpret natural language has nothing to do with this example.

Understanding the meaning of "exists" and having the ability to "search" is a
100% different problem from labeling a picture "human" or "gorilla". You
have lost all context here.

> > The training set has a picture and the answer -- "cat". The
> > learning algorithm must learn NOT to classify that as DOG.
> >
> > If you don't include enough pictures of cats in the training set, but
> > do include a lot of pictures of dogs, then the system will see a cat
> > and call it a dog as it's best guess.
> >
> > If you don't train it to recognize cats it won't recognize cats.
>
> So you are saying that A.i will only apply categories to those labels
> it is supplied and wont reserve a question mark for those categories
> which exist in the data but are without labels.

These AIs work by creating internal concepts of distance (similarity). A
given label and picture will have a distance measure to all other pictures.
There is no such thing as "unknown". There is only the concept of how
similar a given picture is to all the different training examples.

If a given test image exceeds a given distance threshold the AI could be
trained to label that as "unknown", but you have to train the AI to do that
if you want. These AIs don't get trained that way, they pick the training
example that is closest. There is no formal concept of "unlabeled"; there
are only lots of examples of "not very much like".
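
A minimal sketch of that idea, assuming a toy distance-to-nearest-example
classifier with an optional rejection threshold bolted on (invented points
and labels, not how any production system is actually built):

# Toy nearest-example classifier.  By default it always returns the
# closest label, however far away; "unknown" only exists if we add it.
examples = [((0.0, 0.0), "cat"), ((5.0, 5.0), "dog")]

def classify(x, reject_distance=None):
    dist = lambda a, b: ((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5
    point, label = min(examples, key=lambda e: dist(x, e[0]))
    if reject_distance is not None and dist(x, point) > reject_distance:
        return "unknown"   # only available because we explicitly wired it in
    return label

print(classify((9.0, 9.0)))                       # -> dog (closest, even though far)
print(classify((9.0, 9.0), reject_distance=2.0))  # -> unknown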


> Let us say in the data there were random objects. Are you saying A.i
> would assign the same label to those objects as it would to those images
> it was being trained to recognize?

It would create a distance measure to all the objects.

> Let us say it was shown imagines of
> Dogs and cats and tables and chairs, all with four legs, but only dogs
> had the label Dog. Are you saying it would not see any
> distinctions between Dogs and Cats, without specific training for
> each of those other categories?

If the pixel values are not identical, then the software will "see the
difference". It knows that a pixel value of 6 is not the same as a
pixel value of 8. Seeing the difference is trivially easy. Seeing how
they are similar is what's hard.

And to see how different pictures are similar, all these systems develop an
internal concept of "distance" between stimulus signals and calculate how
"close" one image is to another.

> And if so, would not the category 'Not Dog' make the point of i am
> alluding to.

There is no category "Not dog". Which is why your point is not valid.
You have to TRAIN the AI to create a category of "NOT DOG".

If all you train it to understand is "dog" and "cat", then it doesn't have
a label for "not dog" or "not cat".

If you want to show it pictures of tables and train it to be "not dog" you
can. But it's more useful to train it to be "table".

These are all clustering problems in AI.

https://en.wikipedia.org/wiki/Cluster_analysis

See the graphic here:

https://en.wikipedia.org/wiki/Cluster_analysis#/media/File:Cluster-2.svg

It's got objects that are plotted on a 2D space where we can assign
distance on this graph by the straight line distance between objects. See
how they are assigned one of three colors based on their location on this
2D graph?

That's what is at work here. If you give the algorithm only three labels,
RED, YELLOW, BLUE, it will assign every object to one of those three
labels based on which is closest to the training examples.

There is no "not green" concept at work here. There is an object that
hasn't been assigned a label, and there is the object's "distance measure"
to the known examples (from the training set).

For image classification? Two things can be learned. Simple systems use a
hard wired "distance" measure, and it only learns the location in space of
the training examples.

The more advanced systems adjust their internal measure of distance to try
and fit the training examples. They don't just learn the labels, they
learn what features of the image are the cause of the label.

Humans, however, with access to real-time data streams, learn stuff
from the data stream that is not in the image training sets. The brain learns
to estimate how close in TIME two different images are. The brain uses time as
its distance measure. Most of these AI image classification systems do not
have that ability.

> Isn't its relationships of points within the picture mapped to pseudo
> representative object? If i were creating A.i i wouldn't just have it
> relating to every other imagine, but related to an internal template
> upon which were mapped those relationship of points. My a.i would start
> with a core idea of geometry and how to orientate that geometry.

The brain LEARNS that core idea of geometry from the time data.

You can hardcode geometry concepts into an AI and make it work better on
simple geometric line drawings. But given complex real-world
photographs, there's no easy way to leverage simple geometry concepts.
There's no easy algorithms to map pixels to 3D geometry.

The whole nature of this beast is hidden in the AI's implementation of
"distance" -- how does it tell if two objects are similar?

When we see a cube, rotating in space, we are seeing a 2D version of the 3D
object. This makes the 2D image change in very odd ways over time as the 3D
cube rotates.

It might look like this at one second:

ooooo
o   o
o   o
ooooo

And then a moment later, look like this (excuse the bad ascii art)

ooooo
oo o
o o o
o ooooo
o o o
oo o
ooooo


How does the AI know these should be seen as "similar" (both a cube)?

The brain learns these are "similar" because these two very different
images show up close together IN TIME. It uses time as the distance
measure to classify images.

An AI that learns from static photos that don't change over time must try
to figure out similarity because it was given those two very different
images and told "they are both cubes".
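
A minimal sketch of using time as the similarity signal, assuming an
invented stream of frame "features" from a pretend video (a toy stand-in
for the idea, not an actual video model):

# Frames that occur close together in time get treated as views of the
# same object, even if their raw features look quite different.
frames = [(1.0, 0.0), (0.9, 0.2), (0.7, 0.5), (0.2, 0.9), (0.0, 1.0)]

def positive_pairs(window):
    # Any two frames fewer than `window` steps apart count as "same thing".
    return [(i, j) for i in range(len(frames))
                   for j in range(i + 1, len(frames))
                   if j - i < window]

# With a window of 2, frames 0 and 1 are a positive pair even though their
# features differ -- time, not appearance, is what links them.
print(positive_pairs(2))   # -> [(0, 1), (1, 2), (2, 3), (3, 4)]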

The brain has a lot more data to work with most of the time than the sort of
image sets the google AI had to train with. BIG data is key here in making
it all work well.

>
> >
> > In the picture that google got wrong, the background was just white
> > sky. There were no other objects in the picture to give the algorithm a
> > clue that these were "people". No cell phones, no hats, no clothing
> > (just head shots), no typical human background that the algorithm would
> > use to make a guess it was human.
>
> No context. In other words the object image exist in isolation, without
> our typical common sense relationship to the world.

Yes, our brain learns all this common sense stuff by watching and
interacting with the real world for years. The training sets we put
together to train something like the google image labeling AI are trivially
small by comparison. But the training sets are getting larger and larger
and the results are getting better and better as the size of the training
sets grow.

> I have to wonder
> what happens to people when we follow this A.i view of the 'data'. Would
> we respond in an equally mechanical fashion. To see the world without
> context, seeing information in isolation of its truth, all according to
> this intelligence codified with our acceptance of A.i.

Yes, if our brain had so little data to work with, we would be acting
pretty stupid. In fact, we are pretty stupid a lot of the time, we are just
too stupid to know how stupid we are being.

> "it is because a.i says it is".

Soon the AIs will be FAR better than the human brain, and it will become
obvious to people how stupid we are once we see how smart a machine can be.

>
> > Odds are, the training set of Gorillas likely, had pictures with no
> > human-like background. No gorillas wearing a hats, or suits and ties,
> > or T-shirts, or holding cell phones, or a starbucks coffee cup. So
> > when the algorithm classified the "image" as "Gorillas" it was saying
> > as much about the background, as it was about the humans in the
> > foreground. It doesn't understand the difference between the
> > background and the foreground to start with as humans do.
>
> Is this the same A.i making its way into the military? I shouldn't joke.

Sure it is. But it's not the AI that is "stupid", as much as it's just a
lack of training. Would you give a 1 year old a gun and ask them to kill
any "bad guys" that walked into to room? That's what we would be doing if
we gave these current AIs that power to decide for itself what to shoot at.

Future AIs with better algorithms and better training however, will be able
to make far better decisions than any human soldier. At that point, it
would be stupid to give a human soldier a gun.

> > The training set for "humans" probably had very few examples of humans
> > with nothing but a white background so when it saw dark faced "animals"
> > without human objects around it, it's best guess due to the full
> > context of the image was gorillas.
> >
> > This is just a very trivial case of the algorithm not working the same
> > way humans learn, (no temporal data to learn from), and not having
> > enough training examples to learn the subtle difference between dark
> > faced animals without cell phones, being homosapiens instead of
> > gorillas.
> >
> > In other words, it was not classifying the humans as gorillas, it was
> > classifying the entire picture as "a picture of gorillas" because it
> > had no good understanding that the people were separate objects from
> > the background and the background looked more like "gorilla background"
> > than "human" background.
> >
>
> For all that, this story provides us with a rare view into the
> algorithms we simply accept as A.i. We are only able to 'see'
> its flaws because we are in a position to question its results.
> One has to wonder where A.i is accepted as its applied to our
> expanding sea of data, and we have no way to grasp at its
> function.

NO GRASP AT ALL! That is the danger of what we are heading into.

But oddly enough, humans have no grasp at all of how their own brain works.
And this leads to endless stupid decisions on the part of humans that
falsely "think" they are "smart".

> One also has to wonder about this relationship we have to A.i,
> our typically human response to the information which trickles
> down from the unattributed sources, is simply to accept and
> confirm. Its rare that we challenge our sources, which means
> it would be even rarer for A.i to learn of, or learn from its
> errors.

People need to learn to challenge the source. It's something we need to
train more people to do.