Biggest AI event from the last 12 months

Ben Rees
Jul 13, 2001, 9:18:45 AM

[[Reposted to fix propagation problem]]

Hi - I'm writing a brief overview of AI software at the moment and was
hoping to mention one event or advance from the last 12 months which the
AI community would consider to be the most significant for that period.
I guess Deep Blue wouldn't really fit into this timeframe, but it's this
sort of thing I'm looking for. Any ideas?

Thanks in advance.

Ben

[ comp.ai is moderated. To submit, just post and be patient, or if ]
[ that fails mail your article to <com...@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

Jorn Barger
Jul 13, 2001, 10:52:24 AM

Ben Rees <be...@forager.co.uk> wrote:
> Hi - I'm writing a brief overview of AI software at the moment and was
> hoping to mention one event or advance from the last 12 months which the
> AI community would consider to be the most significant for that period.
> I guess Deep Blue wouldn't really fit into this timeframe, but it's this
> sort of thing I'm looking for. Any ideas?

I'll suggest several:

1) The adoption of 'FaceIt' face-recognition software by the city of
Tampa caused a major scandal. That this difficult AI/vision problem has
progressed so far is very impressive.
Citations: http://groups.google.com/groups?ic=1&th=4bc9e6d587b793c3

2) The XML/semantic-web initiative has reached critical mass, and people
are (finally) really looking for general representation schemes that
will appeal to the broadest possible user-base.

3) 'The Sims' and 'Black & White' were videogame blockbusters, due
exclusively to their quantum-leap AI, representing human behavior more
_interestingly_ than ever before.


--
http://www.robotwisdom.com/ "Relentlessly intelligent
yet playful, polymathic in scope of interests, minimalist
but user-friendly design." --Milwaukee Journal-Sentinel

Jorn Barger
Jul 13, 2001, 10:53:01 AM

I just wrote:
> I'll suggest several:

One more-- the Japanese have mastered bipedal robots that walk. (The
first one I saw had me _totally_ convinced there was a person inside.)

Jorn Barger
Jul 15, 2001, 11:00:01 AM

cancelled <9in20d$i1f$1...@mulga.cs.mu.OZ.AU>

Jorn Barger
Jul 15, 2001, 11:00:03 AM

cancelled <9in1v8$i05$1...@mulga.cs.mu.OZ.AU>

Ben Rees
Jul 15, 2001, 11:00:10 AM

cancelled <9imsfl$dje$1...@mulga.cs.mu.OZ.AU>

max reason
Jul 15, 2001, 11:00:43 AM

To usenet mafia: Return comp.ai to uncensored state.

Jorn Barger
Jul 20, 2001, 9:02:22 AM

I just nominated four recent breakthrus in AI that I think are
especially significant: robots that walk on two legs, software that can
recognise faces, the computer games 'The Sims' and 'Black & White', and
the 'semantic web' (XML) movement...

But I'm thinking it might be helpful to spell out in more detail what I
find significant in each:

1) Bipedal robots

What the Japanese engineers finally realised was:

-- if you lift one foot, and you're not balanced, you can fall over
-- balancing on one leg requires shifting your center of gravity
*to the side*
-- this requires hip joints that can shift side to side as well as
front and back

The demos prominently feature the robot lifting one foot and shifting
its other hip to stay balanced.
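
Just to make that weight-shift rule concrete, here's a toy sketch in
Python -- the masses, distances, and step size are invented for
illustration and have nothing to do with any real robot's controller:

    # Toy version of the rule above: before lifting a foot, slide the
    # torso sideways until the center of mass sits over the support foot.

    def center_of_mass(parts):
        """parts: list of (mass_kg, lateral_position_m); returns CoM x."""
        total = sum(m for m, _ in parts)
        return sum(m * x for m, x in parts) / total

    def safe_to_lift(parts, support_x, tolerance=0.02):
        """Statically stable only if the CoM projects over the support foot."""
        return abs(center_of_mass(parts) - support_x) < tolerance

    # torso, left leg, right leg (x = sideways position in metres)
    body = [(30.0, 0.0), (10.0, -0.1), (10.0, +0.1)]
    support_x = -0.1            # stand on the left foot, lift the right

    while not safe_to_lift(body, support_x):
        mass, x = body[0]
        body[0] = (mass, x - 0.01)   # the hip shifts the torso sideways

    print("torso at x = %.2f m; safe to lift the right foot" % body[0][1])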


2) Face-recognition

The breakthru here is much more generally useful. When the eye (or a
camera) sees a face under normal lighting, there are varying levels of
shadow whose arrangement depends mainly on: 1) the position of the light
source, 2) the shape of the object casting the shadow (eg the nose), and
3) the shape (and coloring) of the surface onto which it falls (eg the
cheek).

So the software scans the camera image looking for shapes that _might_
be faces, and then tries to guess the dimensions of the nose and cheek
that must have generated those particular shadows.
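
In outline -- and this is only my sketch of that scan-then-guess loop,
not anything from Visionics' actual code -- it might look like:

    # Two-stage sketch: (1) keep only the regions that might be faces,
    # (2) for each, search for the nose/cheek dimensions whose predicted
    # shadows best match the observed ones.  Every function here is an
    # invented stand-in, not a real API.

    def looks_facelike(region):
        """Cheap first pass over the camera image (stubbed out)."""
        return region["oval_outline"] and region["two_dark_blobs"]

    def predicted_shadows(nose_mm, cheek_mm, light_angle):
        """Stand-in for rendering the shadows such a face would cast."""
        return (0.6 * nose_mm + light_angle, 0.3 * cheek_mm - light_angle)

    def mismatch(observed, predicted):
        return sum((o - p) ** 2 for o, p in zip(observed, predicted))

    def fit_face(region, light_angle):
        """Brute-force search over plausible nose/cheek sizes."""
        best = None
        for nose_mm in range(30, 61, 5):
            for cheek_mm in range(60, 121, 10):
                err = mismatch(region["shadows"],
                               predicted_shadows(nose_mm, cheek_mm,
                                                 light_angle))
                if best is None or err < best[0]:
                    best = (err, nose_mm, cheek_mm)
        return best

    # One made-up candidate region, as if it came from the camera scan:
    region = {"oval_outline": True, "two_dark_blobs": True,
              "shadows": (55.0, -3.0)}
    if looks_facelike(region):
        err, nose_mm, cheek_mm = fit_face(region, light_angle=30.0)
        print("best guess: nose %d mm, cheek %d mm (error %.1f)"
              % (nose_mm, cheek_mm, err))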

If it can work from a moving picture-- with multiple variations in the
angles shown-- it can refine its guess much more quickly. (The Tampa
system worked from standard front-and-side mugshots, but you can bet
that police will soon start demanding 3-D scans. FaceIt only considers
nose/cheeks/eyes and completely ignores facial hair and hairstyles.)

Unlike bipedal walking, this technique can be generalised for any sort
of visual recognition-- I'm sure we can eventually expect add-on modules
for body positions, clothing styles, vehicles, etc etc etc.


3) The Sims

The Sims has a simple and original model for generating semi-realistic
behavior, based (surprisingly) on 'object-oriented' design.

The world of the Sims is full of physical objects like couches and
televisions, and each object 'knows' a few likely interactions it can
have with a person. (Eg for couch: sit on, lie on, stand on, jump on,
etc.)

When a person approaches an object, the object queries the person's
motivational state (hungry, thirsty, bored) and suggests any
interactions that might gratify the motive. The person then chooses
whether or not to pursue any of the suggestions, and how to customise
them for the exact current circumstances.
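
A toy version of that division of labor, just to pin it down (the class
names, motives, and numbers below are all my own invention, not Maxis'
code):

    class Person:
        def __init__(self, name, motives):
            self.name = name
            self.motives = motives        # e.g. {"energy": 70, "fun": 30}

    class SmartObject:
        def __init__(self, name, interactions):
            # interactions: {action: (motive it gratifies, payoff)}
            self.name = name
            self.interactions = interactions

        def suggest(self, person):
            """Offer interactions, scored by how badly the person
            currently wants the motive each one satisfies."""
            offers = [(person.motives.get(motive, 0) * payoff, action)
                      for action, (motive, payoff)
                      in self.interactions.items()]
            return sorted(offers, reverse=True)

    couch = SmartObject("couch", {"sit on":  ("comfort", 1.0),
                                  "lie on":  ("energy",  0.8),
                                  "jump on": ("fun",     0.5)})
    bob = Person("Bob", {"comfort": 10, "energy": 70, "fun": 30})

    # The person approaches; the couch does the 'thinking'.
    for score, action in couch.suggest(bob):
        print("%-8s scores %5.1f" % (action, score))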

We might imagine a longterm, collective 'open source' initiative to
refine and extend this database of plausible object-interactions into an
encyclopedia of the virtual world.

And such a database will eventually be needed, as well, by the
visual-recognition system (#2 above): before it can recognise that a
person is getting out of a car, it has to understand that among the many
interactions between persons and cars are 'getting in' and 'getting
out'.


4) Semantic Web

The situation with XML seems to me by far the strangest to explain. AI
has been trying to 'tag' the semantics of documents for 40 years, and
mostly failing.

Tim Berners-Lee has what strikes me as an extremely naive enthusiasm
for this *longterm* AI research project, and has by sheer force of will
leveraged his authority in the Web-world to get semantic-tagging tools
built into web-browsers at the deepest level.

When he started this campaign, his claim was that tags for displaying
text-styles (italic, bold, headline) should logically be reduced to
their underlying semantic tags (emphasis, *strong* emphasis, and, um,
***big*** emphasis).

If this were true, XML would be eagerly embraced by page designers, and
quickly, naturally, inevitably supplant HTML. But I don't think anybody
is trying to claim this anymore...

Instead, XML is being eagerly embraced for exchanging data between
databases, which is a perfectly honorable enterprise, but has absolutely
nothing to do with display-styles in browsers (nor should it).

And while most XML databases at first will be on the (boring) level of
widget-inventories, they'll also inevitably have to add customer
databases... and as soon as you try to fit human beings into
well-defined database categories, you find yourself back at the level of
1960s AI, and the whole can of worms it's been trying to avoid!

But this is good.

Customer behavior might now be analysed in the sort of object-oriented
terms pioneered by 'The Sims'-- customer buys widget, customer loses
widget, customer breaks widget.
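
A bare-bones sketch of what 'filling it in' could look like -- the
element names here are invented for illustration, not from any real
schema:

    # The same customer-buys-widget events, tagged semantically and
    # shipped as XML so any other database can consume them, independent
    # of how a browser might display them.
    import xml.etree.ElementTree as ET

    events = [("alice", "buys",   "widget-42"),
              ("alice", "breaks", "widget-42"),
              ("bob",   "loses",  "widget-17")]

    log = ET.Element("interaction-log")
    for customer, verb, widget in events:
        e = ET.SubElement(log, "interaction")
        ET.SubElement(e, "agent").text  = customer
        ET.SubElement(e, "action").text = verb
        ET.SubElement(e, "object").text = widget

    print(ET.tostring(log, encoding="unicode"))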

So TimBL, by his stubborn naivete, has created this empty semantic
juggernaut that we no longer have any excuse _not_ to fill...

--
http://www.robotwisdom.com/ "Relentlessly intelligent
yet playful, polymathic in scope of interests, minimalist
but user-friendly design." --Milwaukee Journal-Sentinel

Oliver Ford
Jul 21, 2001, 9:32:17 AM

jo...@enteract.com (Jorn Barger) wrote in message news:<9j9a4u$cqi$1...@mulga.cs.mu.OZ.AU>...

> I just nominated four recent breakthrus in AI that I think are
> especially significant [...]
>
> 1) Bipedal robots [...]

It's not really relevant to AI (if you mean "thinking machines"); it's
just another advance in robotics engineering.

> 2) Face-recognition [...]

I don't think it will help in actual thinking. It just seems to be the
next stage on from OCR.

> 3) The Sims [...]

Can it learn at all? Does it have any form of common sense? If it is
purely rule-based like expert systems then it is not an advance.

> 4) Semantic Web [...]

I don't think that will help at all in advancing AI.

Kirt Undercoffer
Jul 23, 2001, 1:24:57 AM

Rule-based systems can incorporate learning and
small-scale "common sense." Large-scale common
sense (common sense outside of a small domain)
has not yet been demonstrated by any system (and
since I don't think common sense is real, I'd personally
include humans and animals - but of course most people
do have faith in common sense).
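
(To give one trivial, entirely made-up illustration of rules plus
learning -- not any particular shell: a rule base that grows whenever
no existing rule covers the current facts.)

    rules = {("outside", "raining"): "take umbrella",
             ("home", "hungry"):     "cook dinner"}

    def act(facts, rules):
        for condition, action in rules.items():
            if set(condition) <= facts:
                return action
        return None

    def act_or_learn(facts, rules, teacher):
        action = act(facts, rules)
        if action is None:                     # no rule fires...
            action = teacher(facts)            # ...so ask, and remember
            rules[tuple(sorted(facts))] = action
        return action

    print(act_or_learn({"office", "thirsty"}, rules, lambda f: "get coffee"))
    print(act({"office", "thirsty"}, rules))   # covered by the learned rule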

Advances in expert systems (whether rule-based or not)
and in rule-based systems continue. There are
usually a few papers presented at large conferences dealing
with expert systems doing bigger and better things, as well
as conferences specific to expert systems (though off the
top of my head I can only think of the SOAR workshops).


Kirt Undercoffer

Oliver Ford wrote:

> Can it learn at all? Does it have any form of common sense? If it is
> purely rule-based like expert systems then it is not an advance.

Jorn Barger
Jul 24, 2001, 4:45:00 AM

Oliver Ford <olive...@softhome.net> wrote:
> > 1) Bipedal robots [...]

> It's not really relevant to AI (If you mean "thinking machines". It's
> just another advance in robotics engineering.

As the cliche goes, it was an AI problem, _until_ it was solved.

> > 2) Face-recognition [...]


> I don't think it will help in actual thinking. It just seems to be the
> next stage on from OCR.

The aspect of 'thinking' you seem to be missing is 'imagination'. ;^/

> > 3) The Sims [...]


> Can it learn at all? Does it have any form of common sense? If it is
> purely rule-based like expert systems then it is not an advance.

It's about knowledge representation, applied to the uniquely thorny
domain of human behavior. (You can't even attempt the expert system
until you have a basic representation of the domain.)

> > 4) Semantic Web [...]


> I don't think that will help at all in advancing AI.

Thanks, you've neatly encapsulated all the major blind spots of the
anti-symbolic-AI faction... which was my primary intended target!

--
http://www.robotwisdom.com/ "Relentlessly intelligent
yet playful, polymathic in scope of interests, minimalist
but user-friendly design." --Milwaukee Journal-Sentinel
