Hi - I'm writing a brief overview of AI software at the moment and was
hoping to mention one event or advance from the last 12 months which the AI
community would consider the most significant for that period. I guess
Deep Blue wouldn't really fit into this timeframe, but it's this sort of
thing I'm looking for. Any ideas?
Thanks in advance.
Ben
[ comp.ai is moderated. To submit, just post and be patient, or if ]
[ that fails mail your article to <com...@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]
Ben Rees <be...@forager.co.uk> wrote:
> Hi - I'm writing a brief overview of AI software at the moment and was
> hoping to mention one event or advance from the last 12 months which the AI
> community would consider the most significant for that period. I guess
> Deep Blue wouldn't really fit into this timeframe, but it's this sort of
> thing I'm looking for. Any ideas?
I'll suggest several:
1) The adoption of 'FaceIt' face-recognition software by the city of
Tampa caused a major scandal. That this difficult AI/vision problem has
progressed so far is very impressive.
Citations: http://groups.google.com/groups?ic=1&th=4bc9e6d587b793c3
2) The XML/semantic-web initiative has reached critical mass, and people
are (finally) really looking for general representation schemes that
will appeal to the broadest possible user-base.
3) 'The Sims' and 'Black & White' were videogame blockbusters, due
exclusively to their quantum-leap AI, representing human behavior more
_interestingly_ than ever before.
--
http://www.robotwisdom.com/ "Relentlessly intelligent
yet playful, polymathic in scope of interests, minimalist
but user-friendly design." --Milwaukee Journal-Sentinel
I just wrote:
> I'll suggest several:
One more-- the Japanese have mastered bipedal robots that walk. (The
first one I saw had me _totally_ convinced there was a person inside.)
But I'm thinking it might be helpful to spell out in more detail what I
find significant in each:
1) Bipedal robots
What the Japanese engineers finally realised was:
-- if you lift one foot, and you're not balanced, you can fall over
-- balancing on one leg requires shifting your center of gravity
*to the side*
-- this requires hip joints that can shift side to side as well as
front and back
The demos prominently feature the robot lifting one foot and shifting
its other hip to stay balanced.
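The weight-shift described above can be sketched as a toy calculation (a
2-D, frontal-plane model; all names here are illustrative, not from any
real robot controller):

```python
# Toy model of the weight-shift described above: before lifting a foot,
# the hips must move the center of mass (COM) sideways until its ground
# projection lies over the stance foot; only then can the other foot
# safely leave the ground.

def com_over_stance_foot(com_x, foot_x, foot_half_width):
    """True if the COM's ground projection falls within the stance foot."""
    return abs(com_x - foot_x) <= foot_half_width

def shift_needed(com_x, foot_x, foot_half_width):
    """How far the hips must shift the COM sideways before a safe lift."""
    if com_over_stance_foot(com_x, foot_x, foot_half_width):
        return 0.0
    # Move the COM just to the nearest edge of the stance foot.
    edge = foot_x - foot_half_width if com_x < foot_x else foot_x + foot_half_width
    return edge - com_x

# Standing symmetrically: COM at 0, stance (right) foot centered at
# +0.1 m, foot half-width 0.05 m -> shift the COM 0.05 m to the right.
print(shift_needed(0.0, 0.1, 0.05))
```

Robots without the side-to-side hip joint simply have no way to produce
that shift, which is why the lateral degree of freedom was the missing
piece.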
2) Face-recognition
The breakthru here is much more generally useful. When the eye (or a
camera) sees a face under normal lighting, there are varying levels of
shadow whose arrangement depends mainly on: 1) the position of the light
source, 2) the shape of the object casting the shadow (eg the nose), and
3) the shape (and coloring) of the surface onto which it falls (eg the
cheek).
So the software scans the camera image looking for shapes that _might_
be faces, and then tries to guess the dimensions of the nose and cheek
that must have generated those particular shadows.
If it can work from a moving picture-- with multiple variations in the
angles shown-- it can refine its guess much more quickly. (The Tampa
system worked from standard front-and-side mugshots, but you can bet
that police will soon start demanding 3-D scans. FaceIt only considers
nose/cheeks/eyes and completely ignores facial hair and hairstyles.)
Unlike bipedal walking, this technique can be generalised for any sort
of visual recognition-- I'm sure we can eventually expect add-on modules
for body positions, clothing styles, vehicles, etc etc etc.
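A minimal sketch of the inference step, assuming a Lambertian (matte)
surface and a single known light direction (this is illustrative only,
not FaceIt's actual algorithm):

```python
import math

# Lambertian shading: observed brightness is proportional to the cosine
# of the angle between the surface normal and the light direction.  Given
# the light position (factor 1 above), a brightness value constrains the
# surface orientation (factors 2 and 3) -- which is what lets software
# "guess the dimensions" that must have generated the observed shadows.

def brightness(surface_angle, light_angle, albedo=1.0):
    """Forward model: render a 1-D surface patch under a distant light."""
    return albedo * max(0.0, math.cos(surface_angle - light_angle))

def recover_angle(observed, light_angle, albedo=1.0):
    """Inverse model: a surface angle consistent with the shading.
    (Ambiguous up to sign -- one reason multiple views refine the guess.)"""
    return light_angle + math.acos(min(1.0, observed / albedo))

light = math.radians(30)
cheek = math.radians(80)           # a steeply tilted patch, mostly in shadow
b = brightness(cheek, light)
print(math.degrees(recover_angle(b, light)))   # ~80: shape from shading
```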
3) The Sims
The Sims has a simple and original model for generating semi-realistic
behavior, based (surprisingly) on 'object-oriented' design.
The world of the Sims is full of physical objects like couches and
televisions, and each object 'knows' a few likely interactions it can
have with a person. (Eg for couch: sit on, lie on, stand on, jump on,
etc.)
When a person approaches an object, the object queries the person's
motivational state (hungry, thirsty, bored) and suggests any
interactions that might gratify the motive. The person then chooses
whether or not to pursue any of the suggestions, and how to customise
them for the exact current circumstances.
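The mechanism described above can be sketched in a few lines (a guess at
the design, not Maxis's actual code; every name here is invented):

```python
# "Smart object" sketch: each object advertises interactions tagged with
# the motives they gratify; the person queries nearby objects and picks
# whichever advertised interaction best matches its current motives.

class SmartObject:
    def __init__(self, name, interactions):
        self.name = name
        self.interactions = interactions   # {action: {motive: gratification}}

    def suggest(self, motives):
        """Score each interaction by how much it would gratify the
        person's current motivational state (higher need -> higher score)."""
        return {action: sum(effects.get(m, 0) * need
                            for m, need in motives.items())
                for action, effects in self.interactions.items()}

def choose(person_motives, objects):
    """The person picks the best-scoring suggestion across nearby objects."""
    return max(((obj.name, action, score)
                for obj in objects
                for action, score in obj.suggest(person_motives).items()),
               key=lambda t: t[2])

couch = SmartObject("couch", {"sit on": {"comfort": 3, "energy": 1},
                              "lie on": {"comfort": 2, "energy": 4}})
tv = SmartObject("tv", {"watch": {"fun": 5}})

# A tired, slightly bored person: lying on the couch wins.
print(choose({"energy": 0.9, "fun": 0.2}, [couch, tv]))
```

The key design point is that the behavioral knowledge lives in the
objects, not the people, so adding a new object automatically extends
every person's repertoire.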
We might imagine a longterm, collective 'open source' initiative to
refine and extend this database of plausible object-interactions into an
encyclopedia of the virtual world.
And such a database will eventually be needed, as well, by the
visual-recognition system (#2 above): before it can recognise that a
person is getting out of a car, it has to understand that among the many
interactions between persons and cars are 'getting in' and 'getting
out'.
4) Semantic Web
The situation with XML seems to me by far the strangest to explain. AI
has been trying to 'tag' the semantics of documents for 40 years, and
mostly failing.
Tim Berners-Lee has what appears to me to be an extremely naive
enthusiasm for this *longterm* AI research project, and has by sheer
force of will leveraged his authority in the Web-world to get
semantic-tagging tools built into web-browsers at the deepest level.
When he started this campaign, his claim was that tags for displaying
text-styles (italic, bold, headline) should logically be reduced to
their underlying semantic tags (emphasis, *strong* emphasis, and, um,
***big*** emphasis).
If this were true, XML would be eagerly embraced by page designers, and
quickly, naturally, inevitably supplant HTML. But I don't think anybody
is trying to claim this anymore...
Instead, XML is being eagerly embraced for exchanging data between
databases, which is a perfectly honorable enterprise, but has absolutely
nothing to do with display-styles in browsers (nor should it).
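The contrast is easy to see side by side: a display tag says how text
should look, while an interchange record says what the data *is* (the
tag names below are invented for illustration):

```xml
<!-- Display markup:  <i>Widget Co.</i> shipped <b>12</b> units -->
<!-- The same facts as an XML data-exchange record:             -->
<inventory>
  <widget sku="W-1001">
    <supplier>Widget Co.</supplier>
    <quantity>12</quantity>
  </widget>
</inventory>
```

Nothing in the record implies any rendering at all; its only job is to
let two databases agree on what a widget, a supplier, and a quantity are.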
And while most XML databases at first will be on the (boring) level of
widget-inventories, they'll also inevitably have to add customer
databases... and as soon as you try to fit human beings into
well-defined database categories, you find yourself back at the level of
1960s AI, and the whole can of worms it's been trying to avoid!
But this is good.
Customer behavior might now be analysed in the sort of object-oriented
terms pioneered by 'The Sims'-- customer buys widget, customer loses
widget, customer breaks widget.
So TimBL, by his stubborn naivete, has created this empty semantic
juggernaut that we no longer have any excuse _not_ to fill...
--
http://www.robotwisdom.com/ "Relentlessly intelligent
yet playful, polymathic in scope of interests, minimalist
but user-friendly design." --Milwaukee Journal-Sentinel
> 1) Bipedal robots [...]
It's not really relevant to AI (if you mean "thinking machines"); it's
just another advance in robotics engineering.
> 2) Face-recognition
>
> The breakthru here is much more generally useful. When the eye (or a
> camera) sees a face under normal lighting, there are varying levels of
> shadow whose arrangement depends mainly on: 1) the position of the light
> source, 2) the shape of the object casting the shadow (eg the nose), and
> 3) the shape (and coloring) of the surface onto which it falls (eg the
> cheek).
>
> So the software scans the camera image looking for shapes that _might_
> be faces, and then tries to guess the dimensions of the nose and cheek
> that must have generated those particular shadows.
>
> If it can work from a moving picture-- with multiple variations in the
> angles shown-- it can refine its guess much more quickly. (The Tampa
> system worked from standard front-and-side mugshots, but you can bet
> that police will soon start demanding 3-D scans. Face-It only considers
> nose/cheeks/eyes and completely ignores facial hair and hairstyles.)
>
> Unlike bipedal walking, this technique can be generalised for any sort
> of visual recognition-- I'm sure we can eventually expect add-on modules
> for body positions, clothing styles, vehicles, etc etc etc.
>
I don't think it will help in actual thinking. It just seems to be the
next stage on from OCR.
> 3) The Sims
>
> The Sims has a simple and original model for generating semi-realistic
> behavior, based (surprisingly) on 'object-oriented' design.
>
> The world of the Sims is full of physical objects like couches and
> televisions, and each object 'knows' a few likely interactions it can
> have with a person. (Eg for couch: sit on, lie on, stand on, jump on,
> etc.)
>
> When a person approaches an object, the object queries the person's
> motivational state (hungry, thirsty, bored) and suggests any
> interactions that might gratify the motive. The person then chooses
> whether or not to pursue any of the suggestions, and how to customise
> them for the exact current circumstances.
>
> We might imagine a longterm, collective 'open source' initiative to
> refine and extend this database of plausible object-interactions into an
> encyclopedia of the virtual world.
>
> And such a database will eventually be needed, as well, by the
> visual-recognition system (#2 above): before it can recognise that a
> person is getting out of a car, it has to understand that among the many
> interactions between persons and cars are 'getting in' and 'getting
> out'.
>
Can it learn at all? Does it have any form of common sense? If it is
purely rule-based like expert systems then it is not an advance.
> 4) Semantic Web
>
> The situation with XML seems to me by far the strangest to explain. AI
> has been trying to 'tag' the semantics of documents for 40 years, and
> mostly failing.
>
> Tim Berners-Lee has what appears to me as an extremely naive enthusiasm
> for this *longterm* AI research project, and has by sheer force of will
> leveraged his authority in the Web-world to get semantic-tagging tools
> built into web-browsers at the deepest level.
>
> When he started this campaign, his claim was that tags for displaying
> text-styles (italic, bold, headline) should logically be reduced to
> their underlying semantic tags (emphasis, *strong* emphasis, and, um,
> ***big*** emphasis).
>
> If this were true, XML would be eagerly embraced by page designers, and
> quickly, naturally, inevitably supplant HTML. But I don't think anybody
> is trying to claim this anymore...
>
> Instead, XML is being eagerly embraced for exchanging data between
> databases, which is a perfectly honorable enterprise, but has absolutely
> nothing to do with display-styles in browsers (nor should it).
>
> And while most XML databases at first will be on the (boring) level of
> widget-inventories, they'll also inevitably have to add customer
> databases... and as soon as you try to fit human beings into
> well-defined database categories, you find yourself back at the level of
> 1960s AI, and the whole can of worms it's been trying to avoid!
>
I don't think that will help at all in advancing AI.
Oliver Ford wrote:
> Can it learn at all? Does it have any form of common sense? If it is
> purely rule-based like expert systems then it is not an advance.
Advances in expert systems (whether rule-based or not) and in rule-based
systems continue. There are usually a few papers presented at large
conferences on expert systems doing bigger and better things, as well
as conferences specific to expert systems (off the top of my head I can
only think of the SOAR workshops).
Kirt Undercoffer
> > 1) Bipedal robots [...]
> It's not really relevant to AI (if you mean "thinking machines"); it's
> just another advance in robotics engineering.
As the cliche goes, it was an AI problem, _until_ it was solved.
> > 2) Face-recognition [...]
> I don't think it will help in actual thinking. It just seems to be the
> next stage on from OCR.
The aspect of 'thinking' you seem to be missing is 'imagination'. ;^/
> > 3) The Sims [...]
> Can it learn at all? Does it have any form of common sense? If it is
> purely rule-based like expert systems then it is not an advance.
It's about knowledge representation, applied to the uniquely thorny
domain of human behavior. (You can't even attempt the expert system
until you have a basic representation of the domain.)
> > 4) Semantic Web [...]
> I don't think that will help at all in advancing AI.
Thanks, you've neatly encapsulated all the major blindspots of the
anti-symbolic-AI faction... which was my primary intended target!
--
http://www.robotwisdom.com/ "Relentlessly intelligent
yet playful, polymathic in scope of interests, minimalist
but user-friendly design." --Milwaukee Journal-Sentinel