
AI, Semantic Web, etc.


Mike Beedle
Apr 10, 2002, 2:14:57 PM

All,

I have been told that using Lisp and Lisp-like exchanges and ontologies,
e.g. KIF, FIPA ACL, KQML, Ontolingua, is not "fashionable" these days,
and that everything should go the XML way.

I disagree. I think the world is heading to a "very confined" place
with the Semantic Web, RDF, DAML, etc.; one that will exclude the
possibility of having true distributed intelligent and mobile agents
in the future.

Here is what I see.

AI made The Economist recently.
http://www.economist.com/science/tq/displayStory.cfm?story_id=1020789
and The Economist predicts a strong future presence for AI.

Why is this interesting? Because, by definition, if The Economist
covers it... it has to be big enough, at least in perception.

These are some of the drivers:

1) "The semantic Web" and related technologies:
http://www.sciam.com/2001/0501issue/0501berners-lee.html
http://www.w3.org/2001/sw/
http://www.semanticweb.org/
http://www.w3.org/RDF/

2) Darpa with DAML:
http://www.daml.org/

3) Microsoft with .NET, actively advertising "intelligent" technologies:
http://www.gotdotnet.com/terrarium/
http://www.gotdotnet.com

4) Intelligence required for B2B, ebXML, RosettaNet, EAI, etc.
http://www.rosettanet.org/rosettanet/Rooms/DisplayPages/LayoutInitial
http://www.ebxml.org/
etc.
EAI/Workflow: App Servers, EAI, Queues (MQ, Open JMS, etc.), Web Services,
Workflow, and Business Process Management (Vitria, etc.)

5) IBM's biggest iron is advertised as "intelligent":
http://www-1.ibm.com/servers/eserver/introducing/eliza/

6) Lifestreams, Linda on the Web, etc.; all requiring some basic
AI techniques like pattern-matching, and distributed agents that
know about ontologies.

And add to that the push that AI is getting from video games and
intelligent toys.

So, AI seems to be coming back through many different, but _major_,
areas:

IBM,
Gates/Microsoft,
Darpa,
W3C,
Gelernter,
Tim Berners-Lee (inventor of HTTP/HTML)
Business Standards (B2B, ebXML, EAI, RosettaNet, etc.)
Research Projects (aLife, Biological Metaphors, Digital Life)
Video Games
etc.

But help me cut through some Gordian knots... why does AI have to
be implemented through RPC, client-server, XMLish technologies that:

1) have many layers of bloat (serializations/deserializations),
2) bring discomfort and confusion by introducing
"disconnected layered languages", and
3) don't have the appropriate semantics, facilities, libraries
and power to do AI jobs?

As early as 1975 (written in 1975, published in 1982), "intelligent"
business exchanges were proposed, such as "The Common Business
Communication Language":
http://www-formal.stanford.edu/jmc/cbcl2/cbcl2.html

where basic exchanges like:

(REQUEST-QUOTE (ADJECTIVE (PENCILS #2) YELLOW)
               (GROSS 100))

have replies like:

(WE-QUOTE (OUR-STOCK-NUMBER A7305)
          (QUANTITY 100)
          (DELIVERY-DATE 3-10-77)
          (PRICE $1.00))

Even FIPA ACL, KQML and other ACLs (agent communication languages)
are Lisp-like.
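
For example, a typical KQML performative looks something like this (an
illustrative sketch only; the sender, receiver and ontology names are
made up for the example):

(ask-one
  :sender     buyer-agent
  :receiver   supplier-agent
  :language   KIF
  :ontology   office-products
  :reply-with q1
  :content    (PRICE pencil-yellow ?price))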

You can also do this with X12 and now XML, but using a LISP-like syntax
we can _also_ send rules and computable things (classes, functions, patterns,
etc.), do pattern-matching, send/share ontologies, do knowledge exchanges,
etc. So, imo, the infrastructure that LISP provides is superior for
doing AI because it:

1) provides a larger number of existing resources (libraries,
programs, etc.) for:

a) knowledge representation
b) reasoning
c) logical programming
d) expert systems
e) genetic programming
f) game playing (plans, strategies, intentions, actions, etc.)
g) parsing natural languages
etc.

This is important if we want to implement:

a) voting
b) auctions
c) coalitions
d) negotiation
e) bidding/awarding
etc.

2) requires the fewest conversions
(serialization/deserialization) when the app servers are
LISP-based

3) provides the greatest amount of computational power
and flexibility

4) is more intuitive, since the parsing language can be
the same as the exchange language.
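
To make points 2) and 4) concrete, here is a minimal sketch of a
LISP-based receiver (the handler and the canned reply are hypothetical,
just to show that READ is the parser and the parsed message is ordinary
Lisp data):

(defun handle-exchange (text)
  ;; READ-FROM-STRING parses the wire format directly: there is no
  ;; separate serialization layer, the message *is* a Lisp list.
  ;; (For untrusted input you would also bind *READ-EVAL* to NIL.)
  (let ((msg (read-from-string text)))
    (case (first msg)
      (request-quote
       ;; hypothetical canned reply, in the spirit of the CBCL example
       '(we-quote (our-stock-number a7305)
                  (quantity 100)
                  (price $1.00)))
      (t (list 'sorry 'unknown-exchange (first msg))))))

;; (handle-exchange "(request-quote (pencils yellow) (gross 100))")
;; => (WE-QUOTE (OUR-STOCK-NUMBER A7305) (QUANTITY 100) (PRICE $1.00))

The same shape of dispatch works for the bidding, auction, and negotiation
exchanges above, and nothing stops a message from carrying a rule or a
pattern that the receiver can reason about directly.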

To me, it doesn't make any sense to reinvent the DAI (distributed
AI) wheel with XMLish technologies... this may in fact contribute
to the second commercial failure of AI,

- Mike


Brad Miller
Apr 10, 2002, 4:57:44 PM

"Mike Beedle" <bee...@e-architects.com> wrote in message
news:Bi%s8.255573$702.48876@sccrnsc02...

> All,
>
> I have been told that using Lisp and Lisp-like exchanges, ontologies,
> i.e. KIF, FIPA ACL, KQML, Ontolingue is not "fashionable"
> these days, and that everything should go the XML way.
>
> I disagree. I think the world is heading to a "very confined" place
> with the Semantic Web, RDF, DAML, etc.; that will exclude the
> possiblity of having true distributed intelligent and mobile agents
> in the future.

I kind of agree with you. Certainly XML is just syntax, and RDF, DAML, etc.
are too limited, while the Semantic Web is a much bigger problem than the
program's adherents give it credit for. (I'm not sure just who they expect to
mark up all the web pages; it's too hard to do manually, and anything
automatic will require a KR much more expressive than, e.g., DAML+OIL.)

But KIF, FIPA ACL, KQML, Ontolingua (really OKBC?) are in some sense also
too weak, i.e., not expressive enough and too hard to use for many purposes,
so they aren't going to get us to "true distributed intelligent and mobile
agents in the future" either. I have some limited hopes for opencyc, but
there are several weaknesses in their model as well that will make it
difficult to base a community of agents on such a representation (in
particular with respect to representations of plans, actions, time, and
perspective. Umm, did I leave anything out? :-).

So I guess the "right" thing isn't really even on the radar yet. There's
some promising work, and some of it has actually been around for a while, but
the current talent black hole called the semantic web isn't really fighting
the right cause, and is using the wrong tools to do it. IMHO, of course. ;-)


Erik Naggum
Apr 10, 2002, 5:37:14 PM

* "Mike Beedle" <bee...@e-architects.com>

| I think the world is heading to a "very confined" place with the Semantic
| Web, RDF, DAML, etc.; that will exclude the possiblity of having true
| distributed intelligent and mobile agents in the future.

This is the whole point. Unintelligent people would be obsolete if
computers got intelligent; indeed, they are already obsolete. So they
resist all kinds of progress. Microsoft gives unintelligent people a
warm and fuzzy feeling that no matter how retarded you are, there will
always be a "for dummies" book that explains the idiot software to you,
so you can be deluded into thinking you can be part of human progress.

Artificial intelligence will never get past the natural unintelligence
blocking its path. Computers will simply not be allowed to be smarter
than a majority of the electorate, especially not if it keeps electing
presidents that should have been replaced by a computer.

///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.

Post with compassion: http://home.chello.no/~xyzzy/kitten.jpg

Mike Beedle
Apr 10, 2002, 5:38:29 PM

Brad Miller wrote:
> > I disagree. I think the world is heading to a "very confined" place
> > with the Semantic Web, RDF, DAML, etc.; that will exclude the
> > possiblity of having true distributed intelligent and mobile agents
> > in the future.
>
> I kind of agree with you. Certainly XML is just syntax, and RDF, DAML, etc.
> are too limited while the Semantic Web is a much bigger problem than the
> program adherents give it credit for. (I'm not sure just who they expect to
> mark up all the web pages, it's too hard to do manually, and anything
> automatic will require a KR much more expressive than, e.g. DAML+OIL).

Brad:

Thanks for your response. This certainly echoes some concerns
that I had: I am glad to see that others also feel that it is very
strange and makes little sense to put ontologies on "static" web pages.

This comment alone makes me feel that at least some people
also recognize how cumbersome this really is. Btw, so that you know,
before I posted these ideas here, I had much less luck in other
circles:

- distributed objects
- distributed agents
- patterns people
- general software development people
- etc.

I kept hearing the popularity argument:

"businesses love XML and HTML so we have to do DAI with XML
and HTML ...."

That kind of thing.

Brad Miller wrote:
> But KIF, FIPA ACL, KQML, Ontolingua (really OKBC?) are in some sense also
> too weak, i.e., not expressive enough, too hard to use for many purposes
> that they aren't going to get us to "true distributed intelligent and mobile
> agents in the future" either. I have some limited hopes for opencyc, but
> there are several weaknesses in their model as well that will make it
> difficult to base a community of agents on such a representation (in
> particular with respect to representations of plans, actions, time, and
> perspective. Umm, did I leave anything out? :-).
>
> So I guess the "right" thing isn't really even on the radar yet. There's
> some promising work, and some of it is actually been around for a while, but
> the curent talent black hole called the semantic web isn't really fighting
> the right cause, and using the wrong tools to do it. IMHO, of course. ;-)

That's an interesting position, and I agree with you: even if
we used our best AI tools, including the many Lisp tools or
Lisp-friendly tools available, we would still be under the mark,
but I guess much, much closer to it than with their XMLish counterparts,
- Mike


lin8080
Apr 11, 2002, 9:42:25 PM

Erik Naggum wrote:

> This is the whole point. Unintelligent people would be obsolete if
> computers got intelligent, indeed, they are already obsolete. So they
> resist all kinds of progress. Microsoft gives unintelligent people a
> warm and fuzzy feeling that no matter how retarded you are, there will
> always be a "for dummies" book that explains the idiot software to you
> so you be deluded to think you can be part of human progress.

Well

I guess intelligent people with intelligent software on modern computers
will not pay any cent ...


stefan

Erik Naggum
Apr 12, 2002, 6:14:43 AM

* lin8080 <lin...@freenet.de>

| I guess intelligent people with intelligent software on modern computers
| will not pay any cent ...

This mode of thinking should be reversed so its true meaning emerges:

will not get paid any cent

Brad Miller
Apr 12, 2002, 9:42:17 PM

"Mike Beedle" <bee...@e-architects.com> wrote in message
news:ph2t8.256694$702.48963@sccrnsc02...

>
> Brad Miller wrote:
> > > I disagree. I think the world is heading to a "very confined" place
> > > with the Semantic Web, RDF, DAML, etc.; that will exclude the
> > > possiblity of having true distributed intelligent and mobile agents
> > > in the future.
> >
> > I kind of agree with you. Certainly XML is just syntax, and RDF, DAML, etc.
> > are too limited while the Semantic Web is a much bigger problem than the
> > program adherents give it credit for. (I'm not sure just who they expect to
> > mark up all the web pages, it's too hard to do manually, and anything
> > automatic will require a KR much more expressive than, e.g. DAML+OIL).
>
> Brad:
>
> Thanks for your response. This certainly echoes some concerns
> that I had: I am glad to see that other also feel that is very strange
> and makes little sense to put ontologies on "static" web pages.
>
> This comment alone makes me feel that at least some people
> also recognize how cumbersome this really is. Btw, so that you know,
> before I posted this ideas here, I had much less luck in other
> circles:
[...]

I know what you mean, and I've had similar difficulties internally and
externally as well. I suspect most (but not all) of the computational
linguistics crowd are going to be receptive to this idea, because really
it's the same kinds of problems they've been working on already. OTOH, it's
also where there is current money, and there's something about not biting
the hand that feeds you. One of the better expositions against the current
semantic web I heard was from Chris Manning at CSLI a couple of years ago.
He may have some slides if you poke around, or I can fax you my set (I
don't think I have an electronic copy).

> Brad Miller wrote:
> > But KIF, FIPA ACL, KQML, Ontolingua (really OKBC?) are in some sense also
> > too weak, i.e., not expressive enough, too hard to use for many purposes
> > that they aren't going to get us to "true distributed intelligent and mobile
> > agents in the future" either. [...]
> >
> > So I guess the "right" thing isn't really even on the radar yet. There's
> > some promising work, and some of it is actually been around for a while, but
> > the curent talent black hole called the semantic web isn't really fighting
> > the right cause, and using the wrong tools to do it. IMHO, of course. ;-)
>
> That's an interesting position, and I agree with you, even if
> we used our best AI tools, and that includes many Lisp tools or
> Lisp-friendly tools available, we would still be under the mark,
> but I guess much, much closer to it than with their XMLish counterparts,

Certainly the current XML counterparts. There's nothing impossible about
recrating KQML as an XML protocol (yes, I said "re-crating" ;-), though
there probably isn't much advantage to doing so. But most of the XML work
I've seen is heavy on the "I know Java so the semantic web will be easy"
side of the fence, and there don't seem to be many informed people with
actual experience in KR or NLP making the arguments.
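
Just to illustrate, the ask-one performative shown earlier in the thread
could be re-crated mechanically into something like this (a purely
hypothetical rendering, not taken from any published DTD or schema):

<performative name="ask-one" sender="buyer-agent" receiver="supplier-agent"
              language="KIF" ontology="office-products" reply-with="q1">
  <content>(PRICE pencil-yellow ?price)</content>
</performative>

Same information, more angle brackets, and the content is still an
s-expression that something has to parse anyway.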

Oh well. The best we can do is to keep trying to address the shortcomings
and to pursue the approaches we think will ultimately pan out. Even if we
don't get a lot of credit for it today, there will come a time, and I don't
anticipate it will be much longer, when the semantic web fad will fade, and
the lessons learned from more solid work will outlive anything we see today.

On the lighter side, I hear Genesereth and Sowa are now pushing a joint KR,
so you can get all the shortcomings of KIF and conceptual graphs in one
convenient package!


Bob Ingria
Apr 13, 2002, 3:01:45 AM

"Brad Miller" <Bradford...@motorola.com> wrote in message news:<a982dq$l4u$1...@newshost.mot.com>...
> ...

> One of the better expositions against the current
> semantic web I heard from Chris Manning at CSLI a couple years ago. He may
> have some slides if you poke around, or I can fax you my set (I don't think
> I have an electronic copy).

This web page:

http://www-db.stanford.edu/dbseminar/Archive/FallY2000/

has a link to a set of slides by Chris Manning on this topic. This
may not be the exact talk you heard, but it seems to present the same
kind of argument.

-30-
Bob Ingria
As always, at a slight angle to the universe

lin8080
Apr 12, 2002, 10:16:47 PM

Erik Naggum wrote:
>
> * lin8080 <lin...@freenet.de>

> | I guess intelligent people with intelligent software on modern computers
> | will not pay any cent ...
>
> This mode of thinking should be reversed so its true meaning emerges:
>
> will not get paid any cent
>
> ///

yep

