PhML = Philosophy Markup Language

Jorn Barger

Mar 1, 2003, 8:07:48 AM

Lately I've been wrestling with pre-Kant philosophies of mind
and language, trying to summarise them from the perspective of
artificial intelligence, for a timeline I've been compiling:
http://www.robotwisdom.com/ai/timeline1.html

I tried asking myself if I could differentiate Locke, Berkeley,
and Hume by drawing a diagram of world-senses-mind and tagging
different elements as 'certain' or 'doubtful' according to
those thinkers...
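
Something like this toy sketch, say -- where the particular
certain/doubtful verdicts are just rough first guesses for
illustration, not careful readings of any of the three:

    # Toy sketch: the world-senses-mind diagram, with each element
    # tagged 'certain' or 'doubtful' per thinker. The verdicts are
    # rough placeholders, not careful readings of the philosophers.
    epistemic_tags = {
        "Locke":    {"world": "certain",  "senses": "certain", "mind": "certain"},
        "Berkeley": {"world": "doubtful", "senses": "certain", "mind": "certain"},
        "Hume":     {"world": "doubtful", "senses": "certain", "mind": "doubtful"},
    }

    def differentiate(a, b):
        """Diagram elements on which two thinkers get different tags."""
        return [elem for elem in epistemic_tags[a]
                if epistemic_tags[a][elem] != epistemic_tags[b][elem]]

    print(differentiate("Locke", "Hume"))   # -> ['world', 'mind']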

And then I started wondering if I could apply my 'history
markup language' project to philosophy:
http://www.robotwisdom.com/web/biography.html
so that each item in the timeline of philosophy was reduced
to a few differentiating parameters (like 'doubt')... and
then I realised that Mortimer Adler's Syntopicon:
http://www.robotwisdom.com/ai/syntopicon.html
is quite literally an attempt at a Philosophy Markup Language,
sifting thru the classics and choosing a limited vocabulary
of topics that can be assigned to subsections across many
different authors...
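
In code terms, the whole scheme might be no more than a small fixed
vocabulary of topic-tags plus assignments of those tags to passages.
A toy sketch -- the handful of topics and the two entries below are
placeholders I've made up for illustration (the real Syntopicon runs
to 102 'Great Ideas'):

    # A deliberately tiny controlled vocabulary, in the spirit of
    # Adler's Syntopicon; these five topic-tags are just stand-ins.
    TOPICS = {"doubt", "perception", "universals", "suffering", "language"}

    # Each timeline item reduced to a few differentiating parameters.
    # The entries are rough illustrations, not carefully chosen citations.
    entries = [
        {"author": "Plato",     "work": "Theaetetus", "section": "184-186",
         "topics": {"perception", "doubt"}},
        {"author": "Aristotle", "work": "De Anima",   "section": "III.4",
         "topics": {"universals", "perception"}},
    ]

    def validate(entry):
        """Reject any tag outside the fixed vocabulary -- the whole
        point of a *limited* markup language."""
        stray = entry["topics"] - TOPICS
        if stray:
            raise ValueError("unknown topics: %s" % stray)

    for e in entries:
        validate(e)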

Jorn Barger

Mar 16, 2003, 8:37:04 AM

On can-l, I wrote in message news:<16e613ec.03030...@posting.google.com>...

> Lately I've been wrestling with pre-Kant philosophies of mind
> and language, trying to summarise them from the perspective of
> artificial intelligence, for a timeline I've been compiling:
> http://www.robotwisdom.com/ai/timeline/0000.html
> [...] I realised that Mortimer Adler's Syntopicon:
> http://www.robotwisdom.com/ai/syntopicon.html
> is quite literally an attempt at a Philosophy Markup Language,
> sifting thru the classics and choosing a limited vocabulary
> of topics that can be assigned to subsections across many
> different authors...

And now I'm seeing this converging with a quote from a
separate message to can-l:
> In 1959, John McCarthy described the path to computers with
> common sense: http://www-formal.stanford.edu/jmc/mcc59.html
> "Interesting changes in behavior must be expressible in a
> simple way... In order for a program to be capable of
> learning something it must first be capable of being told it."

So I'm picturing an even-more-simplified version of Adler's
'markup', one that accurately nails the various shades of
philosophical belief among the Greeks especially.

But what I mean by 'simplified' will have to start from a modern,
AI-savvy model of the human psyche, especially the analogy that
the brain mimics a computer running object-oriented simulations
of real-world systems.

One of the earliest philosophical riddles was 'why is there
suffering?', so we should picture ancient philosophers trying
to postulate a world-simulation that accounts for suffering--
are there karmic laws that punish the wicked? Is there a
malevolent superhuman Adversary battling a superhuman
benevolence? Or is it all just unknowable?
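
Pushing the object-oriented analogy all the way (and this is purely
my own toy framing, not anybody's published model), those rival
answers look like alternative parameter settings on one and the same
world-simulation:

    # Toy object-oriented 'world simulation' in which the ancient
    # answers to 'why is there suffering?' show up as different
    # parameter settings. Class and parameter names are my own
    # invention, purely illustrative.
    class WorldSimulation:
        def __init__(self, karmic_law=False, adversary=False, knowable=True):
            self.karmic_law = karmic_law  # wickedness is automatically punished
            self.adversary = adversary    # a malevolent superhuman agent at work
            self.knowable = knowable      # whether the cause can be known at all

        def explain_suffering(self, person_is_wicked):
            if not self.knowable:
                return "unknowable"
            if self.karmic_law and person_is_wicked:
                return "karmic punishment"
            if self.adversary:
                return "work of the Adversary"
            return "unexplained"

    # Three rival 'philosophies' as three parameterizations of one model:
    karmic  = WorldSimulation(karmic_law=True)
    dualist = WorldSimulation(adversary=True)
    sceptic = WorldSimulation(knowable=False)

    print(karmic.explain_suffering(person_is_wicked=True))    # karmic punishment
    print(dualist.explain_suffering(person_is_wicked=False))  # work of the Adversary
    print(sceptic.explain_suffering(person_is_wicked=True))   # unknowable

The point isn't the code, it's that the rival doctrines differ by
only a parameter or two -- which is just the sort of 'simple
expression' McCarthy was asking for.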

The Greeks were great debaters, which made them hyperconscious
of logic and fallacies and language. The brain's simulations
can be more or less accurate/logical, and language can be used
to articulate details of the simulations, and to communicate
corrections regarding their inaccuracies.

When we test the grammaticalness of a sentence, we're
referring to a different mental faculty than our simulation
of the _topic_ of the sentence. But JL Austin showed that
ordinary language has a deeper layer of 'correctness' that
can detect misuse of words like 'if' and 'can'. (Might this
involve our mental-models of other minds?)

For reasons I'm still puzzling over, one of the biggest
quandaries in philosophy has concerned the reliability of
perception and the ontological status of universals. I think
even modern cognitive-psych still has issues here, but we
can at least try to reconcile the modern vocabulary with
the classical one.

The special credibility of formal proofs in _geometry_ had a
huge impact on philosophical theories of knowledge-- but do
our current models of mind resolve this yet? It took 2000
years to clarify the scientific method... but I'm not sure
if cog-sci has integrated that model, in McCarthy's 'simple
expressions' sense....

ian glendinning

Mar 30, 2003, 9:19:29 AM

How does this relate to this work by the American Philosophical Association, Jorn?

http://radio.weblogs.com/0110772/2003/03/28.html#a847

Jorn Barger

Mar 30, 2003, 1:19:15 PM

i...@psybertron.org (ian glendinning) wrote in message news:<f7b2e276.03033...@posting.google.com>...

> How does this relate to this work by the American Philosophical
> Association Jorn ? http://radio.weblogs.com/0110772/2003/03/28.html#a847

My guess is that as academics they'll find it necessary to be
as opaque as possible (i.e., nothing like Adler's Syntopicon).
But hopefully they'll surprise us all...

ian glendinning

Apr 10, 2003, 7:27:12 PM

jo...@enteract.com (Jorn Barger) wrote ....

> My guess is that as academics they'll find it necessary to be
> as opaque as possible (ie, nothing like Adler's Syntopicon).
> But hopefully they'll surprise us all...


Oops, I spoke too soon.

Assuming opaqueness was not in itself an objective, could you perhaps
explain why you felt it was worth mentioning in the first place?

(I am absolutely no academic BTW)
