I tried asking myself if I could differentiate Locke, Berkeley,
and Hume by drawing a diagram of world-senses-mind and tagging
different elements as 'certain' or 'doubtful' according to
those thinkers...
And then I started wondering if I could apply my 'history
markup language' project to philosophy:
http://www.robotwisdom.com/web/biography.html
so that each item in the timeline of philosophy was reduced
to a few differentiating parameters (like 'doubt')... and
then I realised that Mortimer Adler's Syntopicon:
http://www.robotwisdom.com/ai/syntopicon.html
is quite literally an attempt at a Philosophy Markup Language,
sifting thru the classics and choosing a limited vocabulary
of topics that can be assigned to subsections across many
different authors...
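Just to make that concrete, a single marked-up item might look
something like this (a rough Python sketch; the topic names and
parameter values are placeholders I'm inventing here, not Adler's
actual vocabulary):

    # One timeline item reduced to a few differentiating parameters,
    # re-using the world/senses/mind tagging from the diagram above.
    # (The stance values are illustrative guesses, not settled readings.)
    item = {
        "author": "Berkeley",
        "work": "Principles of Human Knowledge",
        "topics": ["knowledge", "sense", "doubt"],   # a limited shared vocabulary
        "stance": {"world": "doubtful", "senses": "certain", "mind": "certain"},
    }

    def differs(a, b):
        """Parameters on which two items disagree."""
        shared = a["stance"].keys() & b["stance"].keys()
        return {k: (a["stance"][k], b["stance"][k])
                for k in shared if a["stance"][k] != b["stance"][k]}

Once the stances are that terse, comparing two thinkers becomes a
mechanical diff.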
And now I'm seeing this converge with a quote from a separate
message to can-l:
> In 1959, John McCarthy described the path to computers with
> common sense: http://www-formal.stanford.edu/jmc/mcc59.html
> "Interesting changes in behavior must be expressible in a
> simple way... In order for a program to be capable of
> learning something it must first be capable of being told it."
So I'm picturing an even-more-simplified version of Adler's
'markup', that accurately nails the various shades of
philosophical belief among the Greeks, especially.
But what I mean by 'simplified' will have to start from a modern,
AI-savvy model of the human psyche, especially the analogy that
the brain mimics a computer running object-oriented simulations
of real-world systems.
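By 'object-oriented simulations' I just mean something like the
toy below (Python, purely illustrative; no claim about how neurons
actually implement it):

    # A toy object simulating one real-world system: a falling rock.
    class Rock:
        def __init__(self, height, velocity=0.0):
            self.height = height
            self.velocity = velocity

        def step(self, dt=0.1, gravity=9.8):
            # advance the simulation one tick; this is the mind's prediction
            self.velocity -= gravity * dt
            self.height = max(0.0, self.height + self.velocity * dt)
            return self.height

    # A belief counts as accurate insofar as predictions like these
    # keep matching what the senses actually report back.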
One of the earliest philosophical riddles was 'why is there
suffering?', so we should picture ancient philosophers trying
to postulate a world-simulation that accounts for suffering--
are there karmic laws that punish the wicked? Is there a
malevolent superhuman Adversary battling a superhuman
benevolence? Or is it all just unknowable?
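In the markup terms above, those rival answers are just different
settings of a few parameters on the same world-simulation (the
field names below are placeholders of my own, not a real taxonomy):

    # Rival world-simulations that try to account for suffering,
    # reduced to a handful of differentiating parameters.
    accounts = [
        {"school": "karmic",    "moral_law": "automatic", "agents": None,
         "knowable": True},
        {"school": "dualist",   "moral_law": None,
         "agents": "adversary vs. benevolence", "knowable": True},
        {"school": "skeptical", "moral_law": None, "agents": None,
         "knowable": False},
    ]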
The Greeks were great debaters, which made them hyperconscious
of logic and fallacies and language. The brain's simulations
can be more or less accurate/logical, and language can be used
to articulate details of the simulations, and to communicate
corrections regarding their inaccuracies.
When we test the grammaticalness of a sentence, we're drawing on
a different mental faculty from the one that simulates the
_topic_ of the sentence. But JL Austin showed that
ordinary language has a deeper layer of 'correctness' that
can detect misuse of words like 'if' and 'can'. (Might this
involve our mental-models of other minds?)
For reasons I'm still puzzling over, some of the biggest
quandaries in philosophy have concerned the reliability of
perception and the ontological status of universals. I think
even modern cognitive-psych still has issues here, but we
can at least try to reconcile the modern vocabulary with
the classical one.
The special credibility of formal proofs in _geometry_ had a
huge impact on philosophical theories of knowledge-- but do
our current models of mind account for that credibility yet? It took 2000
years to clarify the scientific method... but I'm not sure
if cog-sci has integrated that model, in McCarthy's 'simple
expressions' sense....
My guess is that as academics they'll find it necessary to be
as opaque as possible (ie, nothing like Adler's Syntopicon).
But hopefully they'll surprise us all...
> My guess is that as academics they'll find it necessary to be
> as opaque as possible (ie, nothing like Adler's Syntopicon).
> But hopefully they'll surprise us all...
Oops I spoke too soon.
Assuming opaqueness was not in itself an objective, could you perhaps
explain why you felt it was worth mentioning in the first place?
(I am absolutely no academic BTW)