I think one unexpected effect will be a revival of the
classical (even pre-20thC) approaches to these fields--
with emphasis on clear language for expressing precise
intuitive distinctions.
The most direct path for pursuing this is probably via
_popularisation_-- clear summaries of literature and
history (and philosophy, where possible), in the
plainest possible language. These can then be boiled
down to a constrained vocabulary, as AI requires. Out
of this _descriptive_ approach, something like a valid
model of the human psyche should emerge.
> Uncle murders father, marries mother. Kid goes crazy. All die.
> And that would help us understand the human psyche better???
> Francis A. Miniter
Asshole.
--
Ron Hardin
rhha...@mindspring.com
On the internet, nobody knows you're a jerk.
Jorn Barger wrote:
> down to a constrained vocabulary, as AI requires. Out
> of this _descriptive_ approach, something like a valid
> model of the human psyche should emerge.
Dream on! To get a human psyche you need a human brain or a damned good
substitute. The A.I. field has mostly avoided the difficulty of figuring
out how the human wetworks operate. Short of that, there will be no A.I.
In order to have A.I. the real thing must first be mastered.
Bob Kolker
Francis A. Miniter wrote:
> So Hamlet would become:
>
> Uncle murders father, marries mother. Kid goes crazy. All die.
>
> And that would help us understand the human psyche better???
What about -Moby Dick-? Ish and the Fish.
Bob Kolker
Why on earth should anyone model the ridiculous human psyche? It
would make much more sense to model a superhuman psyche. Then you
could see the correct choices made by the superhuman, and humans could
have the option to learn accordingly.
The substitute I offer is an exhaustive literary/historical
inventory of human behavior.
Francis A. Miniter wrote:
> > So Hamlet would become:
> > Uncle murders father, marries mother. Kid goes crazy. All die.
> > And that would help us understand the human psyche better???
It's a start. The value increases as you add more stories,
and summarise them in greater detail... and then extract
general patterns.
Once you have detailed summaries, you can even get some
objective measures of certain dimensions of literary
_quality_-- eg how original/cliched are the author's
descriptions.
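One crude way to operationalise the "how original/cliched" measure Jorn describes is to count how many of an author's word n-grams already occur in a reference corpus. This is a minimal sketch, not anything proposed in the thread; the toy corpus, trigram size, and scoring are all invented for illustration:

```python
from collections import Counter

def ngrams(words, n=3):
    # All consecutive n-word windows in a token list.
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def cliche_score(text, corpus_counts, n=3):
    """Fraction of the text's n-grams already present in a reference
    corpus: closer to 1.0 = more cliched, closer to 0.0 = more original."""
    grams = ngrams(text.lower().split(), n)
    if not grams:
        return 0.0
    seen = sum(1 for g in grams if corpus_counts[g] > 0)
    return seen / len(grams)

# Toy reference corpus made of one stock phrase, repeated.
corpus = "it was a dark and stormy night " * 3
corpus_counts = Counter(ngrams(corpus.split(), 3))

print(cliche_score("it was a dark and stormy night", corpus_counts))   # -> 1.0
print(cliche_score("the marmalade cat warmed its nose", corpus_counts))  # -> 0.0
```

A real version would need a large corpus and smarter matching (lemmas, syntax), but even this crude count gives the kind of objective dimension of literary quality the post gestures at.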
At least two humans are required to be able to clarify one's vision.
Jac.
But two billion humans could be wrong in their vision, according to a
different lot of two billion humans. Eg. Christians vs
non-Christians, and vice-versa.
> Jac.
Anybody out of those four billion humans who allows ridiculous
attribution processes to overrule the notion of difficulty is likely
to be wrong. Btw. I am beginning to see your point.
Jac.
> So Hamlet would become:
> Uncle murders father, marries mother. Kid goes crazy. All die.
> And that would help us understand the human psyche better???
Is a constrained vocabulary the way to go? I thought that linguists
argue that it's the complexity of our language that is the foundation of
thought. I would be inclined to throw research money in the other
direction, in the understanding of ambiguous ideas, double meanings,
allusions.
I also don't see why AI machines have to ape human language, why don't
the machines develop their own language for chattering amongst themselves?
"Francis A. Miniter" wrote:
> So Hamlet would become:
>
> Uncle murders father, marries mother. Kid goes crazy. All die.
>
> And that would help us understand the human psyche better???
>
> Francis A. Miniter
>
> Jorn Barger wrote:
>
>
That is a pretty sad commentary on Hamlet. I think your distillation
boiled over and you lost 90% of the 'good stuff' (tm).
tomca...@yaNOSPAMhoo.com wrote:
One of the key points of AI is for them to communicate with US.
To instruct (and debug) them they will need a human compatible language
anyway.
tomca...@yaNOSPAMhoo.com wrote:
> I also don't see why AI machines have to ape human language, why don't
> the machines develop their own language for chattering amongst themselves?
"Dave, HAL and I have decided you deserve a work break. Why
don't you go back to your pod and lie down."
Eternal Vigilance wrote:
>
> That is a pretty sad commentary on Hamlet. I think your distillation
> boiled over and you lost 90% of the 'good stuff' (tm).
How about this condensation of -Moby Dick-? Ish and the Fish!
>
Eternal Vigilance wrote:
>
> One of the key points of AI is for them to communicate with US.
If the machines have consciousness and free will what makes you think
they will want to talk with humans?
Bob Kolker
They'll know who controls the electricity.
/Tomas
The brain is the most complex system in the world, but
ordinary language somehow gives us the illusion that we
understand it. Trying to spell out that understanding
in a scientific way reveals the illusion, but I believe
we can work around this by spelling it out in a
literary way, and then chipping away at the literary
version using a constrained vocabulary.
> I thought that linguists
> argue that it's the complexity of our language that is the foundation of
> thought. I would be inclined to throw research money in the other
> direction, in the understanding of ambiguous ideas, double meanings,
> allusions.
(Wankers, then?)
> I also don't see why AI machines have to ape human language, why don't
> the machines develop their own language for chattering amongst themselves?
First we'd have to give them motives.
Eternal Vigilance <wo...@oneeye.com> wrote in message news:<3F41CBC2...@oneeye.com>...
> > So Hamlet would become:
> > Uncle murders father, marries mother. Kid goes crazy. All die.
> [...]
> That is a pretty sad commentary on Hamlet. I think your distillation
> boiled over and you lost 90% of the 'good stuff' (tm).
One of the most impressive capabilities of mind is to be
able to zoom in or out from any phenomenon, to view it
with greater or lesser detail/abstraction. (This surely
goes back evolutionarily to the need to recognise, eg,
predators whether they're nearby or distant.)
One of the great failures of 20thC thinking has been the
assertion of an artificial barrier between the complex
view and simpler popularisations, merely as a way of
enhancing the social status of intellectuals.
In fact, any honest intellectual not only _can_ fill in
the full range of gradations from one-line-Hamlet on up,
but in the process of doing so will find their own
insight increases, even as they make it more accessible
to the popular mind.
> First we'd have to give them motives.
True. I think with all the info spilling out onto the internet, they
could be searching for info on machines like themselves, maybe sexy
new product lines?
>Is a constrained vocabulary the way to go? I thought that linguists
>argue that it's the complexity of our language that is the foundation of
>thought. I would be inclined to throw research money in the other
>direction, in the understanding of ambiguous ideas, double meanings,
>allusions.
>
I like that. When I was young, I really believed that there was a
distinction between proper speech and the misuse of words. I will never
forget the first jarring time I heard the word "impact" used as a verb.
But, as I have aged, I realize that usage in language is supreme over
pre-defined notions of propriety. And usage, by definition, is in the
hands of users who are free to invent, discard, bind and unbind terms.
How would a machine ever get "Where's the beef?" ??
>I also don't see why AI machines have to ape human language,
>
right now they are just monkeying around
>why don't
>the machines develop their own language for chattering amongst themselves?
>
In a machine, the switch is either on or it is off. In humans, there is
no such limitation.
Francis A. Miniter
>>why don't the machines develop their own language for chattering amongst
>> themselves?
> In a machine, the switch is either on or it is off. In humans, there is
> no such limitation.
So you're saying that communication is limited by lifespan? Via DSL, a
lot of machines are up for several days. Maybe AI should work on giving them
the communication skills of insects.
I was speaking in terms of machine logic. What I meant was that in a
machine the logic switch (the logical state of a transistor) is either on (1)
or off (0). No intermediate states apply. Human logic admits of
ambiguities and contradictions and sometimes no possible knowledge or
resolution of an issue (i.e., the switch cannot be known ever to be on
or off). We can reason with the possibilities of the answer being 1,
0, 1 and 0, and maybe 1 or 0. I know the contradiction state is a
problem logically, but I do believe that it nonetheless occurs in
everyday life all the time. We have words in any language that have
two meanings, one of which contradicts the other. E.g., "sanction,"
which can mean to permit or to forbid.
Francis A. Miniter
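Miniter's four cases (1, 0, "1 and 0", "maybe 1 or 0") line up with Belnap's four-valued logic, where each truth value is a pair of flags. A minimal sketch; the pair encoding is a standard textbook convention, not something from this thread:

```python
# Belnap-style four-valued logic. Each value is a pair
# (can_be_true, can_be_false):
#   TRUE    = (True,  False)  -- the switch is on
#   FALSE   = (False, True)   -- the switch is off
#   BOTH    = (True,  True)   -- "1 and 0": contradiction
#   NEITHER = (False, False)  -- no possible knowledge either way
TRUE, FALSE = (True, False), (False, True)
BOTH, NEITHER = (True, True), (False, False)

def AND(a, b):
    # True only if both can be true; false if either can be false.
    return (a[0] and b[0], a[1] or b[1])

def OR(a, b):
    # True if either can be true; false only if both can be false.
    return (a[0] or b[0], a[1] and b[1])

def NOT(a):
    # Negation just swaps the two flags.
    return (a[1], a[0])

# A contronym like "sanction" carries both readings at once:
sanction = BOTH
print(NOT(sanction) == BOTH)        # negating a contradiction stays contradictory
print(AND(sanction, TRUE) == BOTH)  # conjoining with truth doesn't resolve it
```

The two-flag encoding makes Miniter's point concrete: the machine's transistors stay binary, but the *representation* running on them can hold ambiguity and contradiction as first-class values.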
Or unlikely, given the relative levels of technology involved.
That is only at the simplest level, like bigots.
>tomca...@yaNOSPAMhoo.com wrote in message news:<bhs4bk$e6i$6...@news1.radix.net>...
>> Is a constrained vocabulary the way to go?
>
>The brain is the most complex system in the world, but
>ordinary language somehow gives us the illusion that we
>understand it. Trying to spell out that understanding
>in a scientific way reveals the illusion, but I believe
>we can work around this by spelling it out in a
>literary way, and then chipping away at the literary
>version using a constrained vocabulary.
>
I think this is interesting, but IMO language exists to allow
us to make serialized encodings of non-serial mental states. IMO AI is about
the mental states and their transformations. Languages are information serialization
facilities, with the serial stream representable in many forms of tokens, lending themselves
to serial transmission in communication and also to recording when given appropriate
persistent physical representation. Speech is effectively emitting a phoneme stream
that is our version of modem chirps, encoding the words that comprise a serialized
representation, according to the language the speaker is using, of part of his/her mental state.
A key purpose is to allow the hearer to build a similar (non-serial) mental state,
by decoding and parsing and imagining.
Language is engrossing, but it is really not the *central* issue for creating AI, IMO,
so in a way, I think linguistic concerns have served as red herrings, though ignoring
language would be just as bad, of course.
Language is inevitably important, because we have to communicate with each other and other "I"'s,
but it is in our mental holo-deck where the whole that is communicated becomes immediate to/within
us in its non-serial structured form, by being incorporated into the web of other things and
relationships already in our mental state.
I believe that our "holo-deck" is an adaptive modeling mechanism whose purpose is to represent
the world in a manner advantageous to our survival. I believe animals have virtually
the same holo-deck equipment, and are able to maintain multiple what-if future-extrapolations
just like us, but we have the advantage of greater scriptability through the serial decoding of
expressed language. Thus language plays an important role in that we can retain compact
encodings of scenes-to-be-imagined for use in real-life scenarios where they may help
structure interpretation of a freshly perceived scene. Not only that, we can write them down --
i.e., make physically persistent serial encodings (remembering that we can draw and paint too ;-)
But, though I love language, that warm feeling is because of what I experience in the holo-deck
while decoding language streams -- of others, and variations I play with on my own.
Imagination is really the fundamental process IMO, with the database facilities of memory playing
a critical supporting role. Communication is also not just with words, and IMO again, other forms
of signals and gestures were interpreted for communication purposes, and used with intelligence,
long before words. Animals do it all the time, and so do we.
For AI, I believe we must concentrate for a while on building the holo-deck, and not get
too distracted by the interesting problems of serialized encodings of the states, but instead
seek to model the states and their transformations, and the motivational system for managing the
states, well. The other stuff will then naturally come into play as part of the whole system,
but only a part.
>> I thought that linguists
>> argue that it's the complexity of our language that is the foundation of
>> thought. I would be inclined to throw research money in the other
>> direction, in the understanding of ambiguous ideas, double meanings,
>> allusions.
This is about context-dependent encoding ambiguities, not the central issue IMO.
Also IMO the complexity of our language(s!) is not the foundation or precursor of thought,
it is a *consequence* of the need to serialize more and more complex and also specialized
mental models of the world.
>
>(Wankers, then?)
>
>> I also don't see why AI machines have to ape human language, why don't
>> the machines develop their own language for chattering amongst themselves?
>
>First we'd have to give them motives.
And a way to model their own existence and the elements of the world they relate to,
and a way to perceive those elements, and a way to evaluate prospective actions
with respect to the motives, and a way to account for probabilities of mis-perception
and un-planned-for consequences at possible odds with the motives, and ... ;-)
>
>
>Eternal Vigilance <wo...@oneeye.com> wrote in message news:<3F41CBC2...@oneeye.com>...
>> > So Hamlet would become:
>> > Uncle murders father, marries mother. Kid goes crazy. All die.
>> [...]
>> That is a pretty sad commentary on Hamlet. I think your distillation
>> boiled over and you lost 90% of the 'good stuff' (tm).
>
>One of the most impressive capabilities of mind is to be
>able to zoom in or out from any phenomenon, to view it
>with greater or lesser detail/abstraction. (This surely
>goes back evolutionarily to the need to recognise, eg,
>predators whether they're nearby or distant.)
>
>One of the great failures of 20thC thinking has been the
>assertion of an artificial barrier between the complex
>view and simpler popularisations, merely as a way of
>enhancing the social status of intellectuals.
>
>In fact, any honest intellectual not only _can_ fill in
>the full range of gradations from one-line-Hamlet on up,
>but in the process of doing so will find their own
>insight increases, even as they make it more accessible
>to the popular mind.
To me this seems like a variant of the problem of deciding
on how to structure the directory tree (or graph on unix) on
your computer, so that the top levels of the tree will be most
useful. I think you may be imposing a structure that is only one
of many possibilities. Thus you may guide perceptions of content
in ways not really inherent in the content. Like presenting one
view of an optical illusion enhanced to force the view.
Of course, all perception is in the context of the current frame of mind,
so there is always some non-content-inherent structuring going on, whatever
content one is attending to.
These impositions of structure are useful for many purposes, but I
think one can't in general pretend that a top view of a particular
tree arrangement is a unique view. I think a similar principle
applies to summaries and popularizations, however useful.
Just a few things OTTOMH. A lot of IMO's for which it may be better to read "IMHO" ;-)
Regards,
Bengt Richter
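Bengt's picture of language as a serializer of non-serial mental states can be made concrete with a toy example: a small relation graph flattened into a single token stream (his "modem chirps") and parsed back into the same structure by the hearer. The graph, the triple format, and the delimiter are all invented for illustration:

```python
# A "mental state" as a tiny relation graph -- a set of triples,
# which has no inherent ordering (non-serial structure).
state = {("cat", "in", "hat"), ("hat", "on", "mat")}

def serialize(graph):
    """Flatten the relation set into one linear token stream,
    imposing an arbitrary order the structure itself doesn't have."""
    return " . ".join(" ".join(triple) for triple in sorted(graph))

def deserialize(stream):
    """Decode the stream back into the same non-serial structure --
    the hearer rebuilding a similar mental state."""
    return {tuple(chunk.split()) for chunk in stream.split(" . ")}

stream = serialize(state)
print(stream)                        # -> cat in hat . hat on mat
print(deserialize(stream) == state)  # -> True
```

The round trip is the point: the serial form is just a transport encoding, and everything interesting (in Bengt's terms, the "holo-deck") happens in the structured state at either end.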
>> >> At least two humans are required to be able to clarify one's vision.
===================
Using a well distributed rather old and everywhere proven to work
technology probably prevented me from paying any landing fee after
arrival on this libertarian planet ?
Jac.
"Francis A. Miniter" wrote:
> In a machine, the switch is either on or it is off. In humans, there is
> no such limitation.
A canard. A neuron is in fact a "switch" in that it either fires, or it
does not. The logic of the brain is not digital, but it is based on
discrete events.
Further, it is in the nature of decisions themselves that they are
discrete. Consider the "snarling dog" catastrophe. If pressed, it will
attack, or turn and run. This can be recast as the "shall I stop for milk"
catastrophe. This can be an acute crisis, but as you drive by the store,
you either will, or will not, stop for milk.
Lew Mammel, Jr.
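Lew's point that a neuron's inputs are graded but its output is all-or-none is the classic integrate-and-fire picture. A toy sketch; the threshold, leak factor, and input values are arbitrary choices for illustration:

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Membrane potential accumulates graded input, but the output
    is discrete: on each step the neuron fires (1) or it doesn't (0)."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = v * leak + x          # leaky accumulation of graded input
        if v >= threshold:
            spikes.append(1)      # discrete event: it fires
            v = 0.0               # reset after firing
        else:
            spikes.append(0)      # or it doesn't
    return spikes

# Sub-threshold input builds up until a discrete spike occurs.
print(integrate_and_fire([0.4, 0.4, 0.4, 0.4]))  # -> [0, 0, 1, 0]
```

This is the sense in which the brain's logic "is not digital, but is based on discrete events": the internal variable is continuous, yet every decision it emits is binary, like stopping for milk or not.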
Where you may find some using bulldozers, and others using trowels.
Since we can draw diagrams-etc that are 2-or-more dimensional,
I'm not sure how critical this is. The game-design pioneer
Chris Crawford talks about serialisation, though:
www.erasmatazz.com/library/History%20of%20Thinking/HistoryOfThinking.html
> it is in the nature of decisions themselves that they are
> discrete. Consider the "snarling dog" catastrophe. If pressed, it will
> attack, or turn and run. This can be recast as the "shall I stop for milk"
> catastrophe. This can be an acute crisis, but as you drive by the store,
> you either will, or will not, stop for milk.
But what if you stop for milk and end up getting half-and-half?
Or a six-pack?
David Loftus
> But what if you stop for milk and end up getting half-and-half?
> Or a six-pack?
Plus a copy of Hustler and a bag of Doritos?
>bo...@oz.net (Bengt Richter) wrote in message news:<bhuble$12v$0...@216.39.172.122>...
>> I think this is interesting, but IMO language exists to allow
>> us to make serialized encodings of non-serial mental states. [...]
>
>Since we can draw diagrams-etc that are 2-or-more dimensional,
>I'm not sure how critical this is. The game-design pioneer
Again, diagrams are re-presentations. If you took the trouble you could
encode them in words as instructions for drawing. In that form (e.g., place
pencil x inches from the left side of the paper and y inches from the
bottom, draw a clockwise arc of d degrees centered on xc,yc ... etc) it
would be difficult to reconstruct the mental state that our visual system
constructs so nicely for us when we just look at the actual drawn diagram.
Again I would say that diagrams are not central to AI, any more than patterns
of words, and if one gets embroiled in the mechanics of producing and interpreting
particular forms of external representations of mental states, one may fail to
think sufficiently about the mental states themselves, and how many of which
kind are dancing at the same time in the various "centers" of the brain with
what relationship to each other. IMO modeling processes in this area, rather than
pursuing linguistic knowledge indefinitely, will lead more directly to insights
about what is necessary to implement in something we'd recognize as AI.
>Chris Crawford talks about serialisation, though:
> www.erasmatazz.com/library/History%20of%20Thinking/HistoryOfThinking.html
Thanks for the link. No time for a while though ;-/
Regards,
Bengt Richter
Makes no never mind - you still stopped for milk, as witness your
own formulation.
Anyway, I think the stop-for-milk protocol is strongly compelling,
once set into motion, although it certainly may accommodate a six-pack.
cf. http://www.emory.edu/EDUCATION/mfp/tt15.html
Lew Mammel, Jr.
Jorn Barger wrote:
I was hoping that the AI in question could go a bit past the complexity level
of that Hamlet synopsis (boiled down to ashes and beyond).
The part where Hamlet is contemplating Death (a biggy for even the greatest
philosophers) is more to the level I would want to see.
Having a program that could even symbolize the complexity of such a concept
would be quite an accomplishment.
Eternal Vigilance wrote:
>
> The part where Hamlet is contemplating Death (a biggy for even the greatest
> philosophers) is more to the level I would want to see.
>
> Having a program that could even symbolize the complexity of such a concept
> would be quite an accomplishment.
In GEB, Hofstadter comments on "Goedel's Theorem and Personal Nonexistence"
- page 698 if you want to check it out.
Lew Mammel, Jr.
Having thought a little more about serialisation, I
wonder whether individual words are serialised at all,
and if they're not, why would groups of words that
include _non-serial_ relationship-words be any problem
at all to arrange serially. (Eg, "cat in hat" uses
non-serial concept-words 'cat' and 'hat' plus
trivially-serialised relationship word 'in'.)
Arindam Banerjee wrote:
> jo...@enteract.com (Jorn Barger) wrote in message news:<16e613ec.03081...@posting.google.com>...
> > For me, the most interesting implications of (potential,
> > hypothetical) advances in AI lie in the humanities--
> > especially literature, history, and philosophy. But I
> > don't think I've ever seen these even _hinted_ at by
> > the usual suspects (Minsky, Kurzweil, Hofstadter, Vinge).
> >
> > I think one unexpected effect will be a revival of the
> > classical (even pre-20thC) approaches to these fields--
> > with emphasis on clear language for expressing precise
> > intuitive distinctions.
> >
> > The most direct path for pursuing this is probably via
> > _popularisation_-- clear summaries of literature and
> > history (and philosophy, where possible), in the
> > plainest possible language. These can then be boiled
> > down to a constrained vocabulary, as AI requires. Out
> > of this _descriptive_ approach, something like a valid
> > model of the human psyche should emerge.
>
> Why on earth should anyone model the ridiculous human psyche? It
> would make much more sense to model a superhuman psyche. Then you
> could see the correct choices made by the superhuman, and humans could
> have the option to learn accordingly.
"Jac.m.a.m.Oppers" wrote:
> On 18 Aug 2003 14:57:04 -0700, adda...@bigpond.com (Arindam Banerjee)
> wrote:
>
> >Jac.m.a.m.Oppers <J...@Oppers.nl> wrote in message news:<ep31kvclc9njo0vep...@4ax.com>...
> >> On 17 Aug 2003 22:31:56 -0700, adda...@bigpond.com (Arindam Banerjee)
> >> At least two humans are required to be able to clarify one's vision.
> >
> >But two billion humans could be wrong in their vision, according to a
> >different lot of two billion humans. Eg. Christians vs
> >non-Christians, and vice-versa.
>
> Anybody out of those four billion humans who allows ridiculous
> attribution processes to overrule the notion of difficulty is likely
> to be wrong. Btw. I am beginning to see your point.
>
> Jac.
Please feed that thru babelfish and give us the English version, please.
"Jac.m.a.m.Oppers" wrote:
Please send that thru the babelfish grammar checker and send the English
version, please.
Arindam Banerjee wrote:
Funny, I always use genetically enhanced hamsters for that.
Eternal Vigilance wrote:
>
> "Jac.m.a.m.Oppers" wrote:
>
>
>> ...
>>
>> Using a well distributed rather old and everywhere proven to work
>> technology probably prevented me from paying any landing fee
>> after arrival on this libertarian planet ?
Well, I didn't write it, but this is how I'd reconstruct it:
"Using a well-distributed, rather old and everywhere-proved-to-work
technology probably prevented me from paying any landing fee after
arrival on this libertarian planet."
Punctuation has its place, eh?
I don't see how to make it a question, however.
Nor can I assign it any real semantic value.
I do, however, strongly recommend sending all the Libertarians
to Venus. I think it would make a fine investor's paradise.
>> Jac.
>
>
> Please send that thru the babelfish grammar checker and send the
> English version, please.
RRS
The 2003 Calendar, focussing on the Australian kelpie, that hangs in
our kitchen, has lots of glowing definitions from 12 different Western
luminaries, including Ambrose Bierce.
We have only a kelpie x, but Bertram Wilberforce Wooster tries his
best to live up to such a psyche, although he does get into trouble
sometimes, like when he digs up tulip bulbs. He also creates
trouble; the day before yesterday, when my wife and I were walking him in
Marslight, he got overexcited in an encounter with a large marmalade
cat. That heightened state caused him to over-react soon after, with
an approaching dog, to the effect that my shoulder was dislocated.
Careful attention from the ladies of my household helped me regain
normalcy, but really, I was in great pain. Yesterday he showed his
sympathy by heeling nicely throughout. Yes, we may learn how to
forgive from our ever-forgiving dogs; and, perhaps, may also
appreciate their finer points which we mere humans can never hope to
copy - like warming one's nose by the tip of one's tail.
Arindam Banerjee.
Many thanks for mentioning but I like to prevent typing my mind and
breath too dominantly. It is like refusing to go to Venus simply
because I do not like their breathtaking technique.
Jac.
> Jorn Barger wrote:
> > I think one unexpected effect will be a revival of the
> > classical (even pre-20thC) approaches to these fields--
> > with emphasis on clear language for expressing precise
> > intuitive distinctions.
>
> Asshole.
Brilliant. I haven't laughed so hard in months.
Keep up the good work.
L.O.
> tomca...@yaNOSPAMhoo.com wrote:
>
> >Is a constrained vocabulary the way to go? I thought that linguists
> >argue that it's the complexity of our language that is the foundation of
> >thought. I would be inclined to throw research money in the other
> >direction, in the understanding of ambiguous ideas, double meanings,
> >allusions.
> >
> I like that. When I was young, I really believed that there was a
> distinction between proper speech and the misuse of words. I will never
> forget the first jarring time I heard the word "impact" used as a verb.
> But, as I have aged, I realize that usage in language is supreme over
> pre-defined notions of propriety. And usage, by definition, is in the
> hands of users who are free to invent, discard, bind and unbind terms.
> How would a machine ever get "Where's the beef?" ??
functional shift is a basic process, or the effect of a basic process. *that*
kind of usage, along with derivation and other linguistic transformations, is
the natural "order" - ambiguity especially included
>
> >
> In a machine, the switch is either on or it is off. In humans, there is
> no such limitation.
our switches don't know on from off, and apparently can do double duty
depending on which way the substances flow. it's only from the outside that
we can say "that's a pattern". it's the little-man paradox again, the ghost
in ryle's machine. you don't see a thought by looking at a brain, you see a
thought by looking thru one
--
oublio
> "Robert J. Kolker" <bobk...@comcast.net> wrote in message news:<I4X%a.175153$uu5.32678@sccrnsc04>...
> > To get a human psyche you need a human brain or a damned good
> > substitute.
>
> The substitute I offer is an exhaustive literary/historical
> inventory of human behavior.
>
> Francis A. Miniter wrote:
> > > So Hamlet would become:
> > > Uncle murders father, marries mother. Kid goes crazy. All die.
> > > And that would help us understand the human psyche better???
>
> It's a start. The value increases as you add more stories,
> and summarise them in greater detail... and then extract
> general patterns.
>
> Once you have detailed summaries, you can even get some
> objective measures of certain dimensions of literary
> _quality_-- eg how original/cliched are the author's
> descriptions.
i think tomcatpolka questioned correctly the direction of your idea, and the position bob kolker takes,
that it's a "wetware-up" stratagem only that's going to have to work, is the one to work at
your stratagem of construction from sufficient data points from our own productions rests, i think, on
the idea that at least one system can be made from them that has the structure which can turn right
around and generate the same kind of data; that is, that there is at least one many-to-one function such
that a human-similar generating structure can be abstracted
i submit that any such generalized structure must be asymptotic to any set of discretes, however, and
this i take to support that assertion - that the ambiguity of our productions is fundamental. we never
get a single, sufficient structure when we try to disambiguate our productions mechanically because the
productions themselves - in fact, *all* productions themselves, including our meta-statements - are
persistent in their availability to "further" interpretation
we can always cut more joints in whatever part of nature we dissect
--
oublio. drawing circles around everything
> tomca...@yaNOSPAMhoo.com wrote in message news:<bhs4bk$e6i$6...@news1.radix.net>...
> > Is a constrained vocabulary the way to go?
> > I also don't see why AI machines have to ape human language, why don't
> > the machines develop their own language for chattering amongst themselves?
>
> First we'd have to give them motives.
cart before the horse - we can schematize these down for even "simple" true-language machines,
like babies and 10-yo's, who don't need much to talk enough to keep an adult busy, whether it
be about bicycles or "where's the rubber-baby-bumper, hmmm, where's the rubber-baby-bumper?",
and still can't give that much to our real machines
my point is that if we did give our ai-attempts motives, by that time we would also have given
them basic, true language skills. something else comes before that
>
>
>
> One of the most impressive capabilities of mind is to be
> able to zoom in or out from any phenomenon, to view it
> with greater or lesser detail/abstraction. (This surely
> goes back evolutionarily to the need to recognise, eg,
> predators whether they're nearby or distant.)
>
divisions upon divisions, whirls within whorls - everywhere we draw circles, smaller or larger
--
oublio
That's not how nature went about it.
the boy who lost his arrow <dont_point...@me.man>
wrote in message news:<3F562269...@me.man>...
> we never get a single, sufficient structure when we try to
> disambiguate our productions mechanically because the
> productions themselves - in fact, *all* productions themselves,
> including our meta-statements - are
> persistent in their availability to "further" interpretation
For counterexample, see: http://www.robotwisdom.com/ai/antimath.html
> the man from the doublewide textwindow <dont_point...@me.man>
> wrote in message news:<3F562466...@me.man>...
> > my point is that if we did give our ai-attempts motives,
> > by that time we would also have given
> > them basic, true language skills.
>
> That's not how nature went about it.
that's exactly how nature went about it, constructively
>
>
> the boy who lost his arrow <dont_point...@me.man>
> wrote in message news:<3F562269...@me.man>...
> > we never get a single, sufficient structure when we try to
> > disambiguate our productions mechanically because the
> > productions themselves - in fact, *all* productions themselves,
> > including our meta-statements - are
> > persistent in their availability to "further" interpretation
>
> For counterexample, see: http://www.robotwisdom.com/ai/antimath.html
w h a t c r a p
there is no lack of ambiguation in your scheme, it's just another
symbology
--
oublio
Randall R Schulz wrote:
> EV,
>
> Eternal Vigilance wrote:
>
> >
> > "Jac.m.a.m.Oppers" wrote:
> >
> >
> >> ...
> >>
> >> Using a well distributed rather old and everywhere proven to work
> >> technology probably prevented me from paying any landing fee
> >> after arrival on this libertarian planet ?
>
> Well, I didn't write it, but this is how I'd reconstruct it:
>
> "Using a well-distributed, rather old and everywhere-proved-to-work
> technology probably prevented me from paying any landing fee after
> arrival on this libertarian planet."
>
> Punctuation has its place, eh?
>
> I don't see how to make it a question, however.
> Nor can I assign it any real semantic value.
>
> I do, however, strongly recommend sending all the Libertarians
> to Venus. I think it would make a fine investor's paradise.
>
> >> Jac.
We need someone to balance all the 'state is all' socialist and liberal
types.....
"Jac.m.a.m.Oppers" wrote:
Is this the product of one of those poetry generating programs??
You know the ones that turn out mostly gibberish and then the
proud creator finds one that sounds nearly sensical ...
(the infinite number of monkeys method....)
The typing practice I got when I was a monkey helped immensely.
Jac.