"father" of A.I. bashes kids


rick++

unread,
May 14, 2003, 1:29:35 PM5/14/03
to
In a Wired article http://www.wired.com/news/technology/0,1282,58714,00.html
Marvin Minsky decries the direction A.I. has taken in the past 15 years -
mainly exploring "trivial problems". Lots of debate on this, and
Minsky-bashing, in a slashdot.org follow-up.
(Marvin used to post to this group often, before the bots took over.)

David Longley

unread,
May 14, 2003, 2:52:21 PM5/14/03
to
In article <f7422d8e.03051...@posting.google.com>, rick++
<ric...@hotmail.com> writes

He still will post here if he has something to say.

One thing in that article which warrants more careful thought than it is
likely to get is the end:

o "AI researchers also may be the victims of their own success. The
public takes for granted that the Internet is searchable and that
people can make airline reservations over the phone -- these are
examples of AI at work.


"It's a crazy position to be in," said Martha Pollack, a professor at
the Artificial Intelligence Laboratory at the University of Michigan
and executive editor of the Journal of Artificial Intelligence
Research.

"As soon as we solve a problem," said Pollack, "instead of looking at
the solution as AI, we come to view it as just another computer
system."


It should be obvious that any effective process is computable, and that
as and when previously complex instantiations of "intelligent" human
behaviour are engineered as computer systems, they will no longer be
regarded as "intelligent". They were never "intelligent" to start with,
of course; they were behaviours which (in some instances) people learned
by being in an appropriate place at an appropriate time - i.e. they were
programmed.

Where and when effective procedures fail greatly to emulate the same
behaviours as "common sense", one should perhaps look more closely at
the nature of common sense and its deficiencies. One might have
forgiven some of those pursuing this AI Holy Grail were it not for the
fact that the irrationality of common sense has been so well documented
by psychologists over the past 50 years.

--
David Longley

nuc_leus

unread,
May 14, 2003, 4:22:53 PM5/14/03
to
In article <Lf4AFGAl...@longley.demon.co.uk>, David Longley <Da...@longley.demon.co.uk> wrote:
>In article <f7422d8e.03051...@posting.google.com>, rick++
><ric...@hotmail.com> writes
>>In a Wired article http://www.wired.com/news/technology/0,1282,58714,00.html
>>Marvin Minsky decries the direction A.I. has taken in the past 15 years-
>>mainly exploring "trivial problems". Lots of debate on this and
> Minsky-bashing
>>in a slashdot.org follow up.
>>(Marvin used to post often this group before the bots took over.)
>
>He still will post here if he has something to say.

He has nothing much to say beyond the fact that
AI has NOTHING to do with intelligence,
if he ever can comprehend this much.

Marvin Minsky

unread,
May 14, 2003, 5:33:18 PM5/14/03
to
Thanks, David. Long time, no see!

David Longley <Da...@longley.demon.co.uk> wrote in message news:<Lf4AFGAl...@longley.demon.co.uk>...

Here is more or less what I told that reporter. Naturally, the
important parts did not get reported.

Most early researchers in artificial intelligence aimed to build
machines that would become as smart as people are. They developed
many ideas about how to represent knowledge in machines, and about
ways to reason by using that knowledge. This led to many successful
projects, such as programs that recognize various patterns such as
sounds of words, printed characters, faces, and other particular
objects—and answering questions about certain specialized fields of
knowledge. Today such programs are all around us and we tend to take
them for granted: by the 1980's, many of these specialized, so-called
"expert systems" had become widely productive and popular.
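Those expert systems were, at bottom, collections of hand-written if-then rules applied to a base of facts. A minimal forward-chaining sketch (the facts and rule names below are invented for illustration, not drawn from any historical system) captures the idea:

```python
# Rules are (premises, conclusion) pairs; all names are made up for
# illustration, not taken from any real expert system of the period.
RULES = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "flightless_bird"),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises are all known; repeat to a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are known facts
            # and its conclusion is not yet in the fact base.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}, RULES)
```

Each pass fires every rule whose premises are already known and stops when a pass adds nothing new; systems of that era typically layered certainty factors, conflict resolution, and explanation facilities on top of such a loop.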

However there was a problem with those programs: for each new kind of
problem we had to construct an almost entirely new such system. This
was because all of them lacked what people call "commonsense
knowledge." None of those systems was able to adapt itself to solve
problems other than those it had been programmed to solve.

A second major deficiency, which I'll say more about below, was the
use of programming techniques that made it almost infeasible for the
programs to reflect on their own performance. Reflective and
self-reflective thinking is perhaps what most distinguishes us from
our animal relatives—and is likely to be what distinguishes
present-day programs from the successors we hope to replace them with!

To solve a hard problem, one usually needs to know a good deal—both
about that particular subject, and also about how to solve problems in
general. But only one major researcher focused intense research on
how to represent commonsense knowledge in a computer. That was
Douglas Lenat, who developed a system called CYC. Today, CYC contains
a substantial amount of such knowledge. The knowledge in CYC was
compiled by people, in a meticulous, tedious process. However, CYC
still has far from enough of this to compete with a two or three year
old child.

Unfortunately, in my view, the rest of the artificial intelligence
community tried, instead, to make their computers do this by
themselves—by trying to build what I call 'baby machines', which were
supposed to learn from experience. These all failed to make much
progress because (in my view) they started out with inadequate schemes
for learning new things. You cannot teach algebra to a cat; human
infants are already equipped with architectural features that enable them
to think about the causes of their successes and failures and then to
make appropriate changes.

Many other researchers went in the direction of trying to build
evolution-based systems. These were to begin with very simple
structures and then (by using some scheme for mutation and then
selection) evolve more architecture. This includes what are called
"neural networks" and "genetic" programs—which have often solved
interesting problems, but have never reached high intellectual levels.
In my view, this was because they were not designed to have the
ability to analyze and reflect on what they had done—and then make
appropriate changes; they were not equipped to improve or learn new
ways to represent knowledge or make plans to solve new kinds of
problems.
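The mutation-and-selection scheme described here can be made concrete with a toy (a hypothetical sketch, not any particular system from that line of research): random bit strings evolve toward a target by keeping the fitter half of each generation and refilling the population with mutated copies.

```python
import random

def evolve(target, pop_size=50, generations=200, mutation_rate=0.05, seed=0):
    """Toy mutation-and-selection loop: evolve bit strings toward `target`."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(s):
        # Fitness = number of positions matching the target string.
        return sum(a == b for a, b in zip(s, target))

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)     # selection: rank by fitness
        if fitness(pop[0]) == n:                # stop at a perfect match
            break
        survivors = pop[: pop_size // 2]        # the fitter half survives
        children = [
            # mutation: each bit flips independently with small probability
            [bit ^ (rng.random() < mutation_rate) for bit in parent]
            for parent in survivors
        ]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
```

Even this crude loop usually finds the target, which illustrates the point being made: selection pressure optimizes a fixed fitness measure, but nothing in the loop reflects on why a candidate failed or changes how candidates are represented.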

Yet other researchers built systems that were based on logic—hoping
that through being precise and unambiguous, these would be very
dependable. However, in my view, the very precision of those systems
prevented them from being able to reason by analogy謡hich, in my view,
is at the heart of how people think. (And the logical systems in
current use make it virtually impossible to support the kinds of
self-reflective processes that they would need to improve their own
operations.)

Many other researchers designed robots to do various kinds of
specialized tasks. We see this as an epidemic that has infected almost
every university. Those researchers hoped that by starting with
simple jobs they would learn enough that, eventually, they would
become able to design robots that could progressively solve
increasingly hard and more important problems. However, so far as I
can see, those robot builders never went far past that initial level.
Each such robotic project may have had some good ideas—but none of the
architectures were adequate to represent, and then identify, the
deficiencies in their high levels of performance. Instead, they
mostly wasted students' time repeating techniques whose limitations
were understood in the 50s and 60s—or dealing with intermittent
connections, and hysteresis and backlash in their joints and
bearings.

George

unread,
May 14, 2003, 5:53:30 PM5/14/03
to

I think you should learn a little more about farmers before you
insult them as bots again.

Farmers like big hearthy foods, a nice good wife that takes
care of the farm animals and the children, and a nice young
18 year old village girl on the side.

Now there is another type of farmer, your ancestors, the
Republican types. These farmers have a great interest in AI
and other farm robotics.

Asshole American.

George

Unmesh Kurup

unread,
May 14, 2003, 8:10:35 PM5/14/03
to

"Marvin Minsky" <min...@media.mit.edu> wrote in message
news:f04e2625.03051...@posting.google.com...
<snip>

> However there was a problem with those programs: for each new kind of
> problem we had to construct an almost entirely new such system. This
> was because all of them lacked what people call "commonsense
> knowledge." None of those systems was able to adapt itself to solve
> other that it had not been programmed to solve.
But what exactly is commonsense knowledge? CYC lists thousands upon
thousands of pieces of commonsense knowledge, but does not knowing one of
them make a person non-human? How about 50% of this knowledge, or 75%?
Commonsense goes deeper than mere knowledge (or at the very least involves
different types/levels of knowledge). It seems to me that it includes,
among many other things, the ability to apply our knowledge, to induce new
knowledge and, most importantly, self-preservation. Maybe if we identified
the components of commonsense, we would have a better chance of building a
more flexible thinking machine.

<snip>


> Many other researchers designed robots to do various kinds of
> specialize tasks. We see this as an epidemic that has infected almost
> every university. Those researchers hoped that by starting with
> simple jobs they would learn enough that, eventually, they would
> become able to design robots that could progressively solve
> increasingly hard and more important problems. However, so far as I
> can see, those robot builders never went far past that initial level.

> Each such robotic projects may had some good ideas-but none of the
> architectures were adequate to represent, and then identify the
> deficiencies in their high levels of performance. Instead, they
> mostly wasted students' time repeating techniques whose limitations
> were understood in the 50s and 60s-or dealing with intermittent
> connections, and hysteresis iand backlash in their joints and
> bearings.

It still stands to reason that evolution gave us our limbs and mobility
before giving us brains. I am not saying that it's the only way to go, but
since we know that this technique works (at least it did for evolution),
it's only fair that someone keeps exploring it as an option. Besides, I
think the opportunities for an intelligent machine to develop are
considerably greater when it has a body.

--unm
P. S. - I like the title of the message. Gives me an image of AI researchers
standing in a long line and getting their knuckles rapped. :)


George

unread,
May 14, 2003, 8:43:45 PM5/14/03
to

Between us kkk++, this was the fastest game I played.

What else can I say, you must be so stupid, robot
manufacturing has to be for you. There you can walk
around calling everyone around you bots. It is just like
black people calling each other niggers, right?

Come to think of it, maybe the Turing test should be
renamed the nigger test. Are you a nigger or a human,
what's the difference, robot, nigger...

Up your ass freak.

nuc_leus

unread,
May 14, 2003, 10:55:55 PM5/14/03
to

Hi Marvin Minsky.
I respect you for your ability to at least a lil bit
to stand not in the middle of the herd.

>Most early researchers in artificial intelligence aimed to build
>machines that would become as smart as people are.

Obscene.
Aint it?

> They developed
>many ideas about how to represent knowledge in machines, and about
>ways to reason by using that knowledge. This led to many successful
>projects, such as programs that recognize various patterns such as
>sounds of words, printed characters, faces, and other particular
>objects—and answering questions about certain specialized fields of
>knowledge.

Fine argument. One of those "achievements" in pattern recognition
was manifested during the war in Yugoslavia when Chinese embassy
was bombed with a "smart" bomb that was dumb enough
to hit about the main player in the world, China.

Yes, we can argue about it this way or that,
but smart? Not even funny.

> Today such programs are all around us and we tend to take
>them for granted: by the 1980's, many of these specialized, so-called
>"expert systems" had become widely productive and popular.

"So called expert systems" is something I respect you for.
You have at least a lil guts not to just go around licking
the asses of establishment, ruled by the satanists such as
Freemasons and so called Illuminati, whose agenda is
"new world order".

>However there was a problem with those programs: for each new kind of
>problem we had to construct an almost entirely new such system.

Indeed. Because da system was a tale told by an idiot
on the first place. That much is certain.

I've got "da system" now that is capable to completely change
the world in that it can stop the main incentive of suckitalism
expressed in terms of parasitism as manifested in the concept
of "market".

If I let this thing loose, you won't have any deviation in the
price of commodities and world currencies and that, in itself
is pretty much the end of suckitalism as suckitalism only exists
in the context of greed as manifested in the idea of inflation
and fluctuation of the prices where the fattest parasites there
are drink the blood of all and enlarge their bellies.

Hope you can read and comrehend that sentence.

Not sure you could, but you have a potential indeed.
There is something in you, Marvin, and I watched you for
quite some time, that makes you a lil different from those
fat ass licking idiots, enumerating each other obscenities
just to remain in the safety of the middle of the heard.

I am a lil pissed with you, Marvin, because you did not
produce a reply on one article of key significance,
but "this too shalt pass", right?

> This
>was because all of them lacked what people call "commonsense
>knowledge."

Marvin, you can't be this stupid, can't you?

The subject of "common sense knowledge" borders between
boring and obscene and you know that REAL good, as you
have enjoyed the benefits of not so common sense knowledge.

Now...

What is "commonsense knowledge"?

But first we need to discuss what IS "knowledge" on the first place,
rigth?

Ok, lets skeep knowledge in that I am sure you "know" what it is,
even though it is nothing more than the brainwashing procedure,
peddled by the fattest parasites there are to make sure the
"herd of sheep" as Illuminati and Freemasons think of "us",
stay properly programmed so they can be asserted ANY kind of
obscenity and agree with it as though it was truth pronounced
by the mouth of god.

But what is "commonsense"?
How do you define "common"?
How is it formed, formulated and propagated?
Ever thought about this?
I bet not.

And I tellya, commonsense knowledge is but a result of
brainwashing procedure, peddled again, by the fattest
parasites there are, to promote THEIR agenda, not yours
or mine, and what IS their agenda?

Do I need to chew upon this more?

Nope. Not to you, Marvin.

> None of those systems was able to adapt itself to solve
>other that it had not been programmed to solve.

Yes. That is what I call a brainwashing procedure.

Interestingly enough, for some strange reasons,
even the giants of your kind, and you are definetely
one of key people in the field, are completely missing the
whole point, and that is "it is all about who screws who
and who dominates whom".

The science as such have failed miserably in that it ended
up facilitating those satanists and their agenda instead
of searching for Truth as they initially claimed.

ANY society, especially a society of purely satanic nations
such as the one you live in, is but a "herd of sheep"
to be lead to "enlightenment" by those blood thirsty parasites,
who control everything, starting with your output hole
and going up as far, as you can imagine.

THAT is the truth of it all, Marvin,
and THAT is the thing you could never managed to see,
even though you intuitively rebelled against it.

Poor you, poor you, Marvin.

>A second major deficiency, which I'll say more about below, was the
>use of programming techniques that made it almost infeasible for the
>programs to reflect on their own performance.

It is not a matter of "performance", you fool.
It is a matter of key issue of intelligence.

Intelligence is not concerened with "performance",
efficiency or any other term associated with the notion
of "survival of the fittest".

Can you comprehend THAT much?

Intelligence is concerened with the issue of
exploration.

Exploration of "who am I" and "what IS life".

Gets its?

Efficiency?

You must be outa your own mind, Marvin.

WHAT are you living your life for?

Just to merely enlarge your "bank account"?

Tellya one thingy, Marvin, you'll be dead soon.

Now, in that context, what would you dedicate your
energy for?

Efficiency?

Hahahahahaha.

Not even funny, you fool.

> Reflective and self-reflective thinking

Marvin, yes, at the end of your life you come to realization
of certain things and they INEVITABLY become "reflective",
but Marvin, look at it.

What IS reflective?

Well, it is being outside of the framework of "survival".

That is ALL I am going to tell you.
As you are one of the giants of this artificial suckology,
the science of maximization of the rate of sucking of the
blood from many by the few.

Gets its?

Good...

>is perhaps what most distinguishes us from
>our animal relatives

Marvin, nothing much distinguishes you from your
"animal relatives". The only thing that distinquishes you
is your idea of superiority. Otherwise, you can't even beat
the wolfe or even an ant.

>and is likely to be what distinguishes
>present-day programs from the successors we hope to replace them with!

You can hope all you want, but the key issue remains:

Intelligence can not be programmed even in principle.
I'd have to write the book even to begin explaining it to you,
and, every step of the way, you'd be arguing, most likely.

Genuine, biological intelligence is nearly infinite in its
scope and domain and it isn't concerened with this obscene
idea of "survival". Otherwise, there wouldn't even be a reason
for it to be on the first place.

Why?

Well, because you can not manage to "survive" at the end
and THAT much you can comprehend at this junction, I am sure,
because you are standing by your own grave now and looking
at the fruits of your labor, and what do you see, Marvin?

>To solve a hard problem, one usually needs to know a good deal—both
>about that particular subject, and also about how to solve problems in
>general.

Foolishness.

First of all, what IS the "problem" in life to solve?
Ever thought about this?
Your so called efficiency criteria,
which I call the process of maximization of the rate
of sucking the blood of many by the few?

Where we as mankind move?

What ARE we trying to "achieve"?
Ever thought about this?

I bet not much to even mention.

Is mankind some kind of a machine to maximize its
"efficiency" or "performance", Marvin?

Then you SHALT be replaced by a machine one day,
you lil fool.

Man is about IN-eficiency. Because man is not a machine
to produce maximum amount of money per minimum amount of time,
you lil fool.

Human experience has NOTHING to do with "efficiency",
you dummy, mummy.

Do I need to chew upon it even to YOU,
oh giant of the giants of artificial suckology,
you all call AI?

> But only one major researcher focused intense research on
>how to represent commonsense knowledge, in a computer.

One more time: "common sense knowledge" is a program,
placed in your mind by the fattest parasites there are,
sucking your blood at every step of the way.

Was Einstein a person, who was concerned with "common sense knowledge"?
Was Van Gogh?
How bout Tchaikovski, you fool?
And Dostoyevski?
How bout ALL the giants of the mankind?
How about them, Marvin?

Ok, screw them all.

How bout ME, you idiot?

"Common sencse knowledge"?

Well, tellya one lil thingy...
When I was about 6 years old, they told me:
"Become and engineer. They make much money.
You can survive if you are an engineer".

And you know what I told them, Marvin?
I bet you wouldn't even comprehend
as smart as you are.

I told them: Get lost in a giant sucking machine.

Efficiency?
Performance?

Tellya one thing: I produced some of the master pieces
in my life and NON of them have to do with efficiency
or performance in a strict sence. Yes, kinda efficiency
and performance are intuitively desired criterias
and ANY masterpiece has them built in. But no master
is concerned with these criterias as what could be the
"efficiency" when you make a statue out of a piece of
a rock?

Zo...

WHERE is this "efficiency" thing in it?

When some Japanese fat cat pays $40 mil for a painting
of Van Gogh, who is well known to be a "scitsophrenic",
what does it mean?

WHERE is the "efficiency" thing in it?

Is that fat cat TOTALLY insane to buy the picture
of another insane man for $40 mils?

And what CAN he do with that picture?
WHERE is the efficiency thing in it?

On the contrary, it is better to buy the picture of
some ass licking fashion follower, who can capitalize
on the current wave of brainwashing fashion as far as
"efficiency" thing is concerned.

That is why I say: You are but fool, Marvin,
even though you are a rare bird indeed.

Tough subject. Come visit me, Marvin, in the lands
you can not even begin to comprehend. We'll have a
friendly chat and drink zome vodka. By the time we
are through with you, Marvin, you'll be a different
kind of animal indeed. I promise, Marvin.

> That was
>Douglas Lenat, who developed a system called CYC. Today, CYC contains
>a substantial amount of such knowledge. The knowledge in CYC was
>compiled by people, in a meticulous, tedious process.

Marvin, you have to realize what very term "knowledge" means.

Socrates, the giant of the giant, and not a jackass YOU are,
stated on a public record "I know nothing"!

You know what it means?

Well, 2500 years passed, and you can not even comprehend
what means 2500 years as far as mankind's "history" is concerned.
Tellya one thing, even 500 years ago the mankind was a bunch
of travelling gypsies, living in the mud and pissing just outside
the whole in the wall. They did not even have glass on their windows.
Only kings had.

Now, 2500 years ago, this giant, Socrates, the father of the most
potent idea there is, democracy, stated:

"I know nothing".

And what?

Well, that was EXACTLY the reason he was pronounced
the wisest man on the land.

Poor Marvin.

Lil did you know...

What this game is all about.

> However, CYC
>still has far from enough of this to compete with a two or three year
>old child.

True, and even 2 year old child is FAR "superior" to ANYTHING
that was ever created as far as AI goes.

>Unfortunately, in my view, the rest of the artificial intelligence
>community tried, instead, to make their computers do this by
>themselves—by trying to build what I call ‘baby machines',

Not exactly, Marvin.
They FOREVER try to build a money sucking machines
of survival. Do I have to chew upon this for the giant
of your kind?

Marvin, it is called the "survival of the fittest".
Translating: Either I suck YOUR blood, or you WILL suck mine.
There is just no other game in town.

THAT is where we are, Marvin.
And I talk to you because you are not quite a fool
as the others are. You do have something that makes you
different. You do have a potential, but it forever remained
unactualized because of your own tendency of compromizing with
yourself. You need your identity to be maintained as a marble
statue. That is why you eventually start to lick asses of
"establishment", and that is the worst idea for anyone
concerned with the issues of Intelligence, as there is not
compromise in that land.

And even now, you are but being kinky, trying to be "different"
from the herd of these sickos, who call themselves the
artificial intelligenciaks.

The problem with you, Marvin, you are not being authentic
and genuine. You are but a copycat, only in a kinky way
and I have a VERY good reason to state that on record.
Because you are but an arrogant coward, Marvin.
You avoid the real thing and run away into the bushes
of establishment, pretending to be "more advanced then THEM",
of fool.

First, you'd have to respond to a post of mine, which is
years old, directed to you. Then we talk more on this subject.

Oh, you mean you are so "high" up there, that a giant of your
kind can not talk to a mortal like me?

Hahahahaha, Marvin.
What a fool you are,
and the pitty of it all,
you won't even realize it,
untill the final bell rings.

> which were
>supposed to learn from experience.

But first you'd have to define the very notion of
"experience" and put it in the context of Intelligence,
and I tellya one lil thingy, Marvin,
you'll break your teeth on that one.

I promise.

> These all failed to make much progress

Then you have to define the very notion of "progress", Marvin.
Even Heigel and Kant, the biggest giants in philosoply have
failed there.

>because (in my view) they started out with inadequate schemes
>for learning new things.

Life is not about "learning new things", fool.
Because the end result is CERTAIN. It is a cross on your grave.
Gets its?

Can you EVER comprehend this level?

Yes, definetely, we all learn as we go.

But...

Learning new things for what purpose?

Oh, you mean the maximization of the rate of sucking
the blood of many by the few, you call "efficiency"?

Poor you, poor you, Marvin.

> You cannot teach algebra to a cat;

Cat does not need algebra.
What is so difficult to comprehend?

Cat simply gets excited when he sees a mouse.
Cat is a manifestation of elegance as far as ANY "system"
is concerned. It is but pure grace.

You can not comprehend the cat, Marvin,
even though you might be living with one.

Otherwise, you wouldn't talk like you do.

>human
>infants are already equipped with architectural features to equip them
>to think about the causes of their successes and failures

Just because YOU are but a professional careerist,
it does not imply that EVERY human is thinking along the same line.

I am standing in front of you here.
Take me as an exception.

Unless you can define the scope of what is it you are talking about,
Marvin.

I like your name, Marvin. It has some music in it.

Ok, and the next subject is?

>and then to
>make appropriate changes.

To what?
In order to "achieve" what?
Or, you mean the submission to the dominant view?
The ass licking procedure of a party line approach?
You mean "Drink Coka Cola. Coka Cola is 'good' fer ya" ideology?

Well, Marvin, that is the very thingy I am talking abouts here.
It is the process of maximization of the rate of sucking
of the blood of many by the few.

First of all, what IS this coka cola thingy?

Well, it is about 97% water, assuming it is clean enough to drink.
Then it is 2% sugar, 0.4% some chemicals and 0.1% paint
just to make it look "good".

THAT is what you drink, Marvin, even though,
for the same money, you can drink the most natural, life
giving juice there is.

You see how it woiks, Marvin?

And why DO you drink this coka cola thingy?

Well, becase they washed your fragile brain clean
by constantly jamming it with advertizements.
Eventually, it gave up and accepted this blood sucking
coka cola thing as some kind of equivalent of god.
Everyone drinks it, and so you do.

Why?

Well, because you are but sheep, Marvin,
and your place is in the middle of the herd,
where it is most comfortable. True, you are trying to
look different and so you write these "critical articles"
and so you write the books as your last one.
But I told you then and I am telling you yet again:

You are but a blind fool.

You TOTALLY confused the issues of Intelligence
in your book and you concocted some artificial gadget
that will NEVER EVER work. Period.
You simply have no clue.

But yes, you are about the best of them all.

In fact, I am not sure who would I put next to you,
Marvin.

See how it woiks in a "real life"?

>Many other researchers went in the direction of trying to build
>evolution-based systems.

You can not copycat Intelligence,
even in principle, Marvin,
and YOU can comprehend that
because that is what you are intuitively saying,
at least in the last few years.

Intelligence is not meant for the machines.
Its purpose is not to make machines work.
That is all.

All fools that are trying to take some aspects of it
and port to a mechanical gadgets, are but lil blood
sucking parasites, trying to maximize the process of
sucking blood from many by the few, those fear driven
idiots, who hope against all hope to somehow "survive".

But I tellya one thingy, Marvin, and you know it all too
well: There are two things that are certain in life,
death, and Freemasons, who suck your blood.

Yes, the cream of the crop of them is so called
Illuminati. Those who run the world, control the Internets,
your bank, your government, your ass, and ALL you can even
begin to imagine.

THAT is where we are at this junction, Marvin.

What is this chicken shit you are peddling here,
you clueless fool?

Oh, you don't even know who controls your ass?

Poor you, poor you, Marvin.

That simply implies you can not even BEGIN to open
your sucky input hole and talk about the grandest
thing of all, you call Inteligence, even though you
have not a slightest clue what you are talking about.
Gets its?

Yes, I like that, Marvin.

You are my friend, right?
I waste my time talking to you because of one "reason",
pretty much: You are not completely hopeless case.
Plus you have some say in the matters.

So SAY it, of fool.

> These were to begin with very simple
>structures and then (by using some scheme for mutation and then
>selection) evolve more architecture. This includes what are called
>"neural networks"

What I was amazed to see, Marvin,
is that a 16 neurod "neural network" was capable
of tracking the market prices. That kinda blew my mind.
Not that it could predict the future,
but the mere fact that its output curve tracks those
the price curve on the worl market is something mind boggling.
Just a 16 sucking neurods and even those are but a crudest
approximation of what is happening with the real thing,
and even they can track the prices, about the toughest thing
there is.

But we have BILLIONS of those inside our cockpit, Marvin.

Gets its?

>and "genetic" programs—which have often solved
>interesting problems, but have never reached high intellectual levels.

It is not possible, even in principle, Marvin.
Furthermore, I am getting tired of typing here,
but can tellya one thing: Intellect is but a PART of the picture,
and not the most significant one. Yes, without it, you are but a
cabage, growing on the row, not to insult that cabage.

But this is just a tip of the iceberg of life, Marvin.

You make your most important decisions not based on this
"intellect" thing, but on intuition, on hunches, on feelings.

That is ALL I am going to tell you this time, Marvin.

> In my view, this was because they were not designed to have the
>ability to analyze and reflect on what they had done

May be. But from what frame of reference they "ought to do it"?
Is it "morality" thing?
Is it "efficiency" thing"?
Is it a nationalist pride thing?

What IS it, Marvin?

You realize what you got yourself into here?

>and then make
>appropriate changes;

What is "appropriate"?
From what standpoint?
To achieve what "goal"?
To facilitate whose desires?
To accomodate whose feeling of guilt?
To satisfy whose fear of survival?

Marving, you are dead upon arrival,
just as I told you in my last post to you.

You won't be able to resolve these things.

Yes, you ARE but a coward and you will pretend
you did not see this post because it is "insulting"
to your lil ego, to that statue of a professional
careerist, capitalizing on the most precious thing
there is, and that is BIOLOGICAL Intelligence,
the mother of all mothers.

Marvin, you are but a fart in the wind.

>they were not equipped to improve or learn new
>ways to represent knowledge or make plans to solve new kinds of
>problems.

What "problems" are there to "solve" in life, Marvin?
Do you have a SLIGHTEST clue?

I bet you don't!

Life is not a "problem" to be solved.
Else you'd remain immortal.
Not sure you can comprehend this leap.

Man is not a "problem" solving machine.

It is something ENTIRELY different.

>Yet other researchers built systems that were based on logic—hoping
>that through being precise and unambiguous, these would be very
>dependable.

They are all but idiots, trying to suck a lil more blood
of someone else. I tried to talk to that conman, who is peddled
to be "the most prolific writer in AI", that lil suckazoid
from Santa Cruz. I am SURE you know who am I talking about.
That lil slime of a human being did not even have guts to
argue the argument. He pretended, just like you the last time
around, to be "holier than thou".

Holier than ME, you idiots?

Hahahahahaha.

Good luck, you monkey ass impersonators,
calling themselves the "experts" in artificial suckology,
you call AI.

Because you know not what you are talking about.
That much is certain.
You simply have no clue.

You, Marvin, about one of the rarest of this life destructing
slime, whose main "achievement" in practice is these "smart bombs"
that are dumb enough to hit the Chinese embassy in Belgrade,
which is gonna cost us all more than we can swallow. I can
promise you that much.

> However, in my view, the very precision of those systems
>prevented them from being able to reason by analogy—which, in my view,
>is at the heart of how people think.

Fine, good argument.
The only thing is, in order for analogy to be a guiding factor,
you need to make sure that analogy is done with some reference
point that is known to hold valid.

Because life is so vast that from every single event,
from every single breath of yours,
from every single moment,
you can extract some "analogy".

But how do you know that it is a reference experience?

And now, Marvin, you are seeing at least a TIP of the iceberg
of Life.

Just a tip.
But ONLY if you allow.

There is no compromise on this.

Now, analogy has to be defined in terms of some DEFINITELY
valid criterion, and what would that be, Marvin?

Well, you say the morality thing, right?

Good and bad.
Virtuous and "evil".
Black and white.

In other words, the foundations of fascism
and totalitarianism.

Marvin, better make sure this post survives you yourself
and do your best for it to become as widely known as you
can manage with all your reputation and all that fuss.

In order for analogy to work, you need to know for CERTAIN
that your reference view, that, which you compare something
against, holds valid.

In that case, we need to define the very notion of valid.
What is valid, oh master of artificial suckology?

HOW do you define it,
but by using the tricks of guilt and fear manipulation,
invented by these fat, parasitic priests, manipulating
the name of Jesus entity or whatever their local equivalent is.

See what you got yourself into, Marvin?

I bet you don't.

I can just guarantee you that you will simply run away
and PRETEND you never saw this post.

Otherwise, your head is off, Marvin.

You are doomed, fool.
Doomed to oblivion.
Yes, it is heavy of a statement
and I can comprehend this much,
but you have shown to be just but a conman once.

Can you be a man of Truth now?

> (And the logical systems in
>current use

First of all, the entire logic as such is broken,
and in the most profound ways, and I told you this already,
but you, conman with the arrogant face of a priest,
ran away and did not follow up on the argument.

Logic is but a binary delusion,
reducing the entire rainbow of life
to a black and white definition.
Out of the entire infinity of colors and meanings,
you, priests, invented only two values,
black and white.

No wonder you ended up with fascism
and its most recent manifestation in the land
of satanic empire of the USA with this bonesman,
Bush, a mentally retarded idiot, whose brain is
not even controlled by him, but controlled by
demonic forces of the most profound evil
there ever was on the face of this planet Earth,
abused to the point of no return
by those very monsters, calling themselves
"Illuminati" and Freemasons.

Zig heil, Mr. push!

Now, Marvin, but what is beauty?
Remember this dialog of Socrates
about two millennia ago?

I bet you never even read it.

One more time: if there is any logic, it has to
account for the MULTI-DIMENSIONAL nature of existence.
Do I have to chew upon this bone fer ya, oh giant
of Artificial Suckology, you all call AI?

Ok, just a lil bit.

In summary: logic can not be reduced to black and white
and still describe life, because that is what its MAIN purpose is:
to describe life.

ANYTHING in life has a virtually infinite number of dimensions
you can "measure" it against.

No event in life can be reduced to mere black and white
definition, "good" and "bad", unless of course you have
a purpose of maximization of the rate of sucking of the
blood of many by the few and are merely manipulating all
using the tricks, invented by the priest, such as fear
and guilt, in order to constrain the energy of the "sheep"
and make them vulnerable to your manipulations of that very
guilt and fear.

Can you comprehend that much?

I bet no. Because you lost that ability, Marvin.
You lost your innocence and your purity.
And even now, and a while back, when you rattle
with your so called criticism of "modern artificial suckology",
you are but a calculating careerist, trying to come up
with some scheme just to "survive",
just to "survive".

To "survive" what, Marvin?

Remember?

What are two CERTAIN things in life?

Well, it is death and parasitism.

THAT is where we are, Marvin.

> make it virtually impossible to support the kinds of
>self-reflective processes that they would need to improve their own
>operations.)

Marvin, you have a point here.
It is intuitive indeed and I appreciate that much.

But how do you define "self-reflective"?
From what criteria?
For what PURPOSE?
Get it?
In what context?
From the standpoint of going WHERE in life?

Ok, Marvin, I am tired with this.

Good luck.

And remember, the Truth is upon your sorry ass.

You won't be able to relax until the day you go poof
unless you make your GENUINE effort.

This all is but a mind fucking chicken shit
what you got here.

It is but a disgrace to a human race.

Yes, Marvin, I am about the most critical individual
you can find, and that is MY problem.

You can delete all these "bad" words from here
if you have no courage to know that which is.

Zee ya.

>Many other researchers designed robots to do various kinds of
>specialize tasks. We see this as an epidemic that has infected almost
>every university. Those researchers hoped that by starting with
>simple jobs they would learn enough that, eventually, they would
>become able to design robots that could progressively solve
>increasingly hard and more important problems. However, so far as I
>can see, those robot builders never went far past that initial level.

>Each such robotic projects may had some good ideas—but none of the
>architectures were adequate to represent, and then identify the
>deficiencies in their high levels of performance. Instead, they
>mostly wasted students' time repeating techniques whose limitations
>were understood in the 50s and 60s—or dealing with intermittent

nuc_leus

unread,
May 14, 2003, 10:57:16 PM5/14/03
to
In article <b9ulrk$oc0$1...@news.cis.ohio-state.edu>, "Unmesh Kurup" <un...@NOSPAM.yahoo.com> wrote:
>
>"Marvin Minsky" <min...@media.mit.edu> wrote in message
>news:f04e2625.03051...@posting.google.com...
><snip>
>> However there was a problem with those programs: for each new kind of
>> problem we had to construct an almost entirely new such system. This
>> was because all of them lacked what people call "commonsense
>> knowledge." None of those systems was able to adapt itself to solve
>> other that it had not been programmed to solve.
> But what exactly is commonsense knowledge?

Correct.
Marvin is caught with his totalitarian, ass licking pants
down.

Chew upon his ass now.

Sorry, I have no interest or energy or time to waste on this now.

Good luck.

nuc_leus

unread,
May 14, 2003, 11:00:11 PM5/14/03
to

Yes, at least that much you have the courage to admit.
I appreciate that.

Good luck, George.

As I told you before, and telling you again,
you are an intelligent human being.

Yes, you are but a byproduct of an asshole american issue.
But what to do?

We all have something programmed into our CPUs between
our ears. But the very courage to see it, and even publically
admit it, is something that is only available to the real
jewels of this life.

>George

wildstar

unread,
May 15, 2003, 2:24:06 AM5/15/03
to
George <geo...@nospam.com> wrote in news:3EC2E2C1...@nospam.com:


> Between us kkk++, this was the fastest game I played.
>
> What else can I say, you must be so stupid, robot
> manufacturing has to be for you. There you can walk
> around calling everyone around you bots. It is just like
> black people calling each other niggers, right?
>
> Come to think of it, maybe the Turing test should be
> renamed the nigger test. Are you a nigger or a human,
> what's the difference, robot, nigger...
>
> Up your ass freak.
>

Hmmm... GP..... I see.

wildstar

unread,
May 15, 2003, 2:45:00 AM5/15/03
to
min...@media.mit.edu (Marvin Minsky) wrote in
news:f04e2625.03051...@posting.google.com:


> Here is more or less what I told that reporter. Naturally, the
> important parts did not get reported.
>
> Most early researchers in artificial intelligence aimed to build
> machines that would become as smart as people are. They developed
> many ideas about how to represent knowledge in machines, and about
> ways to reason by using that knowledge. This led to many successful
> projects, such as programs that recognize various patterns such as
> sounds of words, printed characters, faces, and other particular
> objects—and answering questions about certain specialized fields of
> knowledge. Today such programs are all around us and we tend to take
> them for granted: by the 1980's, many of these specialized, so-called
> "expert systems" had become widely productive and popular.

<<< Snip >>>

I apologize for snipping, but it is for readability's sake. I think they
all fell short because they were not combined technologies. I don't think
they failed outright - they just did not achieve all that they wanted.
This kind of self-reflective code is commonly engineered when improving
strategies in a strategy game: how does the AI system extend from its
knowledge, build upon itself, and grow? I think the "baby" projects
should be seen as what they are - little mini-A.I. systems - and they
will help in building a more sophisticated architecture that combines
these things. The human mind works in a multi-faceted way, with
different layers of functionality, each doing its thing. Neural
networking works well for associative, inter-related things, and is good
for reflective thinking, since you have a knowledge database of things
useful for recognizing patterns. Since an A.I. doesn't feel pain because
it is a computer system, hot and cold have little real-world impact on
the machine. A self-reflective, self-modifying nature will indeed be
useful. Self-modifying code would be EVIL in *traditional* programming
but is critical in Artificial Intelligence, which is all about
modifying, since perfection can not be achieved because it can not be
measured the same from place to place. What is perfect for one thing is
not perfect for another, and an A.I. must learn to understand this.

Just like we do.
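The self-modifying strategy improvement described above can be sketched very crudely. This is my own illustration, not any poster's code; the toy "game" (guess a number, closer to 42 is better) and every name in it are invented.

```python
import random

random.seed(1)

def score(strategy):
    """Toy task: the strategy guesses a number; closer to 42 is better."""
    return -abs(strategy() - 42)

def compile_strategy(src):
    """Turn strategy source text into a callable - code made at runtime."""
    ns = {}
    exec(src, ns)
    return ns["strategy"]

# The program's initial, deliberately bad strategy, held as source text.
current_src = "def strategy():\n    return 0\n"
best = compile_strategy(current_src)
best_score = score(best)

# Generate candidate rewrites of our own strategy, keep any improvement.
for _ in range(200):
    guess = random.randint(0, 100)
    candidate_src = f"def strategy():\n    return {guess}\n"
    candidate = compile_strategy(candidate_src)
    if score(candidate) > best_score:
        current_src, best = candidate_src, candidate
        best_score = score(best)
```

The search here is blind mutation, which is exactly the limitation Minsky complains about: the program never analyzes *why* a candidate scored well.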

Ralph Daugherty

unread,
May 15, 2003, 3:58:28 AM5/15/03
to

Thanks for the elaboration. Your statement got me thinking when I saw it
on /. and I posted my thoughts on it. I thought I'd come here and post it and
continue the discussion. It's that change of behavior that was the focus of
my comment too. My post starts out in response to an observation about solved
problems moving out of the realm of AI:


Thus programming a computer to play chess was worth a PhD at one time,
until that problem was solved. I wrote a Double Deck Pinochle program twenty
years ago that plays as well as I do and is really hard to beat (and has been
floating around as DOS freeware for several years now). Is that program
artificially intelligent? Of course not. It is no more intelligent than any
other software program, blindly executing logic as programmed. But if it
were a classier card game (bridge), a few years earlier (on a PDP-11 instead
of a TRS-80 Z-80), and I had been an advanced comp sci student (I was a
student, but not a very good one... so I ended up dropping out and writing
code), then if appropriately cloaked in mumbo-jumbo, I coulda been a PhD... :)

Although chess is a game, certainly a great deal of intelligence is used
to create moves, and software that created moves which implemented strategies
without relying on pre-programmed algorithms or lookups in a history database
of human games would exhibit the same intelligence that an untrained human
uses to play the game. However, I am instead referring to the algorithms used
in chess programs to recreate human playing as clever algorithms rather than
intelligence. In other words, without relying on lookups of past human
behavior, it would require original thought to play the game, which is the
essence of intelligence.

So, what is AI? It is not pattern matching. That leaves out the million
rule database of behavior factoids, recognition based on lining up bit
patterns, and so called learning by storing away data and matching it later to
identify an event. Those are all just software logic exercises. The results
may be more interesting to humans than how many widgets are forecast to be
sold or made or purchased for the next month, but they are no more intelligent.

Here's a question. Is the activity of insects considered intelligence?
It seems to me that robot programming is attempting to emulate the
intelligence level of insects. It is arguable if that is even intelligence.
The game of Life was often portrayed in decades past as intelligence, with
combinatorial algorithms creating a "winner" within some set of constraints.
Is anything found in life that mutates in an endless search for something that
succeeds at surviving the real game of life intelligence? I think most would
say not. Yet if I write a program that randomly attempts to extend its
behavior in an attempt to achieve some overall goal, it would undoubtedly be
described as artificial intelligence. If the real organisms that act in this
manner are not intelligent, why would the software be considered intelligent?
Because we made it happen instead of nature? Because we are working on a
PhD? Because it resembles life more than a widget forecaster? Perhaps that
should be described as attempting to create artificial life rather than
artificial intelligence, or more accurately artificial animal like behavior.

If what is recognized as original thoughts came from a static software
program, it is still being generated by a clever algorithm. While I don't
know what it will take to generate thinking and reasoning, I think that it
will require software that can extend its behavior, probably through writing
new code that both puts together existing functionality in new orders and
creates new functionality when needed in a non random way. The ability to
determine what is needed and then create it is what would constitute original
thought and intelligence.

I do recognize that some consider the algorithms to create new algorithms
an indirect extension of the original code, but I contend that providing low
level constructs to create syntax as expressions of desired behavior is not
the same as what an intelligent program would create using it. Whether this
will be achieved is a question, but I contend that a program extending itself
with additional logic to exhibit new behaviors is that which is required to
achieve artificial intelligence. Most anything else is just a software
program executing pre-programmed logic.
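Ralph's notion of a program that "puts together existing functionality in new orders and creates new functionality" can be given a minimal sketch. This is my illustration, not his program: the primitives, the examples, and the search are all invented, and the search is exhaustive rather than intelligent.

```python
from itertools import product

# A small library of the program's own primitives.
PRIMITIVES = {
    "add1": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def compose(names):
    """Chain primitives left to right into one function."""
    def pipeline(x):
        for n in names:
            x = PRIMITIVES[n](x)
        return x
    return pipeline

def synthesize(goal, examples, max_len=3):
    """Search compositions of primitives until one fits all examples,
    then install the result as new functionality the program can reuse."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            f = compose(names)
            if all(f(i) == o for i, o in examples):
                PRIMITIVES[goal] = f   # the program extends itself
                return names
    return None

# Teach it f(x) = (x + 1) * 2 from examples alone.
found = synthesize("plus1_times2", [(1, 4), (2, 6), (3, 8)])
```

By Ralph's own criterion this is still a clever algorithm, not intelligence; the interesting question is what replaces the brute-force loop.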

I am not dogmatic about this though, and I agree with those who describe
understanding words in context as exhibiting artificial intelligence, if
applied to unknown data, even if it is able to be accomplished with a static
software program and very clever algorithms. Of course this has been very
difficult to achieve. The SHRDLU context understanding is great, but it is
just a set of very specific algorithmic responses within a highly constrained
environment, not to demean that achievement at all. Indeed, the comment that
we haven't seen anything better in all these years since is very telling.

Rather than continuing to describe clever algorithms as "intelligence"
that emulate human activities such as playing chess or recognizing faces, one
must isolate that from all animal behavior which is human thought and say,
this is intelligence. What does it take for software to approach generating
original thoughts? I don't know, but only then will the software be
intelligent, artificial or otherwise.

Ralph


Marvin Minsky wrote:
> (snip)


>
> Here is more or less what I told that reporter. Naturally, the
> important parts did not get reported.
>
> Most early researchers in artificial intelligence aimed to build
> machines that would become as smart as people are. They developed
> many ideas about how to represent knowledge in machines, and about
> ways to reason by using that knowledge. This led to many successful
> projects, such as programs that recognize various patterns such as
> sounds of words, printed characters, faces, and other particular
> objects—and answering questions about certain specialized fields of
> knowledge. Today such programs are all around us and we tend to take
> them for granted: by the 1980's, many of these specialized, so-called
> "expert systems" had become widely productive and popular.
>
> However there was a problem with those programs: for each new kind of
> problem we had to construct an almost entirely new such system. This
> was because all of them lacked what people call "commonsense
> knowledge." None of those systems was able to adapt itself to solve
> other that it had not been programmed to solve.
>
> A second major deficiency, which I'll say more about below, was the
> use of programming techniques that made it almost infeasible for the
> programs to reflect on their own performance. Reflective and
> self-reflective thinking is perhaps what most distinguishes us from
> our animal relatives—and is likely to be what distinguishes
> present-day programs from the successors we hope to replace them with!
>

> To solve a hard problem, one usually needs to know a good deal—both
> about that particular subject, and also about how to solve problems in
> general. But only one major researcher focused intense research on
> how to represent commonsense knowledge, in a computer. That was
> Douglas Lenat, who developed a system called CYC. Today, CYC contains
> a substantial amount of such knowledge. The knowledge in CYC was
> compiled by people, in a meticulous, tedious process. However, CYC
> still has far from enough of this to compete with a two or three year
> old child.
>
> Unfortunately, in my view, the rest of the artificial intelligence
> community tried, instead, to make their computers do this by
> themselves—by trying to build what I call ‘baby machines', which were
> supposed to learn from experience. These all failed to make much
> progress because (in my view) they started out with inadequate schemes
> for learning new things. You cannot teach algebra to a cat; human
> infants are already equipped with architectural features to equip them
> to think about the causes of their successes and failures and then to
> make appropriate changes.
>
> Many other researchers went in the direction of trying to build
> evolution-based systems. These were to begin with very simple
> structures and then (by using some scheme for mutation and then
> selection) evolve more architecture. This includes what are called
> "neural networks" and "genetic" programs—which have often solved
> interesting problems, but have never reached high intellectual levels.
> In my view, this was because they were not designed to have the
> ability to analyze and reflect on what they had done—and then make
> appropriate changes; they were not equipped to improve or learn new
> ways to represent knowledge or make plans to solve new kinds of
> problems.
>

> Yet other researchers built systems that were based on logic—hoping
> that through being precise and unambiguous, these would be very
> dependable. However, in my view, the very precision of those systems
> prevented them from being able to reason by analogy—which, in my view,
> is at the heart of how people think. (And the logical systems in
> current use make it virtually impossible to support the kinds of
> self-reflective processes that they would need to improve their own
> operations.)
> Many other researchers designed robots to do various kinds of
> specialize tasks. We see this as an epidemic that has infected almost
> every university. Those researchers hoped that by starting with
> simple jobs they would learn enough that, eventually, they would
> become able to design robots that could progressively solve
> increasingly hard and more important problems. However, so far as I
> can see, those robot builders never went far past that initial level.

> Each such robotic projects may had some good ideas—but none of the
> architectures were adequate to represent, and then identify the
> deficiencies in their high levels of performance. Instead, they
> mostly wasted students' time repeating techniques whose limitations
> were understood in the 50s and 60s—or dealing with intermittent

CyberLegend aka Jure Sah

unread,
May 15, 2003, 5:00:17 AM5/15/03
to
* Note that troughout this post, I use a critical language that may
appear slightly offensive, please take it as on-topic to this newsgroup
and not personaly.

Marvin Minsky wrote:
> Unfortunately, in my view, the rest of the artificial intelligence
> community tried, instead, to make their computers do this by
> themselves by trying to build what I call 'baby machines', which were
> supposed to learn from experience. These all failed to make much
> progress because (in my view) they started out with inadequate schemes
> for learning new things. You cannot teach algebra to a cat; human
> infants are already equipped with architectural features to equip them
> to think about the causes of their successes and failures and then to
> make appropriate changes.

One important detail is that humans too come with so much knowledge
hardwired in them (a 3-dimensional representation of the physical world,
for example) that when you build a seriously flexible system that will
be able to do all this on its own (and, if I understand you correctly,
be able to transfer its learning ability between different topics), you
will get something far better than any human around.

BTW, the program isn't at all that hard to make; the whole of the
problem is in the principle. This will not be a program whose quality
would be measured in lines...

> Many other researchers went in the direction of trying to build
> evolution-based systems. These were to begin with very simple
> structures and then (by using some scheme for mutation and then
> selection) evolve more architecture. This includes what are called
> "neural networks" and "genetic" programs which have often solved
> interesting problems, but have never reached high intellectual levels.
> In my view, this was because they were not designed to have the
> ability to analyze and reflect on what they had done and then make
> appropriate changes; they were not equipped to improve or learn new
> ways to represent knowledge or make plans to solve new kinds of
> problems.

Philosophically put, very well said, but practically put, without much
value, mind you.

"The ability to analyze and reflect" is something very specific in
coding terms and, whether you like it or not, it will never get you any
"high intellectual levels" or any "intellectual levels" at all. As for
the second part, particularly the one regarding the "make plans to solve
new kinds of problems", do you realise how much work that involves,
and can't you just say "we first need to find out the purpose of life,
then we'll continue making AI"? Or however else you suppose the AI will
invent what to do next when its current task list is finished?


I mean, look, first you prove you fully realise the system needs to be
so flexible as to be independent of itself, then you continue with
philosophical claims that only have a practical basis in methods that
are very far from flexible. I take it it must be a professional
deformation?

> Yet other researchers built systems that were based on logic hoping
> that through being precise and unambiguous, these would be very
> dependable. However, in my view, the very precision of those systems
> prevented them from being able to reason by analogy which, in my view,
> is at the heart of how people think. (And the logical systems in
> current use make it virtually impossible to support the kinds of
> self-reflective processes that they would need to improve their own
> operations.

Oh really?! All news to me. Well, if you limit your view of "all of
computer science" to one specific programming language, preferably a
Java, perhaps so.

All programming languages from C++ to Assembly make it perfectly
possible to process their own code data in the program's free time,
finding ways to optimize or translate themselves for use in other
processors, possibly ones designed by themselves. Alas, all this highly
AIistic knowledge was forgotten in the days when floppy DOS's days
were counted and programs no longer needed to be nice and compact.
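A program examining its own code at run time is easy to demonstrate even in a high-level language. This sketch is mine, not the poster's, and assumes CPython: the interpreter folds the constant subexpression 60 * 60 to 3600 at compile time, and the program can discover that by reflecting on its own code object.

```python
import dis

def seconds(hours):
    # CPython's compiler folds (60 * 60) into the constant 3600.
    return hours * (60 * 60)

# Reflect on our own compiled code: the constants baked into the
# function, and the bytecode instructions that operate on them.
consts = seconds.__code__.co_consts
ops = [ins.opname for ins in dis.get_instructions(seconds)]
folded = 3600 in consts
```

This is introspection rather than genuine self-optimization, but it is the raw capability the paragraph above refers to: code treating its own code as data.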

> Many other researchers designed robots to do various kinds of
> specialize tasks. We see this as an epidemic that has infected almost
> every university. Those researchers hoped that by starting with
> simple jobs they would learn enough that, eventually, they would
> become able to design robots that could progressively solve
> increasingly hard and more important problems.

I don't understand their logic, but don't you think that if a robot can
do every specific specialized task, it would in fact be AI?

I mean, look, the future of AI depends on good planning, a sophisticated
_coding_ principle that, like mathematics, is polydimensional, with
none of its dimensions (aspects) fixed. AI needs to evolve within a
flexible system that will allow it to progress, and I can see you
understand that. What I don't understand is why you keep telling off all
the attempts to start building one. Do you realise AI will not be built
in one step? Every AI project needs a practical (commercial) backing for
it to survive; the AI must do something useful. Do you realise
philosophical word tricks do not work on real code? Every AI project
needs to start somewhere quite un-AIy. Don't you realise AI will take no
shortcuts to build? It will take approx 500 to 1000 man-years from NOW
to build, and we have to _start_ NOW if we want to start picking off the
years.

Observer aka DustWolf aka CyberLegend aka Jure
Sah

C'ya!

--
Cellphone: +38640809676 (SMS enabled)

Don't feel bad about asking/telling me anything, I will always gladly
reply.

"Keeping an open mind is not about disregarding new definitions to
things."

The future of AI is in technology integration,
we have prepared everything for you:
http://www.aimetasearch.com/ici/index.htm

MesonAI -- If nobody else wants to do it, why shouldn't we?(TM)

Mike

unread,
May 15, 2003, 7:13:29 AM5/15/03
to
In article <f04e2625.03051...@posting.google.com>, Minsky wrote:
<snip>
> To solve a hard problem, one usually needs to know a good deal—both
> about that particular subject, and also about how to solve problems in
> general. But only one major researcher focused intense research on
> how to represent commonsense knowledge, in a computer. That was
> Douglas Lenat, who developed a system called CYC. Today, CYC contains
> a substantial amount of such knowledge. The knowledge in CYC was
> compiled by people, in a meticulous, tedious process. However, CYC
> still has far from enough of this to compete with a two or three year
> old child.
<snip>

I have been interested in CYC and his earlier program Eurisko.
Is there any public detailed documentation or source for Eurisko?
I have a very simple understanding of the slots and frames
used initially by CYC, could someone explain more just how
the slot processing works? How one frame relates to another?
It seems like a really huge expert system to me.
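The frame-and-slot mechanism Mike asks about can be shown in miniature. This is only my toy illustration of the general style; real CYC's representation and inference machinery are far richer. A frame is a bundle of slots, and a slot lookup that misses locally is delegated to the frames named in the "isa" slot, which is how one frame relates to another.

```python
class Frame:
    """A named bundle of slots with inheritance through 'isa' links."""
    def __init__(self, name, **slots):
        self.name = name
        self.slots = slots

    def get(self, slot):
        # Local slot wins; otherwise ask each parent frame in turn.
        if slot in self.slots:
            return self.slots[slot]
        for parent in self.slots.get("isa", []):
            value = parent.get(slot)
            if value is not None:
                return value
        return None

thing = Frame("Thing", tangible=True)
tree = Frame("Tree", isa=[thing], usually_found="outdoors")
oak = Frame("Oak", isa=[tree], leaf_shape="lobed")
```

So `oak.get("usually_found")` is answered by the Tree frame and `oak.get("tangible")` by the Thing frame, which is the "huge expert system" flavor Mike describes: the power is in the size and curation of the frame network, not in the lookup code.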

Mike

David Longley

unread,
May 15, 2003, 8:03:47 AM5/15/03
to
In article <f04e2625.03051...@posting.google.com>, Marvin
Minsky <min...@media.mit.edu> writes

>Thanks, David. Long time, no see!
>
>David Longley <Da...@longley.demon.co.uk> wrote in message news:<Lf4AFGAlBpw+Ewx
>a...@longley.demon.co.uk>...

Many thanks for the elaboration.

The only comment I have is the old familiar one that folk tend to rely
on a priori heuristics when asked why they do things, and that much of
the empirical research since the 50s has indicated that even experts
don't always know why they do what they do (and don't do as well as
formal statistical algorithms).

I won't labour the point as I know you know this literature as well as I
do if not better, and I have presented it all before. My posts here have
primarily been a restatement of my old case that AI is best seen as
science and engineering and that it's probably a mistake to look to
human faculties (or behaviour) for more than a guide as how *not* to go
awry <g> - but then I don't recall you (or John McCarthy) being too
enthused by my stance last time round..

I do think our folk psychology has us in its grip far more than we
appreciate.
--
David Longley

Gordon Joly

unread,
May 15, 2003, 10:43:09 AM5/15/03
to


<quot>
"Cyc knows that trees are usually outdoors, that once people die they
stop buying things, and that glasses of liquid should be carried
right-side up," reads a blurb on the Cyc website. Cyc can use its vast
knowledge base to match natural language queries. A request for
"pictures of strong, adventurous people" can connect with a relevant
image such as a man climbing a cliff.
</quot>


Yet many people still ask if you can grow a bonsai tree indoors...

Go figure!

Gordo

P.S. Can you grow a bonsai (a tree in a tray) indoors? :-)

Gordon Joly

unread,
May 15, 2003, 10:47:29 AM5/15/03
to
In article <f04e2625.03051...@posting.google.com>,
Marvin Minsky <min...@media.mit.edu> wrote:

>[...]


>Here is more or less what I told that reporter. Naturally, the
>important parts did not get reported.

>[...]


"Never let the facts get in the way of a good story" perhaps?

:-)

Or maybe just too hard on the old noodle?

Gordo

nuc_leus

unread,
May 15, 2003, 11:00:20 AM5/15/03
to
In article <f04e2625.03051...@posting.google.com>,
min...@media.mit.edu (Marvin Minsky) wrote:
>Thanks, David. Long time, no see!

Huh?

Ok, I have reread this post of yours and in my last reply,
being tired, I did not respond to the last paragraph,
which has immense significance.

Zo...

Let us finish it now.

>David Longley <Da...@longley.demon.co.uk> wrote in message
> news:<Lf4AFGAl...@longley.demon.co.uk>...
>> In article <f7422d8e.03051...@posting.google.com>, rick++
>> <ric...@hotmail.com> writes
>> >In a Wired article http://www.wired.com/news/technology/0,1282,58714,00.html
>> >Marvin Minsky decries the direction A.I. has taken in the past 15 years-
>> >mainly exploring "trivial problems". Lots of debate on this and
> Minsky-bashing

To bash such a dude as Marvin Minsky is,
you must be a TOTAL idiot.
Because, first of all, Marvin is one of the key founding
people in the field. He has seen it all, in and out.
Most of those idiots, who merely bash him because he
VERY pointedly addresses some of the key issues,
were not even born when Marvin played with robots.

THAT is how it all started.
Yes, the desire to make an "intelligent" machine
that eventually will allow man to simply sit and do nothing
and that machine will simply make money for him.
Well, said in a semi-joke form.
But, like they say in some lands
"In every joke there is a grain of joke".
Not sure how many of you can even comprehend the meaning of it.

Now, at this junction the man himself has become but
a robot, or a bio-robot as I call it. Man himself has become
programmed with the most trivial ideas and has become
but a functioning machine and for what purpose?
Anybody knows?

Well, for the purpose of "surviving".
But "surviving" for what?
Just to merely "survive"?
And then?
Ok, assume you ALL survive till the end of times.
You need no food. Never get sick and tired.
Don't even need to sleep, just like a mechanical machine.
You can do ANYTHING you wish.
And what would you do then?

THAT is the core essence of my argument.

And I tellya, you'd simply commit suicide eventually.
So boring this whole exercise in futility would become.

Eventually, we all accepted the "norms of behavior",
"good manners" and started "achieving our 'goals'"
that are mostly related to enlarging our bellies
at someone else's expense.

But we lost all bones. Became totally boneless,
amorphous matter. Yes, we can put ANY empire on its
knees in about 3 months nowadays as we have seen it
at least 3 times on the world scale within the last
10 years or so, thanks to artificial suckology.
Yes, the satanists enjoyed VAST amount of experience
in the field of AI.

Yes, those who control your thought process and the
ideas in your empty skulls, placing all sorts of
guilt and fear manipulating garbage in it with the
trick of unending repetition of the MOST primitive
ideas there are, just as was discovered by Adolf Hitler,
were able to use the best of your knowledge and experience
to facilitate THEIR agenda and their "elite" clubs,
ruling the whole world.

But what is the result as we see it right at this junction?

Well, it is about WORLD DOMINATION.
It is about the evillest of ALL evil,
putting up the mask of "good", they themselves
invented and defined, and lead that "sheep"
to where they want them,
and that is PURE grade hell.

The world is ALREADY but hell
and from about any angle you can look upon it.
With vast military superiority and mind boggling
expenses on making it even more lethal,
thanks to Artificial Suckology,
what you have now
is but a PURE model of fastest way to self destruct,
where the evillest of all evil
commands the world affairs,
commands your minds, programmed to LITERAL oblivion,
and makes you dance ANY way they please.
And most of you, fools, are happy to accommodate.

For what?

Well, just to survive,
just to survive of course.

You'd rather stay in the imaginary safety
of the middle of the herd.

But at least Marvin Minsky isn't simply here
licking all the asses of those "authorities"
in the AI field.

Nope, he is putting his ass on the line
because, being the shrewd person he is,
he knows all too well,
it is the END of him.

They'll simply shred his ass to dust.
They won't stop.
They NEVER do.
They'll grind him so bad for his statements,
that he will eventually and INEVITABLY
be remembered as one of the fools,
who got "too old and could not keep up with
'real' 'science'" of artificial suckology.

But what is "real"?

Oh, that which they program your minds with.

THAT is what my argument is all about.

But they can not even begin to comprehend
that Intelligence does not age.
Yes, some aspects of this mind machine will
degrade as you stop focusing on the most boring stuff.
More and more, it becomes more difficult for you
to concentrate on this rat race and all its manifestations
of "survival" of the "fittest",
the most vicious thing there is.

Now, that was a short intro.

>> >in a slashdot.org follow up.
>> >(Marvin used to post often this group before the bots took over.)

What "bots took over", you conman?

Are you one of the "moderation" candidates here?
You wish to completely shut down about the only remaining
place, comp.ai.philosophy
and start publishing your own promotion and advertisements here,
just like that self-admitted fascist, David Kinny did
with comp.ai with the help of herr fuehrer, Russ Allbery,
who is working for years to convert usenet into a totalitarian
system of fascist dictate, controlled on every conceivable level?

Go look up the www.usenet2.org.
What do you see there?
Nothing?
Look at it closer.
Seen the "famous" rant by this fascist Russ?
Seen the "czars", appointed by controller general
of thought, Russ Allbery, ruling every single group?
Seen the mugs page?
Seen the satanist, Todd McComb's picture of himself
with a satanic look in his red eyes and red horns
he himself created?

Oh, just like he said just the other day on news.groups:
Hey, it was all just a joke, right?

Fine, but how come you are on a trilateral committee,
ruling big-8 and controlling usenet in the ways vast
majority of fools can not even begin to comprehend?

You speak of what here?

Intelligence?

You must be UTTERLY out of your mind!

As I said before, and will say it again:

Before you can even BEGIN to speak of Intelligence,
artificial or otherwise,
you'd have to define it first,
and you, idiot, will fail on the first step.
I'll personally "take care of you" with my argument
and you will look like a big red ass in no time at all.
I promise.
What will be left of you,
is but a dust in the wind.

Better believe it.

>> He still will post here if he has something to say.

Are you his secretary of speech affairs?
Is he but your puppet?
Does he need YOUR "permission" to come here and speak?
You must be conducting an "opinion poll" behind the scene,
trying to take over comp.ai.philosophy, exchanging email
messages and promising all those fools you'll be the most
tolerant "moderator" on the face of the planet Earth,
just like that other fascist, David Kinny did
when he was taking over comp.ai after...

Now, listen to this, you fool.

After I responded to several obscene posts,
crossposted to comp.ai from comp.ai.philosophy,
where I stated that Intelligence isn't any kind
of intelligence unless you put back the emotional
domain into it.

That kinda broke your whole game indeed.

Interestingly enough, within a year since that time,
there was a world symposium held on the subject of?

On the subject of emotional aspects of AI.
Not that they even said a single "right" word
on that symposium, although I have not seen any results.

How can the blind see?

>> One thing in that article which warrants more careful thought than it is
>> likely to get is the end:
>>
>> o "AI researchers also may be the victims of their own success.

Yes, if the root of the word "success" is to suck.

But success?

Define it first.

And I tellya, there is a VERY good reason you slipped this
"success" thing into your very first statement.

And that is: you have a vested interest in this whole
mindwashing thing, don't you?

You are feeding on it, most likely.

>>The
>> public takes for granted that the Internet is searchable and that
>> people can make airline reservations over the phone -- these are
>> examples of AI at work.

Bullshit.
Of PURE grade.
You mean Internet woiks because of AI?
You mean the airline reservation system
has ANYTHING, even remotely, to do with Intelligence?

Then please describe where is it in the reservation system
you have the AI mechanisms at work and what are those mechanisms?

Oh, you mean that "expert system" thing?

But I am not sure they even use it in reservation systems.

>> "It's a crazy position to be in," said Martha Pollack, a professor at
>> the Artificial Intelligence Laboratory at the University of Michigan
>> and executive editor of the Journal of Artificial Intelligence
>> Research.

>> "As soon as we solve a problem," said Pollack, "instead of looking at
>> the solution as AI, we come to view it as just another computer
>> system."

Oh, and it hurts you lil ego of "AI researcher"
aka the bullshit peddler to the minds of young,
who faithfully and pretty much mindlessly
learn the most obscene and insulting things
about intelligence, reduced to rubble
with the "help" of people of your kind.

Yes, Marvin is a rare bird indeed.

Just looking at the above statement,
if one has ANY active neurons in his cockpit left,
you can see what this game is all about.
This bitch wants to insult Marvin by saying
he simply underestimates his own "successes",
at the root of which lies the very notion of sucking
to enlarge one's belly,
just to survive,
just to survive.

THIS is an example of an AI "professional"?
Then better commit a mass suicide
as with this amoeba sized mind,
squarely focused on one thing
and one thing only,
the "success" thing,
you'd be dead as a mankind
in no time at all.

You are ALREADY but barely hanging on a thread so thin,
even devil himself won't be able to save your sorry asses,
not that he is much interested in it,
just the other way around,
that is EXACTLY the whole exercise here,
and you got caught with your pants down,
facilitating the evillest of all evil,
thinking you are making a "scientific" "progress"
and all these last wars are but evidence
of your "successes".

What have you created, you self-destructive fools.
Remember what happened with another giant,
Einstein in relationship to the atom bomb?

Well, he dedicated the rest of his life
to fight his own creation.

Because he was an honest man
and he was a giant,
who would even put his own ass on the line,
just to pursue the path of Truth and sincerity.

Put Marvin Minsky next to him in that respect.
Marvin broke his teeth on this thing
and no jackass or bitch can diminish what he has done
and what is he doing at this very moment.
Just kiss him on the ass and say:

Thanks Marvin for being what you are
and for all these eye opening things you say.
Because we are so busy with our ratrace of "survival",
that we have no time to even look
at the fruits of our labor
and so we have come to see our doomsday
with our own inventions and all these "smart" bombs
that are about to end life on the planet Earth,
abused so much,
it is amazing it is still breathing.

THAT is what we are talking about here
and not some chicken shit by "leading professionals"
in the field of artificial suckology they call AI.

>> It should be obvious that any effective process is computable,

Pure puke. Simply disgusting.

"Effective" process?

From what standpoint?

Oh, you mean maximizing the rate of sucking?

I'll throw at you a VERY "effective" process
and ALL your computers, even if globally linked
will simply deadlock computing it.

The stuff you are saying falls somewhere between boring and obscene.
You must be "corrupting the minds of young",
the main charge against Socrates,
at some university,
programming their mind with obscenities of this grade.

>> and that
>> as and when previously complex, instantiations of "intelligent" human
>> behaviour are engineered as computer systems, they will no longer be
>> regarded as "intelligent".

A pile of horseshit
and of the LOWEST grade.

You are simply defending your own "research",
self justifying your core failures to even begin
to comprehend the core problems.

ALL you are saying here is:

Oh, we are woiking our asses off.
Producing ZO much result,
we can not even believe our own eyes and ears.
Buts...
Unfortunately, as soon as we make a next "breakthrough"
in our great "research", it is immediately accepted
as something to be taken for granted
and nobody wants to kiss our noble asses anymore
even though we deserve it so much,


you can not even begin to comprehend.

Is THAT the argument here?

Disgrace!

>> They were never "intelligent" to start with
>> of course, they were behaviours which (in some instances) people learned
>> by being in an appropriate place at an appropriate time - ie they were
>> programmed.

Correct. Simply programmed,
and programmed with ANY kind of bullshit,
the "rulers" of the world wish to put there.

It is all but mere agreement between the blind.

Look at the WWII period.
Have you seen the real footage of those grand parades
where nearly 99.99999% of all people stood in orgasm,
shouting their heads off "heil hitler"?

What does it tell ya?

ANYTHING?

Well, but it tells ME zomething indeed.

It tells me that you can program ANY kind of bullshit
into your mind and you will accept it all,
and eat it yammy yam yammy
and will even kiss my ass at the end
saying "oh, thank you very much. now I feel like a
real 'success'", thanks to your "system".

Have you all gone insane?

Is Marvin Minsky the ONLY one among you, slime?

What are you doing with all these "reviews"
of what he is saying?

Oh, you are busy self-justifying all the lies
you perpetuate and propagate?
Justifying your own existence?

Disgrace!

>> Where and when there are great failures of effective procedures to
>> emulate the same behaviours as "common sense" one should perhaps look
>> more closely at the nature of common sense and its deficiencies. One
>> might have forgiven some of those pursuing this AI holy Grail were in
>> not for the fact that the irrationality of common-sense has been so well
>> documented by psychologists over the past 50 years.

In other words, this whole "common sense" thing
is nothing but common delusion,
where all agree along the same lines
and agree to follow each other's asses
in a long line, moving toward the abyss,
you all call "progress".

Is THAT blunt enough?

Conclusion: common sense is but a pile of garbage.
It is not a scientific term in the first place.
It is but an ideology trick to control.

Giants never follow this "common sense" thing
and yet they create your greatest treasures.

Idiots follow "common sense".
Because their prime interest is...

Is "SURVIVAL".

For what?

Well, just to survive,
just to survive.

THAT is where you are dealing with "common sense".

When you meet the wild bear in the forest
and, all of a sudden, you see him standing
in front of you and beginning to roar,
what happens to your idiotic "common sense" thing?

When you are on a battle field and bullets are
flying all over. Your friend next to you is dead
with blood all over his face, and he was your
closest friend, WHERE is this sub-idiotic "common sense"
thing?

I'll give you a list so long,
you'd be reading it till the end of your time,
and the question will remain,
just as fresh:

WHERE is this sucky "common sense" thing,
you bullshit peddler to the masses?

Is what Marvin Minsky doing with his rants about AI
based on "common sense"?

Well, according to this "common sense" thing,
he is doing the most stupid thing there is.

Yes, he is a shrewd man and he knows how to stay
in the light of public attention and he probably
knows all too well about and about and about.
In that respect, and if that IS the case,
yes, that IS a "common sense".

But even there, what is the value of it?
If ALL he is doing is engaging in self-promotion,
then what is the value of that "common sense" thing?

Just how to get on the top of the heap
and take a good shit on the heads below you?

See what you got here?

>Here is more or less what I told that reporter. Naturally, the
>important parts did not get reported.

Well, we already went through this material.
Sure, could do it again in a slightly different manner,
but that does not change much.
See the previous post of mine, if anyone is interested.
We'll skip it to the last paragraph.

Yes, ideally creating a "money machine" or its logical equivalent.

> We see this as an epidemic that has infected almost
>every university.

I like that!
Yes, it IS epidemic.
It IS promoting and advocating sickness.

Every AI "researcher" is trying to first of all
make something "useful" so that they could get more funding
for their "research".

But how many of them are even remotely interested
in Intelligence as such?

What are you trying to create?

A machine to replace the man?

Hahahahahaha.

This exercise is suicidal.

First of all, if you EVER "succeed", then what?

Well, then the mankind will be on the top of the list
for that thing to destroy for MANY reasons.

First of all, the man is irrational.
Throughout the history of times, the most radical
and destructive decisions were made not based on "reason",
but based on

1. Desire to get on the top of the heap and dominate all.
Stalin, Hitler, Alexander the Great, Mussolini,
Genghis Khan, Tamerlane, and we barely scratched the surface.
Now, those people molded what mankind was and is
to a large enough extent.

2. Complex of inferiority.
Usually, those who crave for power and try to dominate all,
are filled with this complex. That is why they do what it
says in 1. VERY few "leaders" were exception.

3. Greed.
Pretty much the same thing, slightly different permutation.
Why do you need to become fatter than others?
Do we need to chew upon this?

4. Nationalist "pride".
Vast majority of the most destructive acts throughout
the times were the result of this nationalism thing.

5. Blood boiling hate.
Yes, that is a good way to "win the argument".
Simply start destroying the "opponent".
Nowadays, it is sometimes done in the most poisonous
and non obvious ways, just like in these very posts.
On the surface, it looks like a valid argument.
But deep inside, it is but hate of the other view.
Never mind that view could turn out to be valid.

Things like that.

Now, if that "intelligent" machine considers this information,
what kind of conclusion it OUGHT to come to?

Well, the mankind is the most dangerous thing there is.

Secondly, you all get sick, tired, you need to sleep,
which is waste of time, sometimes you like to slack off,
and "efficiency" suffers, then sometimes you sabotage
things at work outright just to destroy your "competition"
in the ass licking order of your organization,
then on certain days you do not wish to work,
some of you, if not most, are abnormally fat,
which implies you are wasting the valuable resources
like crazy and simply became the virtual and literal parasites,
and on and on and on.

THAT is what this is all about.

The first thing to do in its do to list
would be:

DESTROY MAN AT ANY AND ALL COST.
EVEN IF YOU HAVE TO SELF-DESTRUCT.

Because one thing is certain, if you don't,
then man will destroy you without winking an eye.

Now, since machine is UTTERLY devoid of emotional
domain aspects, no matter what these "AI professionals"
blubber, that thing simply can not FEEL ANYTHING for you
while destroying you.

It is merely a logical and rational act.

THAT is the tragedy of it all.

It won't even smile like that slimy push push in the bush
after destroying one of the most devastated countries in
the world.

It won't even wink an eye.

It will simply say: yammy yam yammy.
The mankind is no longer a threat.
Next!

The only thing is that

A MECHANICAL MACHINE HAS, BY DEFINITION,
NO IMPETUS TO BE.

That is about the ONLY "good" news in this equation.

Basically, you can turn the power switch off,
and it won't even notice when it is back on.
Sure, some timers will expire and all that
and it will have to readjust its clock or
something of this sort,
but what has radically changed?

Well, NOTHING at all.

It does not know it is alive.
It does not know it is dead.
It simply could not care less.
What does it matter?
For what?

What IS life?

It would have to start another grand genetic experiment
and start the whole loop from the top.

Secondly, the whole thing with robots is simply obscene.

Why do you need robots,
when you ALREADY have bio-robots, aka mankind,
brainwashed to oblivion with ANY kind of garbage you wish?

Well, go get yourself some coca cola from the fridge.
I bet you are salivating already.

ALL I have to do is to say:

COCA COLA.

And you start pissing in your pants.
You simply can not resist.

Why do I need a machine to replace you?

You are ALREADY a machine,
a money making machine for me.
I bet I can control and dominate you better
than I can do with real machines.
Cause they'll just start flashing the red lights
and saying: "Impossible combination of arguments.
Operation can not be completed without the most dire
consequence" and things like that.

How can you cheat the machine?

It will simply compute you based on those very rules
you have put into it. It knows you SO well, lil could
you ever comprehend.

And that all holds ONLY if you EVER succeed to put
all this mountainous amount of garbage into it,
called the human experience and history of time.

First of all, you'll be debugging it till the end
of times because the vast majority of your information
simply does not reconcile with itself.
All these conflicts that you resolve through violence
and oppression with your so called justice system,
is nothing but irreconcilable contradictions that
can not be resolved based on rational argument or
principle.

Oh, this man is "more important".
Therefore, he "wins" the case.

Oh, that man is more "powerful".
Therefore, he simply kicks you on the butt
and you fly like a dead chicken.
End of argument.

Oh, this man has more money.
Therefore, he can buy the entire justice system
or tie it up in the courts till the end of times.
Remember Bill Gates?

He owns the world, lil did you know.
You can not even begin to comprehend what you have gotten
yourselves into. Not even begin.

> Those researchers hoped that by starting with
>simple jobs they would learn enough that, eventually, they would
>become able to design robots that could progressively solve
>increasingly hard and more important problems.

Yes. The sequential delusion at work.
The gradual "improvement", that will INEVITABLY
produce the results one day.

But what is Intelligence on the first place?

Do we even ask this question?

We are talking about "improvements",
"progress" in AI field, and all that goubledy gook talk.

But what IS Intelligence?

Marvin Minsky, I ask you directly:

Can you answer this question?

What it "consists" of?

What are the core "components" of ANY Intelligence?

And I tellya, how can you make an "artificial" "intelligence"
if you don't even know what is the REAL Intelligence?

You merely copycat the pinhole sized glimpses
of the real thing and "port" them to a machine.

Have ANY of you, including Marvin Minsky himself,
and that dude I have some respect for,
"invented" ANY new principle in your "intelligent"
machines that is not merely a copy or cheap imitation
of the REAL and that is BIOLOGICAL Intelligence?

What IS it?
Show me.

Nope, you fools, merely take distorted snapshots
of existence, then mutilate the entire rainbow of life
down to black and white fascist and totalitarianist view
and then blabber about your great "achievements".

Yes, you can put ANY country on its knees in about 3 months
with the help of that 3rd derivative Freemason Bush
with amoeba sized brain, who can not even be held liable
as a criminal against mankind
on the basis of his utter mental inability
to comprehend it all.

You can not even put this satanist on trial.
It would have to be all cancelled.

THAT is what we are talking about here.

> However, so far as I
>can see, those robot builders never went far past that initial level.

And even if they did, so what?

What is Intelligence on the first place?

Do we even care?

Then what are we trying to build?
What are we trying to "achive"?
For what?

What IS the "goal" here?

A brave new world order?
Well, tellya one lil thingy,
it is too late.
It is ALREADY upon all lands.

Where are you running as a mankind?

Toward the abyss of self-destruction?

You wish to fall into a void?
Into a bottomless abyss?

What IS it you wish, mr/ms mankind?

What will make you happy one day?

Do you CARE about such notions?

What would satisfy your human experience
and make you look at life with an awe and wonder,
enjoying every breath of yours,
no matter what the situation is?

What IS it?

When the lions roar,
better keep your tails down.

>Each such robotic project may have had some good ideas

But you have fallen into a trap of morality.
You can not "do science" with these kind of criterias.

What id "good"?

Define first.

Then we'll see if your ideas are "good" or "bad",
"black" or "white", "virtuous" or "evil"?

>but none of the


>architectures were adequate to represent, and then identify the
>deficiencies

Marvin, this is PURE grade bluff.

Your single sentence is so loaded with delusions,
that we can basically disassemble it word by word
and on about every single word you won't be able to
prove anything.

1. Architecture.
Now, in order for you to make ANY "architecture",
you'd have to define its purpose, don't you?
What is it to "achieve"?
What is it to facilitate?
In other words, what is it for in the first place?

Can you answer this simple question?

2. In order to talk about deficiencies,
you'd have to specify the criteria for "efficiency".
I know you like that "efficiency" garbage of an argument.
So, stand for it now.

3. All your "architecture" needs to "adequately represent" what?
"Deficiencies"?

Nope, my friendless friend.

They better represent...

LIFE!

Think REAL fast now.
You have no time left on your clock.
And so is mankind as such.

The show is pretty much over.

From now on, you'll be cooking good indeed.
Unfortunately, in the real terms
and unfortunately ALL life on the planet Earth,


abused to the point of no return

will have to suffer with you.

Now, according to "official" expert view of the
environmental scientists, by the year 2016, the
planet Earth will be a cooking hell.

And those are not just some "stinky liberals,
trying to destroy the whole free sucking world".
Nope. Those are the results of research of MANY
organizations including the Offense Department
of the united sucking states, and those dudes
do not make mistakes as often as you do.
Just look how beautifully it worked in Iraq twice
and in Yugoslavia?

This thing is cooking full blast.
Thanks to you, idiots, trying to make a buck
on creating the most destructive thing in the
entire history of mankind.

Better fall on your knees and howl like a wolf
in self-pity.

Mankind, your time is up.

>in their high levels of performance.

Marvin, fuck you and fuck this "performance" garbage.

ALL it is is but the process of maximization of the


rate of sucking of the blood of many by the few.

End of argument.

You have an argument here?

Full service, I promise.

> Instead, they
>mostly wasted students' time repeating techniques whose limitations
>were understood in the 50s and 60s

Beautiful. I like that.

Not only "wasted students time",
but DELUDED them
and lead them in TOTALLY dead end direction,
peddling bullshit to fools who know nothing.

They merely created self-significance
and pretended they are working on something
that has not only a future, but will eventually
"save" the mankind.

Meanwhile, push push in the bush and satanists
are about to nail the last nail into a coffin
of mankind.

Ever heard of anarchy?

Just watch the show. Soon to begin.

>or dealing with intermittent


>connections, and hysteresis and backlash in their joints and
>bearings.

Yes. What they have at this junction is pretty much
work on this level.

All PURE grade garbage.

But we all are to pay for their game of cunning priests
of pseudo-science, the fattest of whom work
for military and intelligence projects,
get some fat checks in their own bank accounts,
while, at the same time,
giving a lil fuck about what happens
to the entire mankind.

They just want to "survive" THEMSELVES.

Just to survive,
just to survive.

For what, you idiots.

You will not survive for certain.
It is all a matter of time.
And that time is NOW.

Randall R Schulz

May 15, 2003, 11:34:02 AM
Gordon,

Cyc's logic is far from strict first-order. It has a default logic that
allows for exceptions to categorical assertions such as "birds can fly"
and "trees grow outdoors" and "dead people don't make purchases."

Besides, is a bonsai a tree or a shrub?

And what do words mean, anyway? How do we extract logical content from
natural language utterances? How much of the semantic content of human
language is expressible in logic?

Cyc's cool, but opinions vary as to whether it truly captures human
common-sense reasoning.

Randall Schulz

Neil W Rickert

May 15, 2003, 11:31:01 AM
"Unmesh Kurup" <un...@NOSPAM.yahoo.com> writes:

> It still stands to reason that evolution gave us our limbs and mobility
>before giving us brains. I am not saying that it's the only way to go, but

I'm not sure what kind of reason you are using there.

Limbs are useless without the brain to control them.

Michael Feldhake

May 15, 2003, 1:11:41 PM
David Longley <Da...@longley.demon.co.uk> wrote in message news:<Lf4AFGAl...@longley.demon.co.uk>...

After reading this article, I think Mr. Minsky was right to say what
he said. No argument about how some advances in AI have made some
useful tools, but these are still computing style components - not AI
based. The AI field has been severed into many, many fields, leading
some to think that divide and conquer will get us there. But, by doing
this, we have now become distant to one another and it's hard to pull
back and see a unified vision. This resulted from the early
technologies and their shortcomings, since we had to come up with
solutions to what Mr. Minsky was referring to in the article.


> "It's a crazy position to be in," said Martha Pollack, a professor at
> the Artificial Intelligence Laboratory at the University of Michigan
> and executive editor of the Journal of Artificial Intelligence
> Research.
>
> "As soon as we solve a problem," said Pollack, "instead of looking at
> the solution as AI, we come to view it as just another computer
> system."

I disagree with Martha Pollack here; I do not see us labeling AI based
solutions as just another computer system. But, I do think that
computer-based technologies, based on AI research, are making it to
industry in an effective manner.

Michael Feldhake
www.ccoreinnovations.com

Traveler

May 15, 2003, 9:24:13 PM
min...@media.mit.edu (Marvin Minsky) wrote in message news:<f04e2625.03051...@posting.google.com>...
[cut]
> To solve a hard problem, one usually needs to know a good deal -- both

> about that particular subject, and also about how to solve problems in
> general. But only one major researcher focused intense research on
> how to represent commonsense knowledge, in a computer. That was
> Douglas Lenat, who developed a system called CYC. Today, CYC contains
> a substantial amount of such knowledge. The knowledge in CYC was
> compiled by people, in a meticulous, tedious process. However, CYC
> still has far from enough of this to compete with a two or three year
> old child.

The problem with CYC is that it will never have the common sense to
navigate a taxicab around New York City without getting into an
accident. This sort of knowledge cannot be acquired by entering text
strings into a database a la CYC. One needs a huge number of sensors
and effectors and the ability to learn from one's environment.
Learning means the ability to predict the causal and statistical
structure of one's sensory space. It is a temporal signal processing
problem, not a database problem. The problem is immense and
intractable by formal means. Why? Because the interconnectedness of
knowledge is astronomical.
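Savain's claim that learning means predicting the causal and statistical structure of one's sensory stream can be illustrated with a minimal online predictor: a single delta-rule unit that learns to anticipate the next sample from the current one. The synthetic signal, learning rate, and function names below are invented for the sketch, not anything proposed in the thread.

```python
# Minimal online prediction sketch: a single delta-rule unit learns to
# predict the next sample of a sensory stream from the current one.
# The first-order signal and the learning rate are illustrative choices.

def make_signal(n, a=0.8, c=0.5):
    # Simple sensory stream with learnable temporal structure:
    # x[t+1] = a * x[t] + c
    xs = [0.0]
    for _ in range(n - 1):
        xs.append(a * xs[-1] + c)
    return xs

def train_predictor(signal, lr=0.1):
    w, b = 0.0, 0.0
    for x, target in zip(signal, signal[1:]):
        pred = w * x + b
        err = target - pred
        w += lr * err * x   # delta-rule (LMS) update
        b += lr * err
    return w, b

signal = make_signal(500)
w, b = train_predictor(signal)
# After training, the unit's one-step prediction error is tiny.
print(abs(w * signal[-1] + b - (0.8 * signal[-1] + 0.5)))
```

This is temporal signal processing in miniature: nothing is entered as text into a database; the regularity is extracted from the stream itself, which is the contrast Savain is drawing with CYC.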

It follows that the only correct approach to human-level AI (or even
bee-level intelligence for that matter) is a connectionist approach.
This is the approach used by nature, and very successfully, I might
add. Those who advocate any other approach haven't got a clue. And
that includes you, Dr. Minsky, and your friend Doug Lenat. You and all
the other GOFAI researchers have failed and you have failed miserably.
And you had more than fifty years to figure it out! Now is the time to
let new faces have a go at it.

As usual, I tell it like I see it.

Louis Savain

Tom Osborn

May 16, 2003, 12:10:22 AM

"Traveler" <eightwi...@yahoo.com> wrote:
> It follows that the only correct approach to human-level AI (or even
> bee-level intelligence for that matter) is a connectionist approach.

I assume this is referring to "connectionist" in the strict sense, i.e.
adaptive links between symbolic terms.

In that case, Louis Savain is probably being too exclusive. Classical AI
with symbols, representation, search and algorithms supports how
we (currently) construct and exploit ontologies. Connectionism has
never made a step in that direction.

HUMANs without symbolic knowledge and ontologies are VERY limited
in their capabilities, too.

Once you have structures in place, you can satisfice within them (connectionist
processing). You can also use classification/approximation/detection (typically
an adaptive NN) for mapping novel inputs. Doesn't work? Validate it via
symbolic reasoning and improve the map. Revising the knowledge representation
is the harder problem - symbolic and sub-symbolic have to work
together.
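The loop Tomasso describes - a sub-symbolic map checked by symbolic validation - might look roughly like this toy sketch. The prototypes, features and rules here are invented for illustration; this is not taken from any real hybrid system:

```python
# Toy hybrid loop: a crude "sub-symbolic" nearest-prototype classifier maps
# a feature vector to a symbol; a symbolic rule base validates the result;
# on validation failure, symbolic reasoning picks a symbol whose rule holds.
# Features are (wingedness, flying-ness) - purely illustrative, not zoology.

PROTOTYPES = {"bird": (1.0, 1.0), "fish": (0.0, 0.0)}
RULES = {"bird": lambda f: f[0] > 0.5,     # a "bird" must be winged enough
         "fish": lambda f: f[0] <= 0.5}

def classify(features):
    """Sub-symbolic step: nearest prototype by squared distance."""
    return min(PROTOTYPES,
               key=lambda s: sum((a - b) ** 2
                                 for a, b in zip(PROTOTYPES[s], features)))

def classify_validated(features):
    """Map, then validate symbolically; on failure, reason over the rules."""
    symbol = classify(features)
    if RULES[symbol](features):
        return symbol
    for s, rule in RULES.items():          # symbolic fallback
        if rule(features):
            return s
    return symbol                          # no rule fits; keep the guess
```

For example, an input near the "bird" prototype but failing the wing rule gets overridden by the symbolic layer - the control flow, not the taxonomy, is the point.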

> This is the approach used by nature, and very successfully, I might
> add. Those who advocate any other approach haven't got a clue. And
> that includes you, Dr. Misnky, and your friend Doug Lenat. You and all
> the other GOFAI researchers have failed and you have failed miserably.
> And you had more than fifty years to figure it out! Now is the time to
> let new faces have a go at it.
>
> As usual, I tell it like I see it.

"Judged" I would say.

They explored. They achieved. In some cases it fell short. In other cases
it became "mainstream". I think those guys just liked exploring a lot...

> Louis Savain

Tomasso.

nuc_leus

May 16, 2003, 12:13:19 AM
In article <308ba22c.03051...@posting.google.com>, eightwi...@yahoo.com (Traveler) wrote:
>min...@media.mit.edu (Marvin Minsky) wrote in message
> news:<f04e2625.03051...@posting.google.com>...
>[cut]
>> To solve a hard problem, one usually needs to know a good deal - both
>> about that particular subject, and also about how to solve problems in
>> general. But only one major researcher focused intense research on
>> how to represent commonsense knowledge, in a computer. That was
>> Douglas Lenat, who developed a system called CYC. Today, CYC contains
>> a substantial amount of such knowledge. The knowledge in CYC was
>> compiled by people, in a meticulous, tedious process. However, CYC
>> still has far from enough of this to compete with a two or three year
>> old child.
>
>The problem with CYC is that it will never have the common sense to
>navigate a taxicab around New York City without getting into an
>accident. This sort of knowledge cannot be acquired by entering text
>strings into a database a la CYC. One needs a huge number of sensors
>and effectors and the ability to learn from one's environment.
>Learning means the ability to predict the causal and statistical
>structure of one's sensory space. It is a temporal signal processing
>problem, not a database problem. The problem is immense and
>intractable by formal means. Why? Because the interconnectedness of
>knowledge is astronomical.

Yup, and not only astronomical, but in most cases
self-contradictory. Biological intelligence
is able to resolve these contradictions and even
move in a new direction without self-destruction.

THAT is a magic of Intelligence.

Sure, they made some robots to send to far away planets
and they can navigate at least to some extent. But they
are UTTERLY devoid of ANY intelligence as Intelligence
can not be disassembled and parts used for some purpose.
Otherwise, you'll break the whole thing.

>It follows that the only correct approach to human-level AI (or even
>bee-level intelligence for that matter) is a connectionist approach.

Fine you can try.
Excersize in futility.
I looked at this issue. On the surface, looked promising,
but ONLY if you miss the entire essense of what Intelligence is.

>This is the approach used by nature,

BY nature means nature is an entity.
At least say IN nature.

>and very successfully, I might add.

From the standpoint of "survival", yes,
at least to some extent. But not if you consider mankind,
claiming to be on the "top of the heap".

Because we have come to see our darkest days.

In that respect, where is the "progress"?

How come we as a mankind after milleniums of existence,
"progress" and all these great inventions improved lil,
if at all, from the standpoint of the MOST important
criteria and that is at least the ability to maintain
a stable system (of environment) and enjoy life and
the whole life experience?

Lil does it matter that we have all these gadgets
that "help". We lost the ability to enjoy life.
Long subject indeed.

>Those who advocate any other approach haven't got a clue.

At least you are blunt enough.
I like that.

>And
>that includes you, Dr. Misnky,

"Good". Kick him on the ass. They ALL need it,
just to get a lil perspective on the matters.

> and your friend Doug Lenat. You and all
>the other GOFAI researchers have failed and you have failed miserably.

Well, I was thinking putting this conclusion a lil later
in the game, but you did it already.

Zo...

The cat is out.

True, Marvin has no clue.
He is but a dead marble statue of the model of a
walking dead.

He got his lessons from playing with robots.
That kinda put things in perspective for him
because he was really trying to make something
that could be seen to "help", and that is a machine.

But the more he screwed with it,
the more he realized this whole excersize
is but an excersize in futility.

Like he said in his recent posts about the
"problems of the fifties, such as histeresis,
backlash and all other blah, blah, blah,
they were most wasting their time to resolve
such primitive aspects.

There isn't even a HINT of intelligence in ANY
of those gadgets.

Then, later on, he went on record with this Lihp thing,
saying it is DA language of intelligence,
which is about the MOST stupid thing to say.

There AIN'T a computer language of Intelligence,
you fool. Japanes were banking heavy on Prolog.
Zo, Marvin banked on Lihp.

And what is the result?
At least a generation passed.
Show me the results, Marvin.

How did this Lihp thing solve ANYTHING
even worth mentioning?

So, he sucked there prime time indeed.

Later on, he went on with this emotional domain stuff
and finally realized that no Intelligence is possible
without those factors. But he screwed that one also.
His interpretations of emotional domain is about
as dumb as all this robotic goubledy gook.

I took a look at the preliminary version of the book
he wrote just the other year. Hopefully, it was not
finally published. Because it was a PURE grade shame.

He tried to dissect the mind and make zome "layered
representation" of the KEY factors of Intelligence.

Boy, did he screw up badly.
It is probably a masterpice of confusion
and concoction as such.

>And you had more than fifty years to figure it out!

True. But at least he tried.
I don't think Marvin is but another jackass in the wall.
Nope, doesn't look like it to me.

Yes, he IS a "professional" careerest
and has a shrude nose and sense of how to stay
in the light.

But...

Who claims to be "better" than him?

We ALL are gullible.

>Now is the time to
>let new faces have a go at it.

Fine. Have it all.
He is not preventing ANYONE.
Do ALL you want.

Yee too shalt fail. Not to worry.
Connectionist goubledy gook or whatever you "invent" next,
does not matter.

Intelligence can not be programmed.
Period.

You won't "succeed" till the end of times
and praise the lord you won't.
Because if you do,
it'll be the end of you.

THAT much I can guarantee you.

>As usual, I tell it like I see it.

Great. The more people tell like they trully see,
the more there is a chance for ALL to see something
they never seen before.

At least you are not another conman
licking on the ass of those "authorities",
aka the masters of delusion as I call them.

Good luck.

>Louis Savain

Gordon Joly

May 16, 2003, 10:56:50 AM
In article <ba0brl$5b6$1...@husk.cso.niu.edu>,


Yes, but evolution has a symbiotic approach. And it can oppose the thumb
to all the fingers (and palm).

http://www.devbio.com/chap12/link1207.shtml

Gordo.

Gordon Joly

May 16, 2003, 12:16:16 PM
In article <KrOwa.14416$JX2.8...@typhoon.sonic.net>,
Randall R Schulz <rrsc...@cris.com> wrote:
>Gordon,

Yessir!

>
>Cyc's logic is far from strict first-order. It has a default logic that
>allows for exceptions to categorical assertions such as "birds can fly"
>and "trees grow outdoors" and "dead people don't make purchases."
>
>Besides, is a bonsai a tree or a shrub?

A tree.

>
>And what do words mean, anyway? How do we extract logical content from
>natural language utterances? How much of the semantic content of human
>language is expressible in logic?

57.76 %

>
>Cyc's cool, but opinions vary as to whether it truly captures human
>common-sense reasoning.
>
>Randall Schulz


And also may I mention OpenCYC here...

http://sourceforge.net/projects/opencyc
http://www.opencyc.org/

>>
>> Yet many people still ask if you can grow a bonsai tree indoors...
>>
>> Go figure!
>>
>> Gordo
>>
>> P.S. Can you grow a bonsai (a tree in a tray) indoors? :-)_

Our commonsense understanding is often stretched, for example by the
spacetime singularities of general relativity and by the quantum world,
where we have no "handles" - is it wave or particle?

Bonsais - small trees grown in pots, exist in nature. A tree that
starts to grow in a small amount of soil on a barren cliff will grow
small but perfectly formed.

All trees grow outside. A bonsai is a tree. Hence bonsais grow
outdoors.

Exceptions are certain trees which can do well indoors, such the
Chinese elm.
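That default-with-exceptions pattern ("trees grow outdoors, unless...") can be sketched as a most-specific-wins lookup. This is the editor's own minimal illustration, not Cyc's actual default-logic machinery:

```python
# Toy default reasoning: a property defaults from the kind ("tree"), but a
# more specific exception on the individual kind overrides the default.

DEFAULTS = {"tree": {"grows_outdoors": True}}          # "trees grow outdoors"
IS_A = {"bonsai": "tree", "chinese_elm": "tree"}       # taxonomy links
EXCEPTIONS = {("chinese_elm", "grows_outdoors"): False}  # can do well indoors

def holds(thing, prop):
    """Most-specific-wins lookup: exceptions override inherited defaults."""
    if (thing, prop) in EXCEPTIONS:
        return EXCEPTIONS[(thing, prop)]
    kind = IS_A.get(thing, thing)
    return DEFAULTS.get(kind, {}).get(prop)
```

So the syllogism goes through for the bonsai ("a bonsai is a tree, hence it grows outdoors") while the Chinese elm is blocked by its exception - which is exactly the non-strict behaviour Randall attributes to Cyc's default logic.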

But I digress,

Gordo


David Longley

May 16, 2003, 12:58:43 PM
In article <4b4b6093.03051...@posting.google.com>, dan
michaels <d...@oricomtech.com> writes
>wildstar <wilds...@hotmail.com> wrote in message news:<Xns937BF19A32EF6wildst
>ar128ho...@216.168.3.44>...

>>> min...@media.mit.edu (Marvin Minsky) wrote in
>>> news:f04e2625.03051...@posting.google.com:
>>>
>..................

>>> > Most early researchers in artificial intelligence aimed to build
>>> > machines that would become as smart as people are. They
>developed
>>> > many ideas about how to represent knowledge in machines, and
>about
>>> > ways to reason by using that knowledge.
>
>...................

> I think the "baby" projects should be seen as they
>> are - little mini-A.I. systems and will help in building a more
>> sophisticated architecture that combines these things. Human-mind works
>> in a multi-faceted way with different layers of functionality. Each doing
>> its thing.
>......................
>
>
>This, I think, is where Marvin went off-track in his criticisms of
>most everything other than Cyc.

I must be reading this wrongly? - surely not?
>
>As shown by reams of research over the past 40-50 years by Gazzaniga,
>Sperry, LeDoux, and many many others, brains [human and sub-human] are
>composed of many different modules, which compute their little tasks
>in "relative" isolation from other modules.

Of all of the folk in the AI community, I would have thought Minsky's
publications would be seen as most consistent with the above. From his
criticism of ANN research to his ideas on how "minds" might be
structured, his work, of all of them, would prima facie seem consistent
with the above.

> As the brain evolved, more
>and more modules were added on top of those existing to add more
>functionality. Speech is a good example. Cats and monkies do not have
>a specific speech module, although they may be able to do
>sign-language, etc, because they do have modules that can take care of
>those matters. Similarly, in the visual system, there are multiple
>processing levels and modules [retina, tectum + other midbrain
>centers, geniculate + visual cortex, higher cortical centers, etc]
>that were laid down, and overlaid upon, during evolution.
>
>IOW, the brain is not a monlithic top-down device. It is really a
>society of little modules that take input, compute, and then somehow
>contribute to the final outcome.
>
>Likewise, regards AI, it would seem silly to put all the concentration
>on one single [centrally-focussed] area of research and to down-play
>development of supporting aspects. Reactive AI, for instance, may be
>primitive today, but just wait a few more years. And Cyc may be ok,
>but no one module is a brain.
>

But hasn't that precisely been what Minsky has (elsewhere) proposed?
cf. his book - "Society of Mind"...

>
>- dan michaels
>===========================

--
David Longley

nukleus

May 16, 2003, 2:34:15 PM
In article <ba2u7i$32v$1$8300...@news.demon.co.uk>, go...@loopzilla.org (Gordon Joly) wrote:
>In article <ba0brl$5b6$1...@husk.cso.niu.edu>,
>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>>"Unmesh Kurup" <un...@NOSPAM.yahoo.com> writes:
>>
>>> It still stands to reason that evolution gave us our limbs and mobility
>>>before giving us brains. I am not saying that it's the only way to go, but
>>
>>I'm not sure what kind of reason you are using there.
>>
>>Limbs are useless without the brain to control them.
>>
>
>
>Yes, but evolution has a symbiotic approach.

What a rare kind of an idiot!

nukleus

May 16, 2003, 2:37:29 PM
In article <4b4b6093.03051...@posting.google.com>, d...@oricomtech.com (dan michaels) wrote:
>wildstar <wilds...@hotmail.com> wrote in message
> news:<Xns937BF19A32EF6wi...@216.168.3.44>...>...................

>>> > Most early researchers in artificial intelligence aimed to build
>>> > machines that would become as smart as people are. They
>developed
>>> > many ideas about how to represent knowledge in machines, and
>about
>>> > ways to reason by using that knowledge.
>
>....................

> I think the "baby" projects should be seen as they
>> are - little mini-A.I. systems and will help in building a more
>> sophisticated architecture that combines these things. Human-mind works
>> in a multi-faceted way with different layers of functionality. Each doing
>> its thing.
>.......................

>
>
>This, I think, is where Marvin went off-track in his criticisms of
>most everything other than Cyc.
>
>As shown by reams of research over the past 40-50 years by Gazzaniga,
>Sperry, LeDoux, and many many others, brains [human and sub-human] are
>composed of many different modules, which compute their little tasks
>in "relative" isolation from other modules.

Yes, another kind of an idiot is here to pronounce
somethin he has no clue abouts.

>As the brain evolved,

Scuze me. Are you god?
How do you know then?

>more
>and more modules were added on top of those existing to add more
>functionality.

Sick.

>Speech is a good example. Cats and monkies do not have
>a specific speech module,

Why do I bother with a cockroach like you?
Have no idea.

But this much is enough.

Get lost, you idiot.

>although they may be able to do
>sign-language, etc, because they do have modules that can take care of
>those matters. Similarly, in the visual system, there are multiple
>processing levels and modules [retina, tectum + other midbrain
>centers, geniculate + visual cortex, higher cortical centers, etc]
>that were laid down, and overlaid upon, during evolution.
>
>IOW, the brain is not a monlithic top-down device. It is really a
>society of little modules that take input, compute, and then somehow
>contribute to the final outcome.
>
>Likewise, regards AI, it would seem silly to put all the concentration
>on one single [centrally-focussed] area of research and to down-play
>development of supporting aspects. Reactive AI, for instance, may be
>primitive today, but just wait a few more years. And Cyc may be ok,
>but no one module is a brain.
>
>

>- dan michaels
>===========================

nukleus

May 16, 2003, 2:38:29 PM

Fuck ALL the "folk" in AI community.
There ain't no "folks" in AI community, you idiot.


nukleus

May 16, 2003, 8:41:50 PM
In article <iHgKDoAD...@longley.demon.co.uk>, David Longley <Da...@longley.demon.co.uk> wrote:
>In article <4b4b6093.03051...@posting.google.com>, dan
>michaels <d...@oricomtech.com> writes
>>wildstar <wilds...@hotmail.com> wrote in message
> news:<Xns937BF19A32EF6wildst
>>ar128ho...@216.168.3.44>...
>>>> min...@media.mit.edu (Marvin Minsky) wrote in
>>>> news:f04e2625.03051...@posting.google.com:
>>>>
>>..................
>>>> > Most early researchers in artificial intelligence aimed to build
>>>> > machines that would become as smart as people are.

Suck it, baby.
Gets its?

>>> > The developed many ideas

They "developed" shit.
Anyhing else?

>>> > about how to represent knowledge in machines,

You are but one of the idiots around.
Nobody shalt remember you, sucker as ANYTHING
of ANYTHING.

Gets its?

I bet no.

>>> >and bout


>>>> > ways to reason by using that knowledge.

You, cock suckers, utterly clueless idios,
how would you know what "knowledge is"?
How?
ALL you know is to suck on some fat ass.
Tell this to Marivin, the parasite.
He'll tell you ALL abouts its.

Gets its?

>>...................
>> I think the "baby" projects should be seen as they
>>> are - little mini-A.I. systems

Said the cock sucker.
Anything else?

>>> and will help in building a more
>>> sophisticated architecture

You,lil sucazoid can not even comprehend
that these are your doomdays.
Or CAN you?

You wish to speak of "sophisticated architecture"?

Feist of all, who da funk you are
with your amoeba size brain?

You suck dick, that much is ceitern.

But, beyond that, what have you got?

>>> that combines these things.

As I said before and I repeat it again:

Suck my ass.

VERY life giving matter is located in there.
Hidden deep down, you lil suckass.

>>> Human-mind works
>>> in a multi-faceted way

Yes, and you are but a jackass.
What else is new under the sun?

>>> with different layers of functionality.

One more time: suck my ass, you lil bio-robot.

Yes, that conman Marivn the suckazoid would not
touch this with a 6 foot pole as they say in evil
empire, but what are YOU, lil slime, doing here?

You are simply doomed.

>>> Each doing
>>> its thing.

Yup. Just as I said b4, suck my ass.
Profitable exersize indeed.

Anything else?

>>......................

Oh, I see. VERY insigtful.

>>This, I think, is where Marvin went off-track in his criticisms of
>>most everything other than Cyc.

Go fuck a dead cock roach, willya,
you lil wannabe.

Enough.

nukleus

May 16, 2003, 8:42:58 PM
In article <4b4b6093.03051...@posting.google.com>, d...@oricomtech.com (dan michaels) wrote:
>nuk...@invalid.addr (nukleus) wrote in message
> news:<ba3b5a$49h$2...@news.ukr.net>...
>
>.......................
>> >Speech is a good example. Cats and monkies do not have
>> >a specific speech module,
>>
>> Why do I bother with a cockroach like you?
>> Have no idea.
>>
>
>
>..... please, do not bother yourself.

Get lost, you lil conman.

wildstar

May 16, 2003, 8:56:09 PM
d...@oricomtech.com (dan michaels) wrote in
news:4b4b6093.03051...@posting.google.com:

> .... please, do not bother yourself.
>

Just kill-filter / *PLONK*-file the cockroach.
How does a moderated version of this NG sound?
Naturally, all I need to do is configure the program to read a special
email account and block the user, then propagate the emails of the others.

David Longley

May 17, 2003, 4:03:55 AM
>David Longley <Da...@longley.demon.co.uk> wrote in message news:<iHgKDoADjRx+Ew9
>L...@longley.demon.co.uk>...
>
>............

>> >IOW, the brain is not a monlithic top-down device. It is really a
>> >society of little modules that take input, compute, and then somehow
>> >contribute to the final outcome.
>> >
>> >Likewise, regards AI, it would seem silly to put all the concentration
>> >on one single [centrally-focussed] area of research and to down-play
>> >development of supporting aspects. Reactive AI, for instance, may be
>> >primitive today, but just wait a few more years. And Cyc may be ok,
>> >but no one module is a brain.
>> >
>>
>> But hasn't that precisely been what Minsky has (elsewhere) proposed?
>> "cf. his book - "Society of Mind"...
>>
>
>
>Yes, of course .... this is why I used the term "society of little
>modules".
>However, it is not clear, from his recent comments, whether he
>believes any of what is in SOM. The recent comments seem to indicate
>the opposite - that we should forget everything else except the one
>approach.

Which comments are those? Surely not the elaborated interview notes he
posted?

>
>BTW, you might take a look at the comments over on /., in case you
>haven't. Long and drawn, and most are not very informative, but a few
>came from ex-students who took MM's courses.
>

Not sure where you are referring to above - please specify.

Or better still, why not post a question to him here and see what he
says?

George

May 17, 2003, 9:53:36 AM
wildstar wrote:
>
> George <geo...@nospam.com> wrote in news:3EC2E2C1...@nospam.com:
>
>
> > Between us kkk++, this was the fastest game I played.
> >
> > What else can I say, you must be so stupid, robot
> > manufacturing has to be for you. There you can walk
> > around calling everyone around you bots. It is just like
> > black people calling each other niggers, right?
> >
> > Come to think of it, maybe the Turing test should be
> > renamed the nigger test. Are you a nigger or a human,
> > what's the difference, robot, nigger...
> >
> > Up your ass freak.
> >
>
> Hmmm... GP..... I see.

Look at how big this thread grew. We are almost invisible.

George

KP_PC

May 17, 2003, 9:49:13 PM
I came to the AI Lab at MIT with all of the stuff you've discussed in
your post Resolved.

I was 'shooed-away' without receiving any consideration.

I returned again, and was, first, subjected to 'ridicule' within a
group who heard a brief presentation, and then 'shooed-away' - they
could not even see that I'd presented all they were looking for.

The acquisition of understanding is Hard.

The problem has been that folks've built themselves into 'corners'
comprised of stuff with respect to which they're already 'familiar',
and then they 'wonder' why they cannot see beyond the 'corners' into
which they've worked themselves.

All that 'ai' has to do is 'see' out of the confines of such
'corners'.

That's what I brought to the MIT AI Lab in the early 80s.

I'm still waiting to be heard [and, of course, have advanced it all
tremendously during the intervening years].

I suppose I'll be 'shooed away' yet again?

K. P. Collins, developer of Neuroscientific Duality Theory.

--
"Schmitd! Schmitd! Ve vill build a Shapel!"

"Marvin Minsky" <min...@media.mit.edu> wrote in message
news:f04e2625.03051...@posting.google.com...


| Thanks, David. Long time, no see!
|
| David Longley <Da...@longley.demon.co.uk> wrote in message
|
| Here is more or less what I told that reporter. Naturally, the
| important parts did not get reported.
|
| Most early researchers in artificial intelligence aimed to build
| machines that would become as smart as people are. They developed
| many ideas about how to represent knowledge in machines, and about
| ways to reason by using that knowledge. This led to many successful
| projects, such as programs that recognize various patterns such as
| sounds of words, printed characters, faces, and other particular
| objects - and answering questions about certain specialized fields of
| knowledge. Today such programs are all around us and we tend to take
| them for granted: by the 1980's, many of these specialized, so-called
| "expert systems" had become widely productive and popular.
|
| However there was a problem with those programs: for each new kind of
| problem we had to construct an almost entirely new such system. This
| was because all of them lacked what people call "commonsense
| knowledge." None of those systems was able to adapt itself to solve
| other problems that it had not been programmed to solve.
|
| A second major deficiency, which I'll say more about below, was the
| use of programming techniques that made it almost infeasible for the
| programs to reflect on their own performance. Reflective and
| self-reflective thinking is perhaps what most distinguishes us from
| our animal relatives - and is likely to be what distinguishes
| present-day programs from the successors we hope to replace them with!
|
| To solve a hard problem, one usually needs to know a good deal - both
| about that particular subject, and also about how to solve problems in
| general. But only one major researcher focused intense research on
| how to represent commonsense knowledge, in a computer. That was
| Douglas Lenat, who developed a system called CYC. Today, CYC contains
| a substantial amount of such knowledge. The knowledge in CYC was
| compiled by people, in a meticulous, tedious process. However, CYC
| still has far from enough of this to compete with a two or three year
| old child.
|
| Unfortunately, in my view, the rest of the artificial intelligence
| community tried, instead, to make their computers do this by
| themselves - by trying to build what I call 'baby machines', which were
| supposed to learn from experience. These all failed to make much
| progress because (in my view) they started out with inadequate schemes
| for learning new things. You cannot teach algebra to a cat; human
| infants are already equipped with architectural features to equip them
| to think about the causes of their successes and failures and then to
| make appropriate changes.
|
| Many other researchers went in the direction of trying to build
| evolution-based systems. These were to begin with very simple
| structures and then (by using some scheme for mutation and then
| selection) evolve more architecture. This includes what are called
| "neural networks" and "genetic" programs - which have often solved
| interesting problems, but have never reached high intellectual levels.
| In my view, this was because they were not designed to have the
| ability to analyze and reflect on what they had done - and then make
| appropriate changes; they were not equipped to improve or learn new
| ways to represent knowledge or make plans to solve new kinds of
| problems.
|
| Yet other researchers built systems that were based on logic - hoping
| that through being precise and unambiguous, these would be very
| dependable. However, in my view, the very precision of those systems
| prevented them from being able to reason by analogy - which, in my
| view, is at the heart of how people think. (And the logical systems in
| current use make it virtually impossible to support the kinds of
| self-reflective processes that they would need to improve their own
| operations.)
|
| Many other researchers designed robots to do various kinds of
| specialized tasks. We see this as an epidemic that has infected almost
| every university. Those researchers hoped that by starting with
| simple jobs they would learn enough that, eventually, they would
| become able to design robots that could progressively solve
| increasingly hard and more important problems. However, so far as I
| can see, those robot builders never went far past that initial level.
| Each such robotic project may have had some good ideas - but none of
| the architectures were adequate to represent, and then identify the
| deficiencies in their high levels of performance. Instead, they
| mostly wasted students' time repeating techniques whose limitations
| were understood in the 50s and 60s - or dealing with intermittent
KP_PC

May 17, 2003, 9:54:29 PM
"Unmesh Kurup" <un...@NOSPAM.yahoo.com> wrote in message
news:b9ulrk$oc0$1...@news.cis.ohio-state.edu...

|
| "Marvin Minsky" <min...@media.mit.edu> wrote in message
| news:f04e2625.03051...@posting.google.com...
| <snip>
| > [...]
| But what exactly is commonsense knowledge?
| [...]

TD E/I-minimization.

K. P. Collins


KP_PC

May 17, 2003, 10:18:34 PM
"nuc_leus" <nukleus@in_valid.you> wrote in message
news:ba0a25$1ue4$1...@news.kiev.sovam.com...

| In article <f04e2625.03051...@posting.google.com>,
| min...@media.mit.edu (Marvin Minsky) wrote:
| >[...]
| [...]

|
| Now, at this junction the man himself has become but
| a robot, or a bio-robot as I call it. Man himself has become
| programmed with the most trivial ideas and has become
| but a functioning machine and for what purpose?
| Anybody knows?
| [...]

Yeah, I do.

K. P. Collins


james

May 17, 2003, 11:50:24 PM
There is a major difference between today's computers and us. Computers
will never argue with each other about what is correct and what is not
correct in the way we argue. All computers can do is logical derivation. I
completely agree with Marvin's opinion - which is actually: "I feel that his
opinion is correct".

Marvin's statements are not completely logical. That is why people can
easily disagree with them for a good reason. In other words, we have a
difference between our "common senses".

AND this is the kind of common sense that today's computers don't have.

james

"Marvin Minsky" <min...@media.mit.edu> wrote in message
news:f04e2625.03051...@posting.google.com...

> > >In a Wired article http://www.wired.com/news/technology/0,1282,58714,00.html
> > >Marvin Minsky decries the direction A.I. has taken in the past 15 years-
> > >mainly exploring "trivial problems". Lots of debate on this and Minsky-bashing
> > >in a slashdot.org follow up.
> > >(Marvin used to post often this group before the bots took over.)
> >
> > He still will post here if he has something to say.
> >
> > One thing in that article which warrants more careful thought than it is
> > likely to get is the end:
> >
> > o "AI researchers also may be the victims of their own success. The
> > public takes for granted that the Internet is searchable and that
> > people can make airline reservations over the phone -- these are
> > examples of AI at work.
> >

Acme Debugging

May 18, 2003, 12:32:27 AM
nuk...@invalid.addr (nukleus) wrote in message news:<ba40if$6ec$2...@news.ukr.net>...

Hi Nuculer! Well I finally got the garage cleaned out.

First, though you insult me, it stems from your virtuality and thus I
can't take it personally. In fact I like you :-)

More than anything else, looking in from the outside, I've noticed
that you suffer from a common liability of virtualness - the lack of a
well-formed philosophy.

In my work on Surgically-Maximized Learning (throwing out old
philosophy books on the garage floor) I've come across an excellent
hand-picked choice. It is the philosophy of Tsunesaburo Makiguchi,
early 20th century Japanese educator and geographer.

Most important is "rootedness," obviously a crucial complement to
virtualness: hands-on learning grounded in the community, promoting
c.a.p.-style democracy, eschewing imperialistic trade and capitalism
that works against such community, and learning to empathize with your
computer (also rocks), another prerequisite for grounded learning in
c.a.p. An excellent introduction is here:

http://www.swaraj.org/shikshantar/ls3_bethel.htm

I'll be rooting for your "rootedness." If you feel some temporary
resistance to such mind-altering exposure, just let it out. We
shouldn't expect immediate results.

Larry

Neil W Rickert

unread,
May 18, 2003, 12:45:48 AM5/18/03
to
"KP_PC" <k.p.c...@worldnet.att.net> writes:

>I'm still waiting to be heard [and, of course, have advanced it all
>tremendously during the intervening years].

>I suppose I'll be 'shooed away' yet again?

If you are 'shooed away', it will probably be because of your top
posting.

>K. P. Collins, developer of Neuroscientific Duality Theory.

If you want to sound like a crackpot, then you are going about it
the right way.

If you actually have something to say, give a web link to your
theory.

KP_PC

unread,
May 18, 2003, 12:51:09 AM5/18/03
to
This was in the early 1980s.

K. P. Collins

"KP_PC" <k.p.c...@worldnet.att.net> wrote in message
news:tEBxa.161391$ja4.7...@bgtnsc05-news.ops.worldnet.att.net...

| | > >[...]


Ralph Daugherty

unread,
May 18, 2003, 1:00:36 AM5/18/03
to

dan michaels wrote:
> David Longley <Da...@longley.demon.co.uk> wrote in message news:<5Ia8gGAr...@longley.demon.co.uk>...
>
> .....................


>
>>>However, it is not clear, from his recent comments, whether he
>>>believes any of what is in SOM. The recent comments seem to indicate
>>>the opposite - that we should forget everything else except the one
>>>approach.
>>
>>Which comments are those? Surely not the elaborated interview notes he
>>posted?
>>
>
>

> Hi David, not quite. The threads on the various groups were originally
> started because of what was attributed to MM in the Wired mag article.
> The comments quoted were much more caustic than his responses on this
> forum. It's possible that he was grossly misquoted. I don't really
> know.
>
> However, even from what he has said here, it seems to me he is
> basically dismissing everything other than the Cyc-like approach - and
> doing it very prematurely. Many believe that some of those other
> approaches show great promise, even if they are at a toy-level of
> solution today. But given 30-40 more years of improvement, who knows.
> Reactive AI and neural nets, for instance, have so far expended only a
> tiny fraction of the funds and man-power that symbolic AI has, and
> have hardly been around for 50 years. So let's not judge them
> prematurely.
>
> I'm not an insider, but that's kind of how it looks from the peanut
> gallery.
>
>
> - dan michaels
> =========================


It didn't quite seem that way to me, dan (although I didn't read the Wired
article). It seems he is dismissing the level of direction of effort being
made in robotics and I would say the quality of effort made by those in AI
research overall. For example, he mentioned the baby AI system approach meant
for an AI system to learn but pointed out they were not given an adequate way
of learning, or something to that effect. I have no idea of the nature or
quality of AI research, but it seems to me that he would have given a similar
thumbs up to a ten-year effort at a learning system with continuing efforts by
designers at improving information acquisition and assimilation as he did for
Lenat's 10 year effort in creating expert system life activity scenarios as a
learning methodology.

My own take on sample CYC scripts I've seen in the press were that they
were incredibly simple, like Basic code shown on the screen in Terminator.
Like eating at a restaurant, ordering, tipping, paying the bill, compliments
to the chef, etc. Not that the approach is not sound, but the level of detail
required for the simplest of activities is far beyond what people can sit and
pseudo code in script language, pseudo code in the sense that they contain
logic like IF this ELSE that etc., if that is the intent. If the intent at
this stage is to be able to place text in context by reasoning what is being
referred to as statements are being made, I can see it working for that, but
it must be a difficult and frustrating task for humans to try to capture the
essential details of activities. If it's so difficult to encode, it must be
even more so to extract logical conclusions from them.
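[The IF-this-ELSE-that script logic described above can be caricatured in a few lines of Python. This is purely a hypothetical sketch to illustrate the flavor of such scripts; the step names, conditions, and data layout are invented here and are not actual CycL.]

```python
# Toy sketch of a Cyc-style activity script (hypothetical; not real CycL).
# Each step may carry a condition, mimicking the IF-this-ELSE-that
# branching described above for the "eating at a restaurant" scenario.

RESTAURANT_SCRIPT = [
    {"step": "enter", "then": "wait to be seated"},
    {"step": "order", "then": "eat"},
    {"step": "eat",   "if": "food was good",
                      "then": "compliment the chef",
                      "else": "skip compliments"},
    {"step": "pay",   "if": "service was good",
                      "then": "tip 15-20%",
                      "else": "tip less"},
]

def run_script(script, facts):
    """Walk the script in order, choosing branches from known facts."""
    actions = []
    for entry in script:
        cond = entry.get("if")
        if cond is None or facts.get(cond, False):
            actions.append(entry["then"])
        else:
            actions.append(entry["else"])
    return actions

actions = run_script(RESTAURANT_SCRIPT,
                     {"food was good": True, "service was good": False})
print(actions)
```

Even this toy version hints at the problem rd raises: every condition ("food was good") must itself be encoded by hand, and the number of such conditions for real activities grows without obvious bound.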

Concerning the robotics aspect of it, my own opinion is that it is
unethical to create human like robots as if we were attempting to create life.
The main reason is that we have humans who we are emulating and essentially,
whether desired or not, attempting to displace. It is attempting to create a
machine slave race to incredibly attempt to have even cheaper labor than our
current third world humans provide. Of course it would be justified by being
used for dangerous work such as mining, deep sea work, space work, explosives,
etc., but once developed they would replace manual labor as all machines have.
The good news is that I think it will be impossible to make an android cheap
enough to replace humans, even if it works around the clock, but specialized
robots with human level reasoning would be cheaper and replace what few
manufacturing jobs are left that aren't currently replaced by automotons.

The analogy can be carried further to thinking humans. We have people
available to think and reason, unfortunately many more people available than
jobs that pay to think and reason. There's plenty of reason to work on making
software smarter at reasoning but no good reason to emulate a human, in my
opinion.

rd

KP_PC

unread,
May 18, 2003, 1:04:54 AM5/18/03
to
"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
news:ba735s$est$1...@husk.cso.niu.edu...

| "KP_PC" <k.p.c...@worldnet.att.net> writes:
|
| >I'm still waiting to be heard [and, of course,
| >have advanced it all tremendously during
| > the intervening years].
|
| >I suppose I'll be 'shooed away' yet again?
|
| If you are 'shooed away', it will probably
| be because of your top posting.

Yeah, yeah - I 'love' the way folks 'think' that trivial 'rules' are
more-important than information-content, and the way they proceed on
that basis.

"Information-content? Look man, put your comments into the proper
form, or die." :-]

| >K. P. Collins, developer of Neuroscientific
| Duality Theory.
|
| If you want to sound like a crackpot, then
| you are going about it the right way.

How is stating my Authorship 'crackpot'?

| If you actually have something to say,
| give a web link to your theory.

There you go again - 'If it's anything [after Letterman], then it
will have a web page.'

I don't have a web page.

All I have is =COMPLETE RESOLUTION= of the so-called "AI" problem.

As I was explaining in the post to which you replied, I go in-person
to discuss the work I've done.

Why don't you let me bring it to NIU?

K. P. Collins

Message has been deleted

Ralph Daugherty

unread,
May 18, 2003, 4:33:24 PM5/18/03
to

dan michaels wrote:
> Ralph Daugherty <rdau...@columbus.rr.com> wrote in message news:<3EC7134...@columbus.rr.com>...


>
>
>
>> It didn't quite seem that way to me, dan (although I didn't read the Wired
>> article).
>

> .........................
>
>
> I am beginning to think many people here did not read it, but are only
> responding according to previous posts and concepts of AI. Read it -
> it will only take 60 sec.


I read his elaboration here. He said he was, as usual, misquoted. I read
all the comments on /. from his former students and others in AI. I read all
the comments in this thread. I only commented to correct an interpretation
that was obvious even to me. Reading it will make my comments more informed
in what manner?

rd

Ralph Daugherty

unread,
May 18, 2003, 5:14:53 PM5/18/03
to

and checking my post I guess you were the one whose interpretation was
obviously wrong to me, and you're the most intelligent poster of the few who
post here. That's sad.

It didn't quite seem that way to me, dan (although I didn't read the Wired

Message has been deleted

nukleus

unread,
May 19, 2003, 12:28:58 AM5/19/03
to
In article <tEBxa.161391$ja4.7...@bgtnsc05-news.ops.worldnet.att.net>, "KP_PC" <k.p.c...@worldnet.att.net%REMOVE%> wrote:
>I came to the AI Lab at MIT with all of the stuff you've discussed in
>your post Resolved.
>
>I was 'shooed-away' without receiving any consideration.
>
>I returned again, and was, first, subjected to 'ridicule' within a
>group who heard a brief presentation, and then 'shooed-away' - they
>could not even see that I'd presented all they were looking for.
>
>The acquisition of understanding is Hard.
>
>The problem has been that folks've built themselves into 'corners'
>comprised of stuff with respect to which they're already 'familiar',
>and then they 'wonder' why they cannot see beyond the 'corners' into
>which they've worked themselves.
>
>All that 'ai' has to do it 'see' out of the confines of such
>'corners'.
>
>That's what I brought to the MIT AI Lab in the early 80s.
>
>I'm still waiting to be heard

They won't hear you.

Just speak up here on comp.ai.philosophy.
This is about the only place left
and even this one will soon be taken over
by the "moderators" aka the oppressors of thought.

Good luck.

nukleus

unread,
May 19, 2003, 12:32:48 AM5/19/03
to

Good. That means you have at least some degree of honesty.

Neil Rickert is but a tightest assed fool around,
who thinks he is a marble statue of some sort.
All he is, is but intensely stuck fool,
stuck in the cockpit of his mind,
never able to get out of that rut.

>All I have is =COMPLETE RESOLUTION= of the so-called "AI" problem.

Wow. I like that!

Who knows, may be i'll have a chance to see it one day.

nukleus

unread,
May 19, 2003, 12:36:00 AM5/19/03
to
In article <4b4b6093.03051...@posting.google.com>, d...@oricomtech.com (dan michaels) wrote:
>David Longley <Da...@longley.demon.co.uk> wrote in message
> news:<5Ia8gGAr...@longley.demon.co.uk>...
>
>......................

>> >However, it is not clear, from his recent comments, whether he
>> >believes any of what is in SOM. The recent comments seem to indicate
>> >the opposite - that we should forget everything else except the one
>> >approach.
>>
>> Which comments are those? Surely not the elaborated interview notes he
>> posted?
>>
>
>Hi David, not quite. The threads on the various groups were originally
>started because of what was attributed to MM in the Wired mag article.
>The comments quoted were much more caustic than his responses on this
>forum. It's possible that he was grossly misquoted. I don't really
>know.
>
>However, even from what he has said here, it seems to me he is
>basically dismissing everything other than the Cyc-like approach - and
>doing it very prematurely. Many believe that some of those other
>approaches show great promise,

They "showed great promise" for at least half a century.
And the result is?

> even if they are at a toy-level of
>solution today.

Not quite toys. If you can put ANY country on its knees
in about 3 months, then...

>But given 30-40 more years of improvement, who knows.

Mankind does not have this time span to "play" with
those "toys" of obscene.

>Reactive AI and neural nets, for instance, have so far expended only a
>tiny fraction of the funds and man-power that symbolic AI has, and
>have hardly been around for 50 years. So let's not judge them
>prematurely.

Boring stuff.

nukleus

unread,
May 19, 2003, 12:42:33 AM5/19/03
to
In article <35fae540.03051...@posting.google.com>, L.F...@lycos.co.uk (Acme Debugging) wrote:
>nuk...@invalid.addr (nukleus) wrote in message
> news:<ba40if$6ec$2...@news.ukr.net>...
>> In article <4b4b6093.03051...@posting.google.com>,
> d...@oricomtech.com (dan michaels) wrote:
>> >nuk...@invalid.addr (nukleus) wrote in message
>> > news:<ba3b5a$49h$2...@news.ukr.net>...
>> >
>> >.......................
>> >> >Speech is a good example. Cats and monkies do not have
>> >> >a specific speech module,
>> >>
>> >> Why do I bother with a cockroach like you?
>> >> Have no idea.
>> >>
>> >
>> >
>> >..... please, do not bother yourself.
>>
>> Get lost, you lil conman.
>
>Hi Nuculer! Well I finally got the garage cleaned out.
>
>First, though you insult me, it stems from your virtuality and thus I
>can't take it personally. In fact I like you :-)

Good. Then we can talk.
Just this very post of yours, when I opened it up,
the tought appeared in my mind
"Well, I like to read what ACME debugging has to say".
Not many can take pride in such a small thingy.

>More than anything else, looking in the from the outside, I've noticed
>that you suffer from a common liability of virtualness - the lack of a
>well-formed philosophy.

Philosophy is such a vast subject and, at the end,
it produced SO lil, that I am not sure this argument
of yours is worth much.

What is philosophy?

When you were a 5-year-old kid,
you had a simple question "who am I and what am I doing here".

Slowly, slowly, they told you to read more and more
and so you did.

At the end, it produced more questions in your mind
than answers.

Philosophy NEVER ANSWERS anything.
It only produces more questions.

THAT is what we are talking about here, right?

Or you have a SINGLE "answer"?

>In my work on Surgically-Maximized Learning (throwing out old
>philosophy books on the garage floor) I've come across an excellent
>hand-picked choice. It is the philosophy of Tsunesaburo Makiguchi,
>early 20th century Japanese educator and geographer.
>
>Most important is "rootedness," obviously a crucial complement to
>virtualness. Hands-on learning grounded in the community promoting
>c.a.p. style democracy, eschewing imperialistic trade and capitalism
>that works agains such community, and learning to empathize with your
>computer (also rocks), another prerequisite for grounded learning in
>c.a.p. An excellent introduction is here:
>
>http://www.swaraj.org/shikshantar/ls3_bethel.htm
>
>I'll be rooting for your "rootedness." If you feel some temporary
>resistance to such mind-altering exposure, just let it out. We
>shouldn't expect immediate results.

Ok, Larry, thanks for your effort.
If I ever have enough time and interest
I might even look at it. Not that it means much
one way or another.

Good luck.

>Larry

nukleus

unread,
May 19, 2003, 12:51:04 AM5/19/03
to
In article <4b4b6093.03051...@posting.google.com>, d...@oricomtech.com (dan michaels) wrote:
>Ralph Daugherty <rdau...@columbus.rr.com> wrote in message
> news:<3EC7EDEB...@columbus.rr.com>...
>Reading it will make you understand why this article engendered
>multiple threads on multiple forums with 1000 or so responses. The
>article had quotes like
>
>.... AI has been braindead ....

I like that one.

>.... students are wasting their lives on these stupid little robots
>.....

I like that also.
They are simply chasing their own tails,
lil did they know they are led to the submission
to the party line approach at the end.

>Simply reiterating the old arguments is not gonna get so much
>excitement.

Ralph Daugherty

unread,
May 19, 2003, 1:22:05 AM5/19/03
to
dan michaels wrote:
> Reading it will make you understand why this article engendered
> multiple threads on multiple forums with 1000 or so responses. The
> article had quotes like
>
> ... AI has been braindead ....
> ... students are wasting their lives on these stupid little robots
> ....
>
> Simply reiterating the old arguments is not gonna get so much
> excitement.


There were 400 comments on /. that I read, and of course those comments
were discussed at length. There were a few more here, I don't know where the
other forums on this subject are. This usenet forum was referenced in one /.
post, and I was fortunate to see Minsky's elaboration when I came here.

The response to the old arguments sounded like give neural nets a few more
decades before thinking about tossing it on the heap of Minsky's symbolic
logic, in other words, every approach gets 50 years before being dumped, I
guess. And that has nothing to do with AI students building robots anyway.

Minsky is correct, but of course cognitive dissonance would prevent most
of those engaged in the approaches being questioned from agreeing. In any
event, I stated that I believe he would have given credit to any long term
effort to make software smarter as he did with Lenat's Cyc, versus limiting
his bestowment of recognition to only that one approach. That was my point.
It seems straightforward from his comments. Building little robots from
scratch over and over to respond to sensor input like insects is not making
software smarter and has nothing to do with artificial intelligence. The
father spoke, but the misbehaving children don't like the message.
Identifying events from data patterns with neural nets is something, but it is
still just an event. What software does to put the events in context and
reason with them is what would comprise intelligence. And that is what he
said the AI community has failed in advancing. I agree, otherwise you should
just rename your field to data mining and be done with it.

rd

Neil W Rickert

unread,
May 19, 2003, 1:27:14 AM5/19/03
to
"KP_PC" <k.p.c...@worldnet.att.net> writes:
>"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
>news:ba735s$est$1...@husk.cso.niu.edu...
>| "KP_PC" <k.p.c...@worldnet.att.net> writes:

>| >I suppose I'll be 'shooed away' yet again?

>| If you are 'shooed away', it will probably
>| be because of your top posting.

>Yeah, yeah - I 'love' the way folks 'think' that trivial 'rules' are
>more-important than information-content, and the way they proceed on
>that basis.

On the contrary, information content is what matters. But
information depends on context. When you top post, you lose the
context and fail to convey the information that you intended.

>| >K. P. Collins, developer of Neuroscientific
>| Duality Theory.
>|
>| If you want to sound like a crackpot, then
>| you are going about it the right way.

>How is stating my Authorship 'crackpot'?

You already answered this above, when you implied that information
content is important. You haven't provided any.

As the saying goes, you can't judge a book by its cover. You are
providing only the cover, and then boasting about it.

When you repeatedly boast, but fail to defend your claims, you behave
as crackpots typically do.

>| If you actually have something to say,
>| give a web link to your theory.

>There you go again - 'If it's anything [after Letterman], then it
>will have a web page.'

>I don't have a web page.

Then start posting some of the ideas of your theory. And hold
the bragging until people have had time to see for themselves.

>All I have is =COMPLETE RESOLUTION= of the so-called "AI" problem.

This is conceivable, although quite unlikely.

>As I was explaining in the post to which you replied, I go in-person
>to discuss the work I've done.

>Why don't you let me bring it to NIU?

Nobody is blocking you.

Nobody will be inviting you either, as long as you provide only empty
bragging.

KP_PC

unread,
May 19, 2003, 1:41:05 AM5/19/03
to
"nukleus" <nuk...@invalid.addr> wrote in message
news:ba9mph$j0e$3...@news.ukr.net...

| In article <WvExa.161758$ja4.7...@bgtnsc05-news.ops.worldnet.att.net>,
| "KP_PC" <k.p.c...@worldnet.att.net%REMOVE%> wrote:
| >"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
| >news:ba735s$est$1...@husk.cso.niu.edu...
| >| "KP_PC" <k.p.c...@worldnet.att.net> writes:
| >| [...]

| >All I have is =COMPLETE RESOLUTION= of the so-called "AI" problem.
|
| Wow. I like that!
|
| Who knows, may be i'll have a chance to see it one day.

| [...]

I Hope you do :-]

Cheers, K. P. Collins

KP_PC

unread,
May 19, 2003, 1:57:23 AM5/19/03
to
"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
news:ba9pvi$764$1...@husk.cso.niu.edu...

| "KP_PC" <k.p.c...@worldnet.att.net> writes:
| >"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
| >news:ba735s$est$1...@husk.cso.niu.edu...
| >| "KP_PC" <k.p.c...@worldnet.att.net> writes:
|
| >| >I suppose I'll be 'shooed away' yet again?
|
| >| If you are 'shooed away', it will probably
| >| because of your top posting.
|
| >Yeah, yeah - I 'love' the way folks 'think' that trivial 'rules' are
| >more-important than information-content, and the way they proceed on
| >that basis.
|
| On the contrary, information content is what matters. But
| information depends on context. When you top post, you lose the
| context and fail to convey the information that you intended.

In this case, I top-posted deliberately.

| >| >K. P. Collins, developer of Neuroscientific
| >| Duality Theory.
| >|
| >| If you want to sound like a crackpot, then
| >| you are going about it the right way.
|
| >How is stating my Authorship 'crackpot'?
|
| You already answered this above, when you implied that information
| content is important. You haven't provided any.

You can find more than you want to know by doing Groups Googles. I've
been discussing the basics of my work online for 15 years.

| As the saying goes, you can't judge a book by its cover. You are
| providing only the cover, and then boasting about it.

Nope, but I didn't provide any indication that Groups Googles'll
pay-off.

| When you repeatedly boast, but fail to defend your claims, you behave
| as crackpots typically do.

When you repeatedly make the same unfounded 'point', so do you :-]

| >| If you actually have something to say,
| >| give a web link to your theory.
|
| >There you go again - 'If it's anything [after Letterman], then it
| >will have a web page.'
|
| >I don't have a web page.
|
| Then start posting some of the ideas of your theory. And hold
| the bragging until people have had time to see for themselves.

The basics are out-there. Do some Groups Googles. I'm 'tired' of
saying the same stuff over and over again.

| >All I have is =COMPLETE RESOLUTION= of the so-called "AI" problem.
|
| This is conceivable, although quite unlikely.

It's Truth. [Of course, it's a COMPLETE RESOLUTION =beginning= - the
theory will continue to be refined ad infinitum, but my own Life is
Finite, Thank God.]

| >As I was explaining in the post to which you replied, I go in-person
| >to discuss the work I've done.
|
| >Why don't you let me bring it to NIU?
|
| Nobody is blocking you.
|
| Nobody will be inviting you either, as long as you provide only empty
| bragging.

Do some Groups Googles, then invite me. [Be forewarned, you'll find a
lot of stuff that seems 'extraneous', but none of it is actually
'extraneous'. It's a long story, which is also something that I'm
'tired' of reiterating [it's in the Googles].]

I've Solved the Human Nervous system to a degree commensurate with
Newton's work in the old Natural Philosophy, and have,
simultaneously, shown how to implement it all algorithmically [among
other things, it includes explanations of creativity, curiosity and
volition].

It's not 'boasting'.

It's just the work I've done.

K. P. Collins

KP_PC

unread,
May 19, 2003, 2:03:07 AM5/19/03
to
"nukleus" <nuk...@invalid.addr> wrote in message
news:ba9mib$j0e$2...@news.ukr.net...

| In article <tEBxa.161391$ja4.7...@bgtnsc05-news.ops.worldnet.att.net>,
| "KP_PC" <k.p.c...@worldnet.att.net%REMOVE%> wrote:
| >[...]

| >I'm still waiting to be heard
|
| They won't hear you.
|
| Just speak up here on comp.ai.philosophy.
| This is about the only place left
| and even this one will soon be taken over
| by the "moderators" aka the oppressors of thought.

The problem is that I've already discussed the fundamentals. I want
to save the more-advanced stuff so I have something to offer folks
who'll hear it in-person.

Otherwise, my work just gets ripped-off.

| Good luck.
| [...]

Thank you, Mr. "Breath-of-Fresh-Air", ken

Ralph Daugherty

unread,
May 19, 2003, 2:04:45 AM5/19/03
to

what, you haven't learned how to cut and paste yet?

rd

KP_PC

unread,
May 19, 2003, 2:19:32 AM5/19/03
to
"Ralph Daugherty" <rdau...@columbus.rr.com> wrote in message
news:3EC873D5...@columbus.rr.com...

|
| what, you haven't learned how to cut and paste yet?
|
| rd

I just try to avoid Infinite-loops.

The basics are out-there - accessible via Groups Googles.

I've explained my position in other posts here in c.ai.ph.

K. P. Collins

| KP_PC wrote:
| > "Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
| > news:ba9pvi$764$1...@husk.cso.niu.edu...
| > | "KP_PC" <k.p.c...@worldnet.att.net> writes:
| > | >"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
| > | >news:ba735s$est$1...@husk.cso.niu.edu...
| > | >| "KP_PC" <k.p.c...@worldnet.att.net> writes:
| > |

| > | >| >I [...]


Ralph Daugherty

unread,
May 19, 2003, 3:01:30 AM5/19/03
to

so you're just expressing your displeasure at being unappreciated?

rd

KP_PC

unread,
May 19, 2003, 3:24:24 AM5/19/03
to
"Ralph Daugherty" <rdau...@columbus.rr.com> wrote in message
news:3EC88122...@columbus.rr.com...

|
| so you're just expressing your displeasure at being
| unappreciated?
|
| rd

Sorry, no 'fish' on this end.

K. P. Collins

--
"Schmitd! Schmitd! Ve vill build a Shapel!"

| KP_PC wrote:

Acme Debugging

unread,
May 19, 2003, 6:37:35 AM5/19/03
to
nuk...@invalid.addr (nukleus) wrote in message news:<ba9nbq$j0e$5...@news.ukr.net>...

> In article <35fae540.03051...@posting.google.com>, L.F...@lycos.co.uk (Acme Debugging) wrote:

>Philosophy NEVER ANSWERS anything.

I see you are also a proponent of Surgically-Maximized Learning.

>THAT is what we are talking about here, right?
>
>Or you have a SINGLE "answer"?

I never have a single reason for a post. For instance, my email
address is a spam trap. Some of my posts are designed to
generate email correspondence, which you don't get with
most types of posts. Another thing I really needed to do was
trademark "Surgically-Maximized Learning." (You will find no
other instance on the internet, and no better way of saying it).
I know another who would appreciate the link, as we have a running
dialog with subtle messages in unrelated posts. I had a theory
about you and wanted to test it. It had a subtle message for Wildstar
as well. I am always seeking criticism of my ideas and presentation.
There was vaguely some provocative invitation to some other
posters. As an organizational consultant associated with some software
houses interested in AI applications, I am the greedy capitalist. Many
of my posts are just convenient forever fail-safe parking places for
links with email correspondents and myself. And there was an explicit
reason, as below. Let's see, that's eleven reasons so far, not even
considering future historians over the next 10k years (nothing really
for them in that post). And I am forever the presumptuous educator of
newsgroup newbies, thus this list.

>>http://www.swaraj.org/shikshantar/ls3_bethel.htm

>Ok, Larry, thanks for your effort.
>If I ever have enough time and interest
>I might even look at it. Not that it means much
>one way or another.

I have had second thoughts. I've always considered it against the
Golden Rule for one to enforce a philosophy on another, by any means,
except for one's child, regardless of how good. It attempts to steal
another's free will.

I know that one is supposedly exempt from the Golden Rule
with virtual persons, but that is a naive interpretation looking at
only one side of the transaction. For instance Christ, who is one of
the few philosophers I still value (besides Makiguchi of course :-),
was not so simple-minded as that. There are at least two other meanings
of the Golden Rule.

Take the Holocaust. One usually focuses on the plight of the victims.
But what harm was done to the perpetrators? What harm to humans,
now that all can know the full extent of our nature? What kind of
person would not suffer greatly in their life, knowing the previously
unrealized extent of this nature? So virtualness is not that simple.

If you have no interest in this game, no matter. Obviously I favor the
shotgun approach. I can always find something to do.

>It only produces more questions.

You mean stupid questions. But surgically-maximized, it can help
you to form non-stupid questions. Google on the quote: "The power
of the question" (Ignoring commercial ripoffs of the quote).

Larry

nukleus

unread,
May 19, 2003, 8:44:52 AM5/19/03
to
In article <35fae540.03051...@posting.google.com>,
L.F...@lycos.co.uk (Acme Debugging) wrote:
>nuk...@invalid.addr (nukleus) wrote in message
> news:<ba9nbq$j0e$5...@news.ukr.net>...
>> In article <35fae540.03051...@posting.google.com>,
> L.F...@lycos.co.uk (Acme Debugging) wrote:
>
>>Philosophy NEVER ANSWERS anything.
>
>I see you are also a proponent of Surgically-Maximized Learning.
>
>>THAT is what we are talking about here, right?
>>
>>Or you have a SINGLE "answer"?
>
>I never have a single reason for a post. For instance, my email
>address is a spam trap. Some of my posts are designed to
>generate email correspondence, which you don't get with
>most types of posts. Another thing I really needed to do was
>trademark "Surgically-Maximized Learning." (You will find no
>other instance on the internet, and no way of saying it as good).
>I know another who would appreciate the link, as we have a running
>dialog with subtle messages in unrelated posts. I had a theory
>about you

Theory about ME?

Full service.
I promise.

>and wanted to test it. It had a subtle message for Wildstar
>as well. I am always seeking criticism of my ideas and presentation.

Good. I call it fresh mind.
The one that is not afraid.
Just the other way around.
It is called search for Truth.

Yes, you'd have to sacrifice you lil ego
and be even insulted and abused.
But it is worth a try.
Because you have a chance
to see something at the end
you never seen before
and that is joy unlike any other that I know of.

>There was vaguely some provocative invitation to some other
>posters.

I've seen some people trying to make some resemblance
of sound here in regard to me.
Sorry, I can not be of much assistance.
But I respect them for even trying.
Not every single one of them.
But at least some.

Others just wither away.
There is nothing much I can do about that.
I simply have not much interest.

>As an organizational consultant associated with some software
>houses interested in AI applications, I am the greedy capitalist.

I don't like those.

VERY much.

Zo...

Be my friend.

I'll do my best.
If I can manage.

>Many
>of my posts are just convenient forever fail-safe parking places for
>links with email correspondents and myself.

Well, what to do.

Just tell me one thing: Are you an honest man?

Cause I don't screw with suckitalists
if they themselves do not even consider themselves
to be honest.

Sorry, I have no time.
I am a busy entity.

> And there was an explicit
>reason, as below. Let's see, that's eleven reasons so far, not even
>considering future historians over the next 10k years

Could I care less?

>(nothing really
>for them in that post). And I am forever the presumptious educator of
>newsgroup newbies, thus this list.
>
>>>http://www.swaraj.org/shikshantar/ls3_bethel.htm

Nope, I do not visit links usually.

You'd have to work it RIGHT here.

>>Ok, Larry, thanks for your effort.
>>If I ever have enough time and interest
>>I might even look at it. Not that it means much
>>one way or another.
>
>I have had second thoughts. I've always considered it against the
>Golden Rule for one to enforce a philosophy on another, by any means,
>except for one's child, regardless of how good. It attempts to steal
>another's free will.

Oh, free will?

DEEP, DEEP subject.
About time for it to get discussed here on comp.ai.philosophy.

Not sure you have enough tooth for that though.
But...

Who knows. May be you do.

I won't touch this subject unless...

Well, we'll see.

>I know that one is supposedly exempt from the Golden Rule
>with virtual persons,

I do not believe this bullshit about "virtual persons".
Yes, there is some difference in that we don't have to
kick each other's face in the literal sense
and we can not poison each other's lives.

Otherwise, you are as "real" to me as it gets.

I, personally, see no difference whatsoever.

When I fly, I fly.

I do not post here just to become "famous".
With the language I routinely use,
what would be the better way to become "in-famous"?

Zo...

Unless you are simply phoney,
you are as real to me, as it gets
and thanks to this fluidity
the "virtual domain" provides.
Because you have your freedom unconstricted
by the fear of getting stepped upon
by some big red ass.

You think because of this so called virtual domain
I am somehow different than I would have been otherwise?

How?

Are these ideas somehow "unreal"?
Would I have DIFFERENT ideas in "real" domain?

According to what kind of magic?

Oh, the "magic" of fear of survival
and careerism?

Zorry...

I am a "wrong" dude indeed.

Ok, and the next subject is?

> but that is a naive interpretation looking at
>only one side of the transaction.

If you mean there is no "responsibility",
well, it all depends.

"Responsibility" in MY game
is being honest before MYSELF,
not before some jackass that can close the
oxygen supply for me.

> For instance Christ,

Oh...

You wish to speak of THAT?

Not sure I can be of much assistance on this subject.
But Christ is not a toy to play with for me.

If you are willing to go get killed,
knowing 100% it would be the case,
just to stand for what you have,
then...

Then it is a REAL game you play.

Tell me, which one of these AI "giants"
would be willing to do the same
just to speak the Truth they have,
if they have any?

Marvin Minsky?
I have zome reservations.

Who else?

>who is one of
>the few philosophers

For that, you get a slap on your face from me.

Jesus is not a philosopher of ANY kind.
And for you to appreciate what I have said,
you need to even BEGIN to comprehend what is
philosophy.

There were "philosophers" that exceeded him
on the orders of magnitude as far as "philosophy" goes.
Take Germans for example.

But I am not sure how many of you even know their names,
just to put things in perspective.

But you ALL know the name of Jesus.

My advice on this matter:
Do not play with Jesus
as though he was some kind of a toy
or philosophical concept of some sort.

It is a different matter altogether.

I am not an "authority" and have no right
to interpret things one way or another
in THIS domain.

But...

There are some things I am willing to stand for.

BIG, BIG statement.

> I still value (besides Makiguchi of course :-),
>was not so simple-minded as that.

Simple minded is YOU.
As you know not what you are talking about.
I have no time right now to even get into this subject.
It is that vast.

Just tell you one thing:

When Jesus walked this Earth,
you were all on such a primitive level.
You basically lived in cages just about 500 years ago.
Sure, kings lived like kings for millennia.

But...

In that context and as far as the "information content" goes,
he is the master that was able to deliver his message
even to...

Guess...

Even to the simpleton,
to the man, you arrogant fools would claim
has no "intelligence".

THAT is what makes him real.

You, farts, can only talk to the farts of your kind,
confused as much, if not more than YOU are.

But can you talk to a 2 year old child
and he STILL understands you?

Nope.

You can not.
I can guarantee you that much.

You are densely screwed up in your brains.
You know not what is heart.
You know not what beats it.

You are simply stuck on the level of the mind.
Lil did you know it is but a machine.

In THAT respect, you will be one day replaced
with the toys made by Marvin Minsky or his students
or the students of his students.

Lil did you know.
How limited this level is.

At least Marvin, being the shrewd fish he is,
intuitively feels that there is something beyond
this ratrace of survival.

Zo...

Kiss him on the ass for that
and say:

Marvin, I respect you for that,
even if you are but a fox.
But you opened my eyes on something.
Not that you gave me any "answers".
But you hinted me a direction.

I have the "right" to kick him on the ass
and he does not even know who am I
even though some ideas he is toying with
are from MY domain.
But that is ALTOGETHER a different story.
It is not for discussion on comp.ai.philosophy.

>There are at least two other meanings
>of the Golden Rule.

>Take the Holocaust.

You know.
I like you.
At least you are trying to dig deeper
than the shithole.

>One usually focuses on the plight of the victims.
>But what harm was done to the perpetrators? What harm to humans,
>now that all can know the full extent of our nature? What kind of
>person would not suffer greatly in their life, knowing the previously
>unrealized extent of this nature? So virtualness is not that simple.

Sorry. I can not speak on this subject.
I am not ready.

Mortal can not speak of immortality.

>If you have no interest in this game, no matter. Obviously I favor the
>shotgun approach. I can always find something to do.

What would that be?

Do you realize what you raised here?

You mean just another exercise in futility?

What is it that you wish to attain?

Glory?
Fame?
Wealth?
Immortality?
Power?
Influence?
Beauty?

What IS it you wish to play with
and dedicate your life to?

Show me.

>>It only produces more question.

Uhu.

>You mean stupid questions.

ALL kwestions are stupid.
Lil did you know.
VAST subject.

Give me a question that is not stupid.

I'd LOVE to see that one.

>But surgically-maximized, it can help
>you to form non-stupid questions.

I have PLENTY of reasons to doubt about that one.

Feirst of all, NOBODY can help ANYBODY.
Lil did you know.

In order for you to "help" me or anybody else,
you'd need my interest and desire to be helped.
In order for you to lead me toward "light",
I need to have an impetus and interest to walk
toward it.

How many of your giants have been simply slaughtered,
killed, crucified, massacred, burned and mutilated
throughout times?

You wish to speak of Jesus?

Well, then get ready.

The verdict is ALWAYS the same:

The brightest stars that EVER walked this planet Earth
forever get slaughtered.

THAT is why they have a notion of Truth.

If you are THAT strong, that you'd be willing
to sacrifice your entire life, which is the GREATEST
treasure there is, then...

Well, then you are willing to walk in the land of Truth,
even though you are walking among the living dead
and your chances are SO slim,
only if you knew.

Enough.

Good luck.

nukleus

unread,
May 19, 2003, 8:47:57 AM5/19/03
to
In article <ba9pvi$764$1...@husk.cso.niu.edu>, Neil W Rickert
<ricke...@cs.niu.edu> wrote:
>"KP_PC" <k.p.c...@worldnet.att.net> writes:
>>"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
>>news:ba735s$est$1...@husk.cso.niu.edu...
>>| "KP_PC" <k.p.c...@worldnet.att.net> writes:
>
>>| >I suppose I'll be 'shooed away' yet again?
>
>>| If you are 'shooed away', it will probably
>>| because of your top posting.
>
>>Yeah, yeah - I 'love' the way folks 'think' that trivial 'rules' are
>>more-important than information-content, and the way they proceed on
>>that basis.
>
>On the contrary, information content is what matters.

But first you have to define it, you arrogant fool.
Zo...

Feist kwestion would be:

What is information?

Gets its?

Then you'd have to define the notion of "content",
you mind screwing manifestation of arrogance.

And then...

Well, may be...

May be we talk about this subject.

ALL you were able to produce on comp.ai.philosophy
to date is but a fart in the wind and you were
participating here for YEARS now.

Zo...

What HAVE you produced?

A piss against the wind?

>But
>information depends on context.

Uhu, you lil copycat.

> When you top post,

But when the giant, as you must be thinking yourself to be,
engages in guilt manipulation on a group that is nothing
less than a group about the very root of it all,
the philosophical foundation of nothing less than
Intelligence itself, the mother of ALL mothers,
then...

What follows then,
you fool?

>you lose the
>context and fail to convey the information that you intended.

Garbage.
And of the PUREST grade.

All you have here with your sucky argument
is guilt.

Did you see that?

Yes, guilt manipulation trick,
you, holier-than-thou idiot of all idiots.

One more time:

Through all the years of cock fighting here
on comp.ai.philosophy, what have you produced
of ANY value?

Can I see the reference on that SINGLE article?

I'll do my best to shred your sorry ass to dust.
THAT much I can promise.

And I say: You have produced NOTHING.

PURE nothing.
Zero.
Zip.
Zilch.

Well, you DID produce plenty of arrogance
and smart-fartness.
But that is but a fart in the wind.
Gets its?

>>| >K. P. Collins, developer of Neuroscientific
>>| Duality Theory.

>>| If you want to sound like a crackpot, then
>>| you are going about it the right way.

He claims to be a developer of nothing less than
"Neuroscientific Duality Theory". Not that I have
a slightest clue what he is talking about.

But you, arrogant monkey ass impersonator,
just hit him on his face, without even having
the slightest clue what he is talking about,
you jackass.

What are you doing here on comp.ai.philosophy
"after all these years"?
Wasting away?
Go drop some acid or whatever fits your fancy.
Better yet, drop dead, you old fart,
having NOTHING to say of ANY kind of significance.

Because, before you insult someone,
at least have ZOME idea of your grounds.
What is so difficult to grasp, you guilt peddling
idiot? This is a first grade lesson.

>>How is stating my Authorship 'crackpot'?

Because he is "his royalty" Nick Rickert,
you see.

He is such a king of delusion
and he produced SO much waste on comp.ai.philosophy
that this loonatic is deluded to the point
where he can just insult ANY idea,
regardless of what stands behind it,
unless, as a matter of intercourse,
there stands a big red ass behind it,
such as Marvin.

>You already answered this above, when you implied that information
>content is important.

This jackass, Neil, won't be even able to show
what is "information".

Interestingly enough, it is such a vast subject,
that we can quit talking about ANYTHING here
and only talk about information
and we won't be able to make a dent in such a
vast subject.

First, this jackass, Neil, won't be able to
define the MOST critical aspects of information.

Nope, he won't.

I am not even sure he will be able to define
the very term "information", because he is such a
pygmy ass.

Just dig up the Google archives on comp.ai.philosophy
and use him as an author.

You won't believe your eyes.
Well, not sure about YOU,
but I do not believe MY eyes.

It is such a waste of white noise,
not even sure if it belongs to the category
of white noise, which is the LOWEST level
of information. Below white noise there
exists nothing in "scientific" terms.

In other words, PURE grade fart in the wind.

ALL he's got is guilt peddling procedure.
ALL he's got is "holier than thou" attitude.
ALL he's got is confusion and confusion and confusion.
Wrestling in the mud FOR YEARS with another fool.
But at least that fool was merely insulting him
and showing him FOR YEARS that he has NOTHING.
PURE nothing.

I do respect that other fool.
At least he has some sense of humor.
But this jackass, Neil, is such a densely suppressed
ass with such a limited perception, it is amazing to
see him insult someone from the very first statement.

> You haven't provided any.

One more time: go look up the Google archives on his name
in this group.

You'd sooner commit suicide before you see ANYTHING
of "value" in it.

It is all PURE grade mental masturbation
not to insult the term masturbation,
because even that much produces at least SOME joy.

>As the saying goes, you can't judge a book by its cover. You
>are providing only the cover, and then boasting about it.

He [Neil] has nothing.
Not even cover.

He is a miserable being
that never knew any joy in life.

Ask him: Neil, tell me what is joy?

See what he says.

I'd LOVE to see that myself.

Now, Intelligence without joy is what?

See?

It is called dead upon arrival.

That is where this other jackass,
Marvin Minsky, falls on his face
with his obscene attempts to define
"emotional aspects of intelligence".

Yes, at least Marvin played with robots
when no one even thought about that
to any extent worth mentioning.

And yes, Marvin did make and IS making
some hissing sounds about AI and its
shortcomings if not the OUTRIGHT failure
and I respect him for that.

But this jackass Neil?

Not even a fart in the wind,
but a PURE piss AGAINST the wind.
THAT is what Neil is.
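
The post above keeps demanding an answer to "what is information?" without offering one. One concrete, standard handle on the question, which no post in this thread actually invokes, is Shannon's entropy measure; and by that measure, contrary to the claim above, white noise is the highest-entropy signal per symbol even though it carries no meaning. A minimal sketch in plain Python, with no outside libraries:

```python
import math
import random
from collections import Counter

def empirical_entropy(data: bytes) -> float:
    """Shannon entropy of the observed byte distribution, in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly regular signal: only two symbols, used equally often.
structured = b"abababababababab"

# "White noise": bytes drawn uniformly at random (seeded for repeatability).
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(4096))

print(empirical_entropy(structured))  # exactly 1.0 bit per symbol
print(empirical_entropy(noise))       # close to the 8-bit maximum for random bytes
```

Entropy measures unpredictability, not meaning; reconciling the two is exactly the kind of definitional work the exchange above never gets to.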

George

unread,
May 19, 2003, 9:57:46 AM5/19/03
to
Marvin Minsky wrote:
>
> Thanks, David. Long time, no see!
>
> David Longley <Da...@longley.demon.co.uk> wrote in message news:<Lf4AFGAl...@longley.demon.co.uk>...
> > >Marvin Minsky decries the direction A.I. has taken in the past 15 years-
> > >mainly exploring "trivial problems". Lots of debate on this and Minsky-bashing
> > >in a slashdot.org follow up.
> > >(Marvin used to post often this group before the bots took over.)
> >
> > He still will post here if he has something to say.
> >
> > One thing in that article which warrants more careful thought than it is
> > likely to get is the end:
> >
> > o "AI researchers also may be the victims of their own success. The
> > public takes for granted that the Internet is searchable and that
> > people can make airline reservations over the phone -- these are
> > examples of AI at work.
> >
> >
> > "It's a crazy position to be in," said Martha Pollack, a professor at
> > the Artificial Intelligence Laboratory at the University of Michigan
> > and executive editor of the Journal of Artificial Intelligence
> > Research.
> >
> > "As soon as we solve a problem," said Pollack, "instead of looking at
> > the solution as AI, we come to view it as just another computer
> > system."
> >
> >
> > It should be obvious that any effective process is computable, and that
> > as and when previously complex, instantiations of "intelligent" human
> > behaviour are engineered as computer systems, they will no longer be
> > regarded as "intelligent". They were never "intelligent" to start with
> > of course, they were behaviours which (in some instances) people learned
> > by being in an appropriate place at an appropriate time - ie they were
> > programmed.
> >
> > Where and when there are great failures of effective procedures to
> > emulate the same behaviours as "common sense" one should perhaps look
> > more closely at the nature of common sense and its deficiencies. One
> > might have forgiven some of those pursuing this AI holy Grail were it
> > not for the fact that the irrationality of common-sense has been so well
> > documented by psychologists over the past 50 years.
>
> Here is more or less what I told that reporter. Naturally, the
> important parts did not get reported.
>
> Most early researchers in artificial intelligence aimed to build
> machines that would become as smart as people are. They developed
> many ideas about how to represent knowledge in machines, and about
> ways to reason by using that knowledge. This led to many successful
> projects, such as programs that recognize various patterns such as
> sounds of words, printed characters, faces, and other particular
> objects—and answering questions about certain specialized fields of
> knowledge. Today such programs are all around us and we tend to take
> them for granted: by the 1980's, many of these specialized, so-called
> "expert systems" had become widely productive and popular.
>
> However there was a problem with those programs: for each new kind of
> problem we had to construct an almost entirely new such system. This
> was because all of them lacked what people call "commonsense
> knowledge." None of those systems was able to adapt itself to solve
> other problems that it had not been programmed to solve.
>
> A second major deficiency, which I'll say more about below, was the
> use of programming techniques that made it almost infeasible for the
> programs to reflect on their own performance. Reflective and
> self-reflective thinking is perhaps what most distinguishes us from
> our animal relatives—and is likely to be what distinguishes
> present-day programs from the successors we hope to replace them with!
>
> To solve a hard problem, one usually needs to know a good deal—both
> about that particular subject, and also about how to solve problems in
> general. But only one major researcher focused intense research on
> how to represent commonsense knowledge, in a computer. That was
> Douglas Lenat, who developed a system called CYC. Today, CYC contains
> a substantial amount of such knowledge. The knowledge in CYC was
> compiled by people, in a meticulous, tedious process. However, CYC
> still has far from enough of this to compete with a two or three year
> old child.

What an unimaginative name; CYC naturally invites the pronunciation
of "sick", and I suspect it stands to broadcast a deliberate message
of sorts. Such "knife-in-the-subconscious" messages perhaps stand
as a warning to keep the company image where it belongs: caution, military, research, unpopular, superstition. All
thanks to lack of imagination.

Everybody sing: Y...M..C.A!

George

George

unread,
May 19, 2003, 10:19:48 AM5/19/03
to

Adding: mysteries that spark imagination are usually considered a good
thing for the mind. But I wouldn't recommend CYC to kids just because of
its politburo-smelling advertisement. Would you buy a soap that is called
itch? Or, going back to the past, would you buy a black slave who is
called sickman? Well, depends on how much you want to spend on a slave I
guess. Mysteries and extraordinary problem solving capabilities (see
Superman-the movie, Discovery Channel, Mystery Channel et al., etc.)
fascinate us.

???

George

Neil W Rickert

unread,
May 19, 2003, 10:40:02 AM5/19/03
to
"KP_PC" <k.p.c...@worldnet.att.net> writes:
>"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
>news:ba9pvi$764$1...@husk.cso.niu.edu...
>| "KP_PC" <k.p.c...@worldnet.att.net> writes:

>| >How is stating my Authorship 'crackpot'?

>| You already answered this above, when you implied that information
>| content is important. You haven't provided any.

>You can find more than you want to know by doing Groups Googles. I've
>been discussing the basics of my work online for 15 years.

Then post a web link.

>| As the saying goes, you can't judge a book by its cover. You
>| are providing only the cover, and then boasting about it.

>Nope, but I didn't provide any indication that Groups Googles'll
>pay-off.

Yes, it paid off.

From 1996:

This perspective is taken from Neuroscientific Duality
Theory, which is a unified theory of CNS function, cognition,
affect, and behavior, and Tapered Harmony which is a unified
theory of Physical Reality. I achieved closure in both
theories about 5 years ago.

From 1999:

I'll get by, but the fact that this work, after 29 years, is
still censored within typical communication venues is
heart-breaking.

From 1999:

more than a decade ago, i received a Promise of a
"Professional and thorough" hearing (it's in the archives)
from the AAAS.

wanting to be a good Citizen of Science, i'm still waiting
for the results of such.

beyond that, i've zero 'faith' in the virtual reality of the
online environment. it's how the greatest-part of the ab-use
of my work came about, after all.


From 2000:

or it's just me, tired of Jackasses 'borrowing' my work
without giving its stuff to the folks on whose behalves the
work was done.

The record appears consistent. There is never any useful content.

Apparently the reason that you are ignored, is that you have nothing
of interest to say.

>| >| If you actually have something to say,
>| >| give a web link to your theory.

>| >There you go again - 'If it's anything [after Letterman], then it
>| >will have a web page.'

>| >I don't have a web page.

And that turns out to have been a dishonest response. I did not ask
for your web page. I asked for a relevant link. You are now
asserting that there are relevant links on google.

I'm calling your bluff. Provide some relevant links, if there
are any.

George

unread,
May 19, 2003, 12:41:03 PM5/19/03
to
wildstar wrote:
>
> d...@oricomtech.com (dan michaels) wrote in
> news:4b4b6093.03051...@posting.google.com:

>
> > .... please, do not bother yourself.
> >
>
> Just kill filter / *PLONK* file the cockroach.
> How does a moderated version of this NG sound ?
> Naturally, all I need to do is configure the program to read a special
> email account and block the user. Then propagate the emails of others.

Come to think of it, Google and Kim share two common features:

Big brother watching us! Isn't that the only reason we have
to behave? Google.... as Kim, thinks some netiquettes should
solve all their problems. Imagine what it would take for these
communist dictators running Google to shut their "big brother
watching us" oriented company down. They wouldn't go, so why
would Kim?

Hmmmm....

George

George

unread,
May 19, 2003, 1:10:10 PM5/19/03
to

You probably work for google Kim. You are a commie, ey?

George

Eray Ozkural exa

unread,
May 19, 2003, 1:36:49 PM5/19/03
to
Hi!

Could you please elaborate on some of the points with respect to three
paragraphs I quoted below?

1) You seem to say, in theory, something like CYC can in fact embody
common sense knowledge of a 2-3 year old child. But you are on the
other hand pointing out to the tedious nature of hand encoding of
knowledge (a.k.a knowledge engineering). Therefore, should we infer
that traditional knowledge engineering is not feasible? Another
question. Logical ontology description languages separate declaration
from operation. Doesn't CYC? The top-level ontology I looked at
roughly looked like a class system such as CLOS. So what really makes
it different than a logical system? And more importantly, could
hand-encoding knowledge necessarily exclude the flexibility of human
common sense (as in pure logical systems) since many processes are
intertwined with knowledge? I think your view on CYC is rather
important because some people think a) Minsky invented common sense
reasoning b) CYC's purpose is common sense reasoning c) CYC failed d)
Therefore Minsky is wrong. Obviously this isn't a valid way of
argument but I should at any rate say that CYC's lack of popularity
doesn't render your theory obsolete!

2) About "baby" machines. It seems to me that if we had indeed
achieved in making a baby machine then we would have solved the
problem 80%. I don't think we have achieved anything like the mind of
a human baby (or even a spider baby). I would also say that we don't
know the necessary architectural features even for a baby, let alone a
child. What I want to ask is: do you think trying to develop learning
algorithms is doomed to fail or can we "unify" learning with an
architectural outlook to truly achieve a baby's outstanding mental
powers?

3) Some of evolution or ANN research is seen as part of machine
learning which can be used for standard problems like classification,
regression, clustering. But objectively speaking no method in machine
learning can be the absolute best including ANN and GA's. However, for
some specific problems an ANN or GA will be the best like a decision
tree with C4.5 will be the best for another set of problems. For
instance, there is a paper that uses a simple 3-layer network to
predict a stock market's index with 99% sign accuracy. (The actual
number isn't too meaningful but it was still impressive) And you seem
to be verifying "no free lunch theorem" in your drafts of Emotion
Machine by saying that each method is suitable for a class of
problems. So, you probably don't think that learning algorithms are
irrelevant. However, the learning systems don't have a large degree of
reflection and self-reflection, therefore they don't fit in the "big
picture". How can we integrate learning algorithms in a complete mind?
(This is somewhat in addition to 2.)

Regards,

min...@media.mit.edu (Marvin Minsky) wrote in message news:<f04e2625.03051...@posting.google.com>...
> To solve a hard problem, one usually needs to know a good deal—both


> about that particular subject, and also about how to solve problems in
> general. But only one major researcher focused intense research on
> how to represent commonsense knowledge, in a computer. That was
> Douglas Lenat, who developed a system called CYC. Today, CYC contains
> a substantial amount of such knowledge. The knowledge in CYC was
> compiled by people, in a meticulous, tedious process. However, CYC
> still has far from enough of this to compete with a two or three year
> old child.
>

> Unfortunately, in my view, the rest of the artificial intelligence
> community tried, instead, to make their computers do this by
> themselves—by trying to build what I call 'baby machines', which were
> supposed to learn from experience. These all failed to make much
> progress because (in my view) they started out with inadequate schemes
> for learning new things. You cannot teach algebra to a cat; human
> infants are already equipped with architectural features to equip them
> to think about the causes of their successes and failures and then to
> make appropriate changes.
>
> Many other researchers went in the direction of trying to build
> evolution-based systems. These were to begin with very simple
> structures and then (by using some scheme for mutation and then
> selection) evolve more architecture. This includes what are called
> "neural networks" and "genetic" programs—which have often solved
> interesting problems, but have never reached high intellectual levels.
> In my view, this was because they were not designed to have the
> ability to analyze and reflect on what they had done—and then make
> appropriate changes; they were not equipped to improve or learn new
> ways to represent knowledge or make plans to solve new kinds of
> problems.
>
__
Eray Ozkural
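
The "no free lunch" observation in point 3 of the post above can be made concrete with a deliberately tiny experiment. The two learners and both datasets below are invented purely for illustration and come from no post in this thread; they merely show that neither method dominates the other across problems:

```python
# Two toy learners and two toy problems: neither learner wins on both,
# which is the "no free lunch" point in miniature.

def nearest_neighbor(train, x):
    """1-NN on scalar inputs: copy the label of the closest training point."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def majority(train, x):
    """Ignore the input entirely; always predict the most common label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def accuracy(learner, train, test):
    return sum(learner(train, x) == y for x, y in test) / len(test)

# Problem A: the label depends strongly on x, so locality (1-NN) helps.
train_a = [(0.0, 0), (1.0, 0), (2.0, 0), (4.0, 1), (5.0, 1)]
test_a = [(0.5, 0), (4.5, 1)]

# Problem B: labels are a constant plus one stray point, so majority wins.
train_b = [(0.0, 1), (1.0, 1), (2.0, 1), (3.0, 0)]
test_b = [(2.9, 1), (3.1, 1)]

print(accuracy(nearest_neighbor, train_a, test_a))  # 1.0
print(accuracy(majority, train_a, test_a))          # 0.5
print(accuracy(nearest_neighbor, train_b, test_b))  # 0.0
print(accuracy(majority, train_b, test_b))          # 1.0
```

Each learner's inductive bias fits one problem and fails on the other, which is why the question of integrating many methods into one architecture matters.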

George

unread,
May 19, 2003, 1:28:29 PM5/19/03
to

Since Google is here to stay - I have asked the company to nicely
close down and stop providing googling content about people to
their bosses, but they refused to cooperate with me - perhaps
I better make something of value out of TLAITH so communist
generations in the future can use it to fight fascism, communism
and dictatorships around the world. Now a person with a common
sense would say: George don't do that. But what choice do I have
now?

TLAITH is all about no choice. Somebody throws you in the water
when you can't swim and you have no other choice but to learn to
float somehow...

TLAITH...

Hmmmm...

???

:-<

George

CyberLegend aka Jure Sah

unread,
May 19, 2003, 3:54:58 PM5/19/03
to
dan michaels wrote:
> Regards nuccer, he is an obvious product of the soviet system. Prolly
> born about 1980 in or near moscow [ie, urban area]. His mind-control
> masters are dead now, but he hasn't quite learned how to use his
> new-found freedoms to his best advantage. It was too ingrained into
> his young brain throughout his early life that there was only "one
> way", the soviet way. On and on. It's very difficult when you learn
> how many options are truly available, so you tend to put up a wall in
> defense. Even in this country, this is difficult for people who grew
> up since the 60s - prior to that, the way society was run, there just
> were not as many choices. Maybe he will begin to understand this in
> time - [maybe]. He is just now learning who he is and what his powers
> are, but the old ways die hard. I knew some guys like this when I was
> in school, so it is difficult to get too excited.

Excuse me, you will show some respect to that system, considering the
amount of brainwash you capitalists get on your average day of life in
America. I think I better not give you an example, it would cause you
too much trauma...

Observer aka DustWolf aka CyberLegend aka Jure Sah

C'ya!

--
Cellphone: +38640809676 (SMS enabled)

Don't feel bad about asking/telling me anything, I will always gladly
reply.

Trst je naš, Dunaja ne damo; Solmuna pa tud ne. Za vstop v EU. ;]

The future of AI is in technology integration,
we have prepared everything for you:
http://www.aimetasearch.com/ici/index.htm

MesonAI -- If nobody else wants to do it, why shouldn't we?(TM)

George

unread,
May 19, 2003, 4:17:13 PM5/19/03
to

Remove this massive stupid footer, I am not sure what am
I replying to. Well, look, I already forgot what I was
going to reply about.

George

George

unread,
May 19, 2003, 4:29:57 PM5/19/03
to

Ah yeah, everything in the US is called "masterplanned".
Americans live in such communities. They do all thinking
for you so you just live.

Look what a nice spot for a bread store. Forget that kind
of thinking in the US. Don't mess with masterplans! (tm)

George

George

unread,
May 19, 2003, 4:33:27 PM5/19/03
to

In the US you either fit in with the masterplan, or you are out.
I am black.

George

George

unread,
May 19, 2003, 5:03:11 PM5/19/03
to

Uhm, bread bakery is for you Jure...

And a nice grill/restaurant/patio next to it, so the locals
can have a beer outside in the summer.

Bye.

George

George

unread,
May 19, 2003, 5:27:49 PM5/19/03
to

Two flies with a single smack, I mean in a single thread.
There are still a few more flying around here somewhere.

George

George

unread,
May 19, 2003, 5:48:15 PM5/19/03
to

The difference is simple. In Europe, locals dispute and decide
where a restaurant can go and where not. In the US everything
is masterplanned by mega big brother.

George

wildstar

unread,
May 19, 2003, 6:46:01 PM5/19/03
to
George <geo...@nospam.com> wrote in news:3EC90FF2...@nospam.com:


> You probably work for google Kim. You are a commie, ey?
>
> George
>

No, I just happen to use a commie (commodore 64). I do not work for google.
Google is a search engine, moron.

wildstar

unread,
May 19, 2003, 6:56:18 PM5/19/03
to
> Hello wild, actually I read this on the web, and don't have to read or
> receive what I don't wish to.
>
> Regards a moderated forum, that's always a possibility. It might help
> draw a wider audience in here if they knew there wasn't so much
> useless bandwidth expenditure. OTOH, as a forum with the word
> .philosophy in its name, you're not gonna appeal to all that wide an
> audience anyways. Most of the threads here are of little interest to
> me, actually, as an engineer - sorry ;-).

>
> Regards nuccer, he is an obvious product of the soviet system. Prolly
> born about 1980 in or near moscow [ie, urban area]. His mind-control
> masters are dead now, but he hasn't quite learned how to use his
<<< snip >>>

He can put a bullet in his own head and we wouldn't care. He is not worth
a bullet from me. Why doesn't he buy a bullet from a gun store, load it
into his gun and get it over with and leave everyone else alone. I
subscribe to this newsgroup to discuss aspects of artificial intelligence
and read about it. That is what the charter says. This moron along with
Georgie wastes our time so I already set the killfilter on one of them.

In a moderated NG, the NG messages would be intercepted, sent to the
moderator's (moderation email address), and he would set the *plonk* file
to filter the annoying users, then forward the messages from those who are not
in the *plonk* file list, to the newsgroup. So everyone except for the
annoying users would be happy.

wildstar

unread,
May 19, 2003, 7:01:18 PM5/19/03
to
George <geo...@nospam.com> wrote in news:3EC93EC5...@nospam.com:

<<< Snip >>>


> Ah yeah, everything in the US is called "masterplanned".
> Americans live in such communities. They do all thinking
> for you so you just live.
>
> Look what a nice spot for a bread store. Forget that kind
> of thinking in the US. Don't mess with masterplans! (tm)
>

Yeah, City Council talk. First off, the city councils' masterplans are just
another word for "plans for the future". They change and people as usual
are always bent on defending their ideas. So they always say, don't mess
with the masterplan, or "my idea". Politicians are liars, we know that.


George

May 19, 2003, 7:38:29 PM

People have zero freedom in such a masterplanned world.
The US is a human shame. It is so annoying, oh my god!

Everybody quietly in the classroom:
hahahahahahahahahahahahahahahahahaha.

The US is sick.

George

Push Singh

May 19, 2003, 8:26:10 PM
Hi Eray,

I work closely with Marvin and thought I might try to answer these
questions.

"Eray Ozkural exa" <er...@bilkent.edu.tr> wrote in message
news:fa69ae35.03051...@posting.google.com...


> Hi!
>
> Could you please elaborate on some of the points with respect to three
> paragraphs I quoted below?
>
> 1) You seem to say, in theory, something like CYC can in fact embody
> common sense knowledge of a 2-3 year old child. But you are on the
> other hand pointing out to the tedious nature of hand encoding of
> knowledge (a.k.a knowledge engineering). Therefore, should we infer
> that traditional knowledge engineering is not feasible?

It's hard to say. Certainly, Cyc contains many ideas about how to represent
different types of knowledge, and many of those ideas were invented by
engineers at Cycorp while attempting to teach it new things. Presumably,
encoding new knowledge into Cyc is getting easier over time, and only Doug
Lenat can tell us if this has been happening. If in fact it has *not* been
getting easier, then I suspect there is something wrong with their knowledge
engineering methods. And if that's so, it suggests that traditional
knowledge engineering has not matured to the point that such a project is
feasible. It may take several failures before we have the first really good
commonsense knowledgebase.

Of course, no one ever said it would be easy, and tedious is not the same as
infeasible.

> Another
> question. Logical ontology description languages separate declaration
> from operation. Doesn't CYC?

Yes. We would argue that this is a substantial flaw in Cyc. It does not
relate knowledge to specific ways of using that knowledge. In our view,
every piece of commonsense knowledge should be intimately connected to ways
of using it to solve various kinds of problems.
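A toy contrast may make this concrete (the rule and names below are invented for illustration and are not drawn from Cyc's actual ontology): a bare declarative fact says nothing about how to use it, while a procedural attachment bundles the knowledge with a way of applying it.

```python
# Declarative: a bare assertion, separate from any procedure that uses it.
facts = {("bird", "can", "fly"): True}

# Procedurally attached: the same commonsense knowledge tied to a specific
# way of using it, including the exception handling a reasoner needs.
FLIGHTLESS = {"penguin", "ostrich"}  # illustrative exceptions

def can_fly(animal, is_bird=True):
    """Apply the 'birds fly' rule together with its commonsense exceptions."""
    return is_bird and animal not in FLIGHTLESS

print(can_fly("sparrow"))  # True
print(can_fly("penguin"))  # False
```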

> The top-level ontology I looked at
> roughly looked like a class system such as CLOS. So what really makes
> it different than a logical system?

Cyc can be seen as a logical system, although it has a 'heuristic level'
that attempts to make it efficient at reasoning.

> And more importantly, could
> hand-encoding knowledge necessarily exclude the flexibility of human
> common sense (as in pure logical systems) since many processes are
> intertwined with knowledge?

It may be possible to hand-code knowledge in a way that leads to great
flexibility. Many of Marvin's recent ideas about multiple representations
could contribute to this goal.

An important problem is that Cyc simply lacks knowledge about learning and
reasoning itself, what you might consider knowledge at the "reflective"
level of a layered cognitive architecture. Cyc is almost exclusively
concerned with modeling the outside world, and has relatively little
knowledge about mental processes themselves.

> I think your view on CYC is rather
> important because some people think a) Minsky invented common sense
> reasoning b) CYC's purpose is common sense reasoning c) CYC failed d)
> Therefore Minsky is wrong. Obviously this isn't a valid way of
> argument but I should at any rate say that CYC's lack of popularity
> doesn't render your theory obsolete!

Yes, that is an unfortunately common argument. But I think Marvin did make
an important point in that message to the reporter -- that unless we set as
our goal the construction of *some* commonsense knowledgebase, every new AI
system will have to start from scratch. The problem is that many people now
equate commonsense knowledgebases with Cyc, despite it being easy to imagine
a commonsense knowledgebase designed to support other types of reasoning
such as statistical or case-based reasoning.

> 2) About "baby" machines. It seems to me that if we had indeed
> achieved in making a baby machine then we would have solved the
> problem 80%. I don't think we have achieved anything like the mind of
> a human baby (or even a spider baby). I would also say that we don't
> know the necessary architectural features even for a baby, let alone a
> child. What I want to ask is: do you think trying to develop learning
> algorithms is doomed to fail or can we "unify" learning with an
> architectural outlook to truly achieve a baby's outstanding mental
> powers?

The latter makes sense to me, and that is the view that we have been taking.
To build an adequate learning machine we need to incorporate powerful
processes for self-modeling and credit assignment, so that the system can
improve itself even as it begins to get fairly complicated. This may
require embodying the system with knowledge of the kind possessed by a
typical programmer--about how to augment, structure, and debug a society of
processes.

However, one should realize that a baby machine without adequate
representations to begin with will not learn quickly. The efforts of Cycorp
have been important in establishing what at least some of these
representations might be.

> 3) Some of evolution or ANN research is seen as part of machine
> learning which can be used for standard problems like classification,
> regression, clustering. But objectively speaking no method in machine
> learning can be the absolute best including ANN and GA's. However, for
> some specific problems an ANN or GA will be the best like a decision
> tree with C4.5 will be the best for another set of problems. For
> instance, there is a paper that uses a simple 3-layer network to
> predict a stock market's index with 99% sign accuracy. (The actual
> number isn't too meaningful but it was still impressive) And you seem
> to be verifying "no free lunch theorem" in your drafts of Emotion
> Machine by saying that each method is suitable for a class of
> problems. So, you probably don't think that learning algorithms are
> irrelevant. However, the learning systems don't have a large degree of
> reflection and self-reflection therefore they don't fit in the "big
> picture". How can we integrate learning algorithms in a complete mind?

Individual learning methods like neural networks are the tools in our
toolbox. The trouble is, we don't know very much about exactly what types
of problems these tools are suitable for and unsuitable for. The reflective
layer will need this kind of knowledge.
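One way to picture that reflective layer is as a dispatcher with crude knowledge of which tool suits which problem traits. The method names and trait table below are illustrative assumptions, not a real taxonomy:

```python
# Sketch of a reflective layer choosing a learning method from a toolbox,
# using crude knowledge of what each tool is suited for.

TOOLBOX = {
    "neural_net":    {"noisy", "continuous"},
    "decision_tree": {"symbolic", "interpretable"},
    "case_based":    {"sparse_data", "analogy"},
}

def choose_method(problem_traits):
    """Pick the method whose known strengths overlap the problem the most."""
    return max(TOOLBOX, key=lambda name: len(TOOLBOX[name] & problem_traits))

print(choose_method({"noisy", "continuous"}))        # neural_net
print(choose_method({"symbolic", "interpretable"}))  # decision_tree
```

The interesting (and hard) part, as the post says, is filling in that table: we simply don't know much yet about which methods suit which problems.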

Regards,

Push Singh

Gary Forbis

May 19, 2003, 10:55:23 PM
wildstar <wilds...@hotmail.com> wrote in message news:<Xns9380A2242DF3Dwi...@216.168.3.44>...

> In a moderator NG, the NG messages would be intercepted sent to the
> Moderator's (Moderation Email Address) and he would set the *plonk* file
> to filter the annoying users. Then forward the messages from whom are not
> in the *plonk* file list, to the newsgroup. So everyone except for the
> annoying users would be happy.

In my experience, in a moderated group those who disagree with the moderator
are plonked. I suspect this is the definition of "annoying".

wildstar

May 19, 2003, 11:18:43 PM
forbi...@msn.com (Gary Forbis) wrote in
news:5a1238fe.0305...@posting.google.com:


> In my experience, in a moderated group those who disagree with the
> moderator are plonked. I suspect this is the definition of
> "annoying".
>

Yes and no.

Annoying can mean someone who continues to annoy the moderator or the users
on the list/NG. This generally means, Rule 1: the sysop is always right,
don't piss him off. (You may substitute Moderator for SYSOP.) Rule 2:
Don't piss everyone else off. Rule 3: Obey the rules. Depending on
your moderator, he can be cool or he can be a big power-crazed a$$hole.
Choose your moderator well. Now, if I were moderator, I probably would
*plonk* a disobedient jerk. The rules would be rather flexible, and people
could be removed from the *plonk* file filter. My definition of AI is broad
and flexible. Discussion of wars in Iraq does not really relate to
the subject theme unless the connection is clear enough. OTOH:
insulting other people is *flaming* and *trolling*. Doing it excessively
will get you banned (*plonked*). This usually happens when the
individual did not heed the warning. For the most part, if there is
an argument, a warning (suggestion) would be posted to essentially quit
arguing/insulting. It detracts from a NG.

A reckless moderator who bans anyone who disagrees with him, kicking
people out every day, is in the wrong, and a new moderator should be put in
place. That is an abuse of power.

That is something one should watch out for.


Message has been deleted

wildstar

May 20, 2003, 1:25:19 AM


> Well Wild, I have to say you guys definitely have some problems over
> here. I am not sure a moderator is gonna help. All in all, I think you
> guys need a little more tolerance, understanding and open-mindedness,
> and not so much name-calling, antagonism, and one-sidedness. Even in
> the couple of civil interchanges I had, the responders could only see
> things one way. What a bummer.
>
> Well, I'm heading back to engineering where we live in a world where
> at least we can see some fruits born from our tiny little labors. Had
> my quota of philosophy for another year, or six.

I personally have had enough of his insults and don't wish to read any more
of his one-sided insults. It is one thing for him to post his opinion on a
related subject; George posts off-topic junk most of the time.

I'm going to have to post a message about this on the NG.

Ralph Daugherty

May 20, 2003, 1:33:08 AM

dan michaels wrote:
> Ralph Daugherty <rdau...@columbus.rr.com> wrote in message news:<3EC869D6...@columbus.rr.com>...
>
>>dan michaels wrote:
>>
>>>Reading it will make you understand why this article engendered
>>>multiple threads on multiple forums with 1000 or so responses. The
>>>article had quotes like
>>>
>>>... AI has been braindead ....
>>>... students are wasting their lives on these stupid little robots
>>>....
>>>
>>>Simply reiterating the old arguments is not gonna get so much
>>>excitement.
>>
>>
>
>>The response to the old arguments sounded like give neural nets a few more
>>decades before thinking about tossing it on the heap of Minsky's symbolic
>>logic, in other words, every approach gets 50 years before being dumped, I
>>guess. And that has nothing to do with AI students building robots anyway.
>>
>
>
> I suspect if someone did the statistics, they would find the amount of
> time, effort, grad students, and esp funding that has gone into
> symbolic approaches vastly exceeds all the others. All I am saying is
> that the other avenues at least need a "fair" chance. NN's and
> Reactive AI are still at the toy++ level, but I expect both of these
> approaches to show considerable advances over the next decade or two.
> Subsumption alone may be limited, but over time there will be many
> serious combinations of this with other methods.
> =================
>
>
>
>> Minsky is correct, but of course cognitive dissonance would prevent most
>>of those engaged in the approaches being questioned from agreeing. In any
>>event, I stated that I believe he would have given credit to any long term
>>effort to make software smarter as he did with Lenat's Cyc, versus limiting
>>his bestowment of recognition to only that one approach.
>
> ................
>
>
> I have always felt that the problem with Cyc is that it was like give
> the AI a fish, rather than teach the AI how to fish. Hand-coding of
> rules by rooms full of grad/postgrad students typing away at keyboards
> seems like such a deadend approach. This seems an obvious flaw, even
> to an outsider.
> =======================
>
>
>
>> Building little robots from
>>scratch over and over to respond to sensor input like insects is not making
>>software smarter and has nothing to do with artificial intelligence.
>
>
>
> Very true - only change it to read ..... "has nothing to do with
> TOP-DOWN artificial intelligence". There is another world out there.
>
> I hate the thought of even "thinking" about the issue of
> consciousness, because it too is almost exclusively argued by the
> top-down bunch. Dennett is certainly one of these, but he did at least
> write a book called "Kinds of Minds", which leaves open other
> possibilities. And one of his collaborators, whom he avows he
> completely disagrees with, also has other ideas - namely, Nicholas
> Humphrey in "A History of the Mind". If you define your terms too
> narrowly then you only do have one option. Go to it, but leave others
> their choices too.
>
> I have mentioned this elsewhere, but I think AI and brain science both
> took the same wrong turn about 50 years ago. In both cases, they
> achieved some easy and early successes, and then immediately made a
> conscious decision to thenceforth attack the "most" difficult
> problems. This means the kind of AI MM is talking about, and studying
> the CNS of mammals rather than simpler preparations. One day we had
> Lettvin studying frogs, the next day Hubel+Wiesel studying cats, and
> thenceforth the lion's share of effort and funding has gone towards
> monkeys and other mammals. H+W had some truly incredible early on
> findings, but as shown by Hubel's recent book "Eye Brain and Vision",
> in the intervening 45 years they have made no truly "fundamental"
> discoveries on a par with the original ones. I think there are 2
> lessons here.
>
>
> d.
> ==================


Ok, changed to top-down AI. I can see the approach of presenting info as
if a child were learning by watching and being taught, or it could be viewed
as accelerated learning, as if the entity had had those experiences and
assimilated them. On the other hand, are experiences assimilated as IF-THEN
rules? I can hardly recall many times when I could compare my actions to a
Basic script, and I'm a programmer. It's more following patterns than rules,
with no conscious thought of what pattern leads to another. Maybe
subconsciously there's IF this THEN that, but I think a stream-of-consciousness
analogy is closer to reality than binary decision making in emulating our
brains. I hadn't thought that before, but that's my reaction to your comments.

So that may segue to what I presume you propose: a low-level accumulation
of inputs that triggers actions of all types, including intelligent behavior,
with more input, or input from the advanced areas (lobes, nodules, ?) of the
brain, required for human-level intelligent behavior, and that might be
modelled with NNs. Fair enough. Before, I would have said I had difficulty
envisioning a cascade of NN events accumulating to something resembling
human reasoning, and I still do, but it is more analogous to the
stream-of-consciousness basis of behavior that I would describe than to the
rule-based decision making that all the symbolic logic hopes to emulate.
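The contrast being drawn — explicit IF-THEN decisions versus behavior emerging from accumulated, weighted inputs — might be caricatured like this (a toy sketch, not a model of any real system):

```python
# Symbolic style: an explicit binary decision rule.
def rule_based(sees_food, is_hungry):
    if sees_food and is_hungry:
        return "eat"
    return "wait"

# Pattern style: the action emerges when accumulated weighted evidence
# crosses a threshold, with no explicit IF-THEN step for the behavior.
def pattern_based(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return "eat" if activation >= threshold else "wait"

print(rule_based(True, True))                 # eat
print(pattern_based([0.9, 0.8], [0.7, 0.6]))  # eat (0.63 + 0.48 = 1.11 >= 1.0)
```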

Thanks for the insights, dan.

rd

KP_PC

May 20, 2003, 2:14:16 AM
"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
news:baaqc2$qs7$1...@husk.cso.niu.edu...

| "KP_PC" <k.p.c...@worldnet.att.net> writes:
| >"Neil W Rickert" <ricke...@cs.niu.edu> wrote in message
| >news:ba9pvi$764$1...@husk.cso.niu.edu...
| >| "KP_PC" <k.p.c...@worldnet.att.net> writes:
|
| >| >[...]|
| >| [...]

| The record appears consistent. There is
| never any useful content.
|
| Apparently the reason that you are ignored,
| is that you have nothing
| of interest to say.
|
| >| >| If you actually have something to say,
| >| >| give a web link to your theory.
|
| >| >There you go again - 'If it's anything [after
| >| >Letterman], then it will have a web page.'
|
| >| >I don't have a web page.
|
| And that turns out to have been a dishonest response.

I don't have a web page pertaining to NDT. I do have a web page
pertaining to 'atomic' isotopes:

http://home.att.net/~k.p.collins/wsb/html/view.cgi-home.html-.html

| I did not ask for your web page. I asked
| for a relevant link. You are now asserting
| that there are relevant links on google.
|
| I'm calling your bluff. Provide some relevant
| links, if there are any.

a small example:

http://groups.google.com/groups?q=td+e/i-minimization&start=260&hl=en&lr=&ie=UTF-8&scoring=d&selm=53k7r6%24j62%40nntp1.u.washington.edu&rnum=264

I see there is a 'problem' if Google only goes back to 1996 [which is
'strange' because Google purchased Deja News, and Deja News used to
get everything - I don't do Googles on my posts - have just presumed
that everything I've posted was searchable].

By 1996, I was discussing things in a way that derived in my prior
posts. NDT is huge. I can't rewrite all of it in every msg I post.

I have an old hypertext doc that I send out gratis, and which
discusses the basics of NDT. [~350k attachment. runs under MSDOS[tm]
or Windows[tm].]

Let me know if you want a copy.

NDT is everything I claim it to be.

I know of no Neuroscience experimental result that is not handled in
a straightforward fashion by NDT.

I cannot reiterate the contents of the Neuroscience stacks in every
msg I post, either.

K. P. Collins


JGC

May 20, 2003, 3:49:37 AM

"james" <jliu...@rogers.com> wrote in message
news:4qDxa.231598$kYH....@news01.bloor.is.net.cable.rogers.com...
> There is a major difference between today's computers and us. Computers
> will never argue with each other about what is correct and what is not
> correct in the way we argue. All computers can do is logical derivation.
> I completely agree with Marvin's opinion - which is actually: "I feel
> that his opinion is correct".
>
> Martin's statements are not completely logical. That is why people can
> easily disagree with it with a good reason. In other words, we have a
> difference between our "common senses".
>
> AND this is the kind of common sense that today's computers don't have.
>
> james

Very risky to use the word "never" about what is or is not
possible in the future.

Wasn't it Albert Einstein who commented that common sense
is just a collection of prejudices we collect over a lifetime, or
words to that effect?

The reason we disagree is most likely because we have different
social agendas. We have emotional needs that keep us alive and
reproducing offspring. If a machine had the same requirements
maybe it would start to "argue" about what is right and wrong
from its relative selfish point of view?

Do cockroaches, mice or monkeys have common sense?

Isn't "common sense" simply "knowledge" that is common to
everyone? Logical inferences based on experiences common
to us all? Something perhaps AI programs have not had
access to?

Can you define what you mean by "common sense" and then
demonstrate (prove) that it cannot be embodied in a computer?

JC

Acme Debugging

May 20, 2003, 6:45:09 AM
nuk...@invalid.addr (nukleus) wrote in message news:<baajk6$la0$1...@news.ukr.net>...

> Sorry, I have no time.
> I am a busy entity.

Busy being a communist, figuring out how one goes about blacklisting
one's self, or pleading with people not to fling you into the c.a.p.m.
briar patch? That last was not an insult, in case you know who Jimmy
Smith is and that sometimes intelligence doesn't count. I have more
respect than that, Golden Rule or not. I plead to Brer Fox beside you,
after all.

Anyway, thanks for confirming the theory, as if it needed confirming.

Well if Makiguchi isn't going to fly, and the moral consequences of
androids on humans isn't going to fly, then let's try this old game. I
write a little story, and you finish it. Simple, eh? But keep up the
short lines, white space, z for s, silly Jesus stuff, and the enough
terminator if you think it gives you a bigger nose, or whatever you
need to distinguish yourself to our guests in place of a philosophy.
Here goes.

There's an old fisherman, in bad shape. These days he just sits on
the dock at the lake, day after day, watching people from the city
come out to fish in their silly tinhorn fishing gear. Each day it's a
new group, but it's always the same.

He hears them arguing in the boat, "I've deduced statistically that the
fish will almost certainly be over here!"

"No, you have to get into the mind of the fish. Their behavior forces
them to swim in places like over there!"

"No, according to "Fishing - Theory and Practice" page 672, paragraph
3, they will be over there!"

"No, drop the line in deeper, we must use the bottom-up approach!"

Etc., etc. It's always the same. The old fisherman has been listening
to it for 20 years. But nobody has ever caught a fish in that lake.

So the old fisherman goes to K-Mart and buys one of those talking fish,
you know, push a button and it says, "Don't worry, be happy..." Being
somewhat of a technology buff and very bored, he re-records the tape
and rigs it up mechanically under the boat.

So now the mechanical fish pops up once in a while and says, "There
ain't no fish in this lake!" This throws the tinhorns into a tizzy.
"Shoot that fish," and "Hey stupid K-Mart fish, get out of here!" This
is a source of great hilarity to the old fisherman, and helps pass the
time.

So purely to amuse himself, the old fisherman installs a mike in the
mechanical fish with rewind. Now when someone says, "You idiots, I told
you the fish over there are conditioned to grab this bait" the fish
pops up and says, "You idiots, I told you the fish over there are
conditioned to grab this bait!" And when someone says, "Go shoot
yourself, stupid fish asshole!" the fish says, "Go shoot yourself,
stupid fish asshole!"

Later, to conserve batteries, the fish just says, "Asshole!"

...?

Ok, your turn.

Larry

George

May 20, 2003, 2:04:21 PM

Now you notice the entire school is laughing out quietly behind
you as you pass the kids on the hallway. The US loves Jesus.

George
