
Strong AI research


Eray Ozkural exa

Jul 12, 2004, 8:24:53 PM
Hello there,

Can we make a little survey of labs that work on strong AI projects?

What do you think are the requirements for a project that we can label
as a promising strong AI candidate? For instance, what kind of
knowledge should be built-in and what kind of knowledge should it
learn by itself? (ie. should the sensory input be simply bitstrings,
what kinds of preprocessing should be present?)
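To make the bitstring question concrete, here is a minimal hypothetical sketch (purely illustrative, not tied to any project mentioned in this thread) of the two options: handing an agent raw bits with no built-in knowledge, versus supplying preprocessed summary features.

```python
# Two ways to present sensory input to a hypothetical learning agent:
# (1) raw bitstrings, (2) preprocessed features. All names are illustrative.

def raw_bits(sensor_bytes: bytes) -> list[int]:
    """Option 1: no built-in knowledge; the agent sees only a bitstring."""
    return [(byte >> i) & 1 for byte in sensor_bytes for i in range(7, -1, -1)]

def preprocessed(sensor_bytes: bytes) -> dict[str, float]:
    """Option 2: built-in preprocessing supplies summary features."""
    n = len(sensor_bytes)
    return {
        "mean_intensity": sum(sensor_bytes) / n,
        "fraction_dark": sum(b < 128 for b in sensor_bytes) / n,
    }

sensor = bytes([0, 255, 128, 64])
print(raw_bits(sensor)[:8])   # first byte (0) as bits: [0, 0, 0, 0, 0, 0, 0, 0]
print(preprocessed(sensor))   # {'mean_intensity': 111.75, 'fraction_dark': 0.5}
```

The design question is then how much of the second function's structure should be hand-built versus learned from the first function's output alone.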

Regards,

--
Eray

[ comp.ai is moderated. To submit, just post and be patient, or if ]
[ that fails mail your article to <com...@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]

MKN

Jul 20, 2004, 9:06:41 AM
er...@bilkent.edu.tr (Eray Ozkural exa) wrote in message news:<40f32bcf$1...@news.unimelb.edu.au>...

> Hello there,
>
> Can we make a little survey of labs that work on strong AI projects?
>
> What do you think are the requirements for a project that we can label
> as a promising strong AI candidate? For instance, what kind of
> knowledge should be built-in and what kind of knowledge should it
> learn by itself? (ie. should the sensory input be simply bitstrings,
> what kinds of preprocessing should be present?)
>

The generally accepted definition is this: the "strong AI" position is
the belief that a machine that does logical processing is, in some
sense, actually "thinking," while the "weak AI" position is that
machines can merely act "as if" they were intelligent.

Either way, there can be people with either or both philosophies on
the same project. In other words, it is similar to asking: does the
robot ASIMO walk?

http://asimo.honda.com/inside_asimo_movies.asp

My projects are mostly about finding new cognitive tools that help
humans think more effectively, similar to the tools that help us get
from one place to another (cars, trains, airplanes); those are rather
remarkable machines that we have created to enhance our physical
abilities.

This is really a good time for AI research, because the tools that AI
scientists need are generally available at a very reasonable price (so
if you are looking to solicit a multi-million-dollar project like a
particle collider, you have to think of a really good reason for that
much money). Our tools are the rather unimpressive Linux, Java, and
the Internet, but what can be done with them is rather amazing when
you come into the lab on Saturday mornings.

Some AI topics include:
Judicial decision processes.
Complex environment analysis (to help humans with cognitive
processing).
Sports team coaching.
Human relationship analysis.
etc., etc.

MKN

Eray Ozkural exa

Jul 25, 2004, 1:14:54 AM
m...@acm.org (MKN) wrote in message news:<40fd18dc$1...@news.unimelb.edu.au>...

> The generally accepted definition is this: the "strong AI" position is
> the belief that a machine that does logical processing is, in some
> sense, actually "thinking," while the "weak AI" position is that
> machines can merely act "as if" they were intelligent.

The definitions are philosophical since they started with Searle's
distinction, I believe. I wonder what the "information requirements"
would be from a CS point of view, rather than a comp.ai.philosophy
point of view.

> Either way, there can be people with either/or both philosophies
> on the same project. In another word, this is similar to ask does
> the robot ASIMO walk?
>
> http://asimo.honda.com/inside_asimo_movies.asp

I agree that the philosophy need not matter. However, I am not certain
that the question is analogous: ASIMO walks, but has very little
intelligence.

> My project are pretty much on finding new cognitive tools
> that help human think more effectively, similar to tools
> that help us get from one place to another (cars, trains,
> airplanes); they are rather remarkable machines that
> we have created to enhance our physical abilities.

Can you give us a bit more information? Do you have a website or
papers explaining these tools? And don't these problems seem better
suited to weak AI than to strong AI? Or do you mean that the
distinction is vacuous?

> This is really a good time for AI research because the
> tools that AI scientists are generally available at a
> very reasonable price (so if you are looking to solicit
> a multi-million dollar project like a particle collider
> you have to think of a really good reason for that
> much money). Our tools are the rather unimpressive
> linux, Java, and Internet, but what can be done is
> rather amazing when you coming to the lab at Saturday
> mornings.

Would you be willing to tell us which lab?

> Some AI topics include:
> Judicial decision processes.
> Complex environment analysis (to help humans with cognitive
> processing).
> Sports team coaching.
> Human relationship analysis.
> etc., etc.

These look quite interesting. Applications with a social bent.

Regards,

--
Eray Ozkural

Pei Wang

Sep 12, 2004, 9:59:05 PM
"Eray Ozkural exa" <er...@bilkent.edu.tr> wrote in message
news:40f32bcf$1...@news.unimelb.edu.au...

> Hello there,
>
> Can we make a little survey of labs that work on strong AI projects?
>
> What do you think are the requirements for a project that we can label
> as a promising strong AI candidate? For instance, what kind of
> knowledge should be built-in and what kind of knowledge should it
> learn by itself? (ie. should the sensory input be simply bitstrings,
> what kinds of preprocessing should be present?)
>
> Regards,
>
> --
> Eray
>

I have a list of such projects at
http://www.cis.temple.edu/~pwang/203-AI/Lecture/AGI.htm, selected according
to the following criteria:
a. Each of them plans to eventually grow into a "thinking machine" or
"artificial general intelligence" (so it is not merely about one part of AI);
b. Each of them has been carried out for more than 5 years (so it is more
than a PhD project);
c. Each of them has prototypes or early versions finished (so it is not
merely a theory), and there are publications explaining how it works
(so it is not merely a claim).
Some discussions of them can be found on the AGI mailing list (see
http://www.mail-archive.com/agi%40v2.listbox.com/).

Pei Wang

Chris Malcolm

Oct 7, 2004, 12:59:21 AM
er...@bilkent.edu.tr (Eray Ozkural exa) writes:

>m...@acm.org (MKN) wrote in message news:<40fd18dc$1...@news.unimelb.edu.au>...

>> The generally accepted definition is this: the "strong AI" position is
>> the belief that a machine that does logical processing is, in some
>> sense, actually "thinking," while the "weak AI" position is that
>> machines can merely act "as if" they were intelligent.

>The definitions are philosophical since they started with Searle's
>distinction, I believe. I wonder what the "information requirements"
>would be from a CS point of view, rather than a comp.ai.philosophy
>point of view.

Asking for the distinctions to be made in terms of information
requirements already makes certain presuppositions about the kinds of
implementation of mind that one is considering. It's a natural
presumption when considering the problem from a CS point of view,
which is one of the problems of taking a CS view of these
questions. Since the entire point of Searle's original Chinese Room
paper was to claim a fatal fundamental flaw in that viewpoint, to ask
such a question is begging the question.
--
Chris Malcolm c...@infirmatics.ed.ac.uk +44 (0)131 651 3445 DoD #205
IPAB, Informatics, JCMB, King's Buildings, Edinburgh, EH9 3JZ, UK
[http://www.dai.ed.ac.uk/homes/cam/]

Eray Ozkural exa

Oct 13, 2004, 4:06:48 PM
c...@holyrood.ed.ac.uk (Chris Malcolm) wrote in message news:<4164cd25$1...@news.unimelb.edu.au>...

> er...@bilkent.edu.tr (Eray Ozkural exa) writes:
>
> >m...@acm.org (MKN) wrote in message news:<40fd18dc$1...@news.unimelb.edu.au>...
>
> >> The generally accepted definition is this: the "strong AI" position is
> >> the belief that a machine that does logical processing is, in some
> >> sense, actually "thinking," while the "weak AI" position is that
> >> machines can merely act "as if" they were intelligent.
>
> >The definitions are philosophical since they started with Searle's
> >distinction, I believe. I wonder what the "information requirements"
> >would be from a CS point of view, rather than a comp.ai.philosophy
> >point of view.
>
> Asking for the distinctions to be made in terms of information
> requirements already makes certain presuppositions about the kinds of
> implementation of mind that one is considering. It's a natural
> presumption when considering the problem from a CS point of view,
> which is one of the problems of taking a CS view of these
> questions. Since the entire point of Searle's original Chinese Room
> paper was to claim a fatal fundamental flaw in that viewpoint, to ask
> such a question is begging the question.

I appreciate this critical comment. Let me then ask which criteria we
should employ for distinguishing strong AI research. Should we draw
these borders guided by work in the philosophy of mind, psychology,
neuroscience, linguistics, etc.?

Regards,

--
Eray Ozkural

Jeroen van Maanen

Oct 13, 2004, 4:08:27 PM
In article <4164cd25$1...@news.unimelb.edu.au>, c...@holyrood.ed.ac.uk
says...

> er...@bilkent.edu.tr (Eray Ozkural exa) writes:
>
> [...]

> >The definitions are philosophical since they started with Searle's
> >distinction, I believe. I wonder what the "information requirements"
> >would be from a CS point of view, rather than a comp.ai.philosophy
> >point of view.
>
> Asking for the distinctions to be made in terms of information
> requirements already makes certain presuppositions about the kinds of
> implementation of mind that one is considering. It's a natural
> presumption when considering the problem from a CS point of view,
> which is one of the problems of taking a CS view of these
> questions. Since the entire point of Searle's original Chinese Room
> paper was to claim a fatal fundamental flaw in that viewpoint, to ask
> such a question is begging the question.

I strongly disagree with John Searle. And the matter is not *just*
philosophical. The question is: "what is meaningful research?" I still
like the answer that Karl Popper gave: "anything that can be either true
or false can be the subject of meaningful research" (although I would
like some more room for statistical and information-theoretic research).

Two hypotheses in the context of the Chinese Room thought experiment
that allow scientific treatment are:

[1] Is it true or false that conscious thought is a property of mental
states (cf. the physical theories of "ether" and "phlogiston")?

[2] Is it true or false that the implementation of the mechanism that
executes conscious thought matters to the phenomenon of thought (cf.
computer programs and computer hardware)?

I would say that it is time for some serious "strong" AI research
again, because there is still knowledge to be gained :-) .

Jeroen

P.S.

Visit also the discussion forum on my machine learning project:

http://www.lexau.org/zope/Lexau/DiscussionForum
