Source(s)
http://cyborg.blogspot.com/2011/01/aiapp.html#risks
http://code.google.com/p/mindforth/wiki/JsAiManual#Warning
Mentifex
--
http://answers.yahoo.com/question/index?qid=20110211090723AAnEayI
> Two images of the post-Singularity future loom before me,
> an independent scholar in artificial intelligence (AI). Having
> created primitive AI software that can already think and
> learn from us "homines sapientes", I hope that my True AI
> creation will give rise to an AI Prosperity Engine that will
> raise the standard of living for all human beings. OTOH
> (on the other hand), I fear firstly that unscrupulous corporations
> may commandeer and steal the Singularity,
This is true of all technology in general, and it's the reason why
people should be working independently on these projects. Unfortunately,
since our resources are infinitesimal compared to those of the corporations,
we'll need a lot of luck, creativity, and Internet cooperation.
> and secondly that my AI creation may run amok and destroy us all.
This is something that must be built into it from the start.
See: http://singinst.org/upload/artificial-intelligence-risk.pdf
--
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
What do you mean by "think" here? On what basis does it "think"?
(Philosophically and technologically?)
> and learn from us "homines sapientes",
How? On what (e.g. logical) basis?
> I hope that my True AI
> creation will give rise to an AI Prosperity Engine that will
> raise the standard of living for all human beings.
This is also my hope (and it should be the hope of all humans).
> OTOH
> (on the other hand), I fear firstly that unscrupulous corporations
> may commandeer and steal the Singularity,
That is possible. One way to counter it may be to build an AI basis that
is not too useful to them.
> and secondly
> that my AI creation may run amok and destroy us all.
Oh, and your AI creation would probably permit that?
Have you written about your AI anywhere on the web?
Burkart
Therefore such fundamental scientific development should be as
independent of the economy as possible; the latter shouldn't have any
influence on it.
>> and secondly that my AI creation may run amok and destroy us all.
>
> This is something that must be built into it from the start.
Right, (also) in the sense that we humans always retain influence over the
AI system: the AI system has to learn from us (its normal way/"mode"), and
it has to follow our orders in the worst case.
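The override requirement described here could be sketched roughly as
follows. This is only an illustration of the idea, not any real AI
architecture; all class and method names are hypothetical:

```python
# Minimal sketch of the "human override" idea: the system learns in its
# normal mode, but a human order unconditionally halts autonomous action.
# All names here are hypothetical illustrations, not from a real project.

class OverridableAgent:
    """An agent that learns from humans but always yields to a human halt."""

    def __init__(self):
        self.halted = False
        self.knowledge = []

    def learn(self, observation):
        # Normal mode: the system learns from us.
        if not self.halted:
            self.knowledge.append(observation)

    def human_override(self):
        # Worst case: a human order stops all further autonomous action.
        self.halted = True

    def act(self):
        # Once halted by a human, the agent refuses to act.
        if self.halted:
            return "halted"
        return "acting on %d observations" % len(self.knowledge)


agent = OverridableAgent()
agent.learn("example observation")
print(agent.act())        # acting on 1 observations
agent.human_override()
print(agent.act())        # halted
```

The essential design point is that the halt check sits inside every
action path, so no learned behavior can bypass the human order.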
Burkart
> See: http://singinst.org/upload/artificial-intelligence-risk.pdf
Thanks for the link. Interesting article!
Hans-Georg