On Wednesday, September 21, 2016 at 9:07:41 PM UTC-7, Quadibloc wrote:
> Just read this article:
>
>
https://www.hpcwire.com/2016/09/20/larry-smarr-helps-ncsa-celebrate-30th-anniversary/
>
> One quote...
>
> "Asimov had the three laws to protect the robots from doing harm to humans.
> We’ll get through this AI transition I believe, but only if everybody
> realizes this is one of the most important change moments in human history,
> and it isn’t going to be happening 100 years from now, but rather it’s going
> to be in the next five, 10, to 20 years."
The Three Laws require a lot of interpretation that even humans can't reliably manage all of the time.
> Since I think that hardly anyone expects that superintelligent AIs will
> indeed come into existence within even 20 years, even at the incredible rate
> computers have been improving, if he is right about the other stuff, we're
> certainly going to be DOOMED, as we *won't* "get through this AI transition".
>
> Given that Moore's Law is coming to an end, as nobody has any idea of how to
> get past 7nm,
That only applies to silicon. There are other avenues being explored: optical computing and spintronics, frinst.
> and Dennard scaling packed it in before that, superintelligent AIs may
> be always 20 years away like fusion power, meaning we will be safe... but
> there is some cause for concern.
"Superintelligent" AIs aren't the immediate problem, medium-smart but still clueless specialized "expert systems" like self-driving cars will get us first.
Mark L. Fergerson