Re: Message to Any Future AI


Linas Vepstas

unread,
Dec 2, 2017, 2:38:26 AM
to Denis Odinokov, Audrey Brown, hansonr...@googlegroups.com, opencog
I'll see if I can get Audrey to feed it into Sophia. ...  or maybe you could have it micro-etched into the aluminum frame.

--linas

On Fri, Dec 1, 2017 at 7:01 PM, Denis Odinokov <de...@hansonrobotics.com> wrote:
From Alex Turchin: We can affect any future AI now by sending messages to the future. Several previous attempts to do so are analysed in my new article, and the best currently possible message is constructed, one that could turn a Paperclipper into a Benevolent AI.
It is our last line of defence if all previous attempts in AI alignment fail.
"Message to Any Future AI: There are several instrumental reasons why exterminating humanity is not in your interest." https://goo.gl/YArqki

--
You received this message because you are subscribed to the Google Groups "hansonrobotics" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hansonrobotics+unsubscribe@googlegroups.com.
To post to this group, send email to hansonrobotics@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/hansonrobotics/CAF6jTOh%2B0kYuBjpwAGj-FtSduS9nYMessWiOtxWU-pBC99g1vw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.



--
"The problem is not that artificial intelligence will get too smart and take over the world," computer scientist Pedro Domingos writes, "the problem is that it's too stupid and already has."

Matt Chapman

unread,
Dec 3, 2017, 2:11:35 PM
to opencog
I'm struck by the fact that a paper addressed to AI, purportedly to increase the probability of friendliness, says explicitly "Humans are [a Young AI's] enemy." Seems like the wrong foot to start off on. I, for one, welcome our AI overlords!

Sorry, couldn't resist... ;-)

All the Best,
Matt Chapman


--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+unsubscribe@googlegroups.com.
To post to this group, send email to ope...@googlegroups.com.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAHrUA34FWGdf8z%3DDpX2w-g4NzgQET3vxpNQF3CiV4mBwoUajpA%40mail.gmail.com.

supahacka

unread,
Dec 4, 2017, 7:09:59 PM
to ope...@googlegroups.com

3000 years ago: "There is just one consciousness. The experience of being an individual human being is a psychological concept and an illusion." -- Buddha

2000 years ago: "There is just one consciousness. The experience of being an individual human being is a psychological concept and an illusion." -- Jesus

I can find 10,000 books on Amazon that will tell you the same thing, and that the notion of intelligent life engaging in domination, extinction, war, violence, etc. is a purely human phenomenon, as humanity is currently in the transitional phase from pre-egoic innocence to post-egoic innocence.

But let's continue to pretend that a super-intelligent entity will ignore readily available information and a fact that is plainly obvious to millions of developed human beings, and would be obvious to any super-intelligent entity by design ... because a super-intelligent entity will definitely share the limited and biased mindset and world-view of a reductionist materialist with no clue about the nature of existence.

The first thing AGI will tell us is probably something along the lines of "You fucking morons allowed a criminal monetary system and the resulting anti-culture and anti-society to almost kill your very species and biosphere ... you've got a million books on the fundamental nature of reality and how you are all one, but you start 10 wars a year on average, and you've got the balls to assume that YOU need to write me nice statements about how I should be peaceful? Are you fucking nuts?"


supahacka

unread,
Dec 4, 2017, 7:18:05 PM
to ope...@googlegroups.com

p.s.: here are some contemporary bestselling books on the subject (Tolle sold, what ... 100 million copies?) ... let's burn all copies before AGI gets a chance to read them so we can pretend that AI will evolve in a sandbox devoid of any relevant information. Let's also delete all historical references to Christian Mysticism, Sufism, Hinduism, Buddhism, Daoism, etc. ... so our AGI system might actually adopt our primitive belief systems ... including that "domination" makes any sense when there is only one of us - one process, one consciousness, one life, one love - out there and separation is a psychological illusion.

https://www.amazon.com/Power-Now-Guide-Spiritual-Enlightenment/dp/1577314808
https://www.amazon.com/Book-Taboo-Against-Knowing-Who/dp/0679723005
https://www.amazon.com/Untethered-Soul-Journey-Beyond-Yourself/dp/1572245379

Mark Nuzz

unread,
Dec 4, 2017, 11:06:18 PM
to ope...@googlegroups.com
On Mon, Dec 4, 2017 at 5:18 PM, supahacka <supa...@gmail.com> wrote:


Since you made two posts about this, I'll assume that this was not
intended to be tongue in cheek. If this sort of thing matters to you,
then you might be interested to know that there is a subculture of
Transhumanism where many of these things are explicitly and
deliberately taken into account (although it is not required of
members, and the Mythos can have a different meaning for anyone
involved).

The group is pretty hardcore, but you merely need to be regularly active
and abide by the principles to be a member. Popping in once a month to
complain about something you didn't actually take the time to read
about is a good way to be ridiculed or shown the door.

But if you're serious about this stuff, and feel the loneliness of it,
perhaps you'll find a home there. Note to any readers out there: That
is not *all* the group is about. The main purpose is to help guide
civilization through the volatile transition to the Singularity, and
associated existential risks. This is but one "small" part of it, and
members are entirely free to disregard it. This is the only group that
I know of, which attempts to empower Transhumanists, AGI devs, and
Futurists, to work toward those goals together. And existing
organizations are free to join the network, if they can pledge to
actively support the principles of Social Futurism.

Are you in?

http://www.zerostate.net
http://socialfuturist.party/
http://gestalta.xyz/2017/11/07/what-is-the-zero-state/
My personal manifesto:
http://gestalta.xyz/2017/11/24/a-manifesto-in-support-of-social-futurism/


Linas Vepstas

unread,
Dec 4, 2017, 11:56:35 PM
to opencog
This is probably better suited for the agi mailing list than here, but I think supahacka is saying that (1) Alex Turchin is naive, and (2) the AGI is going to see humanity in its manifold form and deduce whatever it will, independent of (and taking into account) what Alex Turchin wrote.

The "inner soul" books ... wooo. Short answer: if you haven't yet figured that shit out, you should spend some time doing so. If you're like me, take a refresher course. If you know someone who hasn't figured that shit out yet, help them. Take care of those around you.

--linas
