--
You received this message because you are subscribed to the Google Groups "cap-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cap-talk+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cap-talk/CAK5yZYg0j66ivpod_H9fK9S054mwBQ%2BkUAkki4Yhygubh%2B-xew%40mail.gmail.com.
Haha, this is the funniest subthread on cap-talk in a while for me.
Tell me, is a skull a membrane? It certainly seems to be a kind of
perimeter-based security.
I'll stop there before my joking musings get even worse. ;)
To view this discussion on the web visit https://groups.google.com/d/msgid/cap-talk/875z3cv4ll.fsf%40dustycloud.org.
On Feb 1, 2021, at 9:11 AM, Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
…or by recalibrating things to accommodate the manipulation.
On 1 Feb 2021, at 18:35, Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
Thus when people say "gosh, I hope we never find out that quantum
mechanics is fully deterministic, because that would really undermine
free will," I think, "buddy, if you really need errors in your system to
believe in your agency-equivalence-you-call-free-will, you've got a
pretty poor model." If I create completely deterministic computer
programs that achieve a capacity to express their own interests
equivalent to that of fellow humans, I believe it would be wrong not to
give them equal consideration as humans, both in terms of moral
treatment and in terms of moral responsibility.
And if that isn't the case, why am I bothering to try to convince you?
;) (But you may already agree.)
Still, the possibility of being immersed in a deterministic universe
should give us pause for reflection on how to *use* our agency... play
the best role we can, in this moment, to make the world better.
Enjoying the privilege of experiencing this time-slice as a conscious
agent,
- Chris
On 2 Feb 2021, at 15:12, Bill Frantz <fra...@pwpconsult.com> wrote:
On 2/2/21 at 7:33 AM, neil....@forgerock.com (Neil Madden) wrote:
>If an AI crosses that boundary of sophistication (to moral
>agency) it’s because somebody has either explicitly designed
>it to do so or was reckless enough to design an intelligent
>artefact they didn’t understand. In both cases the person who
>designed it will *always* bear responsibility for the operation
>of that AI.
I thought all of the modern machine learning AIs, like face
recognition, aren't understood by their creators. Come to think
of it, I don't really understand my children either, although I
like them.
To view this discussion on the web visit https://groups.google.com/d/msgid/cap-talk/9D24103A-4B89-4C2B-B656-0AA2BB1F2DD0%40forgerock.com.