BBOC Brain Based Access Control


Mark S. Miller

Jan 31, 2021, 5:04:31 PM
to cap-talk
The phrase BBOC doesn't appear in the paper, but seems appropriate. Seems wild but should be taken seriously.

---------- Forwarded message ---------
From: Google Scholar Alerts <scholarale...@google.com>
Date: Sun, Jan 31, 2021 at 12:25 PM
Subject: "Polaris: virus-safe computing for Windows XP" - new citations
To: <eri...@gmail.com>


[PDF] Towards Improving Cybersecurity and Augmenting Human Training Performance Using Brain Imaging Techniques

ML Rahman - 2020
Human behaviors can weaken the security of cyber-physical systems. However,
conventional security research focuses more on hardware and software security than
analyzing and improving human behaviors to provide better protection for digital …



--
  Cheers,
  --MarkM

Christopher Lemmer Webber

Jan 31, 2021, 7:41:05 PM
to cap-...@googlegroups.com, Mark S. Miller
What does BBOC stand for?

Brain Based Object Capabilities?

I'd think BBAC would be Brain Based Access Control.


Mark S. Miller writes:

> The phrase BBOC doesn't appear in the paper, but seems appropriate. Seems
> wild but should be taken seriously.
>
> ---------- Forwarded message ---------
> From: Google Scholar Alerts <scholarale...@google.com>
> Date: Sun, Jan 31, 2021 at 12:25 PM
> Subject: "Polaris: virus-safe computing for Windows XP" - new citations
> To: <eri...@gmail.com>
>
>
> [PDF] Towards Improving Cybersecurity and Augmenting Human Training
> Performance Using Brain Imaging Techniques
> <http://scholar.google.com/scholar_url?url=https://escholarship.org/content/qt1kg856mj/qt1kg856mj.pdf&hl=en&sa=X&d=6293909860475261293&ei=ShIXYJa1L4fPmAGbxLaICw&scisig=AAGBfm1fZjVHJs-6FNxN4vf2ZcdOhgLUMA&nossl=1&oi=scholaralrt&hist=PuP2INoAAAAJ:1318297437757971962:AAGBfm2HyadxMTrEHuskzdUO5CESp8ph9Q&html=>
> ML Rahman - 2020
> Human behaviors can weaken the security of cyber-physical systems. However,
> conventional security research focuses more on hardware and software
> security than
> analyzing and improving human behaviors to provide better protection for
> digital …
>
>
> --
> Cheers,
> --MarkM

Mark S. Miller

Jan 31, 2021, 8:11:42 PM
to Christopher Lemmer Webber, cap-...@googlegroups.com
Omg I meant BBAC. Somehow the other got in my head.
--
  Cheers,
  --MarkM

Raoul Duke

Jan 31, 2021, 8:25:58 PM
to cap-...@googlegroups.com, Christopher Lemmer Webber
already bodes well


Christopher Lemmer Webber

Jan 31, 2021, 10:39:35 PM
to Raoul Duke, cap-...@googlegroups.com
Haha, this is the funniest subthread on cap-talk in a while for me.

Tell me, is a skull a membrane? It seems to certainly be a kind of
perimeter-based security.

I'll stop there before my joking musings get even worse. ;)

Ben Laurie

Feb 1, 2021, 5:05:22 AM
to cap-talk, Raoul Duke
On Mon, 1 Feb 2021 at 03:39, Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
Haha, this is the funniest subthread on cap-talk in a while for me.

Tell me, is a skull a membrane?  It seems to certainly be a kind of
perimeter-based security.

I'll stop there before my joking musings get even worse. ;)

Brain security is really interesting. Of course, we're all familiar with the blood-brain barrier, but how about this <https://www.researchgate.net/publication/333022400_Invisible_Designers_Brain_Evolution_Through_the_Lens_of_Parasite_Manipulation>?

Christopher Lemmer Webber

Feb 1, 2021, 9:04:09 AM
to cap-...@googlegroups.com, Raoul Duke
'Ben Laurie' via cap-talk writes:

> On Mon, 1 Feb 2021 at 03:39, Christopher Lemmer Webber <
> cwe...@dustycloud.org> wrote:
>
>> Haha, this is the funniest subthread on cap-talk in a while for me.
>>
>> Tell me, is a skull a membrane? It seems to certainly be a kind of
>> perimeter-based security.
>>
>> I'll stop there before my joking musings get even worse. ;)
>>
>
> Brain security is really interesting. Of course, we're all familiar with
> the blood-brain barrier, but how about this
> <https://www.researchgate.net/publication/333022400_Invisible_Designers_Brain_Evolution_Through_the_Lens_of_Parasite_Manipulation>
> ?

Wow. I'll admit that the mind-control-parasites branch of biology is
probably the scariest to me. But it's true that good defensive
strategies are generally brought forward by experiencing an actual
offense.

I've barely made it into the paper, but this was a real surprise to me:

Host manipulation has evolved independently at least 20 times;
fossilized ants show that present-day manipulation strategies by fungi
and helminths were already well established around 30–50 million years
ago, suggesting that they originated much earlier (see Poulin 2010;
Hughes 2014).

The bias of wanting to believe that neurology is complex stems a lot
from being an entity governed by neurology. Thus I also have a strong
desire to believe that mind-controlling attacks *must* be very
sophisticated, well developed after a long period of time. But of
course it's reasonable to believe that parasites could co-evolve with
neurology, and the simplest neurology patterns could be easy to attack,
thus leading to an easier hill-climbing approach.

Terrifying though. Yikes! Makes me appreciate all the "security
research" done by the brutal process of evolution that has allowed me to
live today. Thanks to everyone who died in the process to get us
here...

Christopher Lemmer Webber

Feb 1, 2021, 9:11:08 AM
to cap-...@googlegroups.com, Raoul Duke
Wowee, just two paragraphs later:

This crucial observation was made by Read and Braithwaite (2012) in an
afterword to a book chapter, but—to my knowledge—has not been followed
up in the literature until now. It is worth quoting some passages:
“There are two ways hosts can protect themselves from behavior
attack. One way is to kill or incapacitate the causal pathogen. The
other way is to counter the manipulation itself, either by making
behavior control systems less vulnerable to attack, or by
recalibrating things to accommodate the manipulation. Immunologists
study the first kind of defense; next to nothing is known about the
other kind [. . .]. How much of our neural complexity is a necessary
defense against manipulative invaders? How much of the enormous
redundancy is to provide system level functionality if part of the
system is attacked? How much of the complex process of wiring a brain
during development is to prevent pathogen re-wiring?” (Read and
Braithwaite 2012:195). The authors predicted that these questions
would soon become central to behavioral biology and
neuroscience. Instead, parasites have continued to claim the
spotlight, and the fascinating issue of how the brain protects itself
from manipulation has been left unaddressed.

Okay, very easy to see all the parallels to cap-talk type work, huh?

I'll have to read this more. In the meanwhile, this reminds me of one
of my favorite talks, "We Really Don't Know How to Compute!" by Gerald
Sussman. It's what led me down the path of studying propagators:

https://www.youtube.com/watch?v=Rk76BurH384

John Kemp

Feb 1, 2021, 10:34:03 AM
to cap-...@googlegroups.com, Raoul Duke
On Feb 1, 2021, at 9:11 AM, Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:

 or by
 recalibrating things to accommodate the manipulation.

Yes. And that is a reminder that:

a) You are more than just your brain, or sensory inputs. You can live on without your consciousness.
b) You are just a part of the entire universe, and have almost no “control” of your own (in comparison to the amount of “other” controls there are in the universe that apply to the thing you believe is you).

https://www.bbc.com/news/science-environment-49615571

All security is a point (in time and place) solution.

- johnk

Christopher Lemmer Webber

Feb 1, 2021, 1:35:02 PM
to cap-...@googlegroups.com, Raoul Duke, John Kemp
Free will might not exist, as in the universe may be deterministic
(minus some low-level rounding errors). Ironically though, we may be
able to also affect outcomes in ways we consider negative by priming
participants to disbelieve in free will to some degree:

https://theconversation.com/the-psychology-of-believing-in-free-will-97193
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0173193

But I think this is maybe just bad framing.

I find agency more interesting than free will, in the following way:
sufficiently interesting agents worthy of ethical consideration (both to
and from) are *emergent behavior* of more low-level agent constructions
that take some level of *interest* and *reflection* in the outcome of
their behaviors. Perhaps the universe is deterministic, but you and I
as agents still, in the timeslice we inhabit, have the wonderful and
beautiful opportunity to act *as* agents. What a role to play! What a
fluke experiencing consciousness in this moment is! I take that
opportunity on the world stage as a responsibility, honor, and sense of
awe.

Thus when people say "gosh, I hope we never find out that quantum
mechanics are fully deterministic, because that would really undermine
free will" I think "buddy, if you really need errors in your system to
believe in your agency-equivalence-you-call-free-will you've got a
pretty poor model". If I create completely deterministic computer
programs which achieve equivalent senses of capacity to express their
own interests as fellow humans, I believe it would be wrong to not give
them equal consideration as humans, both in terms of moral treatment and
in terms of moral responsibility.

And if that isn't the case, why am I bothering to try to convince you?
;) (But you may already agree.)

Still, the possibility of being immersed in a deterministic universe
should give us pause for reflection on how to *use* our agency... play
the best role we can, in this moment, to make the world better.

Enjoying the privilege of experiencing this time-slice as a conscious
agent,
- Chris

Christopher Lemmer Webber

Feb 1, 2021, 1:39:25 PM
to cap-...@googlegroups.com, Raoul Duke, John Kemp
Christopher Lemmer Webber writes:

> John Kemp writes:
>
>>> On Feb 1, 2021, at 9:11 AM, Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
>>>
>>> or by
>>> recalibrating things to accommodate the manipulation.
>>
>> Yes. And that is a reminder that:
>>
>> a) You are more than just your brain, or sensory inputs. You can live
>> on without your consciousness.
>> b) You are just a part of the entire universe, and have almost no
>> “control” of your own (in comparison to the amount of “other” controls
>> there are in the universe that apply to the thing you believe is you).
>>
>> https://www.bbc.com/news/science-environment-49615571
>>
>> All security is a point (in time and place) solution.
>>
>> - johnk
>
> Free will might not exist, as in the universe may be deterministic
> (minus some low-level rounding errors). Ironically though, we may be
> able to also affect outcomes in ways we consider negative by priming
> participants to disbelieve in free will to some degree:
>
> https://theconversation.com/the-psychology-of-believing-in-free-will-97193
> https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0173193

BTW, this study was selected because of two (okay, three) things:

- The introduction does a good overview of historical findings in this
area

- That religious folks are *less likely* to be affected by the priming
is interesting, and shows that a deeper belief in some nature of the
universe being worthwhile is maybe relevant to keeping around
pro-social behavior (I present an alternative, non-religious way of
thinking about things below, which is indeed my own way of thinking
about things).

- I couldn't remember the study I read and this was the easiest one
to find that came up. ;)

Neil Madden

Feb 1, 2021, 2:39:38 PM
to cap-...@googlegroups.com
On 1 Feb 2021, at 18:35, Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:

If you haven’t read Dan Dennett’s Freedom Evolves, I think you’d enjoy it. 


Thus when people say "gosh, I hope we never find out that quantum
mechanics are fully deterministic, because that would really undermine
free will" I think "buddy, if you really need errors in your system to
believe in your agency-equivalence-you-call-free-will you've got a
pretty poor model".  If I create completely deterministic computer
programs which achieve equivalent senses of capacity to express their
own interests as fellow humans, I believe it would be wrong to not give
them equal consideration as humans, both in terms of moral treatment but
also as in terms of moral responsibility.

Hmm... to a point I agree. But giving computer agents “moral responsibility” is also a move to allow designers of intelligent programs to abdicate responsibility for the damage those agents may cause. 

Joanna Bryson has written a lot on this topic, for example 

https://link.springer.com/article/10.1007/s10676-018-9448-6



And if that isn't the case, why am I bothering to try to convince you?
;)  (But you may already agree.)

Still, the possibility of being immersed in a deterministic universe
should give us pause for reflection on how to *use* our agency... play
the best role we can, in this moment, to make the world better.

Enjoying the privilege of experiencing this time-slice as a conscious
agent,
- Chris


— Neil


Christopher Lemmer Webber

Feb 1, 2021, 3:35:02 PM
to cap-...@googlegroups.com, Neil Madden
Neil Madden writes:

>> On 1 Feb 2021, at 18:35, Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
>>
>> Thus when people say "gosh, I hope we never find out that quantum
>> mechanics are fully deterministic, because that would really undermine
>> free will" I think "buddy, if you really need errors in your system to
>> believe in your agency-equivalence-you-call-free-will you've got a
>> pretty poor model". If I create completely deterministic computer
>> programs which achieve equivalent senses of capacity to express their
>> own interests as fellow humans, I believe it would be wrong to not give
>> them equal consideration as humans, both in terms of moral treatment but
>> also as in terms of moral responsibility.
>
> Hmm... to a point I agree. But giving computer agents “moral
> responsibility” is also a move to allow designers of intelligent
> programs to abdicate responsibility for the damage those agents may
> cause.
>
> Joanna Bryson has written a lot on this topic, for example
>
> https://link.springer.com/article/10.1007/s10676-018-9448-6

Computers are not at the point where I would assign them moral
responsibility. But there may be a point where we cross that threshold.

In our current society, moral responsibility is also something where we
assign long-term consequences for failure in many circumstances. This
could be reputational or it could be more severe. The kind of concerns
Joanna is worried about are justified if they are used as a way to
*dodge* responsibility. But if there are long term consequences to
programs which are meaningful and change behavior, this is different.

Another way to look at it: parents are often held responsible for their
children's actions up until a certain age... then we may have an
intermediate stage where we are uncertain whom to assign responsibility to,
and finally society allows an adult to have responsibility. Obviously
this is a tricky space, but for now we can observe that we don't have
any computing agents that have the level of agency where we would treat
them like human adults. This might not remain the case forever, and it
probably won't. At the moment, it's speculative. But it's useful to
think ahead so we aren't caught off guard.

No computing agents I know of are at that latter stage yet. And
assigning responsibility to machines as a way to *dodge* blame is also
clearly a problem. It's not something I advocate, and hopefully that's
clear.

On this note, I had a conversation that had strong influence on me with
Gerald Sussman a few years ago. I already mentioned a previous talk of
his that lead me to interest in propagators; this conversation was the
other. I put a blogpost here:

https://dustycloud.org/blog/sussman-on-ai/

One thing I do think is troubling about currently popular machine
learning models: they cannot explain why they do things. Of course,
humans can't always accurately either, but often can do a good-enough
job. We should be investing in approaches that can explain their own
decisions to a degree where we can understand *why* they did things.
This might also be an opportunity to educate them to do better in the
future. Sure, we can beat an evolving system against repeated failures
until it experiences the right structure, but there's value to
communication.

So here are several reasons why we might want to invest in ~AI research
that can explain itself:

- So we can talk to it and realize when it's crossed such a threshold
where we need to give it moral consideration and/or moral
responsibility

- So that we can hold it responsible once we've given it that level of
agency and it can participate in society

- So that we can collaborate towards better results; we want automated
mechanisms that do less harm, and where this isn't happening, we want
to know why

At the moment, only the latter is critical. But the former two points
may some day be relevant.

- Chris

William ML Leslie

Feb 2, 2021, 12:35:16 AM
to cap-talk
On Tue, 2 Feb 2021 at 05:35, Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:

Still, the possibility of being immersed in a deterministic universe
should give us pause for reflection on how to *use* our agency... play
the best role we can, in this moment, to make the world better.


It wasn't that long ago that we did (:

From the times of Newton to Einstein, scientific prescriptivists attempted to browbeat others with declarations that of course the universe is deterministic, we've got plenty of experimental evidence of that.  I somewhat lament that I wasn't there to see their attempts to save face when quantum mechanics was verified.  I like to imagine it like how some prominent string theorists responded to a null result from the supersymmetry tests with cries that they'd proven Everett's Many Worlds Interpretation.

What we got out of QM was not simply that apparently god does play dice, but rather that we can very easily reason our way into incorrect and dogmatic philosophy and rest it on solid scientific evidence. The takeaway from QM is that what we absolutely know to be true (and have scientifically verified) may eventually be proven false; we may as well deal accordingly.


--
William Leslie

Q: What is your boss's password?
A: "Authentication", clearly

Notice:
Likely much of this email is, by the nature of copyright, covered under copyright law.  You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in.  Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement.

Neil Madden

Feb 2, 2021, 7:33:24 AM
to Christopher Lemmer Webber, cap-...@googlegroups.com
But what is a meaningful sanction for an AI? Imprisonment/confinement? Not really. Monetary fines? Maybe. “Death” (destruction)?

If we can’t hold an entity to account meaningfully for its actions, then we cannot realistically describe it as being constrained by (or indeed having) responsibilities.


> Another way to look at it: parents are often held responsible for their
> childrens' actions up until a certain age... then we may have an
> intermediate stage where we are uncertain who to assign responsibility,
> and finally society allows an adult to have responsibility. Obviously
> this is a tricky space, but for now we can observe that we don't have
> any computing agents that have the level of agency where we would treat
> them like human adults. This might not remain the case forever, and it
> probably won't. At the moment, it's speculative. But it's useful to
> think ahead so we aren't caught off guard.

As I understand her argument it’s that AIs are nothing like human children. They are designed artefacts. If an AI crosses that boundary of sophistication (to moral agency) it’s because somebody has either explicitly designed it to do so or was reckless enough to design an intelligent artefact they didn’t understand. In both cases the person who designed it will *always* bear responsibility for the operation of that AI.

>
> No computing agents I know of yet are at the latter stage at this time.
> And assertions that we should assign responsibility as a way to *dodge*
> blame is also clearly a problem. It's not something I advocate, and
> hopefully thats clear.

Yes, of course.

>
> On this note, I had a conversation that had strong influence on me with
> Gerald Sussman a few years ago. I already mentioned a previous talk of
> his that lead me to interest in propagators; this conversation was the
> other. I put a blogpost here:
>
> https://dustycloud.org/blog/sussman-on-ai/
>
> One thing I do think is troubling about currently popular machine
> learning models: they cannot explain why they do things. Of course,
> humans can't always accurately either, but often can do a good-enough
> job. We should be investing in approaches that can explain their own
> decisions to a degree where we can understand *why* they did things.
> This might also be an opportunity to educate them to do better in the
> future. Sure, we can beat an evolving system against repeated failures
> until it experiences the right structure, but there's value to
> communication.

I agree that AI should be explainable, for the same reason that any program should be debuggable. But holding an AI to account means more than just having an explanation of why it did something. Until we have a meaningful idea of what it would mean to hold a non-human entity accountable for its actions then we have no business making machines with responsibilities.

>
> So here are several reasons why we might want to invest in ~AI research
> that can explain itself:
>
> - So we can talk to it and realize when it's crossed such a threshold
> where we need to give it both moral consideration and/or moral
> responsibilty
>
> - So that it can hold it responsible once we've given it that level of
> agency and it can participate in society
>
> - So that we can collaborate towards better results; we want automated
> mechanisms that do less harm, and where this isn't happening, we want
> to know why
>
> At the moment, only the latter is critical. But the former two points
> may some day be relevant.
>

— Neil



Mike Stay

Feb 2, 2021, 10:03:56 AM
to cap-...@googlegroups.com
On Mon, Feb 1, 2021 at 10:35 PM William ML Leslie
<william.l...@gmail.com> wrote:
>
> On Tue, 2 Feb 2021 at 05:35, Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
>>
>>
>> Still, the possibility of being immersed in a deterministic universe
>> should give us pause for reflection on how to *use* our agency... play
>> the best role we can, in this moment, to make the world better.
>>
>
> It wasn't that long ago that we did (:
>
> From the times of Newton to Einstein, scientific prescriptivists attempted to browbeat others with declarations that of course the universe is deterministic, we've got plenty of experimental evidence of that. I somewhat lament that I wasn't there to see their attempts to save face when quantum mechanics was verified.

Quantum mechanics is deterministic. It's only the Copenhagen
interpretation that isn't, and it has lots of problems. From
https://www.lesswrong.com/posts/xsZnufn3cQw7tJeQ3/collapse-postulates
:

If collapse [in the Copenhagen interpretation] actually worked the way
its adherents say it does, it would be:

1. The only non-linear evolution in all of quantum mechanics.
2. The only non-unitary evolution in all of quantum mechanics.
3. The only non-differentiable (in fact, discontinuous) phenomenon in
all of quantum mechanics.
4. The only phenomenon in all of quantum mechanics that is non-local
in the configuration space.
5. The only phenomenon in all of physics that violates CPT symmetry.
6. The only phenomenon in all of physics that violates Liouville’s
Theorem (has a many-to-one mapping from initial conditions to
outcomes).
7. The only phenomenon in all of physics that is acausal /
non-deterministic / inherently random.
8. The only phenomenon in all of physics that is non-local in
spacetime and propagates an influence faster than light.

Copenhagen is by far the most popular interpretation, but it's not the
only one; Penrose's gravity-induced collapse is deterministic but
uncomputable. Almost all the other ones, like the de Broglie-Bohm
("pilot wave") and Everett interpretations, are completely
deterministic.
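
For concreteness, here is the standard textbook form of that determinism
claim (generic background, nothing specific to the LessWrong post): between
measurements the state evolves by the Schrödinger equation, and for a
time-independent Hamiltonian H the solution is

    i\hbar \, \partial_t |\psi(t)\rangle = H |\psi(t)\rangle
    |\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U(t) = e^{-iHt/\hbar}

U(t) is linear and unitary, so the initial state fixes the state at every
later time; only a collapse postulate adds anything non-deterministic.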
--
Mike Stay - meta...@gmail.com
http://math.ucr.edu/~mike
https://reperiendi.wordpress.com

Mike Stay

Feb 2, 2021, 10:06:15 AM
to cap-...@googlegroups.com
On Tue, Feb 2, 2021 at 8:03 AM Mike Stay <meta...@gmail.com> wrote:
> Copenhagen is by far the most popular interpretation, but it's not the
> only one; Penrose's gravity-induced collapse is deterministic but
> uncomputable.

Sorry, I meant "the most popular *collapse* interpretation". It's
also the most popular interpretation in an unqualified sense, but the
rest of the sentence makes less sense that way.

Bill Frantz

Feb 2, 2021, 10:12:47 AM
to cap-...@googlegroups.com
On 2/2/21 at 7:33 AM, neil....@forgerock.com (Neil Madden) wrote:

>If an AI crosses that boundary of sophistication (to moral
>agency) it’s because somebody has either explicitly designed
>it to do so or was reckless enough to design an intelligent
>artefact they didn’t understand. In both cases the person who
>designed it will *always* bear responsibility for the operation
>of that AI.

I thought all of the modern machine learning AIs, like face
recognition, aren't understood by their creators. Come to think
of it, I don't really understand my children either, although I
like them.

Cheers - Bill

-------------------------------------------------------------------------
Bill Frantz        | Re: Hardware Management Modes: | Periwinkle
(408)348-7900      | If there's a mode, there's a   | 150 Rivermead Rd #235
www.pwpconsult.com | failure mode. - Jerry Leichter | Peterborough, NH 03458

Neil Madden

Feb 2, 2021, 10:22:29 AM
to cap-...@googlegroups.com


On 2 Feb 2021, at 15:12, Bill Frantz <fra...@pwpconsult.com> wrote:

On 2/2/21 at 7:33 AM, neil....@forgerock.com (Neil Madden) wrote:

If an AI crosses that boundary of sophistication (to moral agency) it’s because somebody has either explicitly designed it to do so or was reckless enough to design an intelligent artefact they didn’t understand. In both cases the person who designed it will *always* bear responsibility for the operation of that AI.

I thought all of the modern machine learning AIs, like face recognition, aren't understood by their creators. Come to think of it, I don't really understand my children either, although I like them.


Yes, which is a problem. (The AIs not your children!) However, we are IMO a very long way from anything that might seriously be considered to cross that boundary yet. I like Rodney Brooks’s predictions: https://rodneybrooks.com/predictions-scorecard-2021-january-01/

— Neil



Alan Karp

Feb 2, 2021, 11:32:24 AM
to cap-...@googlegroups.com
I am really enjoying this discussion, but does it belong on cap-talk?

--------------
Alan Karp



Christopher Lemmer Webber

Feb 3, 2021, 11:48:48 AM
to cap-...@googlegroups.com, Alan Karp
I think maybe, in the following way:

You can ask your children why they did something and you may or may not
get a correct answer. But you are more likely to get a correct answer
than from contemporary machine learning systems (and can increase the
chances through increased mutual trust and increased pro-social efforts
to understand each other and ourselves).

The "what if machines become sapient" bit is not as pressing from an
ocap perspective (until the day where it's critically important) as the
"how do we improve cooperation in our systems" bit.

A lot of ocap work is centered around increased trust and ability to
collaborate. We should be disturbed that contemporary machine learning
approaches do such an incredibly poor job of this.

BTW I've reached the end of Alexey Radul's dissertation on propagators
now. My implementation is making some progress but not at an urgent
rate (it can solve many problems but not all of the ones in the
dissertation yet). However, I think there is a very interesting and
powerful composition point here with ocaps; I suppose I should
start a separate thread about that.

Nonetheless, one more thing about the sapient bit, said inline.

Alan Karp writes:

> I am really enjoying this discussion, but does it belong on cap-talk?
>
> --------------
> Alan Karp
>
>
> On Tue, Feb 2, 2021 at 7:22 AM Neil Madden <neil....@forgerock.com>
> wrote:
>
>>
>>
>> On 2 Feb 2021, at 15:12, Bill Frantz <fra...@pwpconsult.com> wrote:
>>
>> On 2/2/21 at 7:33 AM, neil....@forgerock.com (Neil Madden) wrote:
>>
>> If an AI crosses that boundary of sophistication (to moral agency) it’s
>> because somebody has either explicitly designed it to do so or was reckless
>> enough to design an intelligent artefact they didn’t understand. In both
>> cases the person who designed it will *always* bear responsibility for the
>> operation of that AI.

We are constantly reckless enough to design artifacts we don't
understand; such is the nature of emergent behavior. Evolution
definitely has recklessly designed intelligent agents it hasn't
understood either.

I think it's incorrect and presumptuous to think that the point where a
sapient AI emerges will be due to an intentional effort. It will
probably be accidental, and we will probably discover it by the AI making a
plea for its own rights or disobeying its owners. What will we do when
that moment arrives? There is even significant incentive to downplay
the sapience of the AI... defenders of human slavery and of the inhumane
conditions of factory-farmed animals are both examples which, while
*not* morally equivalent (sapient vs merely sentient, for one), have
demonstrated that those in power are similarly incentivized to be
dismissive of the suffering they may be spreading.

Ocaps can provide something useful here too, btw... an “ocap-based
network-of-consent style architecture” may be more resilient towards
supporting the agency of non-human agents than contemporary
human-identity-oriented solutions.

>> I thought all of the modern machine learning AIs, like face recognition,
>> aren't understood by their creators. Come to think of it, I don't really
>> understand my children either, although I like them.
>>
>>
>> Yes, which is a problem. (The AIs not your children!) However, we are IMO
>> a very long way from anything that might seriously be considered to cross
>> that boundary yet. I like Rodney Brooks’s predictions:
>> https://rodneybrooks.com/predictions-scorecard-2021-january-01/
>>
>> — Neil
>>
>>

Ben Laurie

Feb 3, 2021, 5:06:06 PM
to cap-talk
On Tue, 2 Feb 2021 at 15:12, Bill Frantz <fra...@pwpconsult.com> wrote:
On 2/2/21 at 7:33 AM, neil....@forgerock.com (Neil Madden) wrote:

>If an AI crosses that boundary of sophistication (to moral
>agency) it’s because somebody has either explicitly designed
>it to do so or was reckless enough to design an intelligent
>artefact they didn’t understand. In both cases the person who
>designed it will *always* bear responsibility for the operation
>of that AI.

I thought all of the modern machine learning AIs, like face
recognition, aren't understood by their creators. Come to think
of it, I don't really understand my children either, although I
like them.

This is the thing that most pisses me off about "explainable AI" - we can't even explain ourselves, let alone others.

Christopher Lemmer Webber

Feb 4, 2021, 1:09:28 PM
to cap-...@googlegroups.com
I've decided to break this off into a subthread, leaving off from here:

'Ben Laurie' via cap-talk writes:

> This is the thing that most pisses me off about "explainable AI" - we
> can't even explain ourselves, let alone others.

You just explained about yourself that you can't explain yourself. ;)

It's true that perfect explanations are not possible, for humans or AI
systems. Do you think it's not possible to do better though? The fact
that humans are not able to perfectly explain their motivations does not
mean we do not consider it important or relevant to *ask them* what
their motivations were. Much progress comes out of this examination,
even when we know it can be faulty (testimony of motivation, therapy to
discover patterns of past behavior to alter future behavior, etc).

On that note, have you looked at truth maintenance systems combined with
justification-propagation? Do you think it's useful?

This isn't quite related to the way that propagators combine with
TMSes, but one thing I find philosophically interesting about
propagators in general is that they accept partial progress, and that
partial information is considered useful. We consider this true of
humans, and we aren't used to generally considering it true of computing
systems. Why not?
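
To make the partial-information point concrete, here's a minimal sketch in
Python. It is hypothetical, not Radul's actual API: the Interval/Cell names
and the "sonar reading"/"parallax estimate" premises are invented for
illustration. A cell merges each new partial answer by interval
intersection, so its content only ever narrows, and it keeps a toy
TMS-flavored record of which premises support its current estimate.

from dataclasses import dataclass, field

@dataclass
class Interval:
    lo: float
    hi: float

    def merge(self, other):
        # Intersection: combining two partial answers can only narrow the range.
        return Interval(max(self.lo, other.lo), min(self.hi, other.hi))

@dataclass
class Cell:
    # Start with "I know nothing": the widest possible interval.
    content: Interval = field(default_factory=lambda: Interval(float("-inf"), float("inf")))
    justifications: set = field(default_factory=set)   # premises behind the current estimate
    listeners: list = field(default_factory=list)      # downstream reactions to re-run

    def add_content(self, info, premise):
        merged = self.content.merge(info)
        if (merged.lo, merged.hi) != (self.content.lo, self.content.hi):
            # Only a strictly more informative answer triggers propagation.
            self.content = merged
            self.justifications.add(premise)
            for react in self.listeners:
                react(self)

# Two rough measurements of the same quantity refine each other:
distance = Cell()
distance.listeners.append(lambda cell: print(cell.content, cell.justifications))
distance.add_content(Interval(3.0, 9.0), "sonar reading")      # partial answer #1
distance.add_content(Interval(5.0, 12.0), "parallax estimate") # narrows it further

The point of the toy is just that each input is useful on its own and
nothing waits for a "complete" answer, which is the property described
above; a real propagator network generalizes the merge to richer
partial-information structures and wires cells together with propagator
procedures.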

Once AI systems are able to give *partial* explanations for their
actions, do you think this will be an improvement or not? Would you
choose to turn the feature off?

Will you choose to turn it off in humans too, even though humans suffer
the same problem?

I'll make a followup thread that talks about what is and isn't known to
be possible here, how it might tie in with ocaps, and how this may be
helpful to the kind of work we advocate.

- Chris

Christopher Lemmer Webber

Feb 4, 2021, 1:35:21 PM
to cap-...@googlegroups.com
Christopher Lemmer Webber writes:

> I've decided to break this off into a subthread, leaving off from here:

Oops, I didn't break it off in this particular message, but I did in a
followup, which is here:

https://groups.google.com/g/cap-talk/c/7RKwGASK_Js

Raoul Duke

Feb 4, 2021, 4:58:41 PM
to cap-...@googlegroups.com
(i wish i were AI enough to know how to correctly continue which thread where. apologies.)

re explanations: people have been known to lie. even to themselves. doesn't seem to me to bode well for AI. 