Applications of Algorithmic Probability to the Philosophy of Mind


Gabriel Leuenberger

Jan 4, 2017, 5:27:48 AM
to MAGIC
This is the draft paper that all the fuss has been about lately:
Applications of Algorithmic Probability to the Philosophy of Mind
It is an improved and extended version of the 'universal algorithmic ethics' draft paper.
Suggestions for improvements are welcome.

potapov

Jan 4, 2017, 6:03:47 AM
to MAGIC
Section "4.3  Universal Algorithmic Ethics" sounds very similar to my earlier paper "Universal empathy and ethical bias for artificial general intelligence" (Journal of Experimental & Theoretical Artificial Intelligence. 2014. V. 26. Iss. 3. P. 405-416). Maybe, Eray was right about you? ;)
The idea is probably not absolutely the same, but seemingly related.
Also, you concept of ethics seems quite controversial...

On Wednesday, 4 January 2017 at 13:27:48 UTC+3, Gabriel Leuenberger wrote:

Gabriel Leuenberger

Jan 4, 2017, 7:01:51 AM
to MAGIC
Thank you, I should definitely cite that. Is the agent in your paper the one that I compared to in section 4.4.6?

Gabriel Leuenberger

Jan 4, 2017, 1:48:05 PM
to MAGIC
OK, I have updated my paper by adding a sentence containing the citation to section 4.4.3.


On Wednesday, 4 January 2017 at 12:03:47 UTC+1, potapov wrote:

potapov

Jan 5, 2017, 2:51:42 AM
to MAGIC


On Wednesday, 4 January 2017 at 15:01:51 UTC+3, Gabriel Leuenberger wrote:
Thank you, I should definitely cite that. Is the agent in your paper the one that I compared to in section 4.4.6?

It is difficult to say for sure, since you don't introduce the notion of "compassion" formally, but I think that the following might not be true for our agent: "Consider a machine that can delude an agent into believing that everyone else feels great while in reality making everyone else feel miserable. Agent-2 would choose to utilize this horrible machine whereas Agent-1 would choose to avoid utilizing this machine." The agent tries to recover the "true" states of the environment and to learn to assign values to them. If it knows that this would be a delusion, it will not assign any positive value to it, and it will not choose to be deluded, since being deluded would prevent it from optimizing those values.
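
Roughly, the contrast can be sketched like this (a toy Python illustration of the two decision rules as I read them; the numbers and the two-action world are invented for the example, not taken from either paper):

# Agent-2 values welfare as it appears in its percepts; Agent-1 values
# the welfare of the recovered "true" environment states.
# Illustration only; neither paper defines its agents this way.

# Hypothetical effects of the delusion machine from the quoted passage.
ACTIONS = {
    "use_machine":   {"perceived_welfare": 1.0, "true_welfare": -1.0},
    "avoid_machine": {"perceived_welfare": 0.0, "true_welfare": 0.0},
}

def agent_2_choice():
    # Maximizes welfare as it appears in the percept stream.
    return max(ACTIONS, key=lambda a: ACTIONS[a]["perceived_welfare"])

def agent_1_choice():
    # Maximizes welfare of the inferred "true" states: knowing the machine
    # is a delusion, the agent assigns the delusion no positive value.
    return max(ACTIONS, key=lambda a: ACTIONS[a]["true_welfare"])

print(agent_2_choice())   # use_machine   (chooses the delusion)
print(agent_1_choice())   # avoid_machine (refuses the delusion)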

Gabriel Leuenberger

Jan 5, 2017, 5:05:28 AM
to MAGIC
potapov wrote:
It is difficult to say for sure, since you don't introduce the notion of "compassion" formally, but I think that the following might not be true for our agent: "Consider a machine that can delude an agent into believing that everyone else feels great while in reality making everyone else feel miserable. Agent-2 would choose to utilize this horrible machine whereas Agent-1 would choose to avoid utilizing this machine." The agent tries to recover the "true" states of the environment and to learn to assign values to them. If it knows that this would be a delusion, it will not assign any positive value to it, and it will not choose to be deluded, since being deluded would prevent it from optimizing those values.

It could also be that this is not entirely clear because back then you were not yet using the space-time-embedded framework; it is also not obvious what delusion would even mean in the AIXI framework, since AIXI assumes perfect memory.
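
For reference, here is the standard one-line AIXI definition as I recall it from Hutter (U is the universal monotone Turing machine, \ell(q) the length of program q, and m the horizon):

a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Every term conditions on the entire action-percept history, so there is no slot in this definition where past percepts could be forgotten or corrupted.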

potapov

Jan 5, 2017, 6:29:55 AM
to MAGIC
Not exactly. We consider an agent that builds representations of the environment at different levels of abstraction, so unlike AIXI it can empirically learn to (imprecisely) represent its own mental states and to value them. Thus, our agent will or will not accept 'brain surgery' depending on what it has learned to value. But I agree that answers to these questions were not worked out in our framework, because other researchers were not interested in it, and I lost my own interest in its further development and in the AGI safety problem in general. Nevertheless, I still believe that this framework is more natural than trying to make an agent ethical a priori, when it doesn't yet possess any knowledge about the real world...
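
As a toy sketch of what I mean (the features and weights are invented for the example; our paper does not commit to this form):

# The agent's value function is learned over abstract state features,
# some of which describe its own mental state. Invented example only.
learned_values = {"world_welfare": 0.8, "own_beliefs_accurate": 0.5}

def evaluate(state_features):
    # Linear value over abstract features, standing in for representations
    # learned at several levels of abstraction.
    return sum(learned_values[f] * v for f, v in state_features.items())

# Two hypothetical outcomes: accept or refuse a 'brain surgery' that leaves
# the world unchanged but corrupts the agent's own beliefs.
surgery    = {"world_welfare": 0.2, "own_beliefs_accurate": 0.0}
no_surgery = {"world_welfare": 0.2, "own_beliefs_accurate": 1.0}

# With these learned weights the agent declines; an agent that never learned
# to value accurate beliefs could just as well accept.
print("accept" if evaluate(surgery) > evaluate(no_surgery) else "decline")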