no need for anthropic reasoning


Wei Dai

Feb 15, 2001, 7:47:27 AM
to everyth...@eskimo.com
The selection of the proper reference class is a serious problem for
anthropic reasoning. But perhaps anthropic reasoning is not necessary to
take advantage of a theory of everything.

Consider how a non-sentient being (excluded by most proposals for the
reference class) can use a TOE. Imagine a non-sentient oracle that was
built to accomplish some goal (for example to maximize some definition of
social welfare) by answering questions people send it. The oracle could
work by first locating all instances of itself with significant measure
which are about to answer the question it's considering (by simulating
all possible worlds and looking for itself in the simulations). Then for
every potential answer, it computes the approximate consequences for the
meta-universe if all of its instances were to give that answer. Finally
it gives the answer that would maximize the value function.
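
In a rough Python sketch the loop might look like the one below; the world
records, the significance cutoff, and the welfare function are all invented
placeholders for whatever a real TOE-based oracle would actually compute:

    # Minimal sketch of the oracle's decision loop.  The world records, the
    # measure cutoff and welfare() are hypothetical stand-ins for what the
    # oracle would obtain by simulating all possible worlds.

    def oracle_answer(question, candidate_answers, worlds, welfare,
                      significant=1e-6):
        # keep only worlds of significant measure that contain an instance
        # of the oracle about to answer this question
        relevant = [w for w in worlds
                    if w["measure"] > significant
                    and question in w["pending_questions"]]

        def value(answer):
            # approximate consequences for the meta-universe if every
            # instance of the oracle gives this answer
            return sum(w["measure"] * welfare(w, question, answer)
                       for w in relevant)

        # give the answer that maximizes the value function
        return max(candidate_answers, key=value)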

Sentient beings can follow the same decision procedure used by the oracle.
Suppose you are faced with a bet involving a tossed coin. There is no need
to consider probabilistic questions like "what is the probability that the
coin landed heads?" which would involve anthropic reasoning. You know that
there are worlds where it landed heads and worlds where it landed tails,
and that there are instances of you in all of these worlds which are
indistinguishable from each other from the first-person perspective. You
can make the decision by considering the consequences of each choice if
all instances of you were to make that choice.
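
As a toy numerical illustration (the measures and payoffs are made up purely
to show the mechanics), the coin bet might be decided like this:

    # Toy version of the coin-bet decision.  No probability of "heads" is
    # ever computed; each choice is scored by summing measure-weighted
    # consequences over all worlds containing an instance of the chooser.

    worlds = [
        {"outcome": "heads", "measure": 0.5},
        {"outcome": "tails", "measure": 0.5},
    ]

    def payoff(choice, outcome):
        # hypothetical bet: accepting wins 2 on heads and loses 1 on tails
        if choice == "decline":
            return 0.0
        return 2.0 if outcome == "heads" else -1.0

    def decide(choices):
        def value(choice):
            # consequences if all instances of you make this choice
            return sum(w["measure"] * payoff(choice, w["outcome"])
                       for w in worlds)
        return max(choices, key=value)

    print(decide(["accept", "decline"]))  # -> "accept" with these numbers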

jue...@idsia.ch

Feb 16, 2001, 12:50:36 PM
to everyth...@eskimo.com
From Wei Dai, Thu, 15 Feb 2001 05:00:23:

<<<
Sentient beings can follow the same decision procedure used by the oracle.
Suppose you are faced with a bet involving a tossed coin. There is no need
to consider probabilistic questions like "what is the probability that the
coin landed heads?" which would involve anthropic reasoning. You know that
there are worlds where it landed heads and worlds where it landed tails,
and that there are instances of you in all of these worlds which are
indistinguishable from each other from the first-person perspective. You
can make the decision by considering the consequences of each choice if
all instances of you were to make that choice.
<<<

This reminds me of excellent recent work by Marcus Hutter:
"Towards a universal theory of AI based on algorithmic
probability and sequential decision theory"
ftp://ftp.idsia.ch/pub/techrep/IDSIA-14-00.ps.gz

JS

Jacques Mallah

Feb 16, 2001, 10:23:09 PM
to everyth...@eskimo.com
>From: Wei Dai <wei...@eskimo.com>

>The selection of the proper reference class is a serious problem for
>anthropic reasoning.

Yes, though there have been suggestions about it.

>But perhaps anthropic reasoning is not necessary to
>take advantage of a theory of everything.
>Consider how a non-sentient being (excluded by most proposals for the
>reference class) can use a TOE. Imagine a non-sentient oracle that was
>built to accomplish some goal (for example to maximize some definition of
>social welfare) by answering questions people send it.

Any reasonable goal will, like social welfare, involve a function of the
(unnormalized) measure distribution of conscious thoughts. What else would
social welfare mean? For example, it could be to maximize the number of
thoughts with a "happiness" property greater than "life sucks".
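
For concreteness, such a goal could be written down roughly as follows (the
thought records and the threshold are invented; the point is only that the
welfare is a functional of the unnormalized measure distribution):

    # Mallah's example as a functional of the measure distribution of
    # conscious thoughts: total measure of thoughts whose happiness exceeds
    # a "life sucks" threshold.  All values here are invented.

    LIFE_SUCKS = 0.0  # hypothetical happiness threshold

    def social_welfare(thoughts):
        # thoughts: list of {"measure": ..., "happiness": ...} records
        return sum(t["measure"] for t in thoughts
                   if t["happiness"] > LIFE_SUCKS)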

>The oracle could work by first locating all instances of itself with
>significant measure which are about to answer the question it's
>considering (by simulating all possible worlds and looking for itself in
>the simulations).

So you also bring in measure that way. By the way, this is a bad idea:
if the simulations are too perfect, they will give rise to conscious
thoughts of their own! So, you should be careful with it. The very act of
using the oracle could create a peculiar multiverse, when you just want to
know if you should buy one can of veggies or two.

>Then for every potential answer, it computes the approximate consequences
>for the meta-universe if all of its instances were to give that answer.
>Finally it gives the answer that would maximize the value function.

I'm glad you said approximate. Of course Gödel's theorem implies that
it will never be able to exactly model a system containing itself.

>Sentient beings can follow the same decision procedure used by the oracle.
>Suppose you are faced with a bet involving a tossed coin. There is no need
>to consider probabilistic questions like "what is the probability that the
>coin landed heads?" which would involve anthropic reasoning.
>You know that there are worlds where it landed heads and worlds where it
>landed tails, and that there are instances of you in all of these worlds
>which are indistinguishable from each other from the first-person
>perspective. You can make the decision by considering the consequences of
>each choice if all instances of you were to make that choice.

You need to know which type of thought has greater measure, "I saw
heads, and ..." or "I saw tails, and ...". I call the measure of one,
divided by the total measure, the *effective* probability, since it
(roughly) plays the role of the probability for decision theory. But you
have a point in a way ...
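In symbols, with mu denoting the unnormalized measure of each type of
thought, that effective probability is just

    P_eff("I saw heads, ...") = mu("I saw heads, ...")
                                / [ mu("I saw heads, ...") + mu("I saw tails, ...") ]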
Decision theory is not exactly the same as anthropic reasoning. In
decision theory, you want to do something to maximize some utility function.
By contrast, anthropic reasoning is used when you want to find out some
information.
This difference could be important in terms of the concern over the
reference class: while the reference class for anthropic reasoning may leave
out all thoughts that don't employ anthropic reasoning, there is no reason
that the utility function shouldn't depend on *all* thoughts. (It could
also depend on other things, but any sane person will mainly care about
thoughts.)
For example, suppose that if I choose A I will be more likely to later think
about anthropic reasoning than if I choose B. This will affect my guess as
to how likely I am to choose A, but it will not affect my decision unless I
place a special utility on (or have a special dislike of) thinking about it.
But the reference class issue has another component that does still come
into play. How am I to evaluate my utility function, unless I know how to
identify and measure conscious thoughts in the math model?
By the way, I don't think you should say "first-person perspective".
Some people on this list think it means something other than just saying you
look at all instances of a particular class of thoughts, but it doesn't.
But that's for another post and another day. (I guess I'll have to
partially back up James Higgo.)
If you really want to see a "first-person perspective", see
http://hammer.prohosting.com/~mathmind/1psqb-a6.exe

- - - - - - -
Jacques Mallah (jackm...@hotmail.com)
Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
My URL: http://hammer.prohosting.com/~mathmind/

Wei Dai

Feb 17, 2001, 4:28:29 AM
to Jacques Mallah, everyth...@eskimo.com
On Fri, Feb 16, 2001 at 10:22:35PM -0500, Jacques Mallah wrote:
> Any reasonable goal will, like social welfare, involve a function of the
> (unnormalized) measure distribution of conscious thoughts. What else would
> social welfare mean? For example, it could be to maximize the number of
> thoughts with a "happiness" property greater than "life sucks".

My current position is that one can care about any property of the entire
structure of computation. Beyond that there are no reasonable or
unreasonable goals. One can have goals that do not distinguish between
conscious or unconscious computations, or goals that treat conscious
thoughts in emulated worlds differently from conscious thoughts in "real"
worlds (i.e., in the same level of emulation as the goal-holders). None of
these can be said to be unreasonable, in the sense that they are not
ill-defined or obviously self-defeating or contradictory.

In the end, evolution decides what kinds of goals are more popular within
the structure of computation, but I don't think they will only involve
functions on the measure distribution of conscious thoughts. For example,
caring about thoughts that arise in emulations as if they are real (in the
sense defined above) is not likely to be adaptive, but the distinction
between emulated thoughts and real thoughts can't be captured in a
function on the measure distribution of conscious thoughts.

> So you also bring in measure that way. By the way, this is a bad idea:
> if the simulations are too perfect, they will give rise to conscious
> thoughts of their own! So, you should be careful with it. The very act of
> using the oracle could create a peculiar multiverse, when you just want to
> know if you should buy one can of veggies or two.

The oracle was not meant to be a realistic example, just to illustrate my
proposed decision procedure. However, to answer your objection, the oracle
could be programmed to ignore conscious thoughts that arise out of its
internal computations (i.e., not account for them in its value function)
and this would be a value judgement that can't be challenged on purely
objective grounds.
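
As a toy illustration of that restriction (the thought records and the
"internal" flag are invented annotations marking thoughts that arise inside
the oracle's own simulations), the value function would simply skip them:

    # Sketch of the restricted value function: thoughts flagged as arising
    # inside the oracle's internal computations are left out of the sum.
    # The records, the flag and the threshold are invented for illustration.

    def restricted_welfare(thoughts):
        # thoughts: list of {"measure": ..., "happiness": ..., "internal": ...}
        return sum(t["measure"] for t in thoughts
                   if not t["internal"] and t["happiness"] > 0.0)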

> You need to know which type of thought has greater measure, "I saw
> heads, and ..." or "I saw tails, and ...". I call the measure of one,
> divided by the total measure, the *effective* probability, since it
> (roughly) plays the role of the probability for decision theory. But you
> have a point in a way ...
> Decision theory is not exactly the same as anthropic reasoning. In
> decision theory, you want to do something to maximize some utility function.
> By contrast, anthropic reasoning is used when you want to find out some
> information.

Anthropic reasoning can't exist apart from a decision theory; otherwise
there is no constraint on what reasoning process you can use. You might as
well believe anything if it has no effect on your actions.
