In particular, I'd like to know if it has any bearing on the possibilities
of either perfect precognition or rational decision making.
-------------------------------------------------------------------------
The situation involves a being X. X is precognizant. In the first
version of the problem, X is perfect in this power. If you like, X
is God. In the second version, X is only partially precognizant, but
has a very very good track record--at least 99% accurate according to
all studies.
X is also very rich and completely honest. X puts an unknown amount
of money in two boxes A and B. X tells you that he put $1K (= one
thousand dollars) in box A. X also tells you that he put either $0
or $1M (= one million dollars) in box B. You are now given one chance
to earn some quick and easy money. Your only options are
(1) the one-boxer option:
Open box B only.
(2) the two-boxer option:
Open box A and B both.
You are not going to be given a second chance nor a third option. X
furthermore tells you that he put $1M in box B if he predicted you would
follow option (1). He tells you he put $0 in box B if he predicted you
would follow option (2), or if you end up deciding
to use a randomization device (other than, if you wish, your own free
will).
The question is, what do you pick, in either version? And why?
Let me emphasize, there is no retroactive changing of the contents of
box B. Either there is a million dollars waiting for you in box B or
there isn't. There definitely is a thousand dollars waiting for you
in box A.
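For concreteness, the payoff structure can be tabulated with a quick
sketch (Python; the function and names are mine, not part of the problem):

    # Payoff table for the problem as stated above.
    BOX_A, BOX_B_FULL = 1000, 1000000

    def payoff(predicted_one_box, take_one_box):
        """Your winnings, given X's prediction and your actual choice."""
        box_b = BOX_B_FULL if predicted_one_box else 0
        return box_b if take_one_box else BOX_A + box_b

    for pred in (True, False):
        for act in (True, False):
            print("predicted one-box=%s, actual one-box=%s: $%d"
                  % (pred, act, payoff(pred, act)))

Note the tension: in each row (prediction held fixed) two-boxing pays
$1000 more, yet X's accuracy ties your winnings to which row you end
up in.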
And if it all seems too simple to you, would it make any difference if
the boxes were transparent?
ucbvax!brahms!weemba Matthew P Wiener/UCB Math Dept/Berkeley CA 94720
I would be willing to believe that with sufficient knowledge of the state
of my brain a sufficiently resourceful opponent could indeed predict with
high probability my response to this situation (given sufficient evidence
to this effect). I also believe that the rational course of behavior is
to take both boxes. Certainly this maximizes your expected yield in the
given situation. But, as I noted, I do expect to get only $1000 this way.
The only way to "win" this game is to make the (conscious or unconscious)
decision *in advance* to take only the one box. In fact, no matter what your
choice at the actual event, you are always at least as well off to have been
committed beforehand to take only the one box!
Nevertheless, to take only one box is by its nature an *irrational*
decision. Not irrational in terms of results, but irrational when contrasted
with desirable behavior in other circumstances.
So essentially you have to decide, *in advance*, that you are going to
make an irrational decision in certain circumstances. This advance decision
is, in itself, rational, since its result can be foreseen to be favorable.
But it represents a compromise. Such a movement away from rationality has
its own costs in all sorts of other situations. I personally place such a
high value on rational behavior that I consider the cost to be too great.
I admit that this is something of a cop-out, in that my unwillingness to make
this compromise is largely based on my low estimate of the probability of
such a situation. Suppose our society gave this test to every individual at
a certain age. Then the compromise would certainly be worthwhile. Even the
known existence of such a precognitor might sway me; I don't know. All I
know is that at the moment I am not willing to make this compromise with
irrationality.
-- David desJardins
> I would be willing to believe that with sufficient knowledge of the state
>of my brain a sufficiently resourceful opponent could indeed predict with
>high probability my response to this situation (given sufficient evidence
>to this effect). I also believe that the rational course of behavior is
>to take both boxes. Certainly this maximizes your expected yield in the
>given situation. But, as I noted, I do expect to get only $1000 this way.
I find it hard to see why, if the "irrational" course of behavior
nets $1000000 while the "rational" yields only $1000, you persist in
calling the behavior of opening only one box "irrational".
> The only way to "win" this game is to make the (conscious or unconscious)
>decision *in advance* to take only the one box. In fact, no matter what your
> Nevertheless, to take only one box is by its nature an *irrational*
>decision. Not irrational in terms of results, but irrational when contrasted
>with desirable behavior in other circumstances.
>
> So essentially you have to decide, *in advance*, that you are going to
>make an irrational decision in certain circumstances. This advance decision
>is, in itself, rational, since its result can be foreseen to be favorable.
I think the irrationality you perceive is not in the behavior of the
box-taker, but in the situation. In spite of saying that you accept it
as conceivable, you appear to be implicitly rejecting it. Fine, except
that this is the premise. The premise *must* be accepted before attempting
to find the rational answer to the problem. The rational answer then is
(by definition, I maintain) the one which gives you the highest return.
This is the *same* definition of rationality which we employ under other,
less peculiar, circumstances.
>But it represents a compromise. Such a movement away from rationality has
>its own costs in all sorts of other situations. I personally place such a
>high value on rational behavior that I consider the cost to be too great.
Why not say instead that the cost of not employing our usual standard
of rationality even in unusual circumstances might become great? In this
hypothetical case, it has lost you $999,000.
I think one source of difficulty is the idea "taking the second box
won't change the circumstances; it can't change what is in the boxes
already". But the *premise* says that deciding to take both boxes is
a circumstance which does affect what is in the boxes. You should either
accept the premise, or maintain that it is impossible.
ucbvax!brahms!gsmith Gene Ward Smith/UCB Math Dept/Berkeley CA 94720
ucbvax!weyl!gsmith "DUMB problem!! DUMB!!!" -- Robert L. Forward
Along the same lines, consider the consequences of behaving completely
rationally (whatever that means) at all times. Along comes your
Nefarious Adversary, hell bent on making personal gains at your
expense. To the extent that Dr. NA can mimic your rational decision
making process, he can anticipate your behavior, and use that foreknowledge
to his own advantage. Realizing the dangers of behaving in too *predictable*
a fashion, you incorporate a degree of randomness into your strategy.
Now you are beginning to look like a Random Number Generator to Dr. NA.
You have vexed him and defeated his strategy.
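A toy simulation (Python; entirely my own construction, nothing Dr. NA
is obliged to run) makes the point that a randomizing agent reduces any
history-based predictor to chance:

    import random

    random.seed(0)
    counts = [0, 0]        # the adversary's tally of your past moves
    hits, TRIALS = 0, 10000
    for _ in range(TRIALS):
        guess = 0 if counts[0] >= counts[1] else 1   # he predicts your modal move
        move = random.randint(0, 1)                  # your randomization device
        hits += (guess == move)
        counts[move] += 1
    print("adversary accuracy: %.3f" % (hits / TRIALS))   # comes out near 0.5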
Question: From the outside, do you *appear* to be behaving "rationally"
or "irrationally"? That is, does rational behavior mean "predictable"
behavior or "successful" behavior, or what? (Or not?) Is the appearance
that you present a function of the observer? (That is, does it depend
on how the observer answers the second question in this paragraph?)
So we return to my opening dilemma. What *exactly* do we mean by
completely rational behavior?
--Barry Kort ...ihnp4!houxn!kort
If the guy's for real (which I doubt), I get my cool $1 Million.
If the guy's a fake, then either I bilk him out of $1 Million
or I prove he's a fake when Box B turns up empty. As for the
$1000, that's small potatoes. I wouldn't worry about that when
I have a chance to either profit big or expose a world-class fraud.
--Barry Kort ...ihnp4!hounx!kort
The statement of the puzzle is misleading because it encourages us to make
a single, specific decision. Actually, this is just one instance of a more
general problem, on which a decision of principle must be made. The more
general problem is that sometimes, aiming directly at what we value is not
the most effective way to achieve it. The solution to the general problem
is to adopt a set of dispositions (emotions, decision-rules, etc.) that
will be likely to have the best consequences.
In other words, one should think of the question not as "should I take
one box, or two?" but as "should I be the kind of person who takes one
box, or the kind who takes two?" But now the answer is obvious. (Not
quite: since there are no Perfect Predictors in the real world, and
since it would complicate one's psychology needlessly to become the
kind of person who takes one box, we shouldn't bother. But the people
who live in that imaginary world with the Perfect Predictor should.)
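In expected-value terms the comparison between the two kinds of person
is short enough to spell out (a sketch under the stated assumptions;
p is the predictor's accuracy):

    # Value of *being* a one-boxer vs. a two-boxer, for accuracy p.
    def disposition_value(p, one_boxer):
        # a one-boxer is correctly foreseen (and box B filled) with probability p
        if one_boxer:
            return p * 1000000
        return 1000 + (1 - p) * 1000000

    print(disposition_value(1.0, True), disposition_value(1.0, False))
    # 1000000.0 vs 1000.0 under a Perfect Predictor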
--Paul Torek torek@umich
Aha! The problem is that we are interpreting the problem differently.
Your concept of precognition apparently involves the direct observation of
future events; i.e., the reversal of cause and effect. If this were the
case, of course, one would take only one box.
In article <12...@ucbvax.BERKELEY.EDU> desj@brahms (David desJardins) writes:
> I would be willing to believe that with sufficient knowledge of the state
>of my brain a sufficiently resourceful opponent could indeed predict with
>high probability my response to this situation (given sufficient evidence
>to this effect).
In other words, the precognition on which I am basing my statements is
essentially a sophisticated modeling process which attempts to predict from
observable data how I will respond to a given situation.
If I were presented with incontrovertible evidence of successful
precognition, this is the working assumption I would make about its
methodology, and thus my response to the situation would be based on it.
Reread my posting and see if it makes any more sense in this context.
-- David desJardins
I would like to thank David for clarifying my original posting: in my
second version, where the being X has a 99+% success rate, I said he
used precognition. I suppose the best way to distinguish the two cases
is by agreeing to use different words: "precognition" means seeing the
future by some non-predictive "direct" method whereas "prognostication"
means seeing the future by some super sophisticated predictive method.
So I wish to amend my original statement of Newcomb's problem to allow
for three versions:
(1) X is 100% precognizant. ( God ? )
(2) X is 99+% precognizant. ( Dave Trissel ? )
(3) X is 99+% prognosticant. ( Barry Kort ? )
(The concept of 100% prognosticant is empty under the meaning I'm using.)
Furthermore, does it make a difference to you if the odds are lowered to 80%?
Does it make a difference if you've studied past runs in versions (2), (3) and
noticed which way they err: randomly, always/usually putting in the $1M and
surprising the two-boxers, always/usually putting in $0 and really surprising
the hell out of the one-boxers?
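For calibration, here is where the two policies cross over, on the
simplifying assumption that X errs at the same rate on both kinds of
chooser (a sketch, not part of the original problem):

    # Expected values as a function of X's accuracy p:
    #   one-box:  p * 1000000
    #   two-box:  1000 + (1 - p) * 1000000
    # Equal where (2p - 1) * 1000000 = 1000, i.e. p = 0.5005.
    for p in (0.99, 0.80, 0.5005):
        one = p * 1000000
        two = 1000 + (1 - p) * 1000000
        print("p=%.4f  one-box=$%.0f  two-box=$%.0f" % (p, one, two))

So even at 80% the one-box policy comes out far ahead; the two-box
policy wins only when X's accuracy falls below 50.05%.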
> The question is, what do you pick, in either version? And why?
>
> And if it all seems too simple to you, would it make any difference if
> the boxes were transparent?
> ucbvax!brahms!weemba Matthew P Wiener/UCB Math Dept/Berkeley CA 94720
In the first case, with a perfect precognitor, choosing only box B
nets you a million dollars. Choosing the two box option gets you $1000.
And, of course, it would make no difference if the boxes were transparent,
I'll still take the million bucks in box B. If there weren't a million
bucks in box B, I'd still choose it, just so I could show this precognitor
to be a charlatan.
In the second case, with a 99% accurate precognitor, choosing only
box B yields an expected payoff of $990,000. Choosing both boxes yields
an expected payoff of $11,000. With opaque boxes, I'd have to choose just
box B. With transparent boxes, I'd arrive with a blindfold on, and open
just box B, and hope the precognitor hadn't made a mistake. It would be
tempting to peek, and choose the other option if there was no megabuck in
B, or to choose the two-box option in order to get the extra kilobuck.
However, if I do this, unless the precognitor was in error, I'm liable
to find only $1000 there, and choose the two-box option, just as predicted.
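The arithmetic behind those expected payoffs, spelled out (assuming the
1% errors fall the same way in both directions):

    # 99%-accurate precognitor; error rate 1% in each direction.
    p = 0.99
    ev_one_box = p * 1000000                    # box B filled iff one-boxing foreseen
    ev_two_box = p * 1000 + (1 - p) * 1001000   # usually A alone; rarely A plus B
    print("%.0f  %.0f" % (ev_one_box, ev_two_box))   # 990000  11000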
It's this kind of paradox that suggests the impossibility of precognition.
If X predicts that I'll open both boxes, and so doesn't put the megabuck in
box B, peekers will see that and choose both boxes. If X predicts that I'll
open just box B, greedy peekers will pick both boxes, invalidating X's
prediction. Non-greedy peekers will fulfill the prophecy, of course.
What does this have to do with faith? I'm really not sure, but the
non-peekers made out best in this carefully produced example which assumes
the existence of a precognitor. Maybe it's meant to point out that the
faithful will make out best if there exists a god who has made up certain
arbitrary rules? I feel compelled to note that they'll do less well than
the peekers if they're wrong about this assumption.
--
Jeff Sonntag
ihnp4!mhuxt!js2j
We mean, behavior that follows the norms of reason. Some examples: modus
ponens and the law of non-contradiction are norms of reason. Somewhat
more controversially, the principle "maximize subjective expected utility"
is a norm of reason. Still more controversial are norms for the formation
of "expectation" in this sense (i.e., probability judgements) and "utility"
functions (i.e., value systems).
--Paul Torek torek@umich
(Of course, if *I* were doing the prediction, I'd starve the
bastard into submission by predicting that he'd eat at least
one of his choices.)
--Barry Kort ...ihnp4!hounx!kort
>> What *exactly* do we mean by completely rational behavior? [Kort]
>We mean, behavior that follows the norms of reason. Some examples: modus
>ponens and the law of non-contradiction are norms of reason. Somewhat
>more controversially, the principle "maximize subjective expected utility"
>is a norm of reason. Still more controversial are norms for the formation
>of "expectation" in this sense (i.e., probability judgements) and "utility"
>functions (i.e., value systems).
Is it not the case that the "norms of reason" are not only controversial,
they have evolved steadily over historical times, with major and lasting
contributions from many times and cultures? Even today, we see such
contributions as Axelrod's work on the Evolution of Cooperation, and
new branches of Logic such as Combinatorial Logic, Frege Logic, and
the like (or dislike). Is it not the case that each generation of
philosophers found a dilemma in the preceding structure, and resolved
it by enlarging the dimensionality of the space in which the philosophical
structure resided? (Example: Law of the Excluded Middle. We now have
propositions which are formally undecidable as True or False. They
have "truth-value" somewhere in betweens, as in Fuzzy Logic.)
--Barry Kort ...ihnp4!hounx!kort
All that you have proved here is that perfect precognition is
inconsistent with free will. If the hypotheses of the puzzle were
ever truly met (with transparent boxes), then the person faced
with the "choice" on decision day would really have no choice;
since, BY HYPOTHESIS, the perfect precognizant is a PERFECT
PRECOGNIZANT, the "chooser" WILL make the appropriate selection.
Conclusion: Either free will and perfect precognition are incompatible,
or one must place a great many constraints on exactly when the perfectly
precognizant being is allowed to exercise his power. (Perhaps
this is why we haven't heard from God in such a long time?)
-Ben (ucbvax!brahms!lotto / lo...@brahms.Berkeley.EDU)
(Dept of Mathematics, UC Berkeley, Berkeley, CA 94720)
I for one am not ready to see LEM go. Non-propositions must be
distinguished from unprovable-but-true propositions. And then there's
the "quantum logic" issue (is that what you're talking about?), but
I don't see the need for "quantum logic" ... see L. Jonathan Cohen,
``Can Human Irrationality be Experimentally Demonstrated?'' _Behavioral
and Brain Sciences_ 1981. (Also, Cohen refers to Haack, _Deviant Logic_,
which I haven't read but sounds interesting...)
--Paul Torek torek@umich
>All that you have proved here is that perfect precognition is
>inconsistent with free will. If the hypotheses of the puzzle were
>ever truly met (with transparent boxes), then the person faced
>with the "choice" on decision day would really have no choice;
>since, BY HYPOTHESIS, the perfect precognicent is a PERFECT
>PRECOGNICENT, the "chooser" WILL make the appropriate selection.
It seems to me that precognition would still be compatible with
free will in one agent -- the one with precognition. This agent could
figure out what it was going to do by deciding, and then would of course
know. Maybe God is free and we are all robots?
ucbvax!brahms!gsmith Gene Ward Smith/UCB Math Dept/Berkeley CA 94720
Fifty flippant frogs / Walked by on flippered feet
And with their slime they made the time / Unnaturally fleet.
"A logical mind is tolerant of tension, capable of
postponement, rich in countercathexis, and ready
to judge reality on the basis of its own experience."
Since I consider the quality of "logic" to be more demanding and
constraining than the quality of "rationality", I would be interested
to hear reactions on the paradox from those more experienced and
urgently challenged than I am.
Now there's your definition of the physicality of the godhead.
Tom Schlesinger
Plymouth State College
Plymouth, N.H. 03264
decvax!dartvax!psc70!psc90!tos