On 4/27/2022 10:38 AM, smitra wrote:
> On 27-04-2022 04:08, Brent Meeker wrote:
>> On 4/26/2022 5:32 PM, smitra wrote:
>>
>>> On 27-04-2022 01:37, Bruce Kellett wrote:
>>> On Tue, Apr 26, 2022 at 10:03 AM smitra <smi...@zonnet.nl> wrote:
>>>
>>> On 24-04-2022 03:16, Bruce Kellett wrote:
>>>
>>> A moment's thought should make it clear to you that this is not
>>> possible. If both possibilities are realized, it cannot be the case
>>> that one has twice the probability of the other. In the long run, if
>>> both are realized they have equal probabilities of 1/2.
>>>
>>> The probabilities do not have to be 1/2. Suppose one million people
>>> participate in a lottery such that there will be exactly one winner.
>>> The probability that one given person will win is then one in a
>>> million. Suppose now that we create one million people using a
>>> machine and then organize such a lottery. The probability that one
>>> given newly created person will win is then also one in a million.
>>> The machine can be adjusted to create any set of persons we like: it
>>> can create one million identical persons, or almost identical
>>> persons, or totally different persons. If we then create one million
>>> almost identical persons, the probability is still one in a million.
>>> This means that in the limit of identical persons, the probability
>>> will be one in a million.
>>>
>>> Why would the probability suddenly become 1/2 if the machine is set
>>> to create exactly identical persons, while the probability would be
>>> one in a million if we create persons that are almost, but not
>>> quite, identical?
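>>>
>>> To make the combinatorics concrete, here is a minimal Python sketch
>>> (purely illustrative, not part of the original argument; the person
>>> and trial counts are arbitrary). It estimates how often one fixed
>>> person wins a one-winner lottery:
>>>
>>>   import random
>>>
>>>   def win_frequency(n_persons, n_lotteries, person=0):
>>>       # Draw one winner uniformly per lottery and count how often
>>>       # the fixed person wins. Nothing in the draw depends on what
>>>       # the persons are like or how similar they are to each other.
>>>       wins = sum(random.randrange(n_persons) == person
>>>                  for _ in range(n_lotteries))
>>>       return wins / n_lotteries
>>>
>>>   print(win_frequency(1000, 100_000))  # ~0.001, i.e. 1/n_persons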
>>
>> Your lottery example is completely beside the point.
>>
>> It provides an example of a case where your logic does not apply.
>>
>>> I think you
>>> should pay more attention to the mathematics of the binomial
>>> distribution. Let me explain it once more: If every outcome is
>>> realized on every trial of a binary process, then after the first
>>> trial, we have a branch with result 0 and a branch with result 1.
>>> After two trials we have four branches, with results 00, 01, 10, and
>>> 11; after 3 trials, we have branches registering 000, 001, 011, 010,
>>> 100, 101, 110, and 111. Notice that these branches represent all
>>> possible binary strings of length 3.
>>>
>>> After N trials, there are 2^N distinct branches, representing all
>>> possible binary sequences of length N. (This is just like Pascal's
>>> triangle.) As N becomes very large, we can approximate the binomial
>>> distribution of the relative frequency of 1s with a normal
>>> distribution with mean 0.5 and a standard deviation that decreases
>>> as 1/sqrt(N). In other words, the majority of branches will have
>>> equal, or approximately equal, numbers of 0s and 1s. Observers in
>>> these branches will naturally take the probability to be
>>> approximated by the relative frequencies of 0s and 1s. In other
>>> words, they will take the probability of each outcome to be 0.5.
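>>>
>>> A minimal Python sketch of this concentration (purely illustrative;
>>> the values of N and the tolerance eps are arbitrary). It counts each
>>> branch equally, exactly as the argument above does:
>>>
>>>   from math import comb
>>>
>>>   def fraction_near_half(N, eps=0.05):
>>>       # Fraction of the 2^N equally counted branches whose relative
>>>       # frequency of 1s lies within eps of 0.5.
>>>       close = sum(comb(N, k) for k in range(N + 1)
>>>                   if abs(k / N - 0.5) <= eps)
>>>       return close / 2**N
>>>
>>>   for N in (10, 100, 1000):
>>>       print(N, fraction_near_half(N))
>>>   # The fraction rises toward 1 with N: under equal counting, most
>>>   # branches show relative frequencies near 0.5.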
>>
>> The problem with this is that you just assume that all branches are
>> equally probable. You don't make that assumption explicit, but it is
>> an assumption nonetheless. You are simply doing branch counting.
>>
>> But it shows why you can't use branch counting. There's no physical
>> mechanism for translating the _a_ and _b_ of _|psi> = a|0> + b|1>_
>> into numbers of branches. To implement that, you have to put it in
>> "by hand" that the branches have weights or numerosity given by _a_
>> and _b_. This is possible, but it gives the lie to the MWI mantra of
>> "It's just the Schroedinger equation."
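>>
>> To see the gap concretely, here is a minimal Python sketch (purely
>> illustrative, mine rather than anything from the thread; the
>> amplitudes a and b are arbitrary). Naive branch counting returns 1/2
>> for outcome 1 whatever a and b are, while the Born weights give b^2:
>>
>>   import itertools
>>
>>   a, b = 0.6, 0.8   # arbitrary real amplitudes with a^2 + b^2 = 1
>>   N = 12            # number of binary trials, so 2^N branches
>>
>>   branches = list(itertools.product((0, 1), repeat=N))
>>
>>   # Branch counting: every branch counts once, so the expected
>>   # relative frequency of 1s is exactly 0.5, independent of a and b.
>>   count_p1 = (sum(seq.count(1) for seq in branches)
>>               / (N * len(branches)))
>>
>>   # Born weighting, put in "by hand": a branch with k ones gets
>>   # weight (a^2)^(N-k) * (b^2)^k; weights sum to (a^2 + b^2)^N = 1.
>>   born_p1 = sum((a**2) ** (N - seq.count(1)) * (b**2) ** seq.count(1)
>>                 * seq.count(1) / N for seq in branches)
>>
>>   print(count_p1, born_p1)  # 0.5 vs 0.64 (= b^2)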
>>
>
> The problem is with giving a physical interpretation to the
> mathematics here. If we take MWI to be QM without collapse, then we
> have not specified anything about branches yet. Different MWI
> advocates have published different ideas about this, and they can't
> all be right. But at heart MWI is just QM without collapse. To proceed
> in a rigorous way, one has to start with what counts as a branch. It
> seems to me that this has to involve the definition of an observer,
> and that requires a theory about what observation is. I.m.o., this has
> to be done by defining an observer as an algorithm, but many people
> think that you need to invoke environmental decoherence. People like
> Zurek, using the latter definition, have attempted to derive the Born
> rule based on that idea.
>
> I.m.o., one has to start working out a theory based on rigorous
> definitions and then see where that leads, instead of arguing based
> on vague, ill-defined notions.
"Observer as an algorithm" seems pretty ill defined to me. Which
algorithm? applied to what input? How does the algorithm, a Platonic
construct, interface with the physical universe? Decoherence seems much
better defined. And so does QBism.
Brent